This might be a dumb question but are there any current or foreseeable practical applications of this kind of result (like in the realm of distributed computing) or is this just pure mathematics for its own sake?
Confused why the article author believes this is a surprise. The foundations of mathematics and computer science are basically the same subject (imho) and dualities between representations in both fields have been known for decades.
It's not a surprise that math and CS are related. It's a surprise that these particular subjects in math and in CS are so intimately related.
Is it?
The notions of algorithm and computer science were invented to discuss the behavior of infinite sequences: examining whether there are descriptions of real numbers. This was extended by connecting complicated infinite structures like the hyperreals with decision theory problems.
That descriptions of other infinite sets also correspond to some kind of algorithm seems like a natural progression.
That it happens to be network theory algorithms rather than (eg) decision theory algorithms is worth noting — but hardly surprising. Particularly because the sets examined arose from graph problems, ie, a network.
Finally - we can calculate infinity.
Been a long way towards it. \o/
If you’re in the Bay Area next month then The Dillinger Escape Plan are bringing the Calculating Infinity circus to town(s):
https://www.theregencyballroom.com/events/detail/?event_id=1...
This is enormously off topic but if one person sees this and ends up not missing the show then it was worth mentioning. I think there’s a reasonable crossover between math, discrete math, hacking, and mathcore :)
Took me an embarrassingly long time to realize you're talking about a music event not a math lecture
> Finally - we can calculate infinity.
And Beyond!
Fairly trivial, in Haskell:
    let xs = 1 : xs in xs
Completely encapsulates a countable infinity.
This has indeed taken forever /s
It's cons'es all the way down.
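For the record, a minimal runnable sketch of the same idea (the name "ones" is my own; this assumes GHC's lazy evaluation):

    -- A self-referential cons cell: only as much of the "infinite" list
    -- exists as the consumer actually forces.
    ones :: [Integer]
    ones = 1 : ones

    main :: IO ()
    main = print (take 5 ones)  -- prints [1,1,1,1,1]

The infinite object is fine to define; you just can't demand all of it at once.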
> computer science with the finite
um... no... computer science is very concerned with the infinite. I'm surprised Quanta published this. I always think highly of their reporting.
I think this is pretty common for Quanta, and it might be sticking out more because it's a field we're familiar with.
I'm really torn about this, because I think they're providing a valuable service. But their general formula doesn't diverge a whole lot from run-of-the-mill pop-science books: a vague, clickbaity title and then an article that focuses on personalities and implications of discoveries while glossing over a lot of important details (and not teaching much).
Initially I too thought - but we try to approximate infinity in CS all the time.
But I have come to think that, well actually, "approximate" is doing some heavy lifting there. I have never used infinity for anything except to say "look, I don't know how high this should go, so go as far as you can go, and double that". Which is really saying: you are bound by finite boundaries, you'll have to work within them, and the uncountable thing I was thinking about is really finite.
Edit: Think of it like this
We know that the universe is most likely infinite, but we can only determine how big it is by how far we can see, which is bounded by the speed of light and by the fact that we can only see matter that emits light. (I'm being careful here: if the big bang theory is right, and I'm understanding it correctly, there is a finite amount of matter in the observable universe, but the universe itself is infinite.)
Another glaring example:
> Set theorists use the language of logic, computer scientists the language of algorithms.
Computer science doesn’t use logic? Hello, Booleans.
So lazy, especially when you can ask an AI to tell you if you’re saying something stupid.
That’s nothing, ‘node_modules’ has been linking the math of infinity to my filesystem for years.
I studied math for a long time. I’m convinced math would be better without infinity; it doesn’t exist. I also think we don’t need numbers that are too big, but we can leave those.
I haven't studied math beyond what was needed for my engineering courses.
However, I also am starting to believe that infinity doesn't exist.
Or more specifically, I want to argue that infinity is not a number, it is a process. When you say {1, 2, 3, ... } the "..." represents a process of extending the set without a halting condition.
There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
There is no infinity'th prime number. There is a process by which you can show that a bigger prime number must always exist.
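If it helps, here is a minimal Haskell sketch of that "process" view (the names nats, nextPrime, and primes are mine, purely for illustration): the naturals as a rule for producing the next element on demand, and Euclid's argument as a procedure that, given any finite list of primes, produces one more.

    -- {1, 2, 3, ...} as a process: a rule for the next element, run on demand.
    nats :: [Integer]
    nats = iterate (+ 1) 1

    -- Euclid's argument as a procedure: the smallest prime factor of
    -- (product ps + 1) cannot be in ps, since product ps + 1 leaves
    -- remainder 1 modulo every p in ps.
    nextPrime :: [Integer] -> Integer
    nextPrime ps = head [d | d <- [2 ..], (product ps + 1) `mod` d == 0]

    -- Iterating the procedure yields an endless stream of primes.
    primes :: [Integer]
    primes = go []
      where go ps = let p = nextPrime ps in p : go (ps ++ [p])

For example, take 4 primes evaluates to [2,3,7,43]. There is never a finished infinite set anywhere, only a procedure that refuses to halt.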
> There is no infinity at the end of a number line. There is a process that says how to extend that number line ever further.
Sure, but ordinal numbers exist and are useful. It's impossible to prove Goodstein's theorem without them.
https://en.wikipedia.org/wiki/Ordinal_number
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
The statement and proof of the theorem are quite accessible and eye-opening. I think the number line with ordinals is way cooler than the one without them. A small runnable sketch of the sequence itself follows below.
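For anyone who wants to play with it, here is a small Haskell sketch (the names hbump and goodstein are mine, not from the article): the hereditary base-b bump that drives the sequence, and the sequence itself.

    -- Rewrite n in hereditary base b, then replace every occurrence
    -- of the base b by b + 1.
    hbump :: Integer -> Integer -> Integer
    hbump _ 0 = 0
    hbump b n = d * (b + 1) ^ hbump b e + hbump b r
      where
        e = last (takeWhile (\k -> b ^ k <= n) [0 ..])  -- largest exponent
        d = n `div` b ^ e
        r = n `mod` b ^ e

    -- Goodstein sequence: bump the base, subtract one, repeat until zero.
    goodstein :: Integer -> [Integer]
    goodstein = go 2
      where
        go _ 0 = [0]
        go b m = m : go (b + 1) (hbump b m - 1)

goodstein 3 evaluates to [3,3,3,2,1,0], but goodstein 4 famously takes an astronomical number of steps to reach 0, so only inspect short prefixes. The theorem says every such sequence terminates, and PA alone cannot prove it.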
How would you handle things like probability distributions with infinite support?
I think we can stop at 8.
7, if the Extremely Strong Goldbach Conjecture holds. [1]
[1] https://xkcd.com/1310/
Is this a joke, or are you deeply interested in some ZFC variant that I’m unaware of? We absolutely need infinity to make a ton of everyday tools work; it’s like saying we don’t need negative numbers because those don’t exist either.
There are a couple of philosophies in that vein, like finitism and constructivism. Not exactly mainstream, but they’ve proven more than you’d expect.
https://en.wikipedia.org/wiki/Finitism
https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...
A ZFC variant without infinity is basically just PA. (Because you can encode finite sets as natural numbers; see the sketch below.) Which in practice is plenty enough to do a whole lot of interesting mathematics. OTOH, by the same token, the axiom of infinity is genuinely of interest even in pure finitary terms, because it may provide much simpler proofs of at least some statements that can then be asserted to also be valid in a finitary context, due to known conservation results.
In a way, the axiom of infinity seems to behave much like other axioms that assert the existence of even larger mathematical "universes": it's worth being aware of what parts of a mathematical development are inherently dependent on it as an assumption, which is ultimately a question of so-called reverse mathematics.
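To make "encode finite sets as natural numbers" concrete, here is a minimal Haskell sketch of the standard Ackermann coding (the type and function names are mine): a hereditarily finite set {a, b, ...} maps to 2^code(a) + 2^code(b) + ..., and set membership becomes a bit test.

    import Data.Bits (testBit)
    import qualified Data.Set as Set

    -- Hereditarily finite sets: finite sets of hereditarily finite sets.
    newtype HF = HF (Set.Set HF) deriving (Eq, Ord)

    -- Ackermann coding: each element contributes one bit.
    encode :: HF -> Integer
    encode (HF s) = sum [2 ^ encode x | x <- Set.toList s]

    -- x ∈ y becomes: bit encode(x) of encode(y) is set.
    memberN :: Integer -> Integer -> Bool
    memberN x y = testBit y (fromInteger x)  -- assumes the code fits in Int

So encode (HF Set.empty) is 0, the set containing only the empty set encodes to 2^0 = 1, and memberN 0 1 is True. Every fact about hereditarily finite sets translates into a fact about naturals, which is the sense in which ZFC minus infinity is PA in disguise.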
There are tons of variants of ZFC without the axiom of infinity. Constructivism has a long and deep history in mathematics, and it’s probably going to become dominant in the future.
Numbers don't exist, either. Not really. Any 'one' thing you can point to, it's not just one thing in reality. There's always something leaking, something fuzzy, something not accounted for in one. One universe, even? We can't be sure other universes aren't leaking into our one universe. One planet? What about all the meteorites banging into the one, so it's not one.

So, numbers don't exist in the real world, any more than infinity does. Mathematics is thus proved to be nothing but a figment of the human imagination. And no, the frequency a supercooled isolated atom vibrates at in an atomic clock isn't a number either; there's always more bits to add to that number, always an error bar on the measurement, no matter how small. Numbers aren't real.
Wouldn't it follow that those "things" we're pointing to aren't really "things", because they're all leaking and fuzzy? Which raises the question: what ends up on a list of things that do exist?
Some people have this view, but I don't get why.
> All of modern mathematics is built on the foundation of set theory, the study of how to organize abstract collections of objects
What the hell. What about Type Theory?
Type theory is actually a stronger axiomatic system than ZFC, and is equiconsistent with ZFC plus a stronger assumption.
See this mathoverflow response here https://mathoverflow.net/a/437200/477593
> Type theory is actually a stronger axiomatic system than ZFC
I'm stating the obvious, but it depends which type theory.
Is there a collection of type theory axioms anywhere near as influential as ZF or ZFC?
Sure, but is discarding Type Theory and Category Theory really fair with a phrase like "All of modern mathematics"? Especially in terms of a connection with computer science.
Arguably, type theory is more influential, as it seems to me all the attempts to actually formalize the hand-wavy woo mathematicians tend to engage in are in Lean, Coq, or the like. We've pretty much given up on set theory except to prove things to ourselves. However, those informal methods are notoriously unreliable.
Regardless of the name, descriptive set theory does not have all that much to do with "set theory" in a foundational sense; it can be recast in terms of types and spaces with comparative ease, and this can be quite advantageous. The article is confusing in many ways. Among other things, it seems quite obviously wrong to suggest that recasting a concept conventionally tied to mathematical "infinities" in more tangible computational terms is something deeply original to this particular work; if anything, that happens all the time when trying to understand existing math in type-theoretic or constructive terms.
"the study of how to organize abstract collections of objects" is not really a great explanation of set theory. But it is true that the usual way (surely not the only way) to formalize mathematics is starting with set-theoretic axioms and then defining everything in terms of sets.
"Usual", "most common by far" etc. are all great phrases, but not "all of mathematics", esp when we talk about math related to computer science
Math related to CS is typically formalized starting with set theory, just like other branches of math.
What's important to note is that this is just a matter of convention, a historical accident. It is by no means a law of nature that math must be formalized in ZFC or any other set-theoretic system; other foundations are usually available.
As a matter of fact, ZFC fits CS quite poorly.
In ZFC, everything is a set. The number 2 is a set. A function is a set of ordered pairs. An ordered pair is a set of sets.
In ZFC: It is a valid mathematical question to ask, "Is the number 3 an element of the number 5?" (In the standard definition of ordinals, the answer is yes).
In CS: This is a "type error." A programmer naturally thinks of an integer as distinct from a string or a list. Asking if an integer is "inside" another integer is nonsense in the context of writing software.
For a computer scientist, Type Theory is a much more natural foundation than Set Theory. Type Theory enforces boundaries between different kinds of objects, just like a compiler does (the sketch below makes the contrast concrete).
But, in any case, that ZFC is "typical" is an accident of history, and whether or not it's appropriate at all is debatable.
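To make the contrast concrete, here is a minimal Haskell sketch (the type V and the function names are mine): the von Neumann encoding n = {0, 1, ..., n-1}, under which "3 ∈ 5" is a meaningful, true statement, while the same question about plain Ints won't even typecheck.

    import qualified Data.Set as Set

    -- Hereditarily finite sets, enough to build von Neumann ordinals.
    newtype V = V (Set.Set V) deriving (Eq, Ord)

    -- von Neumann encoding: n = {0, 1, ..., n - 1}
    ordinal :: Int -> V
    ordinal n = V (Set.fromList (map ordinal [0 .. n - 1]))

    -- ZFC-style membership.
    elemOf :: V -> V -> Bool
    elemOf x (V s) = Set.member x s

    -- ordinal 3 `elemOf` ordinal 5  ==  True: 3 really is "inside" 5 here.
    -- By contrast, (3 :: Int) `elem` (5 :: Int) is rejected by the compiler:
    -- Int is not a container type, so the question cannot even be asked.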
In CS, types are usually a higher level of abstraction built on top of more fundamental layers. If you choose to break the abstraction, you can definitely use an integer as a string, a list, or a function. The outcome is unlikely to be useful, unless your construct was designed with such hacks in mind.
When I did a PhD in theoretical computer science, type theory felt like one niche topic among many. It was certainly of interest to some subfield, but most people didn't find it particularly relevant to the kind of TCS they were doing.
Sure, but it fits the rest of mathematics "poorly" for exactly the same reasons. No working mathematician is thinking about 3 as an element of 5.
The reason ZFC is used isn't that it's a particularly pedagogical way of describing any branch of math (whether CS or otherwise), but that its axioms are elegantly minimal.
Empirically, modern mathematics is built on set theory; type theory and category theory are just other possible foundations.
Most modern mathematicians are not set theorists. There are certain specialists in metamathematics and the foundations of mathematics who hold that set theory is the proper foundation -- thus that most mathematical structures are rooted in set theory, and can be expressed as extensions of set theory -- but this is by no means a unanimous view! It's quite new, and quite heavily contested.