Let's discuss Mathematics

Do big numbers exist?

Apparently ultrafinitists believe that big numbers like 10^100 do not exist, and that the fact that our maths goes all the way up to infinity is why we have such problems with quantum physics and/or gravity. I do not understand it at all, and neither does Sabine Hossenfelder in the video below.
[Spoiler: embedded YouTube video]
 
I will eventually look at the video, but Sabine has a rather bad reputation for deliberately taking contrarian views. Infinity clearly exists and is formally manipulable as a notion, and while it may (?) not be a cosmic reality (which matters when your subject is a science), it still wouldn't make sense to take it out of math.
By the way, it was a huge struggle to bring it into math in the first place. And I don't mean something as late as Cantor, but Archimedes, who for years had to hide his method (the Archimedean proto-calculus).
 
I watch a few of Hossenfelder's videos each week. Most are interesting, but many months ago she came out with one about a study on consciousness that made it clear that she does not understand in the least the essence of the "hard problem of consciousness."

As for the ultrafinitist position, its only shot at coherence seems to lie within a universe of discourse restricted to the physical realm. The problems plaguing the foundations of physics these days are the responsibility of physicists and should not be passed on to the mathematicians. 10^100 certainly exists, otherwise there must exist an integer that, when 1 is added to it, results in something that is not an integer. Obvious twaddle.
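
For what it's worth, that closing argument can be made fully formal; here is a minimal sketch using the standard closure-under-successor axiom (my own rendering, not from the post):

```latex
0 \in \mathbb{N}, \qquad
\forall n \,\bigl(n \in \mathbb{N} \;\Rightarrow\; n + 1 \in \mathbb{N}\bigr)
\;\;\Longrightarrow\;\;
10^{100} \in \mathbb{N},
```

since 10^100 is reached from 0 by finitely many applications of the successor step, so denying it forces a failure of the successor axiom at some specific integer below it.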

Physicists' relationship with mathematics is problematic in my view. Though I have a PhD in mathematics, I find the way physicists "do" mathematics very difficult to follow. Part of it is extremely antiquated notation: physicists do not modernize their notation except in niche areas, which makes their work inscrutable to experts outside their little bubble who might otherwise be able to identify their errors.

This traces to the very beginning of a physicist's education. Take a look at a calculus textbook and compare it to a basic physics textbook. Note the hand-wavy, non-rigorous way the physics book uses "differentials," a style that dates back to the way analysis was done in the 1700s or earlier, and which ultimately precipitated a crisis in calculus-based mathematics that wasn't set right until the mid-1800s. You'll also likely note that the physics textbook does not even bother to explain, in a chapter 0 or an appendix, precisely how one is to treat a differential like dx or dy for purposes of proving anything logically. So dx is just a "very tiny distance," or whatnot.

But it's not until a physics student finally progresses to a relativity course as a senior undergrad that the real trauma occurs: differentials are used all over, but now they are the differentials of grown-up differential-manifold theory, which carry a ton of underlying machinery and delicate nuances. Does the relativity textbook unpack this? No! After all, hasn't the physics student been using differentials since their Physics 101 days? And yet a whole week might be devoted to learning Einstein summation notation, which is not really that clever.
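
To illustrate the gap (my own example, not from the post): a physics text manipulates differentials as tiny quantities, while modern analysis defines the differential as a linear map, and differential geometry as a covector.

```latex
% Physics-style manipulation, 1700s vintage:
dy = f'(x)\,dx \qquad \text{(``$dx$ is a very tiny distance'')}

% Modern definition: the differential of $f$ at $x$ is the linear map
df_x \colon \mathbb{R} \to \mathbb{R}, \qquad df_x(h) = f'(x)\,h,

% and on a smooth manifold $M$, $df_p \colon T_pM \to \mathbb{R}$ is a
% covector -- the object a relativity text silently relies on.
```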

I will agree with Hossenfelder, though, when she muses that the singularities and infinities that arise in physics (such as relativity theory) may have something to do with using continuous mathematical models to describe an apparently discrete physical reality. The way physicists "do" mathematics needs a close examination and thoroughgoing revision, but old habits die hard.

As for why reality appears to be discrete in nature, well, that's entirely due to the "maximum resolution" of our senses, whether aided by instrumentation or not. After all, our instruments and measuring apparatus are also things we only experience through our senses. As a metaphysical idealist, I hold that Ultimate Reality is entirely mental in nature, and that we are all "psychological alters" of a single experiencing subject that unfolds and acts instinctively, rather like Arthur Schopenhauer's "will."
 
What is the definition of simplest form when it comes to Algebraic Fractions?

I have googled, and most definitions seem circular to me (e.g., "In mathematics, the simplest form refers to the most reduced or simplified representation of a fraction."). This site is better than most, and says "In mathematical algebra, the simplest form is the least attainable fraction of a number or a linear equation," but I do not know the strict definition of attainable.

They give an example, but it just leaves me less sure. They say simplify this:

[image: the expression to be simplified]

And the answer is:

[image: the site's answer]
But I do not understand how the simplest form can have an expression that can be further factorised on the top and an expression with brackets on the bottom. Depending on the definition of simple, surely the answer has to be one of these:

[images: the candidate simplified forms]

I'd say that a rational expression (or "algebraic fraction," though I see that as a more general term) is in simplest form when the numerator and denominator have no common zeros (or no factors in common). Since the last two forms have a numerator with zeros 0 and -2, and a denominator with zero -1, either could be said to be in simplest form.
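
For concreteness, reconstructing the expressions from the zeros named above (so the specific polynomials are my assumption):

```latex
\frac{x(x+2)}{x+1} \;=\; \frac{x^2 + 2x}{x + 1},
\qquad
\gcd\bigl(x^2 + 2x,\; x + 1\bigr) = 1,
```

so there is no common polynomial factor left to cancel, and whether the numerator is written factored or expanded is purely a matter of presentation.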
 

Looking around, it seems this link is the most pertinent here:

I note the article is rather vague about whether or not to leave the numerator or denominator factored. Having taught basic algebra many times in the past, it seems the consensus among textbooks is that it doesn't matter (I would accept either of the final expressions above). That is, it doesn't matter until Little Johnny encounters a standardized multiple-choice exam created by a committee of individuals committed to a particular dogma!
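
As a quick check, a computer algebra system treats the two final forms as the same object in lowest terms; here is a SymPy sketch (the specific polynomials are my reconstruction from the zeros mentioned above):

```python
from sympy import symbols, cancel, factor

x = symbols('x')

# A rational expression with a common factor of x to cancel:
expr = (x**3 + 2*x**2) / (x**2 + x)

lowest = cancel(expr)    # expanded lowest terms: (x**2 + 2*x)/(x + 1)
factored = factor(expr)  # factored lowest terms:  x*(x + 2)/(x + 1)

print(lowest, factored, sep='\n')
```

Neither output is canonically "simpler"; they differ only in presentation, which matches the textbook consensus.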
 
Hey guys, I wish to ask if there is any loss of generality or some other fault in my own answer to a problem (high-school math, geometry). What follows is first the book's answer, and then mine. They are similar, but crucially the book uses an entirely fixed segment (the triangle's height from A) as the linking stable property, while I chose a segment that is nominally variable but otherwise defined and constrained (the base of the triangle formed by two arbitrary points M taken as midpoints of the other sides). Both answers rest on the same two theorems. Thanks for any help :)

[image: the book's solution and my solution]

(don't mind the shading, it means nothing and is an artifact)

(edit: at least going by the main math subreddit, my way is ok too)
 
10^100 certainly exists, otherwise there must exist an integer that, when 1 is added to it, results in something that is not an integer.

Can a number exist if no one, nor any sufficiently reliable machine, has ever calculated it? One could suppose we don't have proof of a structure or a number until someone has constructed it.
 
Spoonwood:

I'd say it exists as a potentiality (at least in the imagination of mathematicians), as a logical consequence of a self-consistent set of axioms, the axioms themselves being actualized constructions in thought. To deny the existence of 10^100 is to deny the potential to directly experience it in some way, even in principle, given enough time and space, which makes little sense to me in a universe estimated to contain some 10^80 particles.

In any case if 10^100 didn't exist, then we must declare log(10^100) = 100 to be a false statement, and we must set what seems to me to be a fairly strict and arbitrary limit on the possibilities of physical existence (such as the multiverse concept), and even mind itself. Would that not be a sad thing?

Just my opinion, of course!

Edit: look up the number TREE(3). 10^100 is nothing compared to it, and yet TREE(3) arises concretely: it is the length of the longest possible sequence of certain graphs (sets of points and lines) called trees that can be grown subject to certain rules.
 
I have to point out that 10^100 is my own pick among the examples Sabine gave, which could cause something to be lost in translation. Here is what Wikipedia says:

Thus some ultrafinitists will deny or refrain from accepting the existence of large numbers, for example, the floor of the first Skewes's number, which is a huge number defined using the exponential function as exp(exp(exp(79))), or e^(e^(e^79)).

The reason is that nobody has yet calculated what natural number is the floor of this real number, and it may not even be physically possible to do so. Similarly, 2↑↑↑6 (Knuth's up-arrow notation) would be considered only a formal expression that does not correspond to a natural number. The brand of ultrafinitism concerned with physical realizability of mathematics is often called actualism.
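
For a feel of why such expressions outrun any physical computation, here is a small sketch of up-arrow evaluation (my own illustration, not from the Wikipedia article):

```python
def up(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation,
    and each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 2, 4))             # 2↑↑4 = 2^(2^(2^2)) = 65536
print(len(str(up(2, 2, 5))))   # 2↑↑5 = 2^65536 has 19729 digits
# 2↑↑6 = 2^(2^65536) already cannot be written down in this universe,
# and 2↑↑↑6 is an iterated tower of ↑↑ operations far beyond even that.
```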
 
Well, y'know, I just don't see the limitations of the physical realm (or our intellects) as being something that mathematics—essentially a realm of pure thought—must answer for, or be circumscribed by. Ultimately mathematics is founded on axioms and definitions. Do the real numbers exist? Certainly: there are numerous (pun intended?) ways to define them. The complex numbers do not physically exist at all, yet many physical problems can only be reasonably solved with their use.

I guess what I'm trying to say is that I find the ultrafinitist viewpoint rather silly. To steer around the singularities that arise in current physical theories I'd say discretization at the Planck scale would be something to consider, and maybe this is what ultrafinitists have in mind, but to say 10^100 doesn't exist mathematically (if that's what they really say) is absurd.
 

The biggest controversy in maths could be settled by a computer


For over a decade, mathematicians have failed to agree whether a 500-page proof is actually correct. Now, translating the proof into a computer-readable form may finally settle the matter.

One of the most controversial debates in mathematics could be settled with the aid of a computer, potentially ending a bitter argument about a complex proof that has raged for more than a decade.

The trouble began in 2012, when Shinichi Mochizuki at Kyoto University in Japan stunned the mathematical world with a sprawling 500-page proof for the ABC conjecture, an important unsolved problem that strikes at the very heart of what numbers are. The proof used a highly technical and abstruse framework invented by Mochizuki, called inter-universal Teichmüller (IUT) theory, which appeared impenetrable even to most expert mathematicians seeking to understand it.

The ABC conjecture, which is now more than 40 years old, involves a seemingly simple equation relating three whole numbers, a + b = c, and dictates how the prime numbers that make up these numbers must relate to one another. As well as giving deep insights into the fundamental nature of how addition and multiplication interact, the conjecture has implications for other famous mathematical problems, such as Fermat's Last Theorem.

These potential ramifications made mathematicians initially enthusiastic about verifying the proof, but early efforts faltered and Mochizuki bemoaned that more effort had not been made to digest the work. Then in 2018, two prominent German mathematicians, Peter Scholze at the University of Bonn and Jakob Stix at Goethe University Frankfurt, announced they had located a possible chink in the proof’s armour.

But Mochizuki rejected their argument and, with no grand adjudicating body to rule on who was right or wrong, the validity of IUT theory froze into two camps: on one side, most of the mathematical community; on the other, a small group of researchers loosely affiliated with Mochizuki and the Research Institute for Mathematical Sciences in Kyoto, where he is a professor.

Now, Mochizuki has proposed a possible solution to the stalemate. He has suggested translating the proof from its current form, in a mathematical notation designed for humans, to a programming language called Lean, which could be automatically checked and verified by a computer.

This process, called formalisation, is an ongoing area of research that could completely change the way mathematics is done. Formalising Mochizuki’s proof has been suggested before, but this is the first time he has indicated a desire to move forward with the project.
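
For a sense of what formalisation means in practice, here is a toy Lean 4 proof (my own illustration; nothing like the scale of IUT):

```lean
-- A machine-checked proof: Lean accepts this only because every
-- step reduces to axioms and previously verified lemmas.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point is that arguments over whether a step is valid disappear: the checker either accepts the term or rejects it.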

Mochizuki didn’t respond to a request for comment for this article, but in a recent report, he argued that Lean would be well suited to untangling the sorts of disagreements between mathematicians that have prevented the widespread acceptance of his proof: “[Lean] is the best and perhaps the only technology… for achieving meaningful progress with regard to the fundamental goal of liberating mathematical truth from the yoke of social and political dynamics,” writes Mochizuki.

According to Mochizuki, he was convinced of formalisation’s merits after attending a recent conference on Lean in Tokyo in July, in particular after seeing its ability to handle the sorts of mathematical structures he says are essential for his IUT theory.

This is a potentially promising direction for helping to break the impasse, says Kevin Buzzard at Imperial College London. “If it’s written down in Lean, then it’s not crazy, right? A lot of the stuff in the papers is written in a very strange language, but if you can write it down in Lean, then it means that at least this strange language has become a completely well-defined thing,” he says.

“We want to understand the why [of IUT], and we’ve been waiting for that for more than 10 years,” says Johan Commelin at Utrecht University in the Netherlands. “Lean would be able to help us understand those answers.”

However, both Buzzard and Commelin say that formalising IUT theory would be a mammoth undertaking and would involve translating reams of mathematical equations that currently only exist in human-readable form. This project would be on par with some of the largest formalisation efforts that have ever been completed, which often involve teams of expert mathematicians and Lean programmers, taking months or years.

This daunting prospect may be an unattractive proposition for the small handful of mathematicians qualified to take on the project. “People are going to have to make a big judgement call as to whether they want to sink a lot of their time into working on a project that ultimately might turn out to be a failure,” says Buzzard.

But even if mathematicians do manage to complete the project, and the Lean code shows that Mochizuki’s theorem has no contradictions, mathematicians including Mochizuki himself could still fight over its meaning, says Commelin.

“Lean can have a lot of impact and put an end to the controversy, but only if Mochizuki really sticks to his new resolution to formalise his work,” he says. “If he walks away after four months, saying, ‘Okay, I tried this, but Lean is just too stupid to understand my proof’, then it’s just a new chapter in a very long series of chapters where we’re still stuck with a social problem.”

And, for all the optimism that Mochizuki shows towards Lean, he also agrees with his critics that interpreting the code's meaning could lead to further disagreements, writing that Lean "does not appear at the present time to constitute any sort of 'magical cure' for the complete resolution of social and political issues".

However, Buzzard is hopeful that a successful formalisation might, at least, move the decade-long saga on, especially if Mochizuki succeeds. “You can’t argue with the software,” he says.
 
New Scientist has a thing on Ultrafinitists

There are some good quotes:

Infinity may or may not exist, God may or may not exist, but there is no need for either in mathematics

For most purposes, Zermelo-Fraenkel set theory combined with the axiom of choice (ZFC) works very well – but, shockingly, a giant question mark has hung over its validity for almost a century. In 1931, mathematician Kurt Gödel showed that it is impossible to prove that the axioms of ZFC are consistent within the framework itself. “Nobody’s showed it’s inconsistent, but there’s no deep sense in which we’ll ever convince ourselves that it’s consistent,” says Clarke-Doane.

But 30 years after Gödel placed a bomb at the heart of mathematics, an unexpected character refused to simply wait until it exploded. Instead, Alexander Esenin-Volpin, a Russian mathematician, poet and dissident (see "The rebel mathematician", below), claimed to have outlined a programme for proving the consistency of ZF theory. While ZF is only a subset of the ZFC rulebook, this programme still stood a chance of solidifying contemporary mathematics' bones with an audacious trick: abandoning infinity.

Other mathematicians picked up the ultrafinitist torch. In 1971, Rohit Parikh at the City University of New York wrote a paper that cleared up some of the murkiness, showing that the idea of a "small number", though hard to define precisely, can be embedded in a useful theory. He developed a mathematical theory where all numbers were kept smaller than a certain largest number, such as 2 "tetrated" to 1000, which is equal to 2 raised to the power of 2 raised to the power of 2 and so on 1000 times. While this is far larger than the 10^80 atoms in the universe, it could still be deemed "feasible" within Parikh's theory. By requiring that proofs within his framework must also be kept to a feasible length, Parikh showed that it could remain internally consistent. While unable to fully replace standard mathematics, it was the first successful attempt at a truly ultrafinitist way to do proofs.
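
In symbols, the bound described above (my own rendering of the article's construction):

```latex
{}^{1000}2 \;=\; \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{1000 \text{ twos}},
\qquad \text{already } {}^{5}2 = 2^{65536} \approx 10^{19728} \gg 10^{80}.
```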

What makes a number, or a proof, feasible? This question is at the heart of the ultrafinitist project. Though the issue connects to age-old paradoxes, such as exactly how many grains of sand you have to put together to make a pile, for Parikh, the key concern is to avoid losing track of mathematics’ connection to humanity. “You have to draw a line somewhere. Things have to be related to human activity,” he says. In his view, the ultrafinitist way of thinking orients researchers towards our experience, and he says that, while this approach is still incomplete, “an incomplete approach is better than nothing”.

Others draw inspiration from elsewhere. For Zeilberger, a computer scientist, the fact that computers can only ever approximate infinity – and so are unable to use the fuzzy “very large number” concept that humans rely on – is an argument for doing away with it. His affinity for ultrafinitism started when he first learned calculus, which uses infinitely large or small numbers rather heavily, to his distaste. The rise of calculus in the 17th century cemented infinity’s place in mathematics, but Zeilberger sees this as a historical fluke, a consequence of computers not having been developed earlier, and says that he would love to teach his students calculus without it.

Even non-ultrafinitists concern themselves with the limits of computation – indeed, there is an entire field dedicated to it, called computational complexity. Dean sees ultrafinitism and computational complexity as two sides of the same coin, one more philosophical and the other more practical.

One famous example of computational complexity theory at work is the P versus NP problem, often called the most important problem in theoretical computer science. It captures the difficulty of determining how much computational effort is required to solve certain types of mathematical problem, and whether those solutions can be easily checked.
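
To make the solve-versus-verify asymmetry at the heart of P versus NP concrete, here is a standard toy illustration with subset-sum (my own example, not from the article):

```python
from itertools import combinations

def verify(nums, target, subset):
    """Checking a proposed certificate is fast (polynomial time)."""
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    """Finding a certificate by brute force may need up to
    2**len(nums) attempts -- exponential in the input size."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve(nums, 9)              # finds e.g. [4, 5]
print(cert, verify(nums, 9, cert)) # verification is instant
```

Whether every problem whose solutions are quick to verify is also quick to solve is precisely the open question.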

In the 1980s, building on the work of pioneers such as Parikh, Sam Buss at the University of California, San Diego, developed “bounded arithmetic”, a set of tools for linking mathematical and computational limits when evaluating whether problems can be solved. Using these tools, he was able to identify some problems that are easy to solve and have solutions that are easy to verify. Characterising such matchups as generally as possible is at the core of what it will take to resolve the P versus NP conundrum. “This continues to be a fairly big deal and a central aim of complexity theory,” says Dean. Buss says this work has only become more important with the growth of buzzy new technologies like artificial intelligence and quantum computing, which are raising new questions about the limitations of computation.
 
What makes a number, or a proof, feasible? This question is at the heart of the ultrafinitist project. Though the issue connects to age-old paradoxes, such as exactly how many grains of sand you have to put together to make a pile, for Parikh, the key concern is to avoid losing track of mathematics’ connection to humanity. “You have to draw a line somewhere. Things have to be related to human activity,” he says. In his view, the ultrafinitist way of thinking orients researchers towards our experience, and he says that, while this approach is still incomplete, “an incomplete approach is better than nothing”.
I am perplexed by this quote, because if anything the notion of infinity is, by definition, human (it may or may not also be cosmic, but it is human regardless). Why does something mathematical have to be linked specifically to "activity"? Archimedes famously considered his mechanical creations infinitely inferior to his purer mathematics.
 