1=.999999...?

Then show the definition we cannot find. Also, don't say from first principles. This discussion is about first principles.

I can say from speaking to university Mathematics instructors that the definitions you gave for 'one' and addition are deemed unsatisfactory by most mathematicians. They prefer to leave the terms undefined.

J
 
The alternative is that 0.999... = (sum(9*10^-n) for all n>0)

This has no meaning. None at all. You complain about there not being a real definition, but in this case NO ONE has ever given a definition of sum(9*10^-n) for all n>0 besides it being equal to the limit as n -> infinity. It is the only definition of an infinite sum.

Fake edit :

Then show the definition we cannot find. Also, don't say from first principles. This discussion is about first principles.

J

Are you talking about a definition of addition? Or of 0.99999...? If the latter:

It is the limit as n -> infinity of the sum for k from 1 to n of 9*10^(-k). Which is 1.
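
If it helps to see that numerically, here is a quick sketch (Python, purely illustrative; the variable names are mine):

```python
# Partial sums s_n = sum_{k=1}^{n} 9 * 10^(-k): 0.9, 0.99, 0.999, ...
# The gap to 1 is exactly 10^(-n), which can be made smaller than any
# positive tolerance by taking n large enough -- that is all the limit says.
from fractions import Fraction

for n in (1, 5, 10, 20):
    s_n = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, float(s_n), 1 - s_n)   # gap is Fraction(1, 10**n)
```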
 
This has no meaning. None at all. You complain about there not being a real definition, but in this case NO ONE has ever given a definition of sum(9*10^-n) for all n>0 besides it being equal to the limit as n -> infinity. It is the only definition of an infinite sum.
What I am suggesting is that you need to re-examine your assumptions. The bolded turn of phrase is the issue. Limit is a term of art which basically means close enough. Does an infinite sum necessarily equal its limit? In any event, using limit in its rigorous form presupposes the conclusion. Since you assume the conclusion, I understand why you think the discussion is off base.

J
 
An infinite sum is not rigorously defined in any other way than by a limit.
 
Regardless of the definitions in use, intuitively it seems to make much more sense to claim that 0.9999... is ever approaching a limit
As was said close to the beginning of the thread, people have a very hard time understanding (and even more manipulating) the concept of "infinity".
Hence why this thread constantly pops up (and hence why some people manage to never get it).
 
As was said close to the beginning of the thread, people have a very hard time understanding (and even more manipulating) the concept of "infinity".
Hence why this thread constantly pops up (and hence why some people manage to never get it).

I know that even those poor people could not agree on it, e.g. Cantor with Dedekind, but as usual we are way above that in CFC OT.
 
This is the key phrase. Or, perhaps, approaching is the key word. One approaches a limit. One does not reach it. However, that is the claim if 0.999... = 1. It is all another way of saying close enough.

J

0.999... = 1 in the same way that 0.111... = 1/9 and 0.333... = 1/3. I've never seen anyone try to argue that 0.333... != 1/3, but that is exactly the same claim as 0.999... != 1/1. Repeating decimals are a shorthand way of expressing a fraction as a geometric series.

You can certainly talk about infinitesimals if you want, but 1-0.999... is not an infinitesimal, it's just 0 (i.e. 0.000...), according to the conventional notation. Infinitesimals certainly have mathematical properties that are not quite the same as zero: in fact, they are integral to calculus. ;)
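
To make the "shorthand for a geometric series" point concrete, here is a small sketch (the helper below is my own, not anything standard): summing the geometric series for a repeating block gives back the exact fraction.

```python
from fractions import Fraction

def repeating_decimal(block: str) -> Fraction:
    """Exact value of 0.(block)(block)... via the geometric series
    sum_{n>=1} block * (10^-len(block))^n = block / (10^len(block) - 1)."""
    return Fraction(int(block), 10**len(block) - 1)

print(repeating_decimal("1"))        # 1/9
print(repeating_decimal("3"))        # 1/3
print(repeating_decimal("142857"))   # 1/7
print(repeating_decimal("9"))        # 1  -- same rule, same answer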
 
Then show the definition we cannot find. Also, don't say from first principles. This discussion is about first principles.

I can say from speaking to university Mathematics instructors that the definitions you gave for 'one' and addition are deemed unsatisfactory by most mathematicians. They prefer to leave the terms undefined.

J

Everything in the mathematical systems we use does rest on a small set of axioms. This does not mean that things like addition are "not defined". It just means that you can't derive mathematical systems from nothing (as far as we have been able to work out so far, anyway).

If addition was not defined then you couldn't do anything with it, because there would be no valid mathematical syntax and method for you to use the + operator.
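
Just to illustrate that "+" can be pinned down rather than left mystical, here is a toy, Peano-style sketch (the names are mine, and this is only an illustration of defining addition by recursion on a successor operation, not how the reals are actually constructed):

```python
# Addition on the naturals built from zero and a successor operation.
def succ(n: int) -> int:
    return n + 1  # stands in for the primitive successor

def add(a: int, b: int) -> int:
    if b == 0:                   # a + 0 = a
        return a
    return succ(add(a, b - 1))   # a + S(b) = S(a + b)

print(add(2, 3))  # 5
```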
 
0.999... = 1 in the same way that 0.111... = 1/9 and 0.333... = 1/3. I've never seen anyone try to argue that 0.333... != 1/3, but that is exactly the same claim as 0.999... != 1/1. Repeating decimals are a shorthand way of expressing a fraction as a geometric series.

You can certainly talk about infinitesimals if you want, but 1-0.999... is not an infinitesimal, it's just 0 (i.e. 0.000...), according to the conventional notation. Infinitesimals certainly have mathematical properties that are not quite the same as zero: in fact, they are integral to calculus. ;)
Exactly. Calculus is all about limits. Close enough is the order of the day. It works. What it does not do is define 0.999... = 1. It defines the limit of a converging series as 1. Subtly different.

BTW Was the pun on integral calculus intentional?

Everything in the mathematical systems we use does rest on a small set of axioms. This does not mean that things like addition are "not defined". It just means that you can't derive mathematical systems from nothing (as far as we have been able to work out so far, anyway).

If addition was not defined then you couldn't do anything with it, because there would be no valid mathematical syntax and method for you to use the + operator.
Not quite. It rests on a set of axioms, definitions, and deliberately undefined terms. We do not define point, line, and plane but we have geometry.

J
 
Exactly. Calculus is all about limits. Close enough is the order of the day. It works. What it does not do is define 0.999... = 1. It defines the limit of a converging series as 1. Subtly different.

BTW Was the pun on integral calculus intentional?
The point is that 0.333... is defined as the limit as n --> infinity of the sum from k = 1 to n of 3/10^k. Repeating decimals are defined as a limit at infinity. The limit is implied by the repeating decimal, which is why we can say 0.333... = 1/3 and 0.999... = 1.

The ;) allows you to differentiate between an intentional and an unintentional pun. ;)
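
For what it's worth, spelled out as a formula (this is just the definition above with the geometric-series sum evaluated, nothing new):

```latex
0.333\ldots \;:=\; \lim_{n \to \infty} \sum_{k=1}^{n} \frac{3}{10^{k}}
\;=\; \lim_{n \to \infty} \frac{1}{3}\left(1 - 10^{-n}\right) \;=\; \frac{1}{3},
\qquad
0.999\ldots \;:=\; \lim_{n \to \infty} \sum_{k=1}^{n} \frac{9}{10^{k}}
\;=\; \lim_{n \to \infty} \left(1 - 10^{-n}\right) \;=\; 1 .
```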
 
Exactly. Calculus is all about limits. Close enough is the order of the day. It works. What it does not do is define 0.999... = 1. It defines the limit of a converging series as 1. Subtly different.

BTW Was the pun on integral calculus intentional?


Not quite. It rests on a set of axioms, definitions, and deliberately undefined terms. We do not define point, line, and plane but we have geometry.

J

All the things you claim are undefined are actually defined.

If they weren't defined we couldn't use them. Everything you use in math has to have a solid footing and be derived in some way from those basic axioms.
 
We don't even try to define the number 'one' or the operation 'plus'. Undefineds are not just permitted, they are inescapable. Refer to Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas Hofstadter.
I have a copy at home; please state where he says that. It was quite a while ago that I read that monster, but I don't think he ever made that statement. I think you're likely making an over-generalization of Gödel's theorems here. Refer to Gödel's Theorem: An Incomplete Guide to Its Use and Abuse, by Torkel Franzen.

This sort of problem is ancient. Euclid had five axioms. The sixth would have been that there is only one line through a point parallel to any line not containing the point. He tried and failed to prove it. The problem is the definition of plane. We now use two major forms of non-Euclidean geometry because we accept three types of planes.
The Parallel Postulate is Euclid's fifth axiom.

You do have a point in noting that the fifth axiom is optional and that if you get rid of it you can come up with non-Euclidean geometry. But do note that keeping it is also worthwhile, because you end up with the extremely useful system of Euclidean geometry.

As I see it we can set up our axioms in three ways regarding .999...

1. Set up our axioms such that .999... = 1. If done correctly this provides an internally consistent system that allows one to express all rational numbers in decimal notation and perform arithmetic operations on them. I think that is pretty neato.
2. Set up our axioms such that .999... = something other than 1. I haven't really seen a consistent system that does anything interesting in anything approaching an elegant manner. For instance, if .999... = 1 - ε, where ε is some sort of infinitesimal, then how do you express things like 1 - 10ε, 1 - ε^2, or 1 - sqrt(ε)? I give people participation points for trying to make things workable.
3. Set up our axioms such that .999... is undefined. That's quitter talk!


The alternative is that 0.999... = (sum(9*10^-n) for all n>0) =/= 1.
You get a participation point for trying to come up with an alternative, but you fizzle out by nakedly saying (sum(9*10^-n) for all n>0) =/= 1 by fiat. Seems to me that's something you should have to prove!

Now, I don't know the rules of your made-up system but it seems to me that the following should be the case:

(sum(9*10^-n) for all n>0) = .9 + (sum(9*10^-n) for all n>1)

now subbing in m = n - 1:
(sum(9*10^-n) for all n>0) = .9 + (sum(9*10^-(m+1)) for all m>0) = .9 + (sum(9*(10^-m)/10) for all m>0) = .9 + .1*(sum(9*10^-m) for all m>0)

assuming (sum(9*10^-n) for all n>0) = (sum(9*10^-m) for all m>0):
(sum(9*10^-n) for all n>0) = .9 + .1*(sum(9*10^-n) for all n>0)

subtracting .1*(sum(9*10^-n) for all n>0) from both sides:
.9*(sum(9*10^-n) for all n>0) = .9

and thus:
(sum(9*10^-n) for all n>0) = 1
0.999... = 1
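
Purely as a sanity check on the shift-and-subtract step above, here is a rough sketch in Python (the names are mine):

```python
from fractions import Fraction

def S(n):  # partial sum 9/10 + 9/100 + ... + 9/10^n
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The re-indexing above says the whole sum equals 0.9 plus a tenth of itself;
# for the partial sums the corresponding identity holds exactly:
for n in (2, 5, 10):
    assert S(n) == Fraction(9, 10) + Fraction(1, 10) * S(n - 1)

# And the only x satisfying x = 0.9 + 0.1*x is x = 1:
x = Fraction(9, 10) / (1 - Fraction(1, 10))
print(x)  # 1
```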
 
Which brings you to the point of the discussion. It's a thought experiment like the original tortoise and Achilles race. Convergence is not identity. Modern math says it's close enough. Look back on the discussion and see how often limits are used. The definition of limit is just rigorous mathspeak for close enough.

J
Let me try an argument against your position on this common ground of accepting the existence of limits as "convergence" (if certain math terms come out funky in translation, I apologize; too lazy to look them up in English). The following may point out the obvious, but I've tried to make the argument accessible to every reader.

Define the sequence (a_n) := (0.9, 0.99, 0.999, 0.9999, ...). All of those numbers, of which there are evidently as many as there are natural numbers (since you can count the trailing digits), must have a finite number of digits after the decimal point, even though that number can grow ridiculously large.

Now my aim is to draw a circle around 1 so that infinitely many of these numbers fit inside it.

Imagine the real number line. I draw a circle of radius r < 0.1 around 1. I miss out on fitting 0.9 inside the circle -- but I get all the others.

Next, I draw a circle of radius r < 0.01 around 1. I miss out on fitting 0.9 and 0.99 inside the circle -- but I get all the others. In this fashion, I can make the radius of my circle arbitrarily small, and yet infinitely many terms of the sequence will still fit within it. From among those, invariably, we can pick out a number of the kind 0.99999...9 (finitely many digits). Surely you will agree that 0.99999...9 < 0.9999... (otherwise your argument is that 0.99999...9 == 0.999999... for some finite number of nines, anyway), so by some miracle we shall also always find 0.9999... within our grasp, no matter how tightly we girdle our 1.

You have two options:

a) Define 0.99999... := lim (n to infty) a_n. Then 0.9999... = 1, because lim a_n = 1; see above.

b) If you do not accept this definition, you must define 0.99999... by something other than a limit, e.g. 0.99999... := 1 - d where d is a number such that for all positive real numbers x it holds that 0 < d < x. As the set of rationals lies densely within R, this is rendered absurd unless you define the real numbers differently as well, or assert that d is not a real number, in which case you must define an alternative algebraic structure in which the expression "1 - d" makes any sense. Since R is the unique Archimedean ordered field in which all Cauchy sequences converge, defining it differently must lose at least one of these properties. In any case I'd be interested in hearing your suggestion.

Edit: I bet 0.99999... dollar (sic) on this argument being featured, in some form, on every other page in this thread. But the point must be made clear: the assertion that 0.99999... != 1 cannot be reconciled with 0.99999... being a real number (because then d = 1 - 0.99999... would have to be real -- but this leads to a contradiction).
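
If it helps, the "circle" argument can also be phrased computationally (a rough sketch; the helper and its names are mine): for any radius, only finitely many terms of the sequence fall outside the circle around 1.

```python
from fractions import Fraction

def terms_outside(eps: Fraction, how_many: int = 50) -> list:
    """Indices n for which a_n = 1 - 10^(-n) does NOT lie strictly inside
    the circle of radius eps around 1. Always finitely many, however small
    eps is, because the gap 1 - a_n is exactly 10^(-n)."""
    return [n for n in range(1, how_many + 1)
            if 1 - (1 - Fraction(1, 10**n)) >= eps]

print(terms_outside(Fraction(1, 10**3)))   # [1, 2, 3]
print(terms_outside(Fraction(1, 10**7)))   # [1, 2, 3, 4, 5, 6, 7]
```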
 
Let me try an argument against your position on this common ground of accepting the existence of limits as "convergence" (if certain math terms come out funky in translation, I apologize; too lazy to look them up in English). The following may point out the obvious, but I've tried to make the argument accessible to every reader.

Define the sequence (a_n) := (0.9, 0.99, 0.999, 0.9999, ...). All of those numbers, of which there are evidently as many as there are natural numbers (since you can count the trailing digits), must have a finite number of digits after the decimal point, even though that number can grow ridiculously large.

Now my aim is to draw a circle around 1 so that infinitely many of these numbers fit inside it.

Imagine the real number line. I draw a circle of radius r < 0.1 around 1. I miss out on fitting 0.9 inside the circle -- but I get all the others.

Next, I draw a circle of radius r < 0.01 around 1. I miss out on fitting 0.9 and 0.99 inside the circle -- but I get all the others. In this fashion, I can make the radius of my circle arbitrarily small, and yet infinitely many terms of the sequence will still fit within it. From among those, invariably, we can pick out a number of the kind 0.99999...9 (finitely many digits). Surely you will agree that 0.99999...9 < 0.9999... (otherwise your argument is that 0.99999...9 == 0.999999... for some finite number of nines, anyway), so by some miracle we shall also always find 0.9999... within our grasp, no matter how tightly we girdle our 1.

You have two options:

a) Define 0.99999... := lim (n to infty) a_n. Then 0.9999... = 1, because lim a_n = 1; see above.

b) If you do not accept this definition, you must define 0.99999... by something other than a limit, e.g. 0.99999... := 1 - d where d is a number such that for all positive real numbers x it holds that 0 < d < x. As the set of rationals lies densely within R, this is rendered absurd unless you define the real numbers differently as well, or assert that d is not a real number, in which case you must define an alternative algebraic structure in which the expression "1 - d" makes any sense. Since R is the unique Archimedean ordered field in which all Cauchy sequences converge, defining it differently must lose at least one of these properties. In any case I'd be interested in hearing your suggestion.

Edit: I bet 0.99999... dollar (sic) on this argument being featured, in some form, on every other page in this thread. But the point must be made clear: the assertion that 0.99999... != 1 cannot be reconciled with 0.99999... being a real number (because then d = 1 - 0.99999... would have to be real -- but this leads to a contradiction).
This is very good. It involves a level of topology beyond my casual competence. That said, to say that Q is dense in R is to say that closure(Q) = R, where closure(Q) is Q union the set of limit points of Q. 0.999... is a limit point of 1. If you expand Q to the algebraic numbers, it still works. 0.999... is in every neighborhood of 1. From this, we can infer that 0.999... is transcendental.

J
 
This is very good. It involves a level of topology beyond my casual competence.
I suppose you are either blissfully kind and youthful, or outright mocking me, and will assume the former because you don't seem like the mocking kind.

But really -- my post, in itself, does not. Where you're proposing to go, however... More in a minute.
0.999... is in every neighborhood of 1.
Let's accept that definition: Take the metric space (X, d) with X = R and d being any metric. You are saying that for all epsilon > 0 it holds that d(0.999..., 1) < epsilon. This implies that d(0.999..., 1) = 0, thus 1 = 0.999... is proven by definition of a metric, which includes the criterion d(a, b) = 0 <=> a = b. Your only recourse is to introduce the "infinitesimal" number (defined as greater than zero, but smaller than any positive real number, and necessarily not real in itself since you could just plug it in as the epsilon otherwise, apart from the problems I've already mentioned). Then the fun begins -- you must construct your own version of analysis on an algebraic structure that must differ from R in at least one major property:

a) the Archimedean axiom does not hold; or
b) it is not Cauchy-complete; or
c) it is not ordered; or
d) it is not a field.

Amusingly enough, the construction that you're heading towards by axing the first property (hyper-real numbers), whether you know this or not, incorporates another "close enough" promoted to a relation of equivalence, so as to ensure that the resulting structure is a field (which necessitates ab = 0 <=> a == 0 or b == 0), by identifying the sequences (a_n) and (b_n) with each other iff |{n in N | a_n != b_n}| < infinity. Which is precisely what you're arguing against. The idea is to extend R into a field of sequences by defining r := (r, r, r, ...) for any real number r, and then identify the sequences that converge to infinity with "hyper-reals" which grow larger than any real number.

(Pure semantics: even then, 0.999... == 1 stays defined as the limit of the convergent series etc. as usual, but the sequence (0.9, 0.99, 0.999, ...) is identified with precisely what you're proposing.)

(If we accept the infinitesimal number, henceforth called "i" -- not like that letter is reserved for anything important -- as real, then against all intuition, the sequence (1, 1/2, 1/3, 1/4, ...) = (1/n) would not converge on the usual metric space (R, d), where d(a, b) = |a - b|. Nor, in fact, could any sequence converge except those that are eventually constant. -- Proof: Let i be defined such that 0 < i < r for any positive real number r. Let (a_n) be a sequence for which there exist no n_0 and constant c such that a_n = c for all n > n_0. Let epsilon := i. Then for any n_0 and any prospective limit x, there exists an n > n_0 for which d(a_n, x) = |a_n - x| is real and positive, thus > i > 0. -- This is one of the consequences of infinitesimal numbers: metric spaces no longer suffice for analysis; you need to know topology to survive. Other "common-sensical" notions, such as the existence of a supremum/infimum for all nonempty sets bounded from above/below, which the real numbers provide for, do not apply to the hyperreals either.)

In a nutshell, it is possible to define such a system, even fruitful -- but you must acknowledge that you're abandoning the analysis of real numbers in favour of something decidedly more counter-intuitive and absurdly inaccessible to anyone but mathematicians, unlike real analysis. Within the realm of real analysis, 0.999... == 1 directly follows from the definition of a series as the limit of its partial sums, as has been stated.
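
As a concrete illustration of the Archimedean point (a sketch; `candidate` stands for any positive "gap" someone might propose for 1 - 0.999...):

```python
from fractions import Fraction

def beats(candidate: Fraction) -> int:
    """Return an n with 10^(-n) < candidate. Such an n exists for every
    positive candidate (Archimedean property), so no positive real number
    can sit below all of the gaps 1 - a_n -- i.e. 1 - 0.999... cannot be
    a positive real."""
    n = 1
    while Fraction(1, 10**n) >= candidate:
        n += 1
    return n

print(beats(Fraction(1, 1_000_000)))    # 7
print(beats(Fraction(1, 10**100 + 1)))  # 101
```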

Unfortunately, most people who get frustrated with mathematics in school will never even hear the term "series". This is the true problem. I presume that for many such students, the recourse is not to head towards non-standard analysis, but rather to shut themselves off from any effort to understand mathematics, in frustration at something perceived as arbitrary, against common sense, "paradoxical", etc.
 
Trust me, I am capable of mocking when it suits. However, I have never formally studied measure spaces. Most of my topology is from an overview seminar.

Euclid thought it nonsensical to think of a line and a distinct point with no parallel line. I think it's nonsensical to think of an open segment having no endpoint. Sometimes it's a paradigm shift. In any event, it's chewy candy for the mind.

J
 