1=.999999...?

What makes the mathematicians so positive they are absolutely correct about this 0.999... = 1 stuff?

The fact that it can be proved. It follows from the definition of the symbols.

It's like the fact that red is a colour. That's part of the definition of the word "colour", whatever that may be.

You probably know that physics differs from maths in relying on observation; maths, on the other hand, relies on definitions and deduction. Sure, it's possible that all the mathematicians have been mistaken. If you entertain that possibility, though, you should also consider, for example, whether you've learnt the English language the wrong way, so that what you actually said above means "I wanna ride hippopotamus to work". It's possible, but unless given some reason to think so, it's pointless to entertain the idea that it might be true.

As for proof, here it is recapped:

Check the definition of the real numbers here. Any set that satisfies the field, order, and completeness axioms is the real numbers.

The decimal numbers aren't defined by an axiom, but here's the definition that any mathematician would use, and that is used in any book that bothers to give one (mutatis mutandis):

A) A decimal number with integer part 0,
0.p1p2p3p4p5...,
is the sum
p1/10 + p2/100 + p3/1000 + p4/10000 + p5/100000 + ...

B) All the rest in an obvious way from that, for example 4.123123... = 4 + 0.123123...

For that definition you need to know what an infinite sum (a.k.a. series) is. It is the limit of the finite partial sums:
\sum_{k=1}^{\infty} a_k := \lim_{n\to\infty} \sum_{k=1}^{n} a_k

The limit of a sequence (a_n) is defined to be the real number a for which the following holds:
for every e > 0 there is an n_e such that |a_n - a| < e whenever n > n_e,
if such a number a exists.

Now, to show that 0.999... = 1, it is sufficient to show that 1 is the limit of the partial sums. Choose an e > 0 and an n_0 in N such that 10^{-n_0} < e. Then for the n:th partial sum S_n:
|S_n - 1| = 10^{-n} < 10^{-n_0} < e whenever n > n_0.
Since for every positive e there exists such an n_0, this proves that 1 is the limit of the partial sums. (I spell this out only because there are people in the thread who don't understand the logic of epsilon proofs.)

There.
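For anyone who'd rather poke at the partial sums than read the epsilons, here's a small Python sketch (an illustration, not a proof — a finite computation can only check individual instances) using exact rational arithmetic:

```python
from fractions import Fraction

def partial_sum(n):
    """S_n = 9/10 + 9/100 + ... + 9/10^n, computed exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 2, 5, 10):
    s = partial_sum(n)
    # The error term is exactly 10^(-n), matching the proof above.
    assert 1 - s == Fraction(1, 10**n)
    print(n, s)
```

Each partial sum falls short of 1 by exactly 10^(-n), which is what the epsilon argument exploits.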
 
Brennan, didn't you do a physics degree? I'm not saying that confers any special authority to Brennan over, say, Atticus or Leifmik (who, as I understand, are pure maths guys), but I don't think it's fair to accuse Brennan of ignorance or of lacking an education in maths.

For this kind of thing, yeah, you need to have taken a bit more math than I believe most physics degrees require. Or at least to have remembered more of your actual math classes than most physics people needed to. (Where I got my degree all that epsilon/delta stuff would actually have been covered in the first year of undergrad math, if I recall correctly, so the physics guys would also have to take it, but they would have ample opportunity to forget it later.)

Incidentally I don't really qualify as a pure maths guy - my degree was in that but it's been 17 years and I've only worked with computers since then, so a bit rusty. Atticus is better suited to answer real questions.
 
Obvious objection: this relies on an open-ended definition of a decimal (the definition is itself an infinite series), which is precisely what I am questioning. Is an alternative definition impossible? It seems contradictory to say that the integer portion must be finite but the decimal portion can be infinite.
 
The fact that it can be proved. It follows from the definition of the symbols.

It is difficult to be more wrong. First, what definitions? The number 1 is undefined. It and the operation plus (+), also undefined, are used to construct the other numbers. Kurt Gödel proved that such undefined terms are necessary, because complex logical structures, such as mathematics, always loop.

Second, from our understanding of the undefined term 1, it can be and is disproven.

J
 
@Brennan:
I don't know what you mean by open ended. An infinite series, or a sum of such, is still just a real number.

On finiteness: both the integer and the decimal portion are finite numbers - just a clarification of the terminology.

What you're objecting to is that the decimal part can be notated with an infinite number of digits, right? It's of course impossible to write them all down, but if you have a rule for the digits, you don't have to. When you write 0.999..., the ellipsis at the end means that each p_i = 9. Thus that notation does have an unambiguous definition.

The same goes for any decimal number with a recurring sequence of digits. They are sometimes written differently, though, e.g. by under- or overlining the recurring part.

However, you're right in the sense that some numbers can't be written like that. For example, you can't write 3.1415... instead of pi, since the ellipsis gives no rule for how the digits continue. Thus there are numbers that can't be written down as decimals this way. (This isn't such a bad thing, though: no matter what notation humans choose, there are always numbers they can't write. Those numbers are in fact the vast majority of the real numbers.)
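The "rule for the digits" idea can even be made mechanical. Here's a small Python sketch (the function name is mine, purely illustrative) that turns a purely repeating decimal 0.(block) into an exact fraction via the standard identity 0.(block) = block / (10^k - 1), where k is the block length:

```python
from fractions import Fraction

def repeating_decimal(block):
    """Exact value of 0.(block), e.g. '123' -> 0.123123123..."""
    return Fraction(int(block), 10**len(block) - 1)

print(repeating_decimal("123"))  # 41/333
print(repeating_decimal("9"))    # 1 -- the thread's whole question in one line
```

With block "9" this gives 9/9, i.e. exactly 1, which is just the series definition evaluated in closed form.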

EDIT:
It is difficult to be more wrong. First, what definitions? The number 1 is undefined. It and the operation plus (+), also undefined, are used to construct the other numbers. Kurt Gödel proved that such undefined terms are necessary, because complex logical structures, such as mathematics, always loop.

The definition of the real numbers provided two times already in this thread.

1 isn't undefined. It is defined to be the neutral element of multiplication.

Second, that + isn't defined doesn't matter. What matters is that there must be such a (commutative etc.) operation on a set for it to be the real numbers. After that, if you want the real numbers to have names such as 2 or 3, they are customarily defined as 1+1, 1+1+1, etc.

It would still be a good idea to study these things instead of yelling out things that just happen to pop into your head.
 
The definition of the real numbers provided two times already in this thread. 1 isn't undefined. It is defined to be the neutral element of multiplication.

Second, that + isn't defined doesn't matter. What matters is that there must be such a (commutative etc.) operation on a set for it to be the real numbers. After that, if you want the real numbers to have names such as 2 or 3, they are customarily defined as 1+1, 1+1+1, etc.

It would still be a good idea to study these things instead of yelling out things that just happen to pop into your head.

I suppose it depends on your system. Defining 1 as the multiplicative identity makes sense. Peano postulated 1 as the number with no predecessor.

Some of them do not add up. That's OK, since addition is undefined, but there are still problems with semantics. Bertrand Russell, who was a smart guy, thought that all mathematics could be defined. Then Gödel took a shotgun and blew a hole in the bottom of Russell's boat. Now the idea of truly defining the real numbers does not float.

What you are dealing with is our inability to even comprehend continuity. We have an intuitive feel for it, because we live in an analog world. However, when closely examined, intuition fails, so we simplify. For any number, there is an uncountable set of real numbers which are indistinguishable from it for most practical purposes. Most, but not all. The exceptions can be very useful. For example f(x) = 1/x, near x = 0. We say that the function is undefined and open at 0. We also say that the two curves have no endpoints, but it would be more correct to say that the endpoints are undefined.

Here is the thing about studying mathematics. The more you study, the more questions arise. One such question is why do we claim that 0.999... = 1 when it is intuitively so false. The answer is that we are using a simplification. The algebraic numbers are dense on the real numbers. It works for our purposes. In other words, it's close enough.

Multiple Nobel physics prizes have been awarded to those who investigated the "trivial" solutions to equations. I suspect they were also told the area was already settled.

J
 
This is the reason why at work I'd rather use 2/3 than 0.666. I don't know how applications calculate with 0.666, but I feel better using fractions in formulas.
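For what it's worth, that instinct is sound: 0.666 is only a truncation of 2/3, and software that needs exactness can use rational number types instead of floats. A quick Python illustration:

```python
from fractions import Fraction

# 0.666 is only a truncation of 2/3, and the error compounds in formulas.
print(0.666 * 3)             # roughly 1.998, not 2
print(Fraction(2, 3) * 3)    # exactly 2
print(Fraction(2, 3) == Fraction(666, 1000))  # False: the shorthand is a different number
```

The last line is really the thread's point in miniature: a finite truncation like 0.666 is a different number from 2/3, whereas the infinite expansion 0.666... is (by the series definition) exactly 2/3.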
 
I suppose it depends on your system. Defining 1 as the multiplication identity makes sense. Peano hypothesized 1 as the number with no precessors.

Peano's 1 is the 1 of the natural numbers. It's a priori a different thing from the 1 of the real numbers, but you can regard them as the same, as there is a mapping from the naturals to the reals that preserves addition.

That's OK, since addition is undefined, but there are still problems with semantics. Bertrand Russell, who was a smart guy, thought that all mathematics could be defined. Then Gödel took a shotgun and blew a hole in the bottom of Russell's boat.

That's not what Gödel did, and I don't believe that's what Russell thought. That some things can't be defined is, I'd think, a fairly obvious point, and Russell had no illusions about it. (I could be wrong about this, though.)

Gödel proved that there are truths that can't be proven. That's a different thing, and it blew a hole in Hilbert's boat, not Russell's.

Now the idea of truly defining the real numbers does not float.

Why not? What's wrong with the definition there is?


Here is the thing about studying mathematics. The more you study, the more questions arise. One such question is why do we claim that 0.999... = 1 when it is intuitively so false. The answer is that we are using a simplification. The algebraic numbers are dense on the real numbers. It works for our purposes. In other words, it's close enough.

You're using false intuition. You probably believe that numbers are their decimal representations, or something like that. The cure for this would be to properly study the basics of maths.

Also, I believe your notion of algebraic number isn't right. Why would you think that 0.999... isn't algebraic or that the concept is of any relevance here?

Multiple Nobel physics prizes have been awarded to those who investigated the "trivial" solutions to equations. I suspect they were also told the area was already settled.

Really? Even if so, for every previously misunderstood Nobel winner there are thousands of students who have shoddy understanding of the basics. If we should consider the possibility that you're right, surely you should also entertain the idea that you're wrong.
 
Thanks Lucy; I'm glad it is not an actual headache. But wait, if 0.99999... = 1, then it isn't an approximation and my headache is real!!! Or have I got that turned around? Uppi, do I have a headache?

Yes. This thread causes real headaches, not approximate ones.

Magnetic field strength (H) is a vector quantity that tells you the magnitude and direction of the magnetic field. Magnetic flux density (B) is a vector quantity that tells you the magnitude and direction of the magnetic field. A conference decided at some point that these two identical things were different. Nobody seems to be able to explain why, and most of the talk pages on wiki on the subject are various experts disagreeing vehemently about the matter. It seems to me that they could just as well have decided that they were exactly equivalent and done away with one (H most likely, as it originated from the notion of the field as originating from dipoles instead of current loops).

H and B differ by the magnetic permeability. In a vacuum, there is no difference except for a prefactor, but in a medium and especially at the interface to other media they behave differently. Most of the time you can get away with just the B-field, but especially when dealing with different media the calculation can become much simpler if you use the H-field.
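To make the "prefactor" concrete, here's a minimal Python sketch of the linear-medium relation B = mu_0 * mu_r * H (the mu_r value for iron below is just an illustrative order of magnitude, not a measured constant, and linear isotropic behaviour is itself a simplification):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability in T*m/A (pre-2019 defined value)

def flux_density(H, mu_r=1.0):
    """B = mu_0 * mu_r * H for a linear, isotropic medium (a simplification)."""
    return MU_0 * mu_r * H

H = 1000.0                           # field strength in A/m
print(flux_density(H))               # B in vacuum: just the mu_0 prefactor
print(flux_density(H, mu_r=5000.0))  # illustrative soft iron: same H, far larger B
```

In vacuum the two quantities track each other exactly; inside a medium the same H produces a very different B, which is where keeping both becomes convenient.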

Multiple Nobel physics prizes have been awarded to those who investigated the "trivial" solutions to equations. I suspect they were also told the area was already settled.

That is the difference between physics and math. In physics nothing is ever truly settled (but the space for new physics can become vanishingly small). Once a math proof is correct, that matter has been settled for all eternity.
 
How has this thread even reached page 15 when MadViking solved it in post 6???

Here's my even more simple formula:

1/3 = 0.3333333333..., and multiplying both sides by 3: 1 = 0.9999999999...

Hurr durr.
Obviously, I'd think, if you say that 1 is in fact not quite 0.999..., you have to say the same for 1/3 and 0.333... So this is as much a non-solution as there ever was.
That is the difference between physics and math. In physics nothing is ever truly settled (but the space for new physics can become vanishingly small). Once a math proof is correct, that matter has been settled for all eternity.
I have to disagree.
Quantity, and the relation between different quantities, is not just an abstract concept.

To say that 1 = 0.999... is not just a mathematical proposition. It is at the same time also a proposition about physical reality regarding a quantity of 1 and a quantity of 0.999... We just have no way to test this physical reality, because our instruments lack infinite precision.
However, it is certainly possible that the rules of mathematics which would have 0.999... = 1 do not actually reflect this untestable physical reality of the relation of quantities. In that case it would be wrong in my book, and the mathematical rules that say otherwise would be flawed.

Regarding the actual proof:
It has already been suggested that 0.999... is not so much an actual number as a series of numbers. It doesn't represent an actually fixed quantity; it just tries to get infinitely close to the fixed quantity it is supposed to be. And that is not quite the same as being that quantity. Simple as that. If you can't explain to me how those would be the same, I don't really care about your proof, since if the proof were any good, it should be able to tell you why the difference doesn't matter, IMO. Otherwise it is just convenient mathematical definitions, I believe, and meaningless for the question of whether 1 = 0.999..., since it may as well merely prove that we managed to create consistent mathematical rules which allow for it.
 
However, it is certainly possible that the rules of mathematics which would have 0.999... = 1 do not actually reflect this untestable physical reality of the relation of quantities. In that case it would be wrong in my book, and the mathematical rules that say otherwise would be flawed.

Maths doesn't claim that it reflects physical reality. Its claim is: "if these axioms hold, then these theorems are also true".

If people became convinced that the axioms don't reflect physical reality, other ones would be proposed, researched, and so on. That would, however, not affect in any way the things proven from the previous axioms. They would be as true as ever before; they just wouldn't be as applicable any more.

As an example, Euclidean geometry is still as true as it was 2000 years ago, even though physicists (AFAIK, correct me if I'm wrong) think that non-Euclidean geometry describes spacetime better.

EDIT:
It has already been suggested that 0.999... is not so much an actual number as a series of numbers. It doesn't represent an actually fixed quantity; it just tries to get infinitely close to the fixed quantity it is supposed to be.

Why would you think that is true? What is the difference between a number and a "series of numbers"? Is 123 a series of numbers or a number? Is 0.22 a number or a series of numbers? What is a number, in your opinion? How does a series of (supposedly) inanimate objects "try" to get close to something? What does it mean to be infinitely close to something?

If you first answer these questions satisfactorily, we can continue with the rest of your post.

Also, the misconception seems to be that 0.999... would somehow "be the same as" the sequence
0.9, 0.99, 0.999, 0.9999,...
Why would that be?
Is 1.5 similar to
1, 1.5?
Is it similar to
1, 1.5, 1.5, 1.5,... ?
Why not
1, 1.5, 1.6, 1.5, 1.5, 1.5.... ?

If you reject the definitions that are used in maths, can you give some coherent account on what are numbers, and how they should be interpreted?
 
H and B differ by the magnetic permeability. In a vacuum, there is no difference except for a prefactor, but in a medium and especially at the interface to other media they behave differently. Most of the time you can get away with just the B-field, but especially when dealing with different media the calculation can become much simpler if you use the H-field.
H and B are related by the permeability, yes. But the electric field E is attenuated by the permittivity, and we don't suddenly use an extra term for the modified field, give it its own unit, and declare at international metrology conferences that the two fields are fundamentally different phenomena.

Calculations with the H field are only preferred in materials because the derivation is easier, not because it is a radically different entity - how could it be, when the difference is a mere constant?

I could also wonder why we also need to supplement B and H with a Magnetising field. And sometimes also a de-magnetising field.

The field is a mess.
 
I don't know if this is the "gotcha" that I think it is, but is 0.000... = 0? Or is 0.000... a procedure that ultimately generates the number 0 as an approximation?

I'm sure you'll say that 0.0 = 0; and 0.00 = 0; and 0.000 = 0. But we're not talking about any of those numbers, we're talking about 0.000.... So who cares what the intermediate numbers are? You can surely guess where it's going, and you can take a limit, and you can do all the same clever maths that people have done for 0.999.... But Brennan (et al)'s argument is that all that maths stuff doesn't imply that 0.999... is actually a number, much less that it actually equals 1. So applying the same logic to 0.000..., surely there is no way of proving that 0.000... actually equals 0 either?
 
Every iteration of the series would return the same result: 0. Not really a process if it isn't changing.
 
So? The process is identical to 0.999..., except it's a different number on the top of the fraction. Instead of 9/10 + 9/100 + 9/1000 + ... it's 0/10 + 0/100 + 0/1000 + ... . Just because the result isn't changing at each step doesn't mean that it isn't in fact a set of steps. I mean, this whole time, you've basically just been saying that the ... denotes a process of generating new numbers. So what if all the numbers are zeros instead of nines? That doesn't change any of the rest of your argument. At least, it shouldn't. You've already said that your argument is generalisable for all numbers 1 through 9. Why not 0? Is this not a counterexample to your argument? Your argument doesn't work if the number on the top of the fraction happens to be a zero, after all. So the argument is flawed, no?
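The symmetry between the two cases is easy to see computationally. A small sketch (illustrative only, assuming the partial-sum reading of the ellipsis) comparing the partial sums for digit 0 and digit 9:

```python
from fractions import Fraction

def partial_sums(digit, n):
    """First n partial sums of digit/10 + digit/100 + ... (exact arithmetic)."""
    out, s = [], Fraction(0)
    for k in range(1, n + 1):
        s += Fraction(digit, 10**k)
        out.append(s)
    return out

# For 0.000... every partial sum already equals the limit, 0.
assert all(s == 0 for s in partial_sums(0, 10))
# For 0.999... the partial sums are 9/10, 99/100, 999/1000, ... with limit 1.
assert partial_sums(9, 4)[-1] == Fraction(9999, 10000)
```

It's the same construction in both cases; the only difference is that for digit 0 the "process" happens to sit at its limit from the very first step.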
 