Talk:0.999...


0.999... is a featured article; it (or a previous version of it) has been identified as one of the best articles produced by the Wikipedia community. Even so, if you can update or improve it, please do so.
This article appeared on Wikipedia's Main Page as Today's featured article on October 25, 2006.
Article milestones
Date              Process                      Result
May 5, 2006       Articles for deletion        Kept
October 10, 2006  Featured article candidate   Promoted
August 31, 2010   Featured article review      Kept
Current status: Featured article

Yet another anon

Moved to Arguments subpage

The article seems incorrect, at the beginning

Firstly, saying ".9 repeating equals 1" is not necessarily true. It is true for real numbers, but not necessarily for hyperreal numbers. I think the statement should be clarified. If .9 repeating equals 1 holds for real numbers but not elsewhere, why could I not say 3 = 1, because 3 = 1 mod 2 and 1 = 1 mod 2? I believe the article should add "In standard analysis" before "This number is equal to 1.", because the statement is not necessarily true in other settings.

--The big parsely (talk) 15:59, 18 January 2022 (UTC)The Big Parsely

Could you say which sentence exactly you are unhappy with? Gesturing at hyperreals isn't enough, because repeating decimals give no way of coherently picking out hyperreals with nonstandard parts. The article already has 0.999...#Infinitesimals, which I think you are aware of. — Charles Stewart (talk) 18:49, 18 January 2022 (UTC)

Firstly, saying ".9 repeating equals 1" is not necessarily true. It is true for real numbers, but not necessarily for hyperreal numbers.

Hyperreal numbers are nonstandard, and it is not necessary to account for them in the lead section. If we took this approach here, then to be logically consistent we would have to provide alternative definitions of the limit and the derivative (which are not elegantly expressed using limits in a hyperreal framework) for every possible construction of math, in every article where they are defined, and in every section (even lead sections); that is just two basic concepts that would be affected, not to mention series and all the related articles on them. It's enough for these constructions to be explained in a subsection, in my view. Also, by the transfer principle, first-order statements such as .9~ = 1 that are true in the real numbers must also be true in the hyperreals.

why could I not say 3 = 1, because 3 = 1 mod 2 and 1 = 1 mod 2?

That's an incorrect statement, in modular arithmetic we would say 3 is congruent to 1 (modulo 2), not that they are equal. Freeze4576 (talk) 17:13, 2 April 2022 (UTC)
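The distinction between congruence and equality can be made concrete. Here is a minimal Python sketch (the function `congruent_mod` is my own illustration, not a standard library API):

```python
def congruent_mod(a: int, b: int, m: int) -> bool:
    """Return True when a is congruent to b (mod m), i.e. m divides a - b."""
    return (a - b) % m == 0

# 3 and 1 leave the same remainder modulo 2, so they are congruent (mod 2) ...
print(congruent_mod(3, 1, 2))  # True
# ... but congruence is a coarser relation than equality: 3 and 1 are distinct integers.
print(3 == 1)                  # False
```

In other words, "3 ≡ 1 (mod 2)" identifies numbers that share a remainder; it never asserts that they are equal as integers, which is why the analogy with 0.999... = 1 does not go through.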
".9 repeating equals 1" is not necessarily true. It is also not necessarily false. It is simply nonsensical if the definition of ".9 repeating" is not given, or if the context where this definition applies is not given. The article describes clearly the meaning of the notation .999... (this is nothing more than a notation) and proves that the number represented by this notation is the number 1. It is never said that the notation is used in other contexts. So there is definitely nothing incorrect there. D.Lazard (talk) 20:55, 2 April 2022 (UTC)
It is also not true that the population of the Earth is 7.9 billion if we, e.g., don't use the short scale, or if we use undecimal. It's sensible, however, to assume the standard meaning of words and notations, at least for a start (i.e. in the lead) - if relevant, we can get into the more exotic things further down in the article.-- (talk) 11:14, 3 April 2022 (UTC)

"Liking" the subject

I don't "like" the subject of this article. I'm used to the idea that "a number can be represented in one and only one way by a decimal", so it's uncomfortable, but I accept it because I know it's true.

Several authors are cited in the "Skepticism in education" section. Does any of them also talk about people who "dislike" the concept but still accept it, for example Tall's case studies? If so, could anything be added to the "Cultural phenomenon" section? A scholarly sentence/paragraph about popular opinion would be at least as good in this section as the current remarks about UseNet and World of Warcraft. 49.198.51.54 (talk) 02:19, 4 April 2022 (UTC)

I think the introductory statement on the number represented by 0.999... will be mostly ignored or not adequately understood by the typical reader. I recommend adding the following to the Wikipedia page, particularly somewhere above where the proofs start.

There's a recurring complaint by editors who don't like to see highly technical content represented in Wikipedia. They say that the only people who can understand it don't need it anyway.
Most of the time they're just wrong about that. There is lots of technical content in Wikipedia that is not readable without some fairly meaty background, but that is very useful to those who have the background.
For the purposes of this article, though, the complaint has some merit. If you understand the real numbers rigorously, the identity of 0.999... and 1.000... is a trivial fact of minor interest (the topological implications can be a little more interesting, but this would not be the right title for that article).

Representation of a real number

To understand the equality, and its proofs, it is necessary to know exactly what is meant by the standard real number 0.999... Failure to understand what 1 = 0.999... means often results from a failure to understand what the number 0.999... means.

This representation of a real number is unlike more basic representations of 2, or 2.1, or 1/3, or ${\displaystyle {\sqrt {2}}}$, each of which can be understood in more immediate and intuitive terms. By contrast, 0.999... must be understood in terms of a limit of a sequence.

Note that the statement 1 = 0.999... claims that the number represented by 1 is the same as the number represented by the limit 0.999... The fact that a single number can have many representations is not unique to limits. In fact the number 1 can be represented as ${\displaystyle 2-1}$ or ${\displaystyle 2\div 2}$, or by the English phrase "the least integer greater than zero". The fact that a real number can be represented both by the decimal sequence 1.000... and by the decimal sequence 0.999..., or that it may be represented in any number of other ways, is neither problematic nor unique to this number.

To understand which number is represented by 0.999... consider the sequence of numbers, 0.9, 0.99, 0.999, and so on. Put intuitively, "the limit of this sequence" is the number which the sequence becomes close to, eventually. So by saying that 1 = 0.999..., this means the same as saying that 1 is the limit of the sequence 0.9, 0.99, 0.999... And then by saying that 1 is the limit of this sequence, this means that 1 is the number which the sequence becomes close to, eventually.
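The informal phrase "becomes close to, eventually" can be stated precisely with the standard epsilon-N definition of a limit (a sketch added for clarity; the exact gap 1/10^n follows from the partial-sum formula for the sequence 0.9, 0.99, 0.999, ...):

${\displaystyle {\text{for every }}\varepsilon >0{\text{ there is an }}N{\text{ such that for all }}n\geq N\colon \quad \left|1-\sum _{k=1}^{n}{\frac {9}{10^{k}}}\right|={\frac {1}{10^{n}}}<\varepsilon .}$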

More rigorously stated, the number 0.999... is defined as the following limit.

${\displaystyle \lim _{N\to \infty }\sum _{n=1}^{N}{\frac {9}{10^{n}}}}$

Therefore the statement 1 = 0.999... is the same as the statement 1 = ${\displaystyle \lim _{N\to \infty }\sum _{n=1}^{N}{\frac {9}{10^{n}}}}$.
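This limit can be illustrated numerically (a sketch using Python's exact rational arithmetic; the helper name `partial_sum` is my own, not from the article):

```python
from fractions import Fraction

def partial_sum(N: int) -> Fraction:
    """Exact value of s_N = sum_{n=1}^{N} 9/10^n, i.e. 0.9, 0.99, 0.999, ..."""
    return sum(Fraction(9, 10**n) for n in range(1, N + 1))

for N in (1, 2, 3, 10):
    s = partial_sum(N)
    # The gap to 1 is exactly 1/10^N, which shrinks toward 0 as N grows.
    assert 1 - s == Fraction(1, 10**N)
    print(N, s, 1 - s)
```

Because the gap 1/10^N can be made smaller than any positive number by taking N large enough, 1 is the limit of the sequence; the computation does not prove this, but it shows exactly what the limit statement asserts.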

Addemf (talk) 17:23, 11 September 2022 (UTC)

In fact, as I scan the arguments sub-page, almost every single argument against 1 = 0.999... seems to stem not from any part of any of the proofs, but from simply misunderstanding what is expressed by 0.999... Addemf (talk) 17:43, 11 September 2022 (UTC)
Yes, but part of that is a misunderstanding of limits. The article deliberately avoids language like "becomes close to, eventually" as it is liable to be misinterpreted by the ignorant to mean that 0.999... is close to but not exactly 1. Hawkeye7 (discuss) 18:46, 11 September 2022 (UTC)
I'm open to finding a better way of expressing the meaning of 0.999... I'm not committed to how I did it here. But I do think a section that emphasizes
(1) most people assume various meanings of 0.999... which are not the ones used in the claim 1 = 0.999... and
(2) the correct definition is [insert here]
would do a LOT more for resolving confusion, than analysis proofs for people who almost certainly are unequipped to read an analysis proof. Addemf (talk) 20:49, 11 September 2022 (UTC)
It's all in section 0.999...#Infinite series and sequences. - DVdm (talk) 21:24, 11 September 2022 (UTC)
Alright, if you think that's enough, so be it. Just seems like precisely zero people are going to get what's going on there, except people who already know analysis. Addemf (talk) 23:17, 11 September 2022 (UTC)
I agree. The last paragraph of that section is i.m.o. the most — and perhaps the only — important part of the subject and it's hidden way too deep in the remainder of the article. I think it deserves a more prominent place, but consensus seems to have sort of hidden it a bit. - DVdm (talk) 08:17, 12 September 2022 (UTC)

Why are algebraic proofs only listed as "arguments"?

1/3 = 0.333..., so multiplying both sides by 3 we get 1 = 0.999...

This is the simplest way to show this identity, and I think it is very important for this article. Why is it only listed as an "argument" and not as a "proof"? Are there problems with this proof?

If there are indeed problems with this proof, I think we should include this proof and explain its problems in this article, because I think it's important for this topic. Cooper2222 (talk) 04:42, 31 October 2022 (UTC)

It lacks the rigor of a true formal mathematical proof. We include it for pedagogical reasons. As the article states, it is easily understood, and satisfies most readers. Hawkeye7 (discuss) 05:31, 31 October 2022 (UTC)
Another answer that may (or may not) better address User:Cooper2222's doubts: The argument based on 1/3 is not wrong as such, but how do you know that 0.333... * 3 = 0.999...? It's true, but is it completely obvious, with no room for doubt? One can prove that the usual algorithms for arithmetic operations with decimal numbers are correct for terminating decimals, but do we know they are valid for non-terminating ones too? The proper proof of 0.999... = 1, as given in the article, goes to first principles, so we know that 0.999... = 1, but it is satisfying and reassuring that less rigorous arguments like the one based on 1/3 get it right too. -- Note that you could easily construct a "proof" that 0.999... < 1, based on principles that are valid for terminating decimals, but the proof from first principles shows this to be wrong.-- (talk) 08:27, 31 October 2022 (UTC)
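The worry about digit-wise multiplication can be checked at every finite stage with exact rational arithmetic (an illustrative Python sketch, not a proof about non-terminating decimals themselves):

```python
from fractions import Fraction

# Exact: 3 * (1/3) = 1, with no decimal representation involved at all.
assert 3 * Fraction(1, 3) == 1

# Every truncation of 0.333... times 3 gives the matching truncation of 0.999...:
# 0.3 -> 0.9, 0.33 -> 0.99, 0.333 -> 0.999, ...
for n in (1, 2, 3, 5):
    truncated = Fraction(10**n // 3, 10**n)        # first n digits of 0.333...
    tripled = 3 * truncated
    assert tripled == Fraction(10**n - 1, 10**n)   # first n digits of 0.999...
    print(truncated, "->", tripled)
```

This shows the digit pattern 0.333... * 3 = 0.999... holds at every finite stage; the step that genuinely needs a limit argument is passing from the truncations to the non-terminating decimals.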
The claim 0.333... * 3 = 0.999... is going to be true under pretty much any interpretation. The harder step is actually 0.333... = 1/3. People accept that because they're used to it and it's the answer that comes out of short division, but what if 1/3 just doesn't have a decimal representation, and short division just doesn't give an exact answer here?
That would be the situation, for example, working in Fred Richman's "decimal numbers" (which don't allow subtraction or negative numbers).
But as I said above, we should try to explain the situation without requiring explaining limits, which are a big step for readers who need this article. The Archimedean principle is easier to explain, I think. --Trovatore (talk) 19:02, 1 November 2022 (UTC)
User:Trovatore, you argue from the point of view of someone knowing a lot about mathematics and different number constructions. I try to argue from the point of view of the reader we are trying to help here. It is - such a reader would say - common knowledge that 1/3 = 0.333... (and it is true, too, even if the arguments are complicated and only valid for some constructions of the numbers). Although most number constructions that include 0.333... also agree that 0.333... * 3 = 0.999..., it is not trivial.-- (talk) 08:24, 10 November 2022 (UTC)
Well, it's more trivial than the other equality involved, namely 1/3 = 0.333.... --Trovatore (talk) 07:02, 23 November 2022 (UTC)

We should include them and explain why they are not rigorous

If these arguments are not rigorous, I think we should include them and explain clearly why they are not rigorous: what assumptions are implicitly made or what steps are omitted. A lot of people may have the same doubts and want to know. So it's important to include these in this article. Cooper2222 (talk) 02:27, 1 November 2022 (UTC)

We already provide several rigorous proofs. Hawkeye7 (discuss) 02:30, 1 November 2022 (UTC)
But those don't explain why the algebraic proofs are not rigorous, do they? Cooper2222 (talk) 02:33, 1 November 2022 (UTC)
I agree with User:Cooper2222. Another thing is how to accomplish this in a clear way, and once it is there, how to keep math nerds away from editing it to satisfy themselves rather than the lay reader.-- (talk) 08:28, 10 November 2022 (UTC)
Rigorous proofs are provided. The simple arguments are provided because they are more readily understood by some readers. I would object to their removal. The pedagogical point is that readers more easily accept that 1/3 = 0.333... than that 1 = 0.999... That is because (1) they more easily accept 1/3 as a mathematical construct than 1, which is more familiar and has religious implications, and/or (2) they see 1/3 and 0.333... as processes (verbs) but 1 as a number (noun). As noted above, a proof would be required that 1/3 = 0.333... To do that we would use one of the techniques we use below to show 1 = 0.999... Hawkeye7 (discuss) 09:19, 10 November 2022 (UTC)
User:Hawkeye7, I'm not aware of anyone suggesting removing them; the question is how to present them. And once they are there, how to make the reader understand the distinction between proofs and arguments.-- (talk) 13:20, 10 November 2022 (UTC)
I have added an introduction to section § Algebraic arguments for explaining why they are not proofs. Nevertheless, I cannot understand why infinite decimals are so often taught before the distinction between a (rigorous) proof and an (informal) convincing argument: infinite decimals are a concept that is rarely used in applications as well as in pure mathematics, while the concept of a proof is the core of mathematics. D.Lazard (talk) 12:39, 11 November 2022 (UTC)
As for why infinite decimals are taught early on, the obvious reason is that once long division has been taught, the case of 1/3 is impossible to avoid. After all, maths is not taught primarily to future mathematicians, who might benefit from a rigorous approach, but to the general population, who need a basic idea about what holds true and how to manipulate numbers. (talk) 12:11, 22 November 2022 (UTC)
I agree that elementary maths courses are not taught for future mathematicians. This is the reason why infinite decimals should not be taught too early, as only mathematicians use them. So, for students who are not future mathematicians, it is much more useful to put emphasis on approximations than on infinite decimals. After all, infinite decimals were not considered before the second half of the 19th century, and before that, finite decimals were sufficient for everybody. D.Lazard (talk) 12:35, 22 November 2022 (UTC)
If you have a bright student in class, there's no way you can escape saying something about long division for 1/3 (and other repeating decimals). Teachers at that level are often not very knowledgeable about more advanced maths, so what they choose to say about such things may be a problem. (And even if you had an expert mathematician to teach, he/she might still be tempted to say something that is not fitting.) But yes, saying that any finite decimal in these cases is an approximation, and leaving it at that, would be appropriate at an elementary level. Though ... my interest in maths was not least piqued by teachers hinting at things beyond what they were supposed to be teaching at the given level (as well as by books by e.g. Martin Gardner, also not too concerned about rigour). (talk) 12:59, 22 November 2022 (UTC)
It just isn't true that "only mathematicians" use infinite decimal expansions. I was introduced to them in the second grade, when we read A Wrinkle in Time, and I don't think I was a mathematician at that point. It might be true that only mathematicians use them in ways such that the infinitude of digits is actually essential to the use, but that's a different claim, and betrays what I consider an overly austere mindset. --Trovatore (talk) 18:39, 22 November 2022 (UTC)
Same here. I also remember my mind being blown by my fourth grade text, which said: "a circle is a set of points". Whoa! My classmates could not see what my problem was, because they thought of a point as being a blob of ink on a page, and having a size. But what I don't recall from high school is being taught about proofs. Instead the teacher would write proofs up on the board. (Byers says this too - see p. 363) Hawkeye7 (discuss) 20:11, 22 November 2022 (UTC)

It might be worth noting that using a not-quite-rigorous argument, or using infinite decimals loosely where it fits, somewhat mimics the historical development. Practical, intuitive use and "proof" usually preceded rigorous analysis (and famous mathematicians arrived at famous results with arguments that are "dubious" from a rigorous perspective). Learning math in the earlier stages often mimics historical development to a degree, and that's why such things occur in math education before a rigorous base/definition is given/available.

As far as our article is concerned the algebraic proofs definitely should be mentioned as they are widespread in (educational) literature, but their issues of course should be mentioned as well. In that sense the current version seems reasonable to me.--Kmhkmh (talk) 16:20, 22 November 2022 (UTC)