Poll: Does it equal 1? (Yes / No; Total Voters: 55)

Author Topic: 0.99999999.......  (Read 20578 times)
muon2


« on: November 10, 2009, 03:23:02 PM »

One area of discrepancy is that the symbol "1" can be used in different contexts. jmfcst's use of it in floating-point arithmetic is one such context. But even in abstract mathematics it can have different meanings.

As a real number, the symbol 1 refers to the number represented by any convergent sequence whose terms become arbitrarily close to that value. Decimal notation is one way of describing such a convergent sequence, as a sum of fractions:

1.000... = 1/1 + 0/10 + 0/100 + 0/1000 + ...
0.999... = 0/1 + 9/10 + 9/100 + 9/1000 + ...

Since these sequential sums converge to the same value they represent the same real number. Other sequential sums also represent the real number 1:

1/2 + 1/4 + 1/8 + 1/16 + ...
4/5 + 4/25 + 4/125 + 4/625 + ...

And all these sequences are equal as real numbers.
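
Here's a quick sketch in Python (assuming SymPy is installed; the variable names are just illustrative) that evaluates those three sums exactly. It's only a check, not a proof:

Code:
# Evaluate the infinite sums above exactly with SymPy's symbolic summation.
from sympy import Sum, Rational, oo, symbols

k = symbols('k', integer=True, positive=True)

# 0.999... = 9/10 + 9/100 + 9/1000 + ...
nines = Sum(Rational(9) / 10**k, (k, 1, oo)).doit()

# 1/2 + 1/4 + 1/8 + 1/16 + ...
halves = Sum(Rational(1) / 2**k, (k, 1, oo)).doit()

# 4/5 + 4/25 + 4/125 + 4/625 + ...
fifths = Sum(Rational(4) / 5**k, (k, 1, oo)).doit()

print(nines, halves, fifths)  # all three print as exactly 1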

However, the symbol 1 can also refer to the integer 1, which it does if it is used, for instance, as the simplest counting number. In that case decimal or other fractional sequences cannot equate to 1, since they are not themselves integers. The statement 1 (integer) = 0.999... (real) is either confusing or meaningless, since the two sides of the equation are different kinds of entities.

When a question like the one for this thread is posted, I assume that the questioner intends for the equals sign to be sensible and relate two items of the same sort. Therefore I answered yes, but I recognize that there is ambiguity in the use of the symbol 1.
muon2


« Reply #1 on: November 11, 2009, 12:25:25 AM »


Epic math fail. Three divided by three is one. Ridiculous to suggest that three thirds is anything but one...srsly...

Precisely. Three divided by three is one. Therefore, three divided by three is 0.9999999...

The difference of opinion could be attributed to a difference in which set of numbers one considers. Describing a relationship like 3/3 = 1 could be a statement about the rational numbers, of which 1/3, 3/3, and 345/86 are all examples. These are the numbers we are most comfortable with, and they give rise to statements like "pi is approximately 22/7." The floating-point numbers on a computer are a finite subset of the rational numbers.

If a person who uses 3/3 = 1 is grounded in the rational numbers, then they would find equating either 3/3 or 1 to 0.999... confusing or meaningless, since 0.999... is not an expression within the rational numbers. It's only when 3 and 1 are used as real numbers that the relationship 3/3 = 0.999... is sensible. Since 3, 3/3, and 1 have more commonplace uses as rational numbers, their ability to serve as real numbers doesn't often come into play.
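
To make the floating-point aside concrete, here's a small Python sketch (purely illustrative values): every float a computer stores is literally a rational number with a power-of-two denominator, which is why 3/3 lands exactly on the integer 1 while 1/10 and 1/3 do not land on exact tenths or thirds.

Code:
from fractions import Fraction

# A binary float is exactly some rational p/q with q a power of two.
print(Fraction(0.1))        # 3602879701896397/36028797018963968, not 1/10
print(Fraction(1 / 3))      # a power-of-two denominator, not exactly 1/3

# 3/3 is representable exactly, so comparison with the integer 1 is exact.
print(3 / 3 == 1)           # True
print(Fraction(3, 3) == 1)  # True, as exact rationals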
muon2


« Reply #2 on: November 11, 2009, 09:15:06 AM »

I do not dispute that 0.9999999... is indeed extremely close to one, but to say that it equals one makes absolutely no sense whatsoever (but when has mathematics ever made sense).  The difference will indeed be very (understatement) small, incalculable in fact, but no matter how much I look at it and think about it, it still strikes me as less than 1.  That said, if I don't need to be too precise when displaying the results of a calculation, I'll just treat it as one all the same (considering that when rounded you get 1), but I'll never be able to view it as equal to one, that is, exactly equal.  I will look into this further.

As I posted earlier, the numbers 0.999... and 1.000... are just decimal representations of the real number 1. Real numbers as a set are defined by convergence, and since both 1.000... and 0.999... converge to the same quantity, they represent the same number. The fact that 1.000... converges faster than 0.999... doesn't change the application of the definition; both still converge.

The notion of convergence is critical to the calculus. It shows up in the use of limits. However, it doesn't get covered much except by mathematics majors in college, so the concept is not strongly part of most school curricula.
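
For anyone who wants to see the convergence concretely, here is a short Python sketch using exact fractions (my own illustration): after n digits the partial sum of 0.999... falls short of 1 by exactly 1/10^n, a gap that can be made smaller than any tolerance you care to name.

Code:
from fractions import Fraction

# Partial sums of 0.999...: 9/10, 99/100, 999/1000, ...
# The gap to 1 after n terms is exactly 1/10**n, shrinking past any tolerance.
for n in (1, 2, 5, 10, 20):
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, partial, 1 - partial)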
muon2


« Reply #3 on: November 11, 2009, 12:59:22 PM »

Actually for all intents and purposes this IS mathematical fact. It's universally agreed upon by mathematicians, probably more so than evolution is agreed upon by those in that field.

With mathematics and the sciences it's generally safer to assume that things like this are not solid facts, only well-supported theories, as there's always the potential that something else could come along and convincingly contradict the matter in question.  However, I don't doubt that this matter is agreed upon more by mathematicians than evolution is by biologists.  But still, for example, no matter how obvious it is in our day-to-day lives, gravity is still a theory, if you catch my drift.

Mathematics and science are fundamentally different in this respect. Scientific views change in light of new facts based on measurements and observations. Scientists' views can differ based on interpretation of the data.

Mathematics is based on a set of definitions and axioms that lead to logical proofs. When I specify a set of definitions, I can draw inescapable conclusions. There may be other definitions, but within the confines of one set of definitions there is complete agreement as to the conclusions drawn from those definitions. At times there are mathematical conjectures not yet proved or disproved, but that is different from the differences in interpretation one sees in science.

In this case of the real numbers we are talking about proved statements. If you wish to use only the definitions for rational numbers, then your skepticism about this thread's subject is based on that different set of definitions. It doesn't affect the conclusions drawn for real numbers.
muon2


« Reply #4 on: December 21, 2009, 03:24:29 PM »

Question for the math and number geeks out there.

Could 1 - .9999(repeating) = *, be regarded as true?
* defined as an infinitesimal

I assume you mean positive infinitesimal. I'm not a mathematician, even of the amateur variety, but here's how I would approach your question:

On the real number line, zero is the only infinitesimal number. Thus, we cannot define 0.999... to be such that 1 - 0.999... is positive infinitesimal, for the simple reason that positive infinitesimal does not exist.

The neglected hyperreal number line does recognize non-zero infinitesimals, both positive and negative. But here, we would seem to encounter the opposite problem: "1 - 0.999... is positive infinitesimal" is ambiguous, because there are an infinite number of positive infinitesimals. Thus, 0.999... would refer not to a hyperreal number, but to a range of hyperreal numbers—all hyperreals smaller than 1 for which the standard part is 1.

Anyway, that's my layman's take. Accept it or reject it.

It can't be 1.999...8 because you can't have a digit after an infinite amount.

As intuitive as that rule is, it strikes me as ultimately an arbitrary one.

In terms of real number analysis, an infinitesimal is not a number, but instead is symbolic of a quantity that is as close to zero as one needs. It's a concept essential to changing from explicit statements of limits to the notation of calculus. Since it's not a real number it can't be the result of the subtraction of two real numbers.
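
Put in terms of limits (a SymPy sketch, assuming it is installed; just an illustration of the point): the "difference" 1 - 0.999... is the limit of 1/10^n as n grows without bound, and that limit is exactly the real number 0, not some nonzero infinitesimal.

Code:
from sympy import symbols, limit, oo

n = symbols('n', positive=True)

# 1 minus the n-digit truncation 0.99...9 equals 10**(-n); its limit is exactly 0.
print(limit(10**(-n), n, oo))  # 0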

As I described earlier in the thread, we often confuse real numbers with rational numbers when they are quite different objects. Part of the confusion is that there is a subset of real numbers that are equivalent to the rational numbers. Rational numbers can be expressed as a fraction of integers, and a mathematical proof of the type involving multiplication and division relies on the behavior of rational numbers.

First it is useful to note that rational numbers often use two different symbols for the same value. For instance, 1/2 is the same value as 3/6. This doesn't seem to bother most people, so let's recognize that number systems can have duplicate ways of expressing a value.

Real numbers are constructed as infinite series of rational numbers that converge to a specific value. That value may be a rational number like 1 or 8/3 or it may be an irrational number like pi or the square root of 2.

There is more than one way to write a sequence of numbers that converges to a given value. For instance 5, 4, 3, 2, 1, 1, 1, 1, ... and 0, 1/2, 2/3, 3/4, 4/5, 5/6, ... both converge to 1. If I go out enough steps, both can get as close to 1 as I like. The fact that the first sequence reaches 1 in a finite number of steps doesn't change the fact that the second sequence also converges to 1. In terms of convergence they are equivalent, and the sequences represent the same real number 1.

Decimal notation is one way to write a representation of the infinite sequence, so for instance pi = 3.14159... can represent the sequence 3, 31/10, 314/100, 3141/1000, 31415/10000, 314159/100000, ... . When I use 1 as a real number I really mean 1.0000..., because that indicates a particular sequence. The sequence represented by 0.9999... converges to the same value, just not in a finite number of steps.

What often bothers people about the equivalence of 1 and 0.9999... is the expectation that two distinct decimal representations must represent different numbers. It's true that the integers each have a unique symbol, but I showed earlier that the rational numbers do not have unique symbols to represent them. The real numbers are based on sequences, and I've already shown that two sequences can converge to the same value. One should accept that real numbers in their decimal representations, like rational numbers, may also have duplicate representations of the same value.
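
As a small illustration of the "two sequences, one number" idea (a Python sketch, nothing more), the gap between corresponding terms of 5, 4, 3, 2, 1, 1, ... and 0, 1/2, 2/3, 3/4, ... shrinks toward zero, which is exactly the sense in which they represent the same real number:

Code:
from fractions import Fraction

def seq_a(n):
    # 5, 4, 3, 2, 1, 1, 1, ... reaches 1 after finitely many steps
    return Fraction(max(5 - n, 1))

def seq_b(n):
    # 0, 1/2, 2/3, 3/4, ... never reaches 1 but gets arbitrarily close
    return Fraction(n, n + 1)

for n in (0, 1, 5, 10, 100, 1000):
    print(n, seq_a(n), seq_b(n), abs(seq_a(n) - seq_b(n)))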

/lecture (it's Christmas break after all :) )
muon2


« Reply #5 on: December 21, 2009, 04:45:07 PM »

So zero is not considered "infinitesimal" for purposes of the real number line? I stand corrected.

But how does your reasoning transfer over to the hyperreal number line? Granted, the series 9/10, 99/100, 999/1,000, 9,999/10,000, ... converges to 1, but isn't the entire question whether we should define the series in terms of the real number it converges to?

BRTD,

It seems to me that there can be a "last" 9 in 0.999..., so long as an infinite number of 9's precedes it.

For this discussion I'm avoiding the hyperreals of nonstandard analysis. At this point they have mostly been a convenient shorthand to reach conventional proofs of real analysis. I wouldn't confuse them with numbers that have specific values, which I find to be the essence of this thread.

To your comment to BRTD, there cannot be a "last" 9. The nature of an infinite sequence is that it never ends; therefore it is meaningless to say there is a last value in the sequence. It would be just as meaningless to ask for the largest integer. There is always another one higher.
muon2


« Reply #6 on: December 21, 2009, 06:15:18 PM »

Looks like we only disagree on my reply to BRTD, then. As I see it, speaking of a "last" 9 does not imply an end to the sequence if it is preceded by an infinite succession of 9's. It is figurative rather than literal. (Of course, for consistency it would be necessary to write 0.99, with the bar over the first 9. Multiplication of that value by two could then be "intuitively" paired with 1.98, with the bar over the nine.)

The trick is that there is not even an end to the figurative sequence. The appearance is deceptive because it looks like a string of 9's, but that is just notation. The real sequence is 9/10, 99/100, 999/1000, ... . In its real form, there is not even a last element that would make sense figuratively.
muon2


« Reply #7 on: December 24, 2009, 10:23:52 PM »

If you needed ten billion dollars, and had $9,999,999,999.00, do you have ten billion dollars? :)

I think the answer "infinitely close to one in the base ten system" might be the best.

One problem I could see is a situation where X has to be a number equal to or greater than one, X ≥ 1.  0.9999999999... would not be greater than or equal to X in that case.

     It would be more like if you had $9,999,999,999.99999.... The problem with your example is that you are short by an amount that is defined as $1, a non-zero value. In the topic's question, 0.99999999... is short by an infinitely small amount, which is mathematically defined as 0. You see what I mean?

I question whether you can really define this as zero, any more than you can define it as 42.

For example: X is defined as the number of molecules of a substance needed for a reaction, and the amount of X needed is 1 unit.  0.999999999... units would not be enough.

So long as X isn't a minimum amount, the definition would work.  As soon as it becomes a minimum value, there is a definitional problem.

Your example is consistent with finite mathematics, but there is a definitional difference when dealing with real algebra. The root problem is in considering decimal expansions of a value using the ellipsis notation. That ellipsis really doesn't work in finite math, but it allows one to take a rational expression like 1/3 and write it as the decimal value 0.333... . Real numbers, which include rational and irrational numbers, are defined in terms of infinite sequences and are incompatible with the definitions of finite math. In a sense one is converting the rational value 1/3 into a real analog, 0.333... .

The ancient Greeks recognized that finite math and rational expressions could not express all the numeric values they knew. Though they could prove this to be so, their number system lacked the tools to write these irrational values. The Arabic number system eventually provided those tools, which led to decimal expansions of all real numbers, rational and irrational. Yet even with this powerful tool there are cases where a finite rational expression like 1 will do fine, even though 0.999... is equal to it as a real number.

The definitional problem you cite exists, but it is solved by adopting the definitions of real algebra. If you confine your math to the finite and rational you can avoid the problem. But like the ancient Greeks, you'll be in a bind if you want to express irrational numbers in a manner consistent with rational numbers, e.g., to be able to express both 1/3 and the square root of 2 in decimal notation. The definition that takes care of that inconsistency creates the question raised by this thread.
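
Roughly what that unified notation buys you, sketched with Python's decimal module (the 50-digit precision is an arbitrary choice for illustration): the same decimal machinery expands both the rational 1/3 and the irrational square root of 2, which no finite fraction can do for the latter.

Code:
from decimal import Decimal, getcontext

getcontext().prec = 50  # work to 50 significant digits

# The rational 1/3 and the irrational sqrt(2) both get truncated decimal expansions.
print(Decimal(1) / Decimal(3))  # 0.33333... (fifty 3's)
print(Decimal(2).sqrt())        # 1.4142135623730950488... (50 significant digits)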
muon2


« Reply #8 on: December 26, 2009, 05:56:21 PM »



Quote from: muon2 on December 24, 2009, 10:23:52 PM
Your example is consistent with finite mathematics, but there is a definitional difference when dealing with real algebra. [...] The definition that takes care of that inconsistency creates the question raised by this thread.


No, I'm saying that 1 = 1 when one is used to represent a "threshold number."  In cases where the numeral "1" (or any other numeral, including 1/3 or the square root of 2) is used to represent a concept involving a minimum amount, as it can be, "1" cannot be an infinitesimal amount smaller than one.



When you talk about a threshold number, such as your fission example for PiT, you are talking about the use of numbers for measurement. Measurements are never made with infinite precision, so real numbers would not be used; a terminating decimal is just fine. Even though the theoretical ratio of a circle's circumference to its diameter is pi, we could not measure it to be exactly pi. I can't measure anything with a value of 0.999... or even 0.333..., so I wouldn't use them for an experimental threshold.

Real numbers are a concept with their own definition. When one speaks of 1 as a real number like pi it is well-defined, but not in a way that is useful for measurement such as with a threshold.
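
For what it's worth, a threshold check against a measurement usually looks something like the sketch below (the values and tolerance are made up for illustration): you compare against the threshold within a stated uncertainty rather than demanding infinite precision.

Code:
import math

threshold = 1.0        # nominal threshold, a terminating decimal
measured = 0.9999999   # a measurement, known only to finite precision
uncertainty = 1e-6     # the measurement's stated precision

# Within the uncertainty, the measurement is treated as meeting the threshold.
meets = measured >= threshold or math.isclose(measured, threshold, abs_tol=uncertainty)
print(meets)  # True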