Some numbers do not have a finite representation in normal decimal notation. For example, 1/3 has a representation like 0.33333... where the list of threes never ends. Irrational numbers like pi also continue on forever, but without the repetition.
I think most imaginative students, when they are introduced to this fact, immediately wonder about the other direction: these are numbers where the representation to the right is infinite, but what if the representation to the left were infinite? I’m going to start by explaining why the question isn’t really mathematically interesting, and then I’ll explore the mathematics of such numbers.
Why the Question Doesn’t Matter
The question seems to be based on the idea that there is something mathematically significant about the way that numbers are represented in decimal notation, but that’s not really the case. Numbers that repeat endlessly to the right were not discovered by mathematicians playing around with the decimal representation, but rather by mathematicians discovering limitations in decimal notation.
There are other ways to represent numbers than decimal notation. You can represent numbers with tick marks, so that the first ten whole numbers (integers greater than 0) would be represented like this: |, ||, |||, ||||, |||||, ||||||, |||||||, ||||||||, |||||||||, ||||||||||.
That’s pretty unwieldy and it’s hard to read larger numbers, so similar number systems put a slash through each group of five. The Romans used a more complex method that was even easier to read. They represented the first ten whole numbers like this: I, II, III, IV, V, VI, VII, VIII, IX, X.
Decimal notation is just another way to represent numbers. It is a notation where you can represent a number x by a procedure that calculates S(x) --a sequence of digits that represents x. In the following, DIV(x,y,i,j) takes two numbers x and y. It sets i to the number of times that y goes into x and sets j to the amount left over. In other words, i is the integer result of x/y and j is the remainder. Here is how you calculate S(x) for a whole number. It writes the number from left to right:
1. Find d, the smallest power of 10 that is greater than x.
2. Set d = d/10.
3. Do DIV(x, d, i, j).
4. i is guaranteed to be between 0 and 9, so write the digit i.
5. Set x = j.
6. If d > 1 go to step 2.
When d = 1 in step 6 we are done. At that point we are guaranteed that x = 0 because for a whole number x, x/1 = x with no remainder. This means that we have represented the entire number with complete accuracy and did it in a finite number of steps.
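As a sketch, here is the whole-number procedure in Python. The name S_whole is mine, and this version has DIV return the quotient and remainder as a pair instead of setting i and j:

```python
def DIV(x, y):
    """i = the number of times y goes into x, j = the amount left over."""
    i = x // y
    return i, x - i * y

def S_whole(x):
    """Decimal digit string for a whole number x > 0, written left to right."""
    d = 1
    while d <= x:            # step 1: smallest power of 10 greater than x
        d *= 10
    digits = ""
    while d > 1:             # step 6: stop when d = 1
        d //= 10             # step 2
        i, j = DIV(x, d)     # step 3: i is guaranteed to be between 0 and 9
        digits += str(i)     # step 4: write the digit i
        x = j                # step 5
    return digits
```

For example, S_whole(1234) walks d down through 1000, 100, 10, 1 and emits one digit at each step.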
If we want to represent other numbers besides the whole numbers, then we need to add some steps:
7. If x > 0 write a dot.
8. If x = 0 then we are done.
9. Set x = x*10.
10. Do DIV(x, 1, i, j).
11. i is guaranteed to be between 0 and 9, so write the digit i.
12. Set x = j.
13. Go to step 8.
We switched techniques here. For the whole part of the number, we divide by ever smaller powers of 10. When we get to the fraction, we start multiplying the fraction by 10 each time through the loop and dividing by 1. It is possible to do the whole procedure with a single technique but I’ll leave that as an exercise for the reader.
With this modified procedure, we keep going as long as x is not 0. But now there is no guaranteed stopping condition. The procedure will only stop if the remainder of the division goes to 0. The remainder may never go to 0, and that is why the representation is said to be “infinite” to the right: the procedure for generating the representation never stops.
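Here is a sketch of the extended procedure in Python, using exact rational arithmetic so the stopping test x = 0 is reliable. The max_digits cutoff is my addition, standing in for the fact that the loop otherwise has no guaranteed stopping condition:

```python
from fractions import Fraction

def DIV(x, y):
    """i = the number of times y goes into x, j = the amount left over."""
    i = x // y
    return i, x - i * y

def S(x, max_digits=20):
    """Decimal string for a non-negative rational x, written left to right."""
    x = Fraction(x)
    d = Fraction(1)
    while d <= x:                    # step 1
        d *= 10
    out = ""
    while d > 1:                     # steps 2-6: the whole part
        d /= 10
        i, x = DIV(x, d)
        out += str(i)
    out = out or "0"
    if x > 0:                        # step 7: write a dot
        out += "."
    written = 0
    while x > 0 and written < max_digits:   # step 8, with a safety cutoff
        x *= 10                      # step 9: multiply the fraction by 10
        i, x = DIV(x, 1)             # step 10: divide by 1
        out += str(i)                # steps 11-12
        written += 1
    return out
```

S(Fraction(1, 4)) terminates on its own with "0.25", while S(Fraction(1, 3)) only stops because of the cutoff.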
So the numbers that have a representation that is infinite to the right aren’t “defined” by decimal notation that is infinite to the right --that is just an accident of the notation. A failure of the notation, actually, because it shows that there are numbers that can’t be represented using decimal notation at all. But there is nothing especially interesting about a number that can’t be represented by a finite S(x).
The algorithm above uses 10 as the base, but that is essentially an arbitrary choice because any other base could be used. And for each rational number, there are bases where the number can be represented in a finite string of digits. For a rational number x, an infinite S(x) is just a consequence of the mathematical relationship between x and the base. It doesn’t tell you anything interesting about x on its own.
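For example, 1/3 is infinite in base 10 but finite in base 3, where it is written 0.1. A sketch of the fractional-digit loop with the base as a parameter; the function name frac_digits is mine:

```python
from fractions import Fraction

def frac_digits(x, base, max_digits=10):
    """Digits of the fractional part of a rational x (0 <= x < 1) in the
    given base; stops early if the remainder reaches 0."""
    digits = []
    while x > 0 and len(digits) < max_digits:
        x *= base                # shift the next digit left of the point
        i = int(x)               # the digit; truncation is safe since x >= 0
        digits.append(i)
        x -= i
    return digits
```

frac_digits(Fraction(1, 3), 10) never reaches a zero remainder and runs into the cutoff, while frac_digits(Fraction(1, 3), 3) returns [1] immediately.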
Irrational numbers are interesting because they can’t be written as any ratio of two whole numbers. The fact that they can’t be represented in decimal notation with a finite sequence is just a minor consequence of this far more important fact.
So the fact that a number is infinite to the right in decimal notation is not interesting. All it does is point out the limitations of decimal notation.
A Theory of Numbers that are Infinite on the Left
The fundamental misunderstanding that leads to the question about numbers that are infinite to the left is the idea that decimal notation is an interesting way to create or define numbers rather than merely a convenient representation. Here I’ll investigate that intuition. It’s simpler to stick with whole numbers so I’ll do that in the following. The extension to numbers that can be infinite in both directions is left as an exercise for the reader.
Given a sequence of digits s, we can calculate D(s), the number that the digits represent by reversing the procedure for S(x):
1. Set x = 0.
2. Set d = 1.
3. Set i = the rightmost digit in s.
4. Set x = x + d * i.
5. Set d = d*10.
6. If there are no more digits to the left then we are done.
7. Set i = the next digit to the left.
8. Go to step 4.
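For a finite sequence the procedure is easy to sketch in Python, treating s as a string of digits (reading the string right to left stands in for steps 3 and 7):

```python
def D(s):
    """The number represented by the finite digit string s."""
    x = 0                       # step 1
    d = 1                       # step 2
    for ch in reversed(s):      # steps 3 and 7: rightmost digit, then leftward
        x = x + d * int(ch)     # step 4
        d = d * 10              # step 5
    return x                    # step 6: no more digits to the left
```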
So let’s say that we have an infinite sequence of digits that extends infinitely to the left rather than the right. A sequence of digits is not a number but we can try to make it represent a number by applying the procedure above for D(s). The problem is that the procedure for generating D(s) will never terminate if s is infinite.
That doesn’t necessarily stop us, because there are techniques for dealing with infinite sequences. In the algorithm for D(s) there is a variable x that takes on a succession of values that eventually becomes the answer in the case of a finite sequence. So we can say that if x ever stops changing, then that is the answer. If we get to a point in the sequence where every single digit to the left is a 0, then x will never change again and we can make that the answer. We have to do this so that, for example, if s is the sequence ...00000001 with an infinite sequence of 0s to the left then D(s)=1. Since these sequences produce finite numbers, we will call them finitoid.
Other infinite sequences are infinitoid. An infinitoid sequence is one such that no matter where you are in the sequence, there is a digit to the left that is not 0. For an infinitoid sequence, the x in the D(s) procedure has no upper limit, so we say that D(s) is infinitoid.
In other mathematical contexts, we might say that the limit of x is infinity or that D(s) is infinite but that does not lead to an interesting theory of infinitoid numbers. The reason is that in normal mathematics, two infinite numbers are always the same (well, there are different orders of infinity, but that doesn’t apply here because all infinitoid numbers are the same order).
So let’s not say that all infinitoid numbers are just the same value, “infinity”, let’s say that they are all different.
You can skip this section if mathematical logic frightens you
There is precedent for this in formal logic. I would give you the Wikipedia entry for Herbrand Universes or various consistency results, but it would just confuse things if you don’t know mathematical logic, so let’s just say that when you want to define a set of mathematical objects that may or may not exist, you can get by with the assumption that they exist, as long as you have something to map them to.
In other words, we don’t have any reason to think that infinitoid numbers exist, but we know that infinitoid sequences exist so we can set up a correspondence between the sequences and the numbers. As long as everything we have to say about infinitoid numbers can be mapped to a statement about infinitoid sequences we know that we are saying something consistent. I’m not going to actually worry about this, but just note that the possibility of doing this in principle justifies talking about infinitoid numbers even if we can’t actually say what they are.
The point of the above digression is that it is safe to assume that we could in principle come up with a consistent theory of infinitoid numbers. Just not, as I’ll show, a consistent theory that looks anything like numbers.
On that ominous note, let us see if we can extend the addition operation to work on infinitoid numbers. We start by defining a function on sequences of digits: if s1 and s2 are two sequences of digits, then plus(s1,s2) is the sequence of digits that you get by applying the normal addition algorithm that we all learned in school. Keep in mind that this function operates on sequences of digits, not on numbers. But we can use this function to define addition on infinitoid numbers: if x and y are numbers and at least one of them is an infinitoid number, then x+y is defined as D(plus(S(x),S(y))). In other words, we convert both numbers to sequences of digits and then apply our schoolroom addition algorithm.
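One way to sketch this in Python is to model a digit sequence as a function from position to digit, with position 0 the rightmost digit; a left-infinite sequence is then just a function that never runs out of positions. The names from_int and plus are mine, and this is only a sketch of the schoolroom algorithm:

```python
def from_int(n):
    """The finitoid sequence for a whole number n: zeros forever to the left."""
    return lambda k: (n // 10 ** k) % 10

def plus(s1, s2):
    """Schoolroom addition on digit sequences. Digit k of the sum depends
    only on digits 0..k of the inputs (carries move leftward), so it can
    be computed even when the sequences never end on the left."""
    def digit(k):
        carry = 0
        d = 0
        for j in range(k + 1):          # run the carries up from the right
            total = s1(j) + s2(j) + carry
            d, carry = total % 10, total // 10
        return d
    return digit
```

For finitoid inputs this agrees with ordinary addition: the first four digits of plus(from_int(58), from_int(67)) are 5, 2, 1, 0, which is 125 read from the right.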
We can define multiplication and subtraction in a similar way, but not division or comparison. The division algorithm starts at the left, and in an infinitoid sequence there is no leftmost digit. Similarly, to compare two numbers as digit sequences, you have to start from the leftmost digits. So we can compare a finitoid number to an infinitoid number; the infinitoid number is always larger. But we can’t compare two infinitoid numbers, because we would need to get to the leftmost non-zero digit, which does not exist.
This is a serious problem. One of the essential properties of numbers is that for any two numbers x and y, either x=y or x<y or x>y. This property doesn’t seem to hold for infinitoid numbers and I can’t think of any consistent way to define comparison to make it hold (except to take the usual mathematical approach and say that all infinitoid numbers are equal to each other).
There is another very important theorem that infinitoid numbers violate: the theorem that if x>0 then x+y>y. That is, if you add a number greater than 0, then the result is increased. That does not hold for infinitoid numbers. Let s9 be the infinitoid sequence that consists of an infinite sequence of 9s. Then
D(s9) + 1 = D(plus(“...9999”, “...00001”)) = D(“...00000”) = 0.
Since D(s9)>0, the answer should be greater than 1, but 0<1. And since 1>0, the answer should be greater than D(s9) but 0<D(s9).
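The computation can be checked mechanically. In this self-contained sketch a sequence is again modeled as a function from position (0 = rightmost) to digit, and plus_digit (my name) computes one digit of the schoolroom sum:

```python
def plus_digit(s1, s2, k):
    """Digit k of the schoolroom sum of digit sequences s1 and s2."""
    carry = 0
    d = 0
    for j in range(k + 1):              # propagate carries from the right
        total = s1(j) + s2(j) + carry
        d, carry = total % 10, total // 10
    return d

s9 = lambda k: 9                        # ...99999
one = lambda k: 1 if k == 0 else 0      # ...00001

# Every column gives 10: write 0, carry 1 -- so every digit of the sum is 0.
print([plus_digit(s9, one, k) for k in range(8)])   # [0, 0, 0, 0, 0, 0, 0, 0]
```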
This is a very serious problem. I can’t think of any way around it except by restricting all sequences to prevent an infinite sequence of 9s to the left at any point in the sequence. This may seem at first sight to be analogous to the fact that in normal decimal notation infinite sequences of 9s are equivalent to another number. For example 1.9999... with an infinite sequence of 9s is equal to 2.
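The fact that 1.9999... equals 2 can be made precise with a geometric series:

```latex
1.9999\ldots \;=\; 1 + \sum_{k=1}^{\infty}\frac{9}{10^{k}}
             \;=\; 1 + 9\cdot\frac{1/10}{1-1/10}
             \;=\; 1 + 1 \;=\; 2.
```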
As I say, the situations may seem analogous at first sight, but I don’t think they are. In the case with infinite 9s to the right, the issue is that there is no finite difference between 1.9999... and 2, so they are the same number. The situation with infinite 9s to the left is that it allows us to get around an infinite process by induction --that is, we can finesse the never-terminating algorithm and work out what it would return after an infinite number of steps-- and doing so reveals an inconsistency that was inherent in the process all along.
Conclusion? Sequences of digits that are infinite to the left make no sense mathematically as numbers, and the question itself is based on a misunderstanding.