Zeno's paradox
An ancient puzzle leads ultimately to a remarkable observation on the malleable nature of infinite sums
Zeno's paradoxes of motion
The Greek philosopher Zeno of Elea (c. 490–430 BC) argued in antiquity that all motion is impossible. It is simply impossible to walk through town or even across the room, to go from here to there. What? We know, of course, that this is possible—we walk from here to there every day. And yet, Zeno offers us his proof that this is an illusion—we simply cannot do it.
Zeno argued like this. Suppose it were possible for you to move from some point A to another distinct point B.
Before you complete the move from A to B, however, you must of course have gotten half way there.
But before you get to this half-way point, of course, you must get half way to the half-way point! And before you get to that place, you must get half way there.
And so on, ad infinitum.
Thus, to move from A to B, or indeed anywhere at all, one must have completed an infinite number of tasks—a supertask. It follows, according to Zeno, that you can never start moving—you cannot move any amount at all, since before doing that you must already have moved half as much. And so, contrary to appearances, you are frozen motionless, unable to begin. All motion is impossible.
Is the argument convincing? On what grounds would you object to it? Do you think, contrary to Zeno, that we can actually complete infinitely many tasks? How would that be possible?
It will be no good, of course, to criticize Zeno's argument on the grounds that we know that motion is possible, for we move from one point to another every day. That is, to argue merely that the conclusion is false does not actually tell you what is wrong with the argument—it does not identify any particular flaw in Zeno's reasoning. After all, if it were in fact an illusion that we experience motion, then your objection would be groundless.
Rather, to truly criticize Zeno's argument, one must engage more directly with it. What exactly might be the problem with his argument?
Achilles and the Tortoise
Consider the allegory of Achilles and the Tortoise. The Tortoise has challenged Achilles to a race around the stadium. How laughable! How could the slow Tortoise win against the swift warrior? Nevertheless, the confident Tortoise insists he can win, if given only a modest head start, say, one quarter of the way around. Achilles accepts, and the race begins. They line up at the start, and the Tortoise starts off running at a brisk pace (for a tortoise), while Achilles waits patiently for him to reach the quarter-way-around mark at point A, at which time Achilles takes off at a fast clip from the start.
Why does the Tortoise think he will win? Between steps, panting heavily, the Tortoise explains his reasoning like this. He argues that Achilles shall never actually overtake him. Why? Well, when Achilles had started running, the Tortoise had already had his head start, a quarter way around the stadium to point A. But by the time Achilles gets to that point A, the Tortoise will naturally have moved on to some further point B. So in order to catch him, Achilles also will need to move to this further point B. But again by the time Achilles gets to point B, the Tortoise will have moved on to some further point C. And so on, ad infinitum. Every time Achilles catches up to where the Tortoise had been, the Tortoise has moved on to a new further location still beyond Achilles. And so, the Tortoise reasons, Achilles shall never catch up!
An infinite sum with a finite value
Does it make sense ever to add up infinitely many numbers? Consider the following sum:
\[\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\frac{1}{32}+\cdots\]
Since we are adding up infinitely many positive numbers, should one expect the sum to be infinite?
No, in fact this infinite sum adds up to a finite value. I claim that it sums to exactly 1.
To see this, consider a line segment of unit length 1, as below:
Let us take the first half, like so:
And next we take also half of what remains, which will add a quarter more:
And then an eighth and a sixteenth more:
At each stage, we take half of what remains, which will be half as much as we had just added at the previous step, and in this way, we exhaust more and more of the original unit interval.
At each stage, the current sum remains less than 1, but the difference between it and 1 decays to zero. The total sum, which means the limit value of the finite partial sums, will therefore be exactly 1. The length of the original interval is exhausted by the infinite sum, which therefore sums to 1.
So we have an infinite sum with a finite value.
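One can also watch this happen numerically. Here is a minimal Python sketch (my own illustration, with an arbitrary cutoff of ten terms): the partial sums climb toward 1, and the shortfall from 1 halves at every step.

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : the shortfall from 1 halves at each step.
total = 0.0
term = 0.5
for n in range(1, 11):
    total += term
    print(f"after {n:2d} terms: sum = {total:.10f}, shortfall = {1 - total:.10f}")
    term /= 2
```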
A different perspective on Zeno
Perhaps this analysis provides an alternative perspective on Zeno's paradox. Let us consider a forward-oriented version of his argument. That is, let us argue like Zeno that in order to get from one point to another, we must first traverse half the distance, and then after that traverse half the remaining distance, and then half the still remaining distance, and so on ad infinitum. This is exactly the process we used in the figures above to analyze the sum 1/2 + 1/4 + 1/8 + ···. We can imagine an alternative Zeno arguing that it is impossible to arrive at the destination, since one must traverse separately all these infinitely many disjoint intervals. Such a forward-oriented version of Zeno's argument seems to have much in common with Zeno's original ideas. And yet, the paradox seems to be at least partly addressed by our analysis: although there are infinitely many distinct segments, the total length is finite, and so one can indeed sometimes traverse infinitely many segments in a finite time. This observation seems to pour some cold water on the idea that it is inherently impossible to traverse infinitely many distinct intervals. Does this lead the way to a resolution of Zeno's paradox?
An alternative illustration
Here is an alternative way to analyze the same sum. Consider the unit square shown here, with area 1. We have successively divided it into smaller pieces: a rectangle of area 1/2, a square of area 1/4, a rectangle of area 1/8, and so on. Each rectangle is followed by a square of half the size, and each square is followed by a rectangle of half the size. So the sum of all the pieces is
\[\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots\]
and since they exhaust the unit square in the limit, the sum is 1, as claimed.
Such a sum is known as a geometric series—each successive term exhibits the same ratio with the previous term, getting half as large at each step.
The most contested equation in middle school
Which equation has spurred countless hours of contested argumentation and debate in our middle schools and far beyond? I refer, of course, to the identity:
\[0.99999\cdots = 1\]
Is it true? Is 0.99999··· fully equal to 1, or is there some minute or infinitesimal difference between them?
Yes, indeed, it is true. Let us prove it. Here is one common argument. Let x = 0.999999···, whatever value that may be. It follows that 10x = 9.999999···, and we may compute the difference 10x - x simply by lining up the decimal expansions and performing the subtraction:
\[\begin{array}{rcl} 10x &=& 9.999999\cdots\\ x &=& 0.999999\cdots \end{array}\]
On the left we have 9x, and on the right, all the 9s in the decimal parts cancel, leaving just 9. So 9x = 9, which implies x = 1. And so we seem to have proved that indeed 0.999999··· is equal to 1.
Here is another proof. Many of us know the decimal expansion of one third to be point three repeating, like this:
\[\frac{1}{3} = 0.333333\cdots\]
This can be easily seen if one simply performs long division. But in this case,
\[1 = 3\cdot\frac{1}{3} = 3\times 0.333333\cdots = 0.999999\cdots,\]
which shows the desired identity.
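For the long-division claim, here is a quick computational check (a small Python sketch, not part of the original argument): each step of dividing 1 by 3 emits the digit 3 and returns remainder 1, so the expansion repeats forever.

```python
# Long division of 1 by 3: each step emits digit 3 and leaves remainder 1,
# so the decimal expansion is 0.333... repeating without end.
numerator, denominator = 1, 3
remainder = numerator
digits = []
for _ in range(10):
    remainder *= 10
    digits.append(str(remainder // denominator))  # next decimal digit
    remainder %= denominator                      # remainder feeds the next step
print("0." + "".join(digits))  # prints 0.3333333333
```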
Perhaps both of these proofs can be criticized, however, on the grounds that they assume 0.99999··· and 0.3333··· are meaningful expressions in our real number system. The first argument was really of the form: if 0.9999··· is a meaningful expression, then its value is 1.
But is it a meaningful mathematical expression? Yes, it turns out that these expressions are both instances of the geometric series. So let us discuss that before returning to this topic.
What our decimal notation really means
Let me explain. Our familiar place-based decimal notation for the natural numbers provides a means of describing every number in terms of the number of ones, tens, hundreds, thousands, and so on. Thus, the number 7547 is written that way precisely because it is the value of:
7547 = 7 thousands + 5 hundreds + 4 tens + 7 ones.
In other words,
\[7547 = 7\cdot 10^3 + 5\cdot 10^2 + 4\cdot 10 + 7.\]
The decimal notation for the real numbers works similarly, except that fractions of a unit are considered. So the number 5.385 means:
\[5.385 = 5 + \frac{3}{10} + \frac{8}{100} + \frac{5}{1000}.\]
But now the point is that this same idea applies to infinitary decimal representations, whether they are repeating or not. Every infinite decimal expression represents an infinite sum. The decimal number 5.171717··· is simply a way of writing the infinite sum:
\[5 + \frac{1}{10} + \frac{7}{100} + \frac{1}{1000} + \frac{7}{10000} + \cdots\]
The number π = 3.14159265358979323846··· is a notation for a certain sum:
\[3 + \frac{1}{10} + \frac{4}{100} + \frac{1}{1000} + \frac{5}{10^4} + \frac{9}{10^5} + \frac{2}{10^6} + \cdots\]
To have digit d in the kth place after the decimal point means that we are adding \(d/10^k\) to the sum.
From this point of view, the expression 0.99999··· refers to the infinite sum:
\[\frac{9}{10} + \frac{9}{100} + \frac{9}{1000} + \frac{9}{10000} + \cdots\]
This is an instance of what is called a geometric series.
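As a numerical illustration of this reading of decimal notation, here is a small Python sketch (the helper name partial_sums is my own choice): it accumulates the digit contributions \(d/10^k\) and shows the partial sums closing in on the represented value.

```python
# Read a decimal expansion as an infinite sum: digit d in place k contributes d / 10**k.
def partial_sums(int_part, digits):
    """Yield the running partial sums of the sum encoded by a decimal expansion."""
    total = float(int_part)
    for k, d in enumerate(digits, start=1):
        total += d / 10**k
        yield total

print(list(partial_sums(0, [9] * 8)))     # 0.99999... : partial sums approach 1
print(list(partial_sums(5, [1, 7] * 4)))  # 5.171717... : partial sums approach 5 + 17/99
print(5 + 17 / 99)                        # 5.1717171717...
```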
Geometric series
A geometric series is an infinite summation whose successive terms all stand in the same ratio to the previous term. That is, you get the next term always by multiplying by the same factor. For example, the series we considered earlier is a geometric series
\[\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots\]
because each term is half as large as the previous term. In general, a geometric series has the form
\[a + ar + ar^2 + ar^3 + \cdots\]
where a is the starting value and r is the constant ratio of successive terms. Each term is obtained by multiplying the previous term by r.
Let us consider the simple case a = 1, which is nevertheless general enough, since every geometric series is obtained from this case by multiplying through every term by the factor a. So we consider
\[1 + r + r^2 + r^3 + \cdots\]
Such an infinite series is said to converge to a limit value L, if the sequence of finite partial sums
\[1 + r + r^2 + \cdots + r^n\]
gets as close as desired to L as n increases. That is, for any positive degree of accuracy ε > 0, there is a number N of terms in the series, such that for all larger n ≥ N the finite partial sum is within ε of L. This kind of limit analysis is a core conception of the calculus, with which some of my readers may be familiar.
It is clear that if r ≥ 1, then the series cannot converge, since the individual terms are all at least 1 and the partial sums grow without bound. Similarly, if r is negative with magnitude |r| ≥ 1, the terms do not shrink toward zero, and so again the series cannot converge.
So let us assume that r is smaller than 1 in magnitude, meaning |r| < 1. We shall calculate the exact value of the finite partial sums. Let x be the finite partial sum
\[x = 1 + r + r^2 + \cdots + r^n,\]
taking the terms up to \(r^n\) for some fixed n. Observe that
\[x + r^{n+1} = 1 + r + r^2 + \cdots + r^n + r^{n+1} = 1 + r\left(1 + r + \cdots + r^n\right).\]
And so \(x + r^{n+1} = 1 + rx\), which can be easily solved for x to obtain
\[x = \frac{1 - r^{n+1}}{1 - r}.\]
So this is the value of the terms in the geometric series up to \(r^n\). The thing to notice about this expression is that because |r| < 1, the term \(r^{n+1}\) becomes increasingly small, tending toward zero. And so the limit of the expression is exactly \(\frac{1}{1-r}\). Thus, we have found the value of the geometric series!
In the general case, we have
\[a + ar + ar^2 + ar^3 + \cdots = \frac{a}{1-r}, \qquad \text{whenever } |r| < 1.\]
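The closed form is easy to test numerically. Here is a brief Python check (the particular values of a, r, and n are arbitrary choices of mine), comparing direct term-by-term addition with the partial-sum formula and with the limit a/(1 - r):

```python
# Compare direct summation of a + ar + ... + ar**n against the closed form
# a * (1 - r**(n+1)) / (1 - r), and both against the limit a / (1 - r).
a, r, n = 1.0, 0.5, 50
direct = sum(a * r**k for k in range(n + 1))
closed = a * (1 - r**(n + 1)) / (1 - r)
limit = a / (1 - r)
print(direct, closed, limit)  # all three agree closely for large n
```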
Application to our initial examples
In the special case of our initial geometric series, we have a = 1/2 and r = 1/2, leading to
\[\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots = \frac{1/2}{1 - 1/2} = 1,\]
which agrees with our earlier analysis of the value of that geometric series.
The case of 0.9999··· is the geometric series with a = 9/10 and r = 1/10, and so
\[0.9999\cdots = \frac{9/10}{1 - 1/10} = \frac{9/10}{9/10} = 1,\]
which again agrees with our earlier analysis.
And similarly, the case of 0.3333··· is the geometric series with a = 3/10 and r = 1/10, and so
\[0.3333\cdots = \frac{3/10}{1 - 1/10} = \frac{3/10}{9/10} = \frac{1}{3},\]
as expected.
Harmonic series diverges
And what about the following series, known as the harmonic series:
\[1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots\]
Does it converge? The individual terms 1/n are vanishing to zero, which is a necessary requirement for convergence of the series. But is there a finite limit value for this series?
No, in fact there is not. The harmonic series diverges to infinity. To see this, suppose that we have considered the terms of the series up to 1/n, like this:
\[1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}\]
Let us consider doubling the number of terms, like this:
\[\left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right) + \left(\frac{1}{n+1} + \frac{1}{n+2} + \cdots + \frac{1}{2n}\right)\]
Notice that we have added n extra terms in that second grouped expression, and each of them is at least as large as the final term \(\frac{1}{2n}\). So the extra amount added by the second group is at least \(n \cdot \frac{1}{2n} = \frac{1}{2}\).
In other words, we can always add value 1/2 more to a finite partial sum simply by doubling the number of terms appearing in it. By doing this twice (taking four times as many terms), we can add 1 more. By doing that twice, we can add 2 more to the value. And so on. We can make the value as big as desired, simply by doubling the number of terms we take a sufficient number of times. This series therefore diverges to infinity.
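One can watch the doubling argument in action with a short computation (a Python sketch of my own, summing each doubling block directly): every block from 1/(n+1) to 1/2n contributes at least 1/2, and so the total grows without bound, though very slowly.

```python
# Each doubling block 1/(n+1) + ... + 1/(2n) contributes at least 1/2 to the sum.
total = 1.0  # the first term, 1/1
n = 1
for _ in range(10):
    block = sum(1 / k for k in range(n + 1, 2 * n + 1))
    total += block
    print(f"terms up to 1/{2*n:5d}: block = {block:.4f} (at least 0.5), total = {total:.4f}")
    n *= 2
```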
Alternating harmonic series
A curious variation on the divergent harmonic series is the alternating harmonic series, whose terms are just like those of the harmonic series, except that they alternate in sign, from positive to negative to positive again:
\[1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots\]
Does it converge? Yes, indeed it does. This can be seen by considering the nature of the graph of the finite partial sums as the series progresses.
The points on this graph represent the value of the running tally as we add successive terms to the sum as indicated. Notice how the sum jumps up when we have just added a positive term and down when we have just added a negative term. Because these jumps become smaller as the series progresses, the upper sums are progressively descending and the lower sums are progressively ascending. Furthermore, these approximations are getting as close to each other as desired, because the magnitude of the nth jump is 1/n, which tends to zero. And so the series will converge to some limit value, which is simply the infimum of the upper approximations, which is the same as the supremum of the lower approximations. In fact one can show, using methods from calculus, that the limit value is precisely ln 2, the natural logarithm of 2, which is about 0.693. So the alternating harmonic series converges to ln 2.
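The convergence is easy to observe numerically. In this small Python check (my own illustration, with an arbitrary cutoff of twelve terms), the partial sums land alternately above and below ln 2, closing in on it:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ... straddle ln 2 and converge to it.
total = 0.0
for n in range(1, 13):
    total += (-1) ** (n + 1) / n
    side = "above" if total > math.log(2) else "below"
    print(f"after {n:2d} terms: {total:.6f} ({side} ln 2 = {math.log(2):.6f})")
```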
Riemann rearrangements
The alternating harmonic series is an example of a conditionally convergent series. Namely, it is convergent, with limit value ln 2 as we have mentioned, but if one makes all the terms positive, then it becomes the harmonic series, which diverges to infinity.
Riemann proved a remarkable theorem about such conditionally convergent series. Namely, for any conditionally convergent series, we can rearrange the terms of the series so as to make the limit sum whatever value we desire. Yes, you read that correctly—rearranging the terms of a conditionally convergent infinite series can cause the final sum to have a different value, and indeed, suitable rearrangements can realize any desired sum. In this respect, the basic arithmetic of infinite sums can be quite different from what we are used to with finite sums, which are of course invariant under rearrangement.
As an instance of the rearrangement theorem, we can rearrange the terms of the alternating harmonic series to make the limit value whatever we want. In the usual order, we have mentioned that the sum is ln 2, which is about 0.693. Suppose we take a new target value of 0.9 for a rearranged version. Can we realize it?
Yes, indeed we can. The idea of the proof is that we build the rearrangement by taking positive terms until the running sum meets or exceeds the target, then negative terms until it drops below the target, after which we return to positive terms until it reaches or exceeds the target again, and so on.
Notice how several times we needed to take positive values twice in succession in order to reach the new higher target value. The graph shows that for this target value we should use the rearranged alternating harmonic series beginning with
\[1 - \frac{1}{2} + \frac{1}{3} + \frac{1}{5} - \frac{1}{4} + \frac{1}{7} - \frac{1}{6} + \frac{1}{9} + \frac{1}{11} - \frac{1}{8} + \cdots\]
This is no longer strictly an alternating series; it does not simply alternate from positive terms to negative to positive again, but rather uses positive terms at a somewhat greater frequency than the negative terms. For lower target values, the procedure would similarly sometimes use negative terms twice or more in succession.
The main point is that the rearrangement procedure will produce a rearrangement realizing exactly the target value as the sum. Since the positive terms alone sum to infinity, and the negative terms to minus infinity, we will always be able to zigzag back and forth across the target as the construction proceeds. And since the individual terms of the series become very small, they bound how far the finite partial sums can stray from the target. So the rearranged sum converges exactly to the target, as desired.
The argument is fully general and works with any conditionally convergent series and any desired target value, whether positive or negative. Simply by taking additional positive values when the current partial sum is below the target and negative values when the current sum is above the target, one can in this way attain any desired target value as the limit of the rearranged series. We can also realize an infinite sum, or negative infinity, by the same method, as well as a series whose partial sums oscillate wildly. It is truly remarkable.
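Here is a sketch of the greedy rearrangement procedure in Python (my own code; the function name and the choice of 10,000 terms are arbitrary). It draws the next positive term 1, 1/3, 1/5, ... while the running sum is below the target, and the next negative term -1/2, -1/4, ... otherwise, reproducing the rearrangement displayed above for the target 0.9.

```python
# Greedily rearrange the alternating harmonic series to approach a target sum.
def rearrange(target, num_terms):
    pos, neg = 1, 2            # next unused odd (positive) and even (negative) denominators
    total, terms = 0.0, []
    for _ in range(num_terms):
        if total < target:     # below target: take the next positive term 1/odd
            term, pos = 1 / pos, pos + 2
        else:                  # at or above target: take the next negative term -1/even
            term, neg = -1 / neg, neg + 2
        total += term
        terms.append(term)
    return terms, total

terms, total = rearrange(0.9, 10000)
print(terms[:10])  # begins 1, -1/2, 1/3, 1/5, -1/4, 1/7, -1/6, 1/9, 1/11, -1/8
print(total)       # approximately 0.9
```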
Questions for further thought
Post your answers, comments, and questions below!
1. Provide an analysis of Zeno's paradoxes of motion. What is the flaw, if any, in Zeno's argument that all motion is impossible? Will Achilles overtake the Tortoise? If so, what is the flaw in the Tortoise's argument?
2. Is it possible for us actually to complete a supertask, a task involving infinitely many separate steps?
3. Do the limit conceptions of calculus, in particular, the ideas surrounding the convergence and divergence of series, provide an adequate resolution of Zeno's paradox? Why or why not?
4. Can you use the ideas of Riemann's theorem to prove that every positive real number r is the value of a suitable subseries of the harmonic series? That is, for every r > 0 there is a set A ⊆ ℕ for which
\[r=\sum_{n\in A}\frac{1}{n}.\]
Is A unique for a given r? If not, how many such A are there?
5. Similarly, can you prove a generalization of Riemann's rearrangement theorem, showing that if a series is conditionally convergent, then for any target value r there are many different rearrangements of the series having sum r?
Further reading
Huggett, Nick. 2019. "Zeno's Paradoxes." In The Stanford Encyclopedia of Philosophy, Winter 2019 edition, ed. Edward N. Zalta. Metaphysics Research Lab, Stanford University. A summary philosophical account of Zeno's paradoxes of motion.
Credits
The red-orange figure showing the sum of the geometric series 1/2 + 1/4 + ··· was adapted from my book, Proof and the Art of Mathematics, MIT Press, 2020.
Comments

I don't think limits in calculus resolve Zeno's paradox by themselves. The formal definition of limits avoids any notion of an "actual" infinite sum, instead using the notion of arbitrary closeness. To have "actual" infinite sums, one would need something like measure theory. Also, Zeno's argument concludes that we can't even begin our series, much less end it. So we can't even progress past the first element in our converging series.
I think the best way to resolve the paradox is to interrogate the assumption that we can't complete an infinite series of tasks. Perhaps Zeno would argue that we can't complete an infinite series because to complete a series of tasks, one needs to do the last task in the series, but infinite series have no last task. But what one needs to do to complete an infinite series is to complete all the tasks in it, not to perform a last one. The corresponding mathematical principle here is transfinite induction: to establish something at a limit ordinal, one shows that it follows from its holding at all lesser ordinals.
1. Provide an analysis of Zeno's paradoxes of motion. What is the flaw, if any, in Zeno's argument that all motion is impossible?
-----------------------------------------------------------------------------------------------------------------
In Zeno's day there were two opposing views: (1) that the whole is constructed from the parts and (2) that the parts are derived from the whole. Zeno was in the parts-from-whole camp, and he developed the paradoxes to show the flaw of the whole-from-parts view. [REF.1]
Consider parts (points) being fundamental. You can endlessly glue point after point but at no step will you have assembled the points (having no length) into a line (having length). Motion/extension is not achieved at any step. You are left with nothing.
If the whole (e.g. a line) is fundamental, you can endlessly cut it into line segments but at no step will you be left with just points. ‘Nothingness’ is impossible. At first glance, this seems like the more appealing perspective.
In your reasoning based on partial sums, the whole is fundamental (after all, you start with the unit line). But in the final step (where it is concluded that infinite series sum to a number), the fundamental object is switched to be the part (after all, you conclude that you can add parts to construct the unit line). Is this swap of fundamental objects justified? Is it helpful?
We know Achilles will pass the tortoise because we consider the big picture (the whole) - we look at their equations of motion and can see where the curves intersect. The tortoise's argument (and those in Zeno's other paradoxes) is flawed because he assumes that the instant (a point in time) is fundamental, such that time flows from one point in time to the next. But this is impossible since there's no 'next point' on a continuum. And with no ‘next instant’, motion is impossible.
[REF.1] Quote attributed to Zeno: "My writing is an answer to the partisans of the many and it returns their attack with interest, with a view to showing that the hypothesis of the many, if examined sufficiently in detail, leads to even more ridiculous results than the hypothesis of the One."
2. Is it possible for us actually to complete a supertask, a task involving infinitely many separate steps?
-----------------------------------------------------------------------------------------------------------------
With the whole-from-parts view we are forced to believe in supertasks, otherwise we are left with nothing. We need to believe that after assembling a big enough infinity of points (having no length) we'll produce a line (having length).
With the parts-from-whole view there is no need to believe in supertasks. We start with motion/extension (perhaps a wave function of the universe?), and we occasionally make observations/measurements along the way (like how you had cut the unit line in half to put a point at 0.5). There’s no need to assume that existence is a mere collection of measured moments. In fact, if it were, motion would be impossible (Quantum Zeno effect).
Can you sensibly describe a completed supertask without referring to the whole?
3. Do the limit conceptions of calculus, in particular, the ideas surrounding the convergence and divergence of series, provide an adequate resolution of Zeno's paradox? Why or why not?
-----------------------------------------------------------------------------------------------------------------
Limits are described as *potentially* infinite processes (e.g. as x approaches c) but are in the end taken to describe *actually* infinite objects (e.g. a summation of infinitely many terms equaling a number). It seems that a big leap in logic is required to go from potential infinity to actual infinity here, and it's not clear to me why this is necessary, or helpful.
Thanks!