Theorem 22: Szemerédi’s theorem

There’s a sense in which this is a follow-up to van der Waerden’s theorem, but I’d like to emphasise that it’s very interesting in its own right!

Let’s remind ourselves what van der Waerden’s theorem says.  (If you haven’t come across it before, you might like to read that blog post first.)  I’ll phrase it very slightly differently from last time, so that I use the standard terminology, but hopefully you’ll see that it’s the same thing.  Imagine that N is a large positive integer, and colour each of the numbers from 1 to N using one of r colours.  The theorem tells us that if N is large enough (depending on r and k) then there are k evenly spaced numbers that are the same colour.  The fancy name for this is a monochromatic arithmetic progression of length k.
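For small numbers one can check the theorem directly by brute force.  Here is a little sketch (in Python; the function name is my own) verifying the well-known fact that for r = 2 colours and progressions of length k = 3, the smallest N that works is 9: every 2-colouring of \{1, \dotsc, 9\} has a monochromatic progression of length 3, but \{1, \dotsc, 8\} can be coloured to avoid one.

```python
from itertools import product

def has_mono_ap(colouring, k):
    """Does a colouring (a tuple: entry i is the colour of the number i+1)
    contain k evenly spaced numbers all receiving the same colour?"""
    n = len(colouring)
    for start in range(n):
        for step in range(1, n):
            terms = [start + i * step for i in range(k)]
            if terms[-1] >= n:
                break  # larger steps only overshoot further
            if len({colouring[t] for t in terms}) == 1:
                return True
    return False

# Every one of the 2^9 two-colourings of {1,...,9} has a monochromatic 3-AP...
assert all(has_mono_ap(c, 3) for c in product(range(2), repeat=9))
# ...but colouring 1,2,5,6 red and 3,4,7,8 blue avoids one in {1,...,8}.
assert not has_mono_ap((0, 0, 1, 1, 0, 0, 1, 1), 3)
```

Searching over colourings like this gets hopeless very quickly, of course, which is part of why the general theorem needs a real proof.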

One thing mathematicians like to do is to get an understanding of why a theorem is true by testing whether the conditions are all really necessary.  It seems reasonable to try to explore what’s really going on with van der Waerden’s theorem.  Is there some important feature that occurs because we’re colouring the numbers with r colours, or is it really just that if we colour the numbers with r colours then there must be quite a lot of numbers with the same colour?  It turns out that the latter is true (although one could perhaps argue that the former is also true — but I shan’t dwell on that).

In 1936, Erdős and Turán made a conjecture along these lines.  I’ll call it a theorem, since it now is, and once I’ve stated it I’ll carry on telling the story!

Theorem (Szemerédi) Let k be an integer greater than or equal to 2, and let \delta be a positive number less than 1.  If N is large enough (depending on k and \delta), then any subset A of \{1, 2, \dotsc, N\} with size at least \delta N must contain an arithmetic progression of length k.
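Again one can get a feel for the statement at a tiny scale.  The sketch below (Python; my own brute-force helper, not part of any real proof) checks that for N = 8, every subset of density 5/8 contains a progression of length 3, whereas the smaller set \{1, 2, 4, 5\} contains none: so density really is the operative condition.

```python
from itertools import combinations

def contains_ap(A, k):
    """Brute-force test: does the set A of positive integers contain
    an arithmetic progression of length k?"""
    A = set(A)
    for a in A:
        for d in range(1, max(A)):
            if all(a + i * d in A for i in range(k)):
                return True
    return False

# Every 5-element subset of {1,...,8} contains a 3-term progression...
assert all(contains_ap(A, 3) for A in combinations(range(1, 9), 5))
# ...but a 4-element subset can avoid one: 1,2 miss 3; 2,4 miss 6; and so on.
assert not contains_ap({1, 2, 4, 5}, 3)
```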

Let’s try to make sense of this by seeing how this relates to van der Waerden’s theorem.  If we colour the numbers from 1 to N using r colours, then there must be some colour that’s used on at least N/r of them.  Let’s say it’s blue, and let’s say the blue numbers form the subset A of \{1, 2, \dotsc, N \}.  Then A has size at least \delta N where \delta = 1/r.  Szemerédi’s theorem tells us that if N is big enough, then A must contain an arithmetic progression of length k. That is, there must be k evenly spaced blue numbers.  So Szemerédi’s theorem implies van der Waerden’s theorem: the latter is a special case of the former.  This is what I meant by saying that the colouring somehow wasn’t important — what mattered is that if we use only a few colours, then there must be lots of numbers with the same colour.
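The deduction in the last paragraph is short enough to act out in code.  This is a sketch only (Python, with hypothetical helper names of my own), using a toy colouring: extract the most popular colour class, note that by the pigeonhole principle it has size at least N/r, and then search it for a progression.

```python
from collections import Counter

def largest_colour_class(colouring):
    """colouring: dict mapping each of 1..N to one of r colours.
    By pigeonhole, the most-used colour covers at least N/r numbers."""
    colour, _ = Counter(colouring.values()).most_common(1)[0]
    return {n for n, c in colouring.items() if c == colour}

def contains_ap(A, k):
    """Brute-force test for an arithmetic progression of length k in A."""
    A = set(A)
    for a in A:
        for d in range(1, max(A)):
            if all(a + i * d in A for i in range(k)):
                return True
    return False

# Toy example: colour 1..12 by parity (r = 2 colours).
colouring = {n: n % 2 for n in range(1, 13)}
A = largest_colour_class(colouring)
assert len(A) >= 12 // 2   # pigeonhole: at least N/r numbers share a colour
assert contains_ap(A, 3)   # a monochromatic 3-AP, e.g. 2, 4, 6
```

The point, as in the text, is that the colours play no further role once the dense set A has been extracted.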

I hope you’ll agree that it’s an interesting theorem in its own right, but that’s not the whole picture: this theorem is actually pretty important, for reasons to do with the story of its proof.  So I’ll go back to that story.

As I said, this was a conjecture in 1936.  As far as I know, the first progress was in 1953, when Roth proved the theorem for k = 3: he proved that a dense subset of the integers must contain an arithmetic progression of length 3.  That might not sound like much, but it was a big step forwards.  Roth’s proof used a technique called Fourier analysis.  At first sight (to those who have met Fourier analysis) it might seem that this technique has nothing to do with the problem, but Roth was able to deploy it very effectively.

The next big development was in about 1975, when Szemerédi was able to prove the conjecture for all values of k, using a combinatorial argument about which I know nothing except that it involved the Szemerédi regularity lemma, which is now a major tool in combinatorics.  At this point, it was officially a theorem, and so you might expect mathematicians to leave the subject alone and move on to something else.  But the story had hardly got started…

In 1977, Furstenberg gave a totally new proof, using ideas from ergodic theory (another area that seems at first sight to have nothing to do with a combinatorial problem).   In particular, he developed the Furstenberg correspondence principle, and this and related ideas have led to many proofs of other results, including some that seemed much harder to prove using combinatorial techniques.

OK, so two different proofs.  Surely that’s enough?  No!  The statement of the theorem talks about N being big enough.  One might very reasonably ask how big that is.  I’m not quite sure whether Szemerédi’s proof gave any information about this, but if it did then it wasn’t very helpful.  The nature of Furstenberg’s proof means that it gives no quantitative information: it says there is a large enough N, but says nothing about how large it is.  Roth’s proof for progressions of length 3 does give a bound (it might be that a smaller number would do, but his proof gives a number that certainly works).  So perhaps an approach using Fourier analysis for longer progressions would lead to quantitative information about how large N must be?

Unfortunately, it turns out that one cannot use ordinary Fourier analysis for longer progressions.  Gowers (who also has a blog) was able to demonstrate that it’s impossible.  However, he was able to develop a “higher-order Fourier analysis” that does lead to a proof for longer progressions, and in 2001 he published another proof of Szemerédi’s theorem.  This has also led to a great deal of other work: trying to understand this higher-order Fourier analysis, working out the best way to think about it, exploring other applications, and so on.  The Green-Tao theorem about arithmetic progressions in the primes is a notable example, but there are many others.  Gowers’s proof gives the best known bounds for Szemerédi’s theorem.

But still the story goes on.  A few years later, several people (such as Nagle-Rödl-Schacht and Gowers) found another approach, using a “hypergraph regularity lemma”.  And one can also deduce the theorem from the density Hales-Jewett theorem, and so on…  Tao has a list of approaches on his blog.

Why bother to keep finding new proofs of the same result, especially when most of them don’t lead to improved bounds?  I find it remarkable what a range of mathematics is used in these proofs (combinatorics, Fourier analysis, analytic number theory, graph theory, hypergraph theory, ergodic theory, …).  Many of them have led on to other major work.  It’s also fascinating to explore the connections and differences between the various approaches.  Even proofs that use totally different mathematics can have elements in common, and exploring this is a very fruitful way of gaining understanding.

Further reading

I’ve made some suggestions earlier.  That blog post of Tao has some links.  The Wikipedia page is a bit thin.  A much better place to look would be this Scholarpedia article by Green and Tao.  Furstenberg’s book Recurrence in Ergodic Theory and Combinatorial Number Theory describes his approach (as do many other books).  Additive Combinatorics by Tao and Vu contains a lot of material about Szemerédi’s theorem.


4 Responses to “Theorem 22: Szemerédi’s theorem”

  1. theoremoftheweek Says:

    If you’re interested in seeing some more details, you might like to know that Terry Tao has put some course notes on Roth’s theorem on his blog.

  2. Theorem 23: Waring’s problem « Theorem of the week Says:


  3. Pete Says:

    On the vague possibility that it might be interesting…

    Szemeredi’s argument made use of (a version of) his regularity lemma, which gives (unavoidably) tower-type bounds. It also used Van der Waerden’s theorem itself, which if memory serves had at the time worse bounds (the current record is much better than tower-type, held by Gowers, but the proof consists of the above mentioned improvement on the bounds in Szemeredi’s theorem and is not really ‘Ramsey-theoretic’ as one might hope for a proof of Van der Waerden).

    Gowers has also not shown that Fourier analysis is as such non-useful for proving Szemeredi’s theorem. He has demonstrated the existence of a significant obstacle (namely that low-density Fourier-quasirandom sets need not contain 4-APs): but to my knowledge there is no demonstration that it is not possible to carry out the density incrementation technique via standard Fourier analysis with some (perhaps significant) care: it is not hard to distinguish Fourier-quasirandom sets which do and do not contain 4-APs. In fact, I would conjecture that such an argument does exist (on the slightly specious grounds that it is possible to iterate weak versions of regularity lemmas in order to obtain stronger versions ‘for free’), but I suspect that such a proof would be only a curiosity (it would likely provide bounds comparable to the hypergraph regularity method, or even worse).

  4. theoremoftheweek Says:

    Thanks for the information and clarification!
