My first-year students are busily revising Probability at the moment (amongst other things). I thought I’d write about how some of the probabilistic ideas they’ve learned can be applied to a rather different area of maths (one that at first sight seems to have nothing to do with probability).
This area of maths is called Ramsey theory, named after Frank Ramsey. Van der Waerden’s theorem, the topic of a previous theorem of the week, is a result in this area. That theorem says (roughly) that if we colour numbers red or blue then we can always find a long string of evenly spaced numbers with the same colour (a monochromatic arithmetic progression). This week I’m going to write about something with a similar flavour, but where we colour graphs rather than numbers.
So, what is a graph? I don’t mean a graph like a bar graph or something. This is a different sort of graph! This sort of graph is made up of some points (called vertices), and some lines between some of those points (called edges). Here are some examples of graphs.
Note that we don’t allow there to be more than one edge between two vertices, and we don’t allow a vertex to be joined to itself (no loops). The graph on the right, which has all the possible edges, is called the complete graph on four vertices, written $K_4$. The complete graph on $n$ vertices is written $K_n$.
What’s the analogue of colouring numbers and looking for arithmetic progressions? We take the complete graph $K_n$ and colour each edge red or blue. That’s the colouring; what are the patterns? We could look for monochromatic triangles, perhaps. If we colour the edges of $K_5$ red or blue, must there be a red triangle or a blue triangle?
Here’s an example of a colouring of $K_5$ that has no red triangle and no blue triangle.
Is there a colouring of $K_6$ with no red triangle and no blue triangle? I encourage you to consider this for yourself. Either try to find such a colouring, or prove that it’s not possible…
Having thought about finding monochromatic triangles, where do we go next? A triangle is a complete graph on three vertices, $K_3$, so one natural generalisation would be to look for a monochromatic $K_s$ for larger $s$.
It turns out (and this isn’t obvious, although I think it’s plausible) that for any $s$, there is some $n$ so that however we colour the edges of $K_n$ using red and blue, there is always a monochromatic $K_s$ (there is always a red $K_s$ or a blue $K_s$). We call the smallest such $n$ the Ramsey number $R(s)$. The fact that this number $R(s)$ exists is called Ramsey’s theorem. I’m not going to prove it here, because I want to move on to something else, but it has a very nice proof and I might come back to it in a future week.
Being mathematicians, once we know that $R(s)$ exists we of course want to know how big it is. As you might have discovered already (if you thought about the example above), $R(3) = 6$. That is, we can colour $K_5$ using two colours in such a way that we don’t get a monochromatic triangle (we saw such a colouring above), but however we assign the colours red and blue to the edges of $K_6$, we get a red triangle or a blue triangle. That’s not too hard. However, computing Ramsey numbers gets very hard very quickly. The value of $R(4)$ is known to be 18, but that’s the last one that’s known exactly! There’s a table of what’s known about small Ramsey numbers on Wikipedia. (The Ramsey numbers we’ve discussed are down the diagonal of that table.)
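If you’d like to see this checked by computer, here’s a small brute-force sketch in Python (my own illustration, not part of the original argument; the encoding of colourings is just one convenient choice). It confirms that some 2-colouring of the edges of $K_5$ avoids a monochromatic triangle, while no 2-colouring of the edges of $K_6$ does.

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each edge (i, j) with i < j to 'R' or 'B'."""
    return any(
        colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def some_colouring_avoids_mono_triangle(n):
    """Try every red/blue colouring of the edges of K_n (fine for small n)."""
    edges = list(combinations(range(n), 2))
    return any(
        not has_mono_triangle(n, dict(zip(edges, colours)))
        for colours in product("RB", repeat=len(edges))
    )

print(some_colouring_avoids_mono_triangle(5))  # True: K_5 has a colouring with no monochromatic triangle
print(some_colouring_avoids_mono_triangle(6))  # False: every colouring of K_6 has one, so R(3) = 6
```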
OK, so finding Ramsey numbers exactly is hard. But as I’ve discussed on this blog before (for example here), that doesn’t mean the end of the story. It would still be interesting to have bounds (upper and lower) for these numbers. It turns out that the proof of Ramsey’s theorem gives some sort of upper bound, although not the best possible. But here I’d like to talk about a proof of a lower bound. It’s not the best known lower bound, because it’s quite old and has since been improved, but it’s not bad (compared with what’s now known) and the proof is significant because it illustrates a technique called the probabilistic method. This proof was first given by Erdős in 1947 in this paper, and I think it’s very elegant. It’s this week’s theorem.
Theorem (Erdős) The Ramsey number $R(s)$ is bigger than $2^{s/2}$ (for $s \geq 3$).
Let’s remind ourselves what we’re doing. We want a lower bound for the Ramsey number $R(s)$. That is, we want some $n$ (ideally as large as possible) so that we can somehow colour the edges of $K_n$ using red and blue in such a way that we get no red $K_s$ and no blue $K_s$ (just as above we saw an example of a colouring of $K_5$ with no red triangle and no blue triangle). If we can do that, then $R(s)$ must be bigger than $n$. We don’t actually need to see an example of such a colouring, we just need to know that one exists. (This might seem like a slightly odd idea the first time you come across it!) The method we’ll use will prove that such a colouring exists, but will give us no information about how to find it. It is non-constructive.
Here’s the idea (with details to follow afterwards). We’ll pick a random colouring of the edges of $K_n$. Then we’ll get an upper bound for the probability of a monochromatic $K_s$. We’ll show that if we choose $n$ carefully, we can make that probability less than 1. If the probability is less than 1, then there must be some colouring with no monochromatic $K_s$ (because if every possible colouring of $K_n$ had at least one monochromatic $K_s$, then the probability of a monochromatic $K_s$ would be 1). And that will do nicely.
How do we pick a random colouring? We colour each edge of $K_n$ red or blue, using each colour with probability $\frac{1}{2}$. We colour the edges independently.
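In code, that random colouring is just a fair coin flip for each edge. Here’s a tiny sketch in Python (my own illustration, not part of the proof) that colours the edges of $K_n$ independently at random and counts the monochromatic copies of $K_s$ that appear:

```python
import random
from itertools import combinations

def random_colouring(n):
    """Colour each edge of K_n red or blue independently, each colour with probability 1/2."""
    return {edge: random.choice("RB") for edge in combinations(range(n), 2)}

def count_monochromatic(n, s, colouring):
    """Count the copies of K_s all of whose edges received the same colour."""
    count = 0
    for vertices in combinations(range(n), s):
        edge_colours = {colouring[edge] for edge in combinations(vertices, 2)}
        if len(edge_colours) == 1:
            count += 1
    return count

colouring = random_colouring(10)
print(count_monochromatic(10, 4, colouring))  # monochromatic K_4s in one random colouring of K_10
```

Running it a few times gives a feel for how often monochromatic copies turn up; the argument below bounds the probability of seeing any at all.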
What’s the probability that we get a monochromatic $K_s$? It’s certainly at most the sum of the probabilities that each $K_s$ is monochromatic (using quite a crude bound). We can find this by counting the total number of $K_s$s in $K_n$ and multiplying by the probability that an individual $K_s$ is monochromatic.
How many $K_s$s are there? If we pick any collection of $s$ vertices from our $K_n$, there’s exactly one $K_s$ with those vertices. So the number of $K_s$s is the number of ways to choose $s$ vertices from $n$, which is the binomial coefficient $\binom{n}{s}$.
What’s the probability that a single $K_s$ is monochromatic? The probability that a single edge is red is $\frac{1}{2}$. The graph $K_s$ has $\binom{s}{2}$ edges, and since the probabilities for different edges are independent we can multiply to see that the probability that $K_s$ is red is $\left(\frac{1}{2}\right)^{\binom{s}{2}} = 2^{-\binom{s}{2}}$. Also, the probability that $K_s$ is blue is $2^{-\binom{s}{2}}$, so the probability that $K_s$ is monochromatic is the sum of those, which is $2^{1-\binom{s}{2}}$.
So the probability of a monochromatic $K_s$ is at most $\binom{n}{s} 2^{1-\binom{s}{2}}$, and we want to choose $n$ as large as possible so that this is less than 1. We’ve basically done all the thinking now; the rest is down to careful estimation! I’m not going to be very careful here, so this isn’t the best possible bound that one can extract from this argument.
If we take $n = \lfloor 2^{s/2} \rfloor$ (where the funny brackets mean we round down to the nearest whole number: we take the integer part), what do we get? We use a crude bound for the binomial coefficient, $\binom{n}{s} \leq \frac{n^s}{s!}$; I’ll let you convince yourself that it’s true. Since $n \leq 2^{s/2}$, the probability of a monochromatic $K_s$ is at most
$$\binom{n}{s} 2^{1-\binom{s}{2}} \leq \frac{n^s}{s!} \, 2^{1-\binom{s}{2}} \leq \frac{2^{s^2/2}}{s!} \, 2^{1-\frac{s(s-1)}{2}} = \frac{2^{1+s/2}}{s!},$$
which is less than 1 when $s \geq 3$, so the probability is strictly less than 1, as we wanted. (You could look at David Conlon’s notes from his second lecture (see below) if you want to see a more careful estimate.)
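If you want to see that careless estimate in numbers, here’s one more short check (again just a sketch of mine): for each $s$ it takes $n = \lfloor 2^{s/2} \rfloor$ and evaluates both the exact bound $\binom{n}{s}2^{1-\binom{s}{2}}$ and the cruder estimate $2^{1+s/2}/s!$, confirming that both stay below 1 for each $s$ it tries.

```python
from math import comb, factorial, floor

for s in range(3, 13):
    n = floor(2 ** (s / 2))                     # the choice of n in the theorem
    exact = comb(n, s) * 2 ** (1 - comb(s, 2))  # binom(n, s) * 2^(1 - binom(s, 2))
    crude = 2 ** (1 + s / 2) / factorial(s)     # the careless upper estimate
    print(s, n, exact, crude)                   # both stay below 1 for every s listed
```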
If you look at Erdős’s paper, you’ll see that he phrases this argument using counting rather than probability. The reason that it tends to be phrased probabilistically nowadays is that this language allows one to use lots of other ideas from probability (such as expectation and variance and so on). This argument is a relatively simple application of the probabilistic method, but (I hope you’ll agree) no less interesting or beautiful for that.
Further reading
There’s more information about Ramsey numbers and Ramsey theory all over the place. For example, David Conlon is giving a lecture course on Graph Ramsey Theory in Cambridge this term, with notes online. The standard reference for the probabilistic method is The Probabilistic Method by Alon and Spencer, which discusses the above proof and much much more! (The proof above is really the simplest sort of application of probability to problems like this — there are more sophisticated approaches too.)