There are some results in mathematics that are clearly very deep or interesting or surprising or useful. There are others that seem rather innocuous at first, but that turn out to be just as deep or interesting or surprising or useful. It seems to me that the Steinitz Exchange Lemma is definitely in the latter camp.

In order to write about this result, I have to say something about what a vector space is. Informally, a vector space is a set of objects (*vectors*) that we can add and that we can multiply by *scalars*. And the addition behaves nicely (officially, the set of vectors forms a *group* under addition), and the multiplication behaves nicely, and the two operations interact nicely. In essence, everything behaves as we think it ought to. The scalars come from another set, which has to be a *field*: we can add, subtract, multiply and divide scalars (except that we can’t divide by 0). Let me give a couple of my favourite examples of vector spaces, which I hope will help.

One example is $\mathbb{R}^n$, which is a vector space over the field $\mathbb{R}$ of the reals. The vectors in this space are $n$-tuples of real numbers. For example, when $n = 2$ this is just the familiar space of vectors in the plane.

Another is $\mathbb{F}_p^n$, which is a vector space over the field $\mathbb{F}_p$ of integers modulo $p$ (here $p$ is a prime). Vectors in this space are $n$-tuples of integers modulo $p$.
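If you'd like to experiment with these finite examples, here is a small Python sketch of the case $p = 3$, $n = 2$ (the names `add` and `scale` are just my own illustrative choices, not standard notation):

```python
from itertools import product

p, n = 3, 2  # a small illustrative choice: pairs over the field of integers mod 3

# Every vector in F_p^n is an n-tuple of integers modulo p,
# so there are p**n of them in total.
vectors = list(product(range(p), repeat=n))
print(len(vectors))  # 9

# Addition of vectors is coordinate-wise, reduced modulo p.
def add(v, w):
    return tuple((a + b) % p for a, b in zip(v, w))

# Multiplication by a scalar (itself an integer modulo p).
def scale(c, v):
    return tuple((c * a) % p for a in v)

print(add((1, 2), (2, 2)))  # (0, 1)
print(scale(2, (1, 2)))     # (2, 1)
```

Notice that the scalars here are themselves integers modulo $p$, so scalar multiplication is also reduced modulo $p$: everything stays inside the finite world.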

In order to get to the main topic of this post, I need to introduce a couple more concepts. (This is all a good illustration of the flavour of university-level mathematics, where one has to get to grips with a bunch of concepts before being able to put them together in theorems!)

One is the idea of linear independence. Let’s start with the familiar plane $\mathbb{R}^2$, and let’s take two non-zero vectors $\mathbf{v}$ and $\mathbf{w}$ in the plane. (It’s conventional to typeset vectors in **bold**.) They might point in completely different directions. Or they might point in essentially the same direction: we might have $\mathbf{v} = \lambda \mathbf{w}$ for some scalar $\lambda$. In the former case we say that they’re linearly independent; in the latter that they’re linearly dependent.

If we take three vectors, then we could have something like $\mathbf{u} = (1,0)$, $\mathbf{v} = (0,1)$, $\mathbf{w} = (1,1)$. In this case, any two of these vectors point in completely different directions, but if we consider all three then they are related: $\mathbf{w} = \mathbf{u} + \mathbf{v}$. It isn’t quite that they point in the same direction, but there is a dependence between them: we can write $\mathbf{w}$ in terms of $\mathbf{u}$ and $\mathbf{v}$. So any two of them are linearly independent, but the three of them are linearly dependent.

Formally, we say that the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ are *linearly independent* if the only way to have $\lambda_1 \mathbf{v}_1 + \lambda_2 \mathbf{v}_2 + \dots + \lambda_n \mathbf{v}_n = \mathbf{0}$ is when $\lambda_1 = \lambda_2 = \dots = \lambda_n = 0$. Putting that another way, we can’t write one of them in terms of the others. (One can extend this definition to a set of infinitely many vectors, but let’s stick to finite collections in this post.)
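This definition can be checked mechanically: a standard reformulation is that finitely many vectors are linearly independent exactly when the matrix with those vectors as rows has rank equal to the number of vectors. Here is a rough pure-Python sketch of that test (the helper names `rank` and `independent` are my own; I use exact fractions to avoid rounding trouble):

```python
from fractions import Fraction

# Row-reduce a list of vectors over the rationals and count the
# pivots found: that count is the rank of the list.
def rank(vectors):
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0]) if rows else 0):
        # find a row at position >= r with a non-zero entry in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                factor = rows[i][col] / rows[r][col]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Linear independence: the rank equals the number of vectors, i.e.
# no vector is a linear combination of the others.
def independent(vectors):
    return rank(vectors) == len(vectors)

print(independent([(1, 0), (0, 1)]))          # True
print(independent([(1, 0), (0, 1), (1, 1)]))  # False: (1,1) is the sum of the others
print(independent([(1, 2), (2, 4)]))          # False: one is twice the other
```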

I said that I needed to introduce a couple more concepts; here’s the second. Let’s go back to our favourite example of the plane $\mathbb{R}^2$. There’s something really special about the vectors $(1,0)$ and $(0,1)$: we can write every vector in $\mathbb{R}^2$ as a *linear combination* of them. For example, $(2,3) = 2(1,0) + 3(0,1)$, and more generally $(x,y) = x(1,0) + y(0,1)$. We’re quite used to doing that, because $(1,0)$ and $(0,1)$ are familiar vectors.

Can we do the same sort of thing with $(1,0)$ and $(1,1)$? Well, $(2,3) = -(1,0) + 3(1,1)$, and more generally $(x,y) = (x-y)(1,0) + y(1,1)$. So yes we can! But we couldn’t do it if we just had $(1,1)$, because we can’t possibly write every vector in $\mathbb{R}^2$ as a multiple of $(1,1)$.

We say that the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$ *span* a vector space $V$ if every vector $\mathbf{v}$ in the space can be written as a linear combination $\mathbf{v} = \lambda_1 \mathbf{v}_1 + \lambda_2 \mathbf{v}_2 + \dots + \lambda_n \mathbf{v}_n$ for some scalars $\lambda_1, \lambda_2, \dots, \lambda_n$. (Again, one can extend this definition to infinite collections of vectors, and again I’m not going to worry about that here.)
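For pairs of vectors in the plane, spanning can be tested very concretely: two vectors $\mathbf{u}$ and $\mathbf{v}$ span $\mathbb{R}^2$ exactly when the determinant $u_1 v_2 - u_2 v_1$ is non-zero (i.e. when they are not multiples of each other), and in that case Cramer’s rule hands us the scalars expressing any target vector as a combination. A small Python sketch of this (the function name is just my own illustrative choice):

```python
# Express target = a*u + b*v for vectors u, v in the plane, if possible.
# The pair (u, v) spans R^2 exactly when the determinant below is non-zero;
# Cramer's rule then gives the scalars a and b.
def span_coefficients(u, v, target):
    det = u[0] * v[1] - u[1] * v[0]
    if det == 0:
        return None  # u and v point "the same way", so they don't span the plane
    a = (target[0] * v[1] - target[1] * v[0]) / det
    b = (u[0] * target[1] - u[1] * target[0]) / det
    return a, b

print(span_coefficients((1, 0), (0, 1), (2, 3)))  # (2.0, 3.0)
print(span_coefficients((1, 0), (1, 1), (2, 3)))  # (-1.0, 3.0)
print(span_coefficients((1, 1), (2, 2), (2, 3)))  # None
```

The second call recovers the kind of rewriting we did by hand above; the third shows that two parallel vectors leave most of the plane out of reach.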

Now that we know what the words mean, we can get down to business…

So the question is: given a vector space, how large/small can a collection of linearly independent vectors be, and how large/small can a spanning collection of vectors be? I strongly encourage you to think about this, perhaps with the help of some examples.