Sorry for the delay in posting this: I’ve been working on it for a few days, but it took a little while to get together. I hope you find it interesting!

I’d like to send you on some mystery tours. In each case, I’d like you to think about where you end up (relative to where you started). You’ll need a very accurate compass and plenty of space, or just a good imagination! Or perhaps you could send a turtle for a walk, if you have some form of Logo on your computer (alas, I do not). I’ve drawn some pictures for this post; I used GeoGebra (which I used last time too). It was easy to use for this, but I couldn’t find a turtle.

Ready?

In each case, I’m going to give you some bearings, and I’d like you to take one step on the first bearing, one step on the second, and so on. The bearings are all in degrees.

- Bearings: 0, 0, 0, 0, 0, 0, 0.
- Bearings: 360/7, 360/7, 360/7, 360/7, 360/7, 360/7, 360/7.
- Bearings: 0, 360/7, 2 × 360/7, 3 × 360/7, 4 × 360/7, 5 × 360/7, 6 × 360/7.
- Bearings: 0, 360/7, 4 × 360/7, 9 × 360/7, 16 × 360/7, 25 × 360/7, 36 × 360/7.

Done that? I encourage you to try the same sort of thing with some other routes. For example, you could try replacing 7 by other numbers.

What can we say about where we end up, following each of these four routes? Can we predict this in advance? Can we predict what would happen for other numbers?
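If you don’t have a turtle to hand, the tours are easy to simulate. Here’s a sketch in Python (the function name `tour_endpoint` is mine, and I’m measuring angles anticlockwise from the positive horizontal axis, as in the pictures below):

```python
import cmath

def tour_endpoint(bearings):
    """Take one unit step on each bearing (in degrees, measured
    anticlockwise from the positive horizontal axis) and return
    the finishing point as a complex number."""
    return sum(cmath.exp(2j * cmath.pi * b / 360) for b in bearings)

n = 7
print(tour_endpoint([0] * n))                              # route 1
print(tour_endpoint([360 / n] * n))                        # route 2
print(tour_endpoint([k * 360 / n for k in range(n)]))      # route 3
print(tour_endpoint([k * k * 360 / n for k in range(n)]))  # route 4
```

Replacing 7 by other values of `n` takes one edit.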

Here are some pictures. Hopefully they look like yours! (The angles are measured anticlockwise from the horizontal positive axis. It now occurs to me that this is not the standard direction if you’re using a compass, but it matches standard mathematical practice better. So I hope you won’t be too confused and will forgive me.)

- Bearings: 0, 0, 0, 0, 0, 0, 0.
- Bearings: 360/7, 360/7, 360/7, 360/7, 360/7, 360/7, 360/7.
- Bearings: 0, 360/7, 2 × 360/7, 3 × 360/7, 4 × 360/7, 5 × 360/7, 6 × 360/7.
- Bearings: 0, 360/7, 4 × 360/7, 9 × 360/7, 16 × 360/7, 25 × 360/7, 36 × 360/7.

I hope we can see fairly easily that in examples 1 and 2, with a constant bearing, we’re going to get straight lines. So let’s not dwell on that. What happens with the other two? Here are the corresponding things with 7 replaced by 13.

Bearings: 0, 360/13, 2 × 360/13, 3 × 360/13, 4 × 360/13, 5 × 360/13, 6 × 360/13, 7 × 360/13, 8 × 360/13, 9 × 360/13, 10 × 360/13, 11 × 360/13, 12 × 360/13.

Bearings: 0, 360/13, 4 × 360/13, 9 × 360/13, 16 × 360/13, 25 × 360/13, 36 × 360/13, 49 × 360/13, 64 × 360/13, 81 × 360/13, 100 × 360/13, 121 × 360/13, 144 × 360/13.

Notice anything interesting?

I’d like to draw your attention to a few points.

- In example 3, we have a closed regular polygon, starting and finishing at the origin (so we end up exactly where we started).
- In example 4, we take quite an erratic-looking path, but finish on an axis, and at distance $\sqrt{7}$ (or $\sqrt{13}$) from the origin. (I agree it’s not obvious from my pictures above that these are the distances, so try it for yourself if you don’t believe me!)

(There are other interesting things to notice and explain too, such as some symmetry, but I’m not going to talk about them here.)

In order to explore this further, I want to introduce some notation. I’ll try to keep it as non-technical as possible, but I’ll use the standard notation for the benefit of people who’ve come across it before.

I’m going to write $e(\theta)$ for a single step with `bearing’ (in the above sense) $360\theta$ degrees. (As is fairly standard, at least in certain circles, I’m writing $e(\theta)$ for $e^{2\pi i \theta}$.)

We take a sequence of steps by adding such things. So example 3 corresponds to $e(0/7) + e(1/7) + e(2/7) + e(3/7) + e(4/7) + e(5/7) + e(6/7)$. This is not enormously convenient notation (I don’t really want to type the version with 13 instead of 7!), so let me introduce the sigma sign (if you haven’t met it before). I’d record the above sum as $\sum_{n=0}^{6} e(n/7)$. The symbol $\sum$ means that we take the sum of the following term as $n$ ranges from 0 to 6, and this is precisely $e(0/7) + e(1/7) + \dots + e(6/7)$. In this notation, we’d write example 4 as $\sum_{n=0}^{6} e(n^2/7)$. OK?

It turns out not to be too difficult to see that for any integer $N > 1$, we have $\sum_{n=0}^{N-1} e(n/N) = 0$. That is, we always end up back where we started in this case (as we predicted based on our experience with $N = 7$ and $N = 13$). I think I’ll leave this as an exercise for you.
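If you’d like some numerical reassurance before attempting the exercise, here’s a quick check in Python (my own sketch; nothing standard about the names):

```python
import cmath

def e(theta):
    """A single unit step: e(theta) = exp(2*pi*i*theta)."""
    return cmath.exp(2j * cmath.pi * theta)

# The sum of e(n/N) over n = 0, ..., N-1 should vanish for each N > 1.
for N in range(2, 30):
    total = sum(e(n / N) for n in range(N))
    assert abs(total) < 1e-9, (N, total)
print("all sums vanish, as predicted")
```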

What about the squares? I want to concentrate on primes here, so I want to look at things like $\sum_{n=0}^{p-1} e(n^2/p)$, where $p$ is an odd prime. These are called Gauss sums, and that brings me to this week’s theorem.

**Theorem** Let $p$ be an odd prime. Then $\sum_{n=0}^{p-1} e(n^2/p) = \varepsilon_p \sqrt{p}$, where $\varepsilon_p$ is $1$ if $p \equiv 1 \pmod{4}$, and $i$ if $p \equiv 3 \pmod{4}$.

What does this mean? It means that the size of the sum is always $\sqrt{p}$. That is, the distance of the finishing point from the starting point is $\sqrt{p}$, as we suspected. And it tells us that if $p$ leaves remainder 1 when divided by 4, then the finishing point will be on the positive horizontal axis (as I’ve drawn it), whereas if $p$ leaves remainder 3 when divided by 4, then the finishing point will be on the positive vertical axis. You might like to use the pictures above to check this for $p = 7$ and $p = 13$.
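Here’s a quick numerical check of the theorem in Python (again a sketch; `gauss_sum` is my name for it):

```python
import cmath

def gauss_sum(p):
    """The Gauss sum: the sum of e(n^2/p) for n = 0, ..., p-1."""
    return sum(cmath.exp(2j * cmath.pi * n * n / p) for n in range(p))

# The theorem predicts sqrt(p) if p = 1 (mod 4), and i*sqrt(p) if p = 3 (mod 4).
for p in (3, 5, 7, 11, 13, 17, 19, 23):
    predicted = p ** 0.5 if p % 4 == 1 else 1j * p ** 0.5
    assert abs(gauss_sum(p) - predicted) < 1e-9, p
print("theorem checks out for these primes")
```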

I think I’m going to skip the proof here, because this post is probably long enough already. Showing that the size of the sum is $\sqrt{p}$ isn’t too difficult, when one knows a bit about manipulating these exponentials $e(\theta)$. It takes a bit more work to establish the direction. There are probably proofs of this all over the place. Off the top of my head, I happen to know that Davenport’s *Multiplicative Number Theory* and Iwaniec and Kowalski’s *Analytic Number Theory* both have sections on Gauss sums.

What’s the point of this? It tells us something about the distribution of the quadratic residues modulo a prime. We’ve seen three quite different types of behaviour with these mystery tours. In one, we ended up as far away from the starting point as possible, when all the bearings were the same. They were concentrated in one direction, rather than being evenly spread round the circle (compass). In another, we had the opposite extreme: the bearings were so uniformly distributed round the circle that we ended up where we started. Those are both quite artificial situations: if I picked seven bearings at random, I wouldn’t expect to get either of those types of behaviour. I’d expect something in between, and this is the sort of behaviour that we see with the squares. So this is telling us (being rather vague) that the quadratic residues are somehow quite randomly distributed. They don’t all bunch up and conspire to line up in some direction. It’s notoriously difficult to say precise things about the distribution of the quadratic residues (Tao has a nice post about some ideas about this), but this evaluation of the Gauss sum gives some information that turns out to be useful. I think it’s pretty amazing that we can exactly compute the value of this sum: it takes a wiggly route along the way, but we can say exactly where it will end up.

If only I had a turtle…

August 8, 2010 at 9:07 pm

I don’t have Logo, but I do have Ghostscript, the GNU PostScript interpreter. I reproduced the drunken stagger you show for n=13. Given the square root and your comments on randomness, I was fairly sure what I would see when I cranked n up to 557: a drunkard’s random walk.

So my little bit of PostScript

```postscript
2 setmiterlimit
/unit 20 def                      % about half an inch
72 400 moveto
/n 557 def
1 1 1 n mul
{ dup mul                         % square it
  dup                             % use it twice, for both x and y
  360 n div mul cos unit mul      % that's x
  exch                            % start again for y
  360 n div mul sin unit mul      % that's y
  rlineto
} for
stroke
showpage
```

gave me quite a surprise; see http://www.cawtech.demon.co.uk/photos/for.png

The path spirals in, jiggles in place, then spirals back out and repeats, thanks to the symmetry. I’m verifying that the sum comes to the square root of n with a very easy bit of Common Lisp:

```lisp
(defun gauss-sum (n)
  (loop for r below n
        sum (cis (/ (* 2 pi r r) n))))
```

but I’m not about to try proving anything. I thought that summing e to the index squared was the warning sign that theta functions are lurking nearby. Scary!

August 8, 2010 at 9:29 pm

Nice picture, thanks Alan! There are some similar nice pictures in a book by Hugh Montgomery (http://www.ams.org/bookstore-getitem/item=CBMS-84). My picture for p=13 shows signs of that behaviour, but only when you know what you’re looking for. (On the other hand, I put in each line segment by hand, which would have taken a long time for p=557! So I’m grateful to you.)

The crucial thing about these sums is that the exponent involves $n^2/p$, and we’re summing over all values of $n$ (mod $p$). So they’re finite sums (not scary theta functions!).

August 20, 2010 at 6:10 pm

Just a quick note on the argument/direction of the Gauss sums: it is not too hard to see that the sums must land on an axis. Indeed, writing $g = \sum_{n=0}^{p-1} e(n^2/p)$, consider the complex conjugate $\overline{g} = \sum_{n=0}^{p-1} e(-n^2/p)$. If $-1$ is a square mod $p$, then one has that $\overline{g} = g$, so $g$ is real, whereas if $-1$ is not a square mod $p$ then one can argue slightly differently, making use of the linear exponential sums you’ve discussed above, to show that $\overline{g} = -g$, so $g$ is purely imaginary.
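For instance, writing $g = \sum_{n=0}^{p-1} e(n^2/p)$, one way to sketch the case where $-1$ is not a square mod $p$: as $n$ runs mod $p$, the values $n^2$ hit $0$ once and each quadratic residue twice, while $-n^2$ hits $0$ once and each non-residue twice, so

```latex
\begin{align*}
g + \overline{g}
  &= \sum_{n=0}^{p-1} e\left(\tfrac{n^2}{p}\right)
   + \sum_{n=0}^{p-1} e\left(\tfrac{-n^2}{p}\right)\\
  &= \Bigl(1 + 2\sum_{r \,\text{square}} e\left(\tfrac{r}{p}\right)\Bigr)
   + \Bigl(1 + 2\sum_{s \,\text{non-square}} e\left(\tfrac{s}{p}\right)\Bigr)\\
  &= 2 + 2\sum_{m=1}^{p-1} e\left(\tfrac{m}{p}\right)
   = 2 + 2(-1) = 0.
\end{align*}
```

The last step is exactly the vanishing linear sum $\sum_{m=0}^{p-1} e(m/p) = 0$ from the post, and $\overline{g} = -g$ means $g$ lies on the vertical axis.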

Unfortunately, the only demonstrations of which side of the origin the Gauss sums land on that I know of are much harder than this!

September 3, 2010 at 4:34 pm

[…] Neale writes about theorems. Here she discusses Theorem 33: the size of Gauss sums posted at Theorem of the week. Robin Whitty (who does the same thing, but 7 times as often) […]

January 28, 2013 at 1:31 pm

[…] directly related to the Analysis I course, but hopefully might be interesting: a blog post about the size of Gauss sums, which links closely with this business of sums with and without cancellation (and which has some […]