What Is the Langlands Program?

The Langlands program provides a beautifully intricate set of connections between various areas of mathematics, pointing the way toward novel solutions for old problems.

Video: Rutgers University mathematician Alex Kontorovich takes us on a journey through the continents of mathematics to learn about the awe-inspiring symmetries at the heart of the Langlands program.

Not long ago, I was asked to explain the so-called Langlands program in a single tweet. Impossible, I immediately thought. It’s one of the biggest, most sweeping projects in mathematics, capable of connecting distant realms of research and, naturally, fiendishly difficult to describe.

But then I remembered the story of a student asking the great Talmudic sage Hillel to explain the whole Bible while standing on one foot. The reply: “Do not do to your neighbor what is hateful to you; the rest is generalization.” Of course, you can find much more wisdom in the Bible than that, and you can spend a lifetime studying said generalizations. But to Hillel, that was the kernel that started it all. Was there an analogue for Langlands? I’m no Hillel, but here is the best I can do.

Consider this sequence of functions:

$F_1(x) = x$

$F_2(x) = x - \frac{x^3}{3!}$

$F_3(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!}$

$F_4(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}$

…

In case you don’t already recognize the denominators, they’re the odd factorials. A factorial is the product of all positive integers less than or equal to a given number and is represented by an exclamation point. So, for example, 3! = 1 × 2 × 3 = 6 and 5! = 1 × 2 × 3 × 4 × 5 = 120.

Hopefully now the pattern is clear: To get the next polynomial in the sequence, simply add or subtract (in an alternating fashion) the next odd power of x divided by that power’s factorial. Notice that, as with any polynomial, as x goes to positive or negative infinity — farther to the right or the left, respectively — the function either blows up to infinity or plunges to negative infinity. But despite this, in some region around the origin, the function’s behavior begins to stabilize. It soon becomes a regularly wiggling curve, seemingly bounded between −1 and 1.
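If you want to experiment with these polynomials yourself, here is a minimal Python sketch of the construction (the helper name partial_sum is just illustrative, not anything standard). It builds the nth polynomial as a running sum of odd powers with alternating signs, then shows that near the origin the values quickly settle down, while far from the origin every finite polynomial still blows up.

```python
from math import factorial

def partial_sum(n_terms, x):
    """Evaluate x - x**3/3! + x**5/5! - x**7/7! + ..., truncated after n_terms terms."""
    total = 0.0
    for k in range(n_terms):
        power = 2 * k + 1                 # the odd powers 1, 3, 5, 7, ...
        sign = (-1) ** k                  # alternating plus and minus signs
        total += sign * x ** power / factorial(power)
    return total

# Near the origin the values quickly settle between -1 and 1 ...
for n in (1, 2, 3, 10):
    print(n, partial_sum(n, 1.5))

# ... but far from the origin each of these finite polynomials still blows up.
print(partial_sum(3, 20.0))
```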

When we take this sequence of functions to its logical conclusion, ignoring all kinds of important questions about whether this can in fact be done (yes, it can), we get the infinite series

$F_{\infty}(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$

It turns out that this is another way of writing the simple sine function from trigonometry. And the sine function can also be understood as the height of a dot, pasted to the edge of a spinning circle, undulating up and down over time. Critically, if you rotate the circle by 2π radians (a full rotation), that circle will start the same wiggles all over again. That means that the sine function and our infinite series above have a special symmetry: If you change the input by 2π, the function repeats. That is,

$F_{\infty}(x+2\pi)=F_{\infty}(x)$, for all values of $x$.

If this doesn’t seem like a spectacular miracle to you, you’re not looking hard enough. The coefficients of all those polynomials involved nothing but odd factorials and alternating signs. Who invited 2π to the party? None of the first polynomials we saw have this translation symmetry — it only appears at infinity. This unexpected appearance of symmetry in the limit, we shall see, is the key insight underpinning the Langlands program.
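A quick numerical check makes the point (again just a sketch in Python, with an illustrative helper): compare each partial sum at x and at x + 2π. With only a few terms the two values disagree badly; with enough terms they become nearly identical, which is the symmetry emerging only in the limit.

```python
from math import factorial, pi

def partial_sum(n_terms, x):
    # Partial sums of x - x**3/3! + x**5/5! - ..., as in the sketch above.
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(n_terms))

x = 1.0
for n in (2, 5, 20):
    # Few terms: the values at x and x + 2*pi differ wildly.
    # Many terms: they agree to many decimal places.
    print(n, partial_sum(n, x), partial_sum(n, x + 2 * pi))
```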

The sine function is a basic example of what we mathematicians more generally call an automorphic function: When we change (morph) a variable by some process (in this case, sliding over by 2π), the function turns back into itself (hence “auto” morphic).

Today we know many techniques that can reveal this automorphy for this infinite series. For example, instead of starting with all those polynomials, we could have begun with the sine function itself. Then its invariance under translation is tautological, following from basic definitions, and we’d just have to connect the sine function to that sequence of polynomials. The latter is a general process known as a Taylor series expansion, which, in the case of the sine function, gives the polynomials discussed above. (It’s also possible to show this automorphy even without any reference to the sine function by using derivatives, a way of measuring how much a function changes locally.)
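Written out in symbols, the Taylor series of the sine function around $x = 0$ is

$\sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)!}x^{2n+1} = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \cdots$,

and its partial sums are exactly the polynomials $F_1, F_2, F_3, \ldots$ we started with.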

So what is the Langlands program? It predicts “extra” non-apparent symmetries (that is, automorphy) of objects defined by certain (infinite) sequences. That’s the best I can do, standing on one foot!

Now, as discussed in the video accompanying this text, mathematicians are not merely interested in proving these symmetries for their own sake — though surely this would already suffice, since most mathematicians consider them beautiful and important. These symmetries have incredible consequences, as well as applications to other math problems, such as the full resolution of Fermat’s Last Theorem.

Here’s a glimpse of how these symmetries can help solve another set of problems known as the Ramanujan conjectures, which in their most general form remain unsolved today.

The Ramanujan conjectures say something very roughly like the following. If you have an automorphic function given by some sequence of coefficients, like so:

$G(x)=a_0+a_1x+a_2x^2+a_3x^3+\cdots$

then all the coefficients — all those a’s — are bounded by 1, meaning that their values are all between −1 and 1.

Again, though, we can’t prove that. The best we can do is bound those coefficients by 10, which is a considerably weaker — and seemingly almost useless — piece of information.

But here’s where Langlands comes in. If a conjectured part of the program, called functoriality, is true (as mathematicians suspect), then we could fully prove the Ramanujan conjectures. Functoriality claims that we can make new automorphic functions out of G(x), simply by raising all the coefficients to any fixed integer power. (In reality, the process is much more involved, but let’s keep going to get the idea.) So, given that G(x) is automorphic, functoriality conjectures that the function

$G_2(x)=a_0^2+a_1^2x+a_2^2x^2+a_3^2x^3+\cdots$

should also be automorphic. Because of that seemingly useless result that we could prove any automorphic function’s coefficients to be bounded by 10, we can now show that the coefficients of $G_2$ — which are the squares of the coefficients of G — are also bounded by 10. And if the squares of G’s coefficients are bounded by 10, then the coefficients themselves are bounded by the square root of 10, which is about 3.16. Thanks to the links provided by Langlands, we’ve drastically improved our knowledge of the bound!
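In symbols: since $a_n^2 \le 10$ for every coefficient of $G_2$, taking square roots gives $|a_n| \le \sqrt{10} \approx 3.16$ for every coefficient of $G$.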

But functoriality doesn’t stop there. It also predicts that the function whose coefficients are the cubes of the coefficients of G is also automorphic:

$G_3(x)=a_0^3+a_1^3x+a_2^3x^2+a_3^3x^3+\cdots$

If true, then the coefficients of G are actually bounded by the cube root of 10 (about 2.15), and not just its square root. And so on for all such “functorial lifts”:

$G_k(x)=a_0^k+a_1^kx+a_2^kx^2+a_3^kx^3+\cdots$

Now do you see how the Ramanujan conjectures would follow? The kth root of 10 for a huge k gets closer and closer to 1. So if you know that all of these functorial lifts are indeed automorphic, as Langlands predicts, you’ve just solved Ramanujan. What a clever trick!
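To get a feel for how quickly those bounds close in on 1, here is a tiny, purely illustrative Python check:

```python
# Bound on the coefficients after the k-th functorial lift: the k-th root of 10.
for k in (1, 2, 3, 10, 100, 1000):
    print(k, 10 ** (1 / k))
# Prints roughly 10.0, 3.162, 2.154, 1.259, 1.023, 1.002 — creeping toward 1.
```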

Our discussion here is just the tip of the massive iceberg that is the Langlands program. I’ve omitted L-functions, motives, trace formulas, Galois representations, class field theory and all kinds of amazing mathematics that’s been built around the program over the last half century. If you’re interested in these things, I encourage you to study them further — just as Hillel hoped his answer would also inspire the questioner to continue their studies.
