Drawn to computer science at the suggestion of his parents—who thought the field might provide a more practical outlet for his love of mathematics—Avi Wigderson, the 2023 ACM A.M. Turing Award recipient, has made lasting contributions to the theory of computational complexity. Wigderson’s voracious intellectual curiosity led him to explore topics ranging from cryptography and optimization to randomness, pseudorandomness, and circuit complexity. His insights have reshaped fields and underpinned applications in privacy, network security, and beyond. He is also a thoughtful leader and a generous mentor, and has worked to make the field more accessible to nonexperts with his recent book, Mathematics and Computation.
Let’s start with your work on zero-knowledge proofs, which reached the surprising conclusion that every statement that has a proof can be proved to someone in a way that reveals no information beyond what they already know.
The word “proof” is loaded. There are many kinds of proofs and proof systems. There are classical proofs that mathematicians use, where you write a paper and lay out all the proof steps. There are interactive proofs, which were introduced a year before we began our work, and which allow for interaction between the prover and the verifier.
The prover in this scenario is anyone who has a proof of some mathematical statement like, “Fermat’s last theorem is true.” And the verifier is somebody who cannot prove it by themselves.
In normal proofs, we need information. In fact, the whole point of a proof is that we can learn from it. But in a cryptographic context, you want to be able to prove a statement like, “My public key is the product of two primes” without revealing what those primes are. That’s why Shafi Goldwasser, Silvio Micali, and Charles Rackoff introduced the model of interactive proofs, along with the idea—which looks completely radical and ridiculous—that you can convince somebody of something without telling them anything they don’t know.
And what you proved the following year with Oded Goldreich and Silvio Micali is that you can do that for anything you can prove.
Indeed! If you have a mathematical proof for something, you can turn it into a type of interactive proof in which you cannot fool the verifier except with a very tiny probability. And if your statement is correct—if you are proving something true—the verifier will learn nothing from this interaction.
In other words, the verifier could have simulated the interaction and arrived at the same result.
It’s a very paradoxical notion, and it has to be carefully defined. What the verifier gets from the interaction is a transcript of this conversation. The interaction is randomized, so the transcript is really a distribution over conversations. And yes, that same distribution could have been generated by the verifier alone.
When we say, “the same,” we don’t mean in the information-theoretic sense; we mean in the cryptographic sense, where all participants are computationally limited. In this world, there’s no way to distinguish between the actual interaction with the prover and the transcript that the verifier generated. This computational indistinguishability is a key notion. But surely you will agree that if the verifier could generate something that looks exactly the same to everybody, they didn’t learn anything.
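To make the simulation idea concrete, here is a minimal Python sketch of the classic interactive zero-knowledge protocol for graph isomorphism, one of the examples studied in this line of work. The encoding of graphs as edge sets and names like `apply_perm` are illustrative choices, not anything from the original papers; the point is only to show the prover, the verifier’s coin flip, and the simulator side by side.

```python
import random

def apply_perm(edges, perm):
    """Relabel each vertex u as perm[u]; a graph is a set of sorted edge pairs."""
    return {tuple(sorted((perm[u], perm[v]))) for (u, v) in edges}

def random_perm(n):
    p = list(range(n))
    random.shuffle(p)
    return tuple(p)

def compose(p, q):
    """(p . q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def zk_round(G, H, pi, n):
    """One round of the protocol. The prover knows pi with H = pi(G).
    Returns the transcript and whether the verifier's check passes
    (an honest prover always passes)."""
    sigma = random_perm(n)           # prover's fresh random relabeling
    F = apply_perm(H, sigma)         # prover sends a shuffled copy of H
    b = random.randint(0, 1)         # verifier's random challenge
    if b == 0:
        reveal = sigma               # open F as a copy of H
        ok = apply_perm(H, reveal) == F
    else:
        reveal = compose(sigma, pi)  # open F as a copy of G
        ok = apply_perm(G, reveal) == F
    return (F, b, reveal), ok

def simulate_round(G, H, n):
    """Simulator: knows no isomorphism between G and H, yet produces
    transcripts with exactly the same distribution (against a verifier
    who flips fair coins) by choosing the challenge first."""
    b = random.randint(0, 1)
    rho = random_perm(n)
    F = apply_perm(H if b == 0 else G, rho)
    return (F, b, rho)
```

A prover who does not know an isomorphism fails each round with probability 1/2, so repeating k times drives the cheating probability down to 2^-k; meanwhile the simulator’s output distribution matches the real transcripts exactly, which is the formal sense in which the verifier learns nothing.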
It would be an understatement to assert that this turned out to have powerful applications in authentication, network security, and beyond—but you’ve said before that your work was motivated by intellectual curiosity.
There was no practical motivation, or almost none, because the Internet didn’t exist, much less electronic commerce. But I know from years of experience that even when things look remote from practice—since we are studying computation, which is so fundamental to math and science—it’s very likely they will be useful. Take quantum computing; it wasn’t until Shor’s algorithm (designed for a fictional machine!) was published that governments and companies started investing billions in building quantum computers.
It’s certainly hard to quantify the impact that your own work has had on those fields. Let’s move on to the topic of randomness, which has been a big focus throughout the years. One of your most widely hailed results in that field shows that under standard computational assumptions, every efficient randomized algorithm can be fully derandomized.
In the early 1970s, two different parts of the field were shaping up. One is the theory of NP-completeness—the idea that there are these universal problems that capture, essentially, all the interesting problems we cannot solve and don’t know why. But they are linked to each other, so if one is hard, the others are hard.
At the same time, there were many problems that were not NP-complete—like primality testing and volume computation—for which people couldn’t find fast deterministic algorithms. However, they could find fast probabilistic ones. But randomness is a resource. You may not think about it that way, but if you are implementing a probabilistic algorithm, you have to get your random coins from somewhere. So, the question was, can we remove the randomness without a penalty in efficiency?
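Primality testing is the canonical illustration. Below is a standard Miller–Rabin sketch in Python (an editorial example, not something from the interview); it makes visible exactly where the random coins enter and why “removing the randomness” is a meaningful question.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: each round consumes random bits (the witness a),
    and a composite n survives a round with probability < 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # the random coins
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnessed compositeness
    return True                          # probably prime
```

Each composite survives a round with probability below 1/4, so twenty rounds make an error astronomically unlikely. Decades later, the deterministic AKS algorithm removed the randomness from primality testing entirely; for many other problems, no such deterministic algorithm is known.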
Over the course of the next 25 years, you showed that randomness is not necessary for efficient computation.
There were several jumps, and even this year, new results have come out that sharpen our understanding. First, in some cases, an efficient deterministic algorithm was found for a problem that previously had only a fast probabilistic one. More generally, various results in the early 1980s showed that in very specific contexts—under specific assumptions about cryptographic hardness, like “factoring is difficult”—you could remove randomness efficiently from algorithms.
Then we asked, “Can we do that without assumptions, or with weaker assumptions?” And what we slowly started understanding is that if practically any problem is hard, then you can remove randomness. This came about mainly through my work with Noam Nisan, where we invented a method that can take any hard function and convert it into a pseudorandom generator. So, all you need to believe in order to remove randomness is not that factoring is hard, but that NP-complete problems are hard.
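As a heavily simplified sketch of the Nisan–Wigderson idea: evaluate a supposedly hard predicate on many nearly disjoint windows of a short seed, so that a short truly random seed stretches into a long pseudorandom string. Everything below is a toy under loudly stated assumptions: the greedy design construction and parameter choices are illustrative, and `hard_predicate` is a placeholder (parity, which is emphatically not hard, is used only so the code runs).

```python
from itertools import combinations
import random

def greedy_design(n, m, overlap):
    """Toy combinatorial design: collect m-element subsets of range(n)
    whose pairwise intersections have size at most `overlap`."""
    sets = []
    for cand in combinations(range(n), m):
        c = set(cand)
        if all(len(c & s) <= overlap for s in sets):
            sets.append(c)
    return [sorted(s) for s in sets]

def hard_predicate(bits):
    """Placeholder for a genuinely hard Boolean function. Parity is
    NOT hard; it stands in only to make the sketch runnable."""
    return sum(bits) % 2

def nw_generator(seed, sets):
    """One output bit per set: the predicate applied to the seed
    restricted to that window."""
    return [hard_predicate([seed[i] for i in s]) for s in sets]

if __name__ == "__main__":
    sets = greedy_design(n=16, m=6, overlap=2)
    seed = [random.randint(0, 1) for _ in range(16)]
    out = nw_generator(seed, sets)
    print(f"{len(seed)}-bit seed -> {len(out)} output bits")
```

The small pairwise overlaps are the crux: any efficient algorithm that distinguishes the generator’s output from truly random bits could be converted into a small circuit computing the hard predicate, contradicting its assumed hardness.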
Of course, this pseudorandom generator itself connects two streams of thought—about hardness and randomness, and the fact that hardness implies pseudorandomness.
It’s fascinating that there’s such a connection between two things that don’t seem related at all. And it works in the other direction as well: if you can remove randomness from any algorithm, then you have demonstrated hardness of a sort.
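For the record, the sharpest classical statement of this tradeoff is the 1997 Impagliazzo–Wigderson theorem, which can be written as:

```latex
% If some function computable in exponential time requires
% exponential-size circuits, randomness adds no polynomial-time power.
\[
  \exists f \in \mathrm{E} = \mathrm{DTIME}\!\left(2^{O(n)}\right)
  \ \text{requiring circuits of size}\ 2^{\Omega(n)}
  \;\Longrightarrow\; \mathrm{P} = \mathrm{BPP}.
\]
```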
More recently, you’ve been working on non-commutative optimization, which extends the tools of convex optimization in Euclidean space to a more general setting of Riemannian manifolds.
This is a project I’ve been very passionate about for 10 years now. My motivation was derandomization and lower bounds, but it’s grown in connections and applications.
There’s a particular problem that we don’t know how to derandomize. I started working on a non-commutative analog, which turned out to be connected to a very interesting mathematical question in the understanding of symmetries. And then it turned out to be connected to questions in algebra, analysis, and quantum information theory, and required a theory of geodesically convex optimization. It’s analogous to the Euclidean setting, but applies to manifolds where the shortest path between two points is very different from a straight line. What we have now is a set of algorithms and partial results that extend the Euclidean framework; it solves many problems but still fails on many others we care about.
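For readers unfamiliar with the term, the notion being generalized has a crisp textbook definition (standard material, not specific to this work):

```latex
% Geodesic convexity: convexity measured along shortest paths of a
% Riemannian manifold M rather than along straight lines.
% f : M -> R is geodesically convex if for every geodesic
% gamma : [0,1] -> M and every t in [0,1],
\[
  f\big(\gamma(t)\big) \;\le\; (1-t)\,f\big(\gamma(0)\big) + t\,f\big(\gamma(1)\big).
\]
% When M = R^n and geodesics are straight lines, this recovers
% ordinary convexity.
```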
In other words, it doesn’t prove lower bounds.
It doesn’t prove lower bounds, and I don’t see how it will. I am, of course, looking at various other ways to use it. But it’s also useful to prove barrier results, showing that particular techniques fail.
I imagine it’s like a little warning sign: “Beware, do not enter.”
It’s exactly like that, and it’s very useful. I would love it if somebody could do it for the Riemann hypothesis, saying, “These techniques we developed so far will not solve it.”
[See “Wigderson Named Turing Awardee for Decisive Work on Randomness”]