We have not yet realized—or, perhaps, even fully understood—the full promise of quantum computing. However, we have gotten a much clearer view of the technology's potential, thanks to the work of ACM Prize in Computing recipient Scott Aaronson, who has helped establish many of the theoretical foundations of quantum supremacy and illuminated what quantum computers will eventually be able to do. Here, Aaronson talks about his work in quantum complexity.
Let's start with your first significant result in quantum computing: your work on the collision problem, which you completed in graduate school.
The collision problem is where you have a many-to-one function, and your task is just to find any collision pair, meaning any two inputs that map to the same output. I proved that even a quantum computer needs to access the function many times to solve this problem.
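To make the setup concrete, here is a minimal classical sketch of the standard "birthday" approach (a generic illustration; the toy function and helper names below are hypothetical, not from Aaronson's work): query the function on random inputs and stop at the first repeated output, which takes roughly the square root of the domain size in queries on average.

```python
import random

def find_collision(f, domain_size, rng=random.Random(0)):
    """Classical 'birthday attack': query f on random inputs, remember
    each output, and stop at the first repeat. For a two-to-one
    function this takes on the order of sqrt(domain_size) queries."""
    seen = {}  # maps an output to the input that produced it
    while True:
        x = rng.randrange(domain_size)
        y = f(x)
        if y in seen and seen[y] != x:
            return seen[y], x  # a collision pair
        seen[y] = x

# Toy two-to-one function: pairs up inputs (0,1), (2,3), (4,5), ...
f = lambda x: x // 2
a, b = find_collision(f, 1_000_000)
assert f(a) == f(b) and a != b
```

A quantum computer, it turns out, can beat this square-root bound, though, as the lower bound discussed below shows, not by an exponential margin.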
It's a type of problem that shows up in many different settings in cryptography. How did you come to it?
When I entered the field, in the late 1990s, I got very interested in understanding quantum computers by looking at how many queries they have to make to learn some property of a function. This is a subject called query complexity, and it's actually the source of the majority of what we know about quantum algorithms. Because you're only counting the number of accesses to an input, you're not up against the P vs. NP problem. But you are fighting against a quantum computer, which can make a superposition of queries and give a superposition of answers. And sometimes quantum computers can exploit structure in a function in order to learn something with exponentially fewer queries than any classical algorithm would need.
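For readers less familiar with the model, the standard formalization (textbook material, not specific to this interview) treats the function f as a reversible oracle that a quantum algorithm can call on a superposition of inputs:

```latex
O_f\,|x\rangle|y\rangle = |x\rangle\,|y \oplus f(x)\rangle,
\qquad
O_f\Big(\frac{1}{\sqrt{N}}\sum_{x}|x\rangle|0\rangle\Big)
   = \frac{1}{\sqrt{N}}\sum_{x}|x\rangle|f(x)\rangle
```

Query complexity counts applications of O_f. The superposed evaluation alone is not useful; the speedups Aaronson describes come from arranging interference among the branches before measuring.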
So, what kind of structure does your problem need before a quantum computer can exploit it to get this exponential speed-up?
That's exactly what we've been working on for the past 30 years. In 1994, Peter Shor showed that quantum computers are incredibly good at extracting the period of a periodic function. Others showed that, if you're just searching for a single input that your function maps to a designated output, then quantum computers give only a modest, square-root improvement. The collision problem was interesting precisely because it seemed poised between these two extremes: it had less structure than the period-finding problem, but more structure than the "needle in a haystack" problem.
When my advisor, Umesh Vazirani, told me that the collision problem was his favorite problem in quantum query complexity, I said, "Okay, well, I'll know not to work on that one, because that one's too hard." But it kept coming up in other contexts that I cared about. I spent a summer at Caltech and I decided to try to attack it.
I had a colleague, Andris Ambainis, who had invented this amazing lower bound technique—what's now called the Ambainis adversary method—a couple of years prior. I didn't know it at the time, but he had actually invented it to try to solve the collision problem, though he was not able to make it work for that. But his method could solve some problems that I couldn't solve with the technique I understood really well, called the polynomial method. I started trying to use Ambainis' method to attack the collision problem. I worked on it probably more intensely than I've worked on anything before or since, and what I finally realized was that the polynomial method would work to prove a lower bound for the problem and show that even a quantum computer needs at least N^(1/5) queries to solve it, where N is the number of outputs. Shortly afterward, Yaoyun Shi refined the technique and was able to show, first, that you need N^(1/4) queries, and then that you need N^(1/3).
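For context, the bounds that eventually emerged are standard results (summarized here for reference, not stated in the interview): for a two-to-one function,

```latex
\text{classical randomized: } \Theta(\sqrt{N}) \text{ queries (the birthday bound)},
\qquad
\text{quantum: } \Theta(N^{1/3}) \text{ queries}
```

The quantum upper bound comes from the Brassard-Høyer-Tapp algorithm, which combines Grover search with a birthday-style table; the matching N^(1/3) lower bound is the Aaronson-Shi result described above.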
You have since gone on to produce groundbreaking results in quantum supremacy.
Around 2008 or 2009, I got interested in just how hard quantum computations can be to simulate classically. Forget whether the quantum computer is doing anything useful; how strong can we make the evidence that a quantum computation is hard to simulate? It turns out—and there were others who came to the same realization around the same time—if that is your goal, you can get enormous leverage by switching attention from problems like factoring, which have a single right answer, to sampling problems, where the goal of your computation is just to output a sample from some probability distribution over strings of N bits.
There are certain probability distributions that a quantum computer can easily sample from.
Not only that, but a pretty rudimentary quantum computer. If a classical computer could sample from the same distribution in polynomial time, then the polynomial hierarchy would collapse to the third level, which we use as a kind of standard yardstick of implausibility.
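As a concrete illustration of what it means to sample from a quantum circuit's output distribution, here is a brute-force statevector simulation of a small random circuit (a toy sketch under simplifying assumptions; real experiments use far larger circuits and different gate sets):

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(state, gate, q, n):
    """Apply a 2x2 unitary to qubit q (qubit 0 = most significant bit)."""
    state = state.reshape(2**q, 2, 2**(n - q - 1))
    return np.einsum('ab,ibj->iaj', gate, state).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Controlled-Z: negate amplitudes where both qubits are 1."""
    idx = np.arange(2**n)
    both = ((idx >> (n - 1 - q1)) & 1) & ((idx >> (n - 1 - q2)) & 1)
    state = state.copy()
    state[both == 1] *= -1
    return state

def random_unitary_2x2():
    """Random 2x2 unitary via QR (roughly Haar-random, up to phases)."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(z)
    return q

def sample_random_circuit(n=4, depth=5, shots=8):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0  # start in |00...0>
    for _ in range(depth):
        for q in range(n):
            state = apply_1q(state, random_unitary_2x2(), q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state)**2   # Born rule
    probs /= probs.sum()       # guard against floating-point drift
    outcomes = rng.choice(2**n, size=shots, p=probs)
    return [format(o, f'0{n}b') for o in outcomes]

print(sample_random_circuit())  # e.g. ['0110', '1011', ...]
```

The quantum device just runs the circuit and measures; the classical simulation above has to store all 2^n amplitudes, which is exactly the cost that is conjectured to be unavoidable.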
But if you want to do an experiment to demonstrate quantum supremacy, it's not enough to have a distribution that's hard for a classical computer to sample exactly. Any quantum computer is going to have a huge amount of noise, so the most you could hope for is to sample from some distribution that's close to the ideal one. And how hard is it for a classical computer to approximately sample from the same distribution?
To answer that, we realized you needed a quantum computation where the amplitudes (related to probabilities) for the different possible outputs are what's called "robustly #P-complete," meaning that if you could just approximate most of them, then you could solve any combinatorial counting problem.
And bosons fit that bill perfectly.
Fermions and bosons are the two basic types of particles in the universe. If you have a bunch of identical fermions, and you want to know their quantum amplitude to get from an input state to an output state, then you have to calculate the determinant of a matrix. If they're bosons, then it becomes the permanent of the matrix. Now, the determinant and the permanent are two of the most famous functions in theoretical computer science, for reasons having nothing to do with physics. The determinant is easy to compute, but the permanent is #P-complete.
I remembered that from a talk that Avi Wigderson gave in 2000, where he'd said, "Isn't it unfair that the bosons have to work so much harder than the fermions to figure out what they're going to be doing?"
The joke stuck with me, and I knew that the permanent had exactly the right sort of properties, being robustly #P-complete. And I said, "Well, what if we could design a quantum computation where the amplitudes would be permanents of matrices? Well, I guess we would use bosons. Now I'd better learn what these bosons are and how physicists model them."
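To see the asymmetry concretely, here is a short sketch comparing the two functions on the same matrix (an illustration, not code from Aaronson's work): the determinant uses a polynomial-time routine, while the permanent uses Ryser's formula, whose cost grows exponentially in n.

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Permanent via Ryser's formula: this version runs in roughly
    O(2^n * n^2) time, versus O(n^3) for the determinant."""
    n = A.shape[0]
    total = 0.0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** size * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(A))    # -2.0: fermion amplitudes, easy
print(permanent_ryser(A))  # 10.0: boson amplitudes, #P-hard in general
```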
Eventually, you and your student, Alex Arkhipov, put forward a new conjecture that approximate boson sampling would be hard for a classical computer to simulate.
We came at it purely for theoretical reasons. But afterward, we gave talks about it, and the experimental physicists started getting all excited. Especially the quantum optics people, because photons are bosons, and they are dealing all the time with generating a bunch of identical photons, sending them through a network of beamsplitters, and measuring them.
And so there was a race to do it.
The first paper, in 2013, was from a group in Australia that did it with just three photons and confirmed experimentally that the amplitudes were permanents of three-by-three matrices.
Now, no classical computer is going to break a sweat calculating a three-by-three permanent. Basically, if I want to calculate the permanent of an N-by-N matrix, the difficulty is going to grow roughly like two to the power of N using the best classical algorithms that we know. If you want to outperform any classical computer on earth, you're going to want 50 or 60 photons. And now you're talking about a pretty hard experiment, because there's no photon source on earth that will generate one photon on demand, exactly when you want it.
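A rough back-of-the-envelope check of that threshold (the arithmetic here is added for context, not figures from the interview):

```latex
2^{30} \approx 10^{9}, \qquad 2^{50} \approx 1.1\times 10^{15}, \qquad 2^{60} \approx 1.2\times 10^{18}
```

So at around 50 to 60 photons, even a single amplitude costs on the order of 10^15 to 10^18 arithmetic operations, which is the scale where only the largest supercomputers can keep up.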
But then there was an incredibly interesting interaction between the theorists and the experimentalists—a huge effort to meet in the middle with something that the experimentalists could actually do and that we would agree seems hard for a classical computer.
In 2019, Google's Quantum AI lab announced it had demonstrated quantum supremacy with 53 superconducting qubits, and you wrote a paper with Lijie Chen that gave some theoretical evidence to support the claim.
Yes, we developed the theory of those experiments. The best classical simulations we know take exponential time, and we gave some evidence that this is inherent. But it's still debated. IBM and others have said they could simulate Google's results with a classical computer a lot faster than Google thought they could be simulated, albeit still not breaking the exponential barrier.
We're now in the era where it's a real fight, on these special sampling benchmarks, between the state-of-the-art quantum computers and the largest classical supercomputers. I expect the quantum computers will pull ahead soon. If there is a real fight today, it suggests that in a few years' time, there's not going to be a real fight anymore.