News

The Power of Quantum Complexity

A theorem about computations that exploit quantum mechanics challenges longstanding ideas in mathematics and physics.

For decades, computer scientists have compared the fundamental difficulty of solving various tasks, such as factoring a number or finding the most efficient route for a traveling salesperson. Along the way, they have described an alphabet soup of computational complexity classes and formal techniques for showing how various classes relate to each other.

The advent of quantum computers has introduced new flavors into such classification. It also has given urgency to understanding the potential of these still-limited machines, including the role of mysterious correlations of distant particles, known as entanglement. A recent manuscript concludes that incorporating entanglement into a well-known framework could allow verification of a staggering range of proofs, no matter how long they are.

The new result was first posted on a preprint server in January 2020, and immediately stimulated chatter in the vibrant computational-complexity blogosphere, but it has not yet been peer reviewed. Indeed, the authors already have identified a flaw in an earlier paper they built upon, although they later devised an alternate argument that left their conclusions intact.

Figure. A representation of the relationship among complexity classes, which are subsets of each other.

If it stands up, the proof also will disprove a longstanding mathematical conjecture, with profound implications for both pure mathematics and physics. As a result, “Quite a few people take this to be true and are trying to follow up on it without really fully understanding it,” said Vern Paulsen, a mathematician at the University of Waterloo, Ontario, who was not involved in the work. “It just shows how mainstream to the world of science computational complexity is.”

“At some point, people who are working far away from computer science will find another proof of this result in their own language,” said Henry Yuen of the University of Toronto. For now, however, “There’s a lot of computer science concepts that lend themselves very naturally to putting the pieces together.” Yuen co-authored the new paper with fellow computer scientists Zhengfeng Ji of the University of Technology Sydney; Anand Natarajan and Thomas Vidick of the California Institute of Technology; and John Wright of the University of Texas at Austin.


Enlisting Omnipotence

Computational-complexity theory classifies problems in terms of the resources, such as time, circuit size, or memory, needed to solve them. Problems in the complexity class P, for example, can be solved in a time that is only a polynomial function (some power) of the size of the problem.

In contrast, such efficient solutions are not known for problems in the class NP, such as the “traveling salesman” problem. However, for these problems, if a solution were somehow provided, it would require only polynomial time to verify it. A major open question is whether having such efficient verifiability implies some as-yet-undiscovered efficient way to find solutions, which would mean P=NP, although this is considered unlikely.
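
To make the distinction concrete, here is a minimal Python sketch of the “easy to check” half of the story: verifying that a proposed traveling-salesman tour visits every city once and beats a given length budget takes only polynomial time, even though no efficient method is known for finding the best tour. The function and the sample distances are illustrative, not drawn from the paper.

# Check a claimed traveling-salesman tour in polynomial time.
# dist is a symmetric matrix of city-to-city distances, tour is a claimed
# ordering of all the cities, and budget is the length the tour must beat.
def verify_tour(dist, tour, budget):
    n = len(dist)
    if sorted(tour) != list(range(n)):      # every city visited exactly once
        return False
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= budget                  # close the loop and compare to the budget

# Example: four cities arranged in a square; the claimed tour has length 4.
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
print(verify_tour(dist, [0, 1, 2, 3], budget=5))   # True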

For non-specialists, the seemingly magical appearance of a solution may seem an odd ingredient for a mathematical proof. For decades, however, complexity theorists have considered infinitely powerful “provers,” not to reveal solutions but to interact with a “verifier” that has far more limited computational power. The prover’s answers, intended to convince the verifier that the proof is valid, are not simply trusted; the verifier can cross-check the responses to randomly selected questions. Granting the imaginary prover infinite power gives the system the credibility to prove a negative, Yuen said. “If even an infinitely powerful person couldn’t convince you of a statement, then that statement must have been false.”
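
The classic textbook illustration of this back-and-forth, included here as background rather than anything from the new work, is an interactive proof that two graphs are not isomorphic. The verifier secretly picks one graph, scrambles its vertex labels, and asks the all-powerful prover which graph it started from; if the graphs really are different, the prover can always answer, while a bluffing prover can only guess and is caught over repeated rounds. A minimal Python sketch, assuming graphs are sets of edges over vertices numbered 0 to n-1 and the prover is supplied as a callable:

import random

def relabel(edges, perm):
    # Apply a vertex relabeling to a graph given as a set of frozenset edges.
    return {frozenset((perm[u], perm[v])) for u, v in map(tuple, edges)}

def verify_non_isomorphism(g0, g1, prover, rounds=30):
    # Accept only if the prover reliably names the graph the verifier scrambled.
    n = 1 + max(v for e in (g0 | g1) for v in e)
    for _ in range(rounds):
        secret = random.randrange(2)         # secretly choose one of the two graphs
        perm = list(range(n))
        random.shuffle(perm)                 # scramble its vertex labels
        scrambled = relabel(g0 if secret == 0 else g1, perm)
        if prover(scrambled) != secret:      # the prover must identify the source graph
            return False                     # a single wrong answer means rejection
    return True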

These techniques are amazingly powerful. Work in the 1980s showed that, even when requiring only polynomial-time verification, “Interactive proofs are equal to a complexity class called PSPACE,” said Lance Fortnow, dean of the College of Science of the Illinois Institute of Technology. That complexity class includes “everything you can do with a small [polynomial] amount of memory,” even in exponential time.

In 1991, Fortnow and two colleagues examined a further extension proposed a few years earlier: multiple provers that are kept isolated to prevent them from coordinating their answers. Fortnow likens this to immigration officials questioning a couple who claim to be married. “You can put them in separate rooms, and ask them questions like ‘What side of the bed do you sleep on?’” By interacting with such multiple provers, a verifier can do in polynomial time what would otherwise require an exponentially longer time.
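
The analogy boils down to a consistency test, sketched below in Python purely as an illustration (real multiprover protocols combine such checks with carefully correlated questions): the verifier asks both isolated provers the same randomly chosen question and rejects if their answers ever disagree.

import random

def cross_check(questions, prover_a, prover_b, rounds=100):
    # prover_a and prover_b answer questions separately; because they cannot
    # talk to each other, inconsistent stories are eventually exposed.
    for _ in range(rounds):
        q = random.choice(questions)
        if prover_a(q) != prover_b(q):
            return False
    return True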


The new result adds another feature to this scheme: although the provers cannot share information about what they are asked, they share access to an infinite supply of entangled quantum bits, or qubits. “I would have guessed they could possibly use it to cheat, and that it would actually make the model weaker,” said Fortnow. “Surprisingly, it’s much stronger.” As a result, this class, denoted MIP* (for multiprover interactive proofs, with the asterisk indicating the provers’ access to entanglement), can verify a proof of “basically any size”; the problems it captures form a huge complexity class denoted RE, for “recursively enumerable.”


Checking the Checkers

As it turns out, this conclusion contradicts a widely influential conjecture in mathematics. Unfortunately, the proof itself is long, currently 206 pages, and relies heavily on techniques from computational complexity theory that are unfamiliar to mathematicians. “Mathematicians have a very strong sense of what a proof should look like, and what’s a deep proof,” said Vidick, adding that they do not know what to make of this one. When people ask him what is deep in this result, he says, “There’s the PCP theorem, and that’s it.”

PCP stands for “probabilistically checkable proof,” and the theorem, which built on the study of multiple provers, is an established “crown jewel” of computational complexity, Yuen said. A verifier is asked to check a long proof by looking at only a handful of spots, which can be chosen at random. “You have to deduce whether the entire proof is a valid proof or not. The PCP theorem says that this is possible, which is really astounding.”
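
As a toy illustration only, a spot-checking verifier in Python might sample a handful of random positions and run a local test on each; the genuine PCP theorem first rewrites the proof in a special error-correcting format, omitted here, so that any flaw in the underlying argument shows up in many positions at once.

import random

def spot_check(proof, local_test, samples=10):
    # Examine only a few randomly chosen positions of a long claimed proof.
    # local_test(proof, i) checks some local condition around position i.
    for _ in range(samples):
        i = random.randrange(len(proof))
        if not local_test(proof, i):
            return False
    return True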

To exploit entanglement, the researchers first must prevent the provers from using it to coordinate their answers. “If you devise your game in a clever-enough way, you can actually detect any time your provers are trying to use entanglement to trick you,” Wright said. Then, “We use this entanglement to generate big random strings for the two provers, and these random strings are then viewed as questions for a PCP protocol. These questions index different parts of their computation, and you can use them to check whether they’re carrying out a giant computation correctly for you.”

“The verifiers become like a puppet master for these wildly powerful, infinitely adversarial provers,” Yuen said. Then “the verifier checks that these two provers themselves are interrogating two more complex provers, that are interrogating two more complex provers, and so forth,” Vidick said.

This recursive compression scheme can address a wide array of problems, including whether a program will ever terminate. Alan Turing proved that this “halting problem” is undecidable in general, but, amazingly, MIP* can verify a solution.


Beyond Complexity

The significance of the new result extends far beyond computational complexity, to other areas of mathematics and even to the understanding of quantum mechanics. This is because it is inconsistent with a longstanding conjecture made by Alain Connes, who received the prestigious Fields Medal in 1982 for his extension of John von Neumann’s classification of “operator algebras.” As part of that program, Connes suggested that certain infinite-dimensional operator algebras, known as factors, could all be approximated in a suitable sense by finite matrices, in what has become known as his embedding conjecture.

“What’s amazing is the relatively innocent conjecture Connes made back in the Seventies turned out to have so many implications, if it was true,” Paulsen said, even in seemingly unrelated fields like group theory, or models of entropy.

In recent years, though, it had begun to seem “a little too good to be true,” he noted. “Now all that’s wiped out. We’re back to ground zero,” Paulsen said. Still, the proof only shows that exceptions to the conjecture exist; it does not provide much guidance for finding them, so there is much work to be done.

Disproving Connes’ embedding conjecture also has implications for the mathematical representation of quantum entanglement, distinct from its role in MIP*. When two particles are entangled, for example because they emerge from a single quantum process, their properties remain correlated even if they become widely separated. Moreover, as shown by John Bell in 1964, the outcomes from some combinations of measurements on the two entangled particles are more correlated than would be possible if the particles “knew” the outcome to every possible measurement beforehand.
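
Bell’s finding is usually quantified with the CHSH inequality, a standard refinement of his 1964 argument that is included here for background rather than drawn from the new paper. With two measurement settings per particle and outcomes of plus or minus one, any strategy based on pre-agreed answers must satisfy

\[
S = \langle a_0 b_0\rangle + \langle a_0 b_1\rangle + \langle a_1 b_0\rangle - \langle a_1 b_1\rangle \le 2,
\]

whereas measurements on an entangled pair can reach \(S = 2\sqrt{2}\), the quantum maximum known as Tsirelson’s bound.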


Nonetheless, physicists tend to describe the pair of measurements as a “tensor product” of two measurements, rather than requiring a comprehensive description of the entire system. About a decade ago, it was shown that Connes’ embedding conjecture is equivalent to the assertion, formalized by mathematician Boris Tsirelson, that the representations match up, even for complex combinations of measurements. (Paulsen and his fellow mathematicians carefully distinguish a third description.) The new proof shows that this assumption is false. As a result, “we don’t know which of these mathematical models is really the one that represents physical reality,” said Paulsen.
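
In slightly more formal terms, given here as a standard statement of Tsirelson’s problem rather than a quotation from the paper, the tensor-product picture writes the probability of outcomes \(a\) and \(b\) under measurement settings \(x\) and \(y\) as

\[
p(a, b \mid x, y) = \langle \psi \,|\, A^x_a \otimes B^y_b \,|\, \psi \rangle,
\]

with each particle’s measurement operators acting on its own Hilbert space, while the more permissive commuting-operator picture requires only

\[
p(a, b \mid x, y) = \langle \psi \,|\, A^x_a B^y_b \,|\, \psi \rangle, \qquad [A^x_a, B^y_b] = 0,
\]

on a single, possibly infinite-dimensional space. Tsirelson’s question was whether the two pictures yield the same sets of correlations; the new result implies that they do not.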

*  Further Reading

Ji, Z., Natarajan, A., Vidick, T., Wright, J., and Yuen, H.
MIP*=RE, https://arxiv.org/abs/2001.04383

Junge, M., et al.
Connes’ embedding problem and Tsirelson’s problem, J. Math. Phys. 52, 012102 (2011)

Hartnett, K.
Landmark Computer Science Proof Cascades Through Physics and Math, Quanta, March 4, 2020.
