Architecture and Hardware News

Questioning Quantum

Researchers hunt for ways to keep quantum computing honest.

At the beginning of December last year, a committee set up by the U.S. National Academies of Sciences, Engineering, and Medicine concluded that a viable quantum computer able to break ciphers based on today's encryption algorithms is a decade or more away, but coming nonetheless. Committee chair Mark Horowitz said he and his colleagues could see no fundamental reason why, in principle, a functional quantum computer could not eventually be built.

When they do finally arrive, quantum computers pose a number of problems for computer scientists when it comes to determining whether they work as expected. Quantum computers can make use of the property of superposition: where the bits in a register in the machine do not exist in a single known state, but in a combination of states. Each state has a finite probability of being the one recorded when the register is read and the superposition collapses.
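The measurement statistics described above can be sketched in a few lines of plain Python. This is a toy sampler, not a quantum simulator, and the amplitudes chosen here are illustrative: an equal superposition of two basis states of a two-qubit register.

```python
import random

# Toy sketch: a 2-qubit register in superposition is described by four
# amplitudes. Reading the register collapses it to one basis state with
# probability equal to the squared magnitude of that state's amplitude.
amplitudes = {
    "00": 0.5 ** 0.5,  # equal superposition of |00> and |11>
    "01": 0.0,
    "10": 0.0,
    "11": 0.5 ** 0.5,
}

def measure(amps):
    states = list(amps)
    probs = [abs(a) ** 2 for a in amps.values()]
    return random.choices(states, weights=probs, k=1)[0]

counts = {s: 0 for s in amplitudes}
for _ in range(10000):
    counts[measure(amplitudes)] += 1
# Roughly half the reads give "00" and half give "11";
# "01" and "10" never appear because their amplitudes are zero.
```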

Quantum computers can use quantum entanglement and interference between states in superposition to slash the number of compute operations needed for certain problems. Interference is a major part of the algorithm developed by Massachusetts Institute of Technology professor of applied mathematics Peter Shor to factor the large numbers, each the product of two primes, that form the basis of many current encryption systems. Although it is possible to simulate the behavior of a quantum computer on a conventional computer, such programs are extremely slow because the size of the state space grows exponentially as more quantum bits, or qubits, are added.
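The exponential growth just described can be made concrete with a back-of-the-envelope calculation: a full classical simulation must store one complex amplitude per basis state, so memory doubles with every qubit added. The 16-bytes-per-amplitude figure below assumes double-precision complex numbers.

```python
# Classical cost of storing a full n-qubit state vector: 2**n complex
# amplitudes, each 16 bytes (two 8-byte floats). Memory doubles per qubit.
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 qubits fit in 16 KB; 30 qubits already need ~17 GB;
# 50 qubits would need roughly 18 petabytes.
```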

Figure. The IBM Q dilution refrigerator cools the quantum processor to 15 millikelvin (-459 degrees Fahrenheit).

Without a radical improvement in technology, production machines are likely to be large, expensive, and only suitable for use as coprocessors for specific problems. The most likely usage model is as a server accessed using the conventional Internet, or a network able to support communication based on quantum states.

California Institute of Technology researcher Alexandru Gheorghiu points out a number of potential problems a remote user will face: “If this communication is performed over a network, one can imagine man-in-the-middle attacks in which malicious parties are trying to deceive us. Alternatively, it could be that the party claiming to have a quantum computer is lying,” Gheorghiu says.

Verification of a result can be simple in some cases. Shor's algorithm provides, in its results, a convenient method for testing whether a quantum computer behaved correctly: multiply the two primes the machine outputs and check whether the product matches the number it was asked to factor. If it does not, the machine got it wrong.
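That checking step is cheap enough to express directly. The sketch below is illustrative: `verify_factoring` and its inputs stand in for the quantum machine's claimed output, and the trial-division primality test is adequate only for small examples.

```python
# Verifying a claimed factorization is classically cheap, even though
# producing it is hard: multiply the claimed prime factors and compare.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def verify_factoring(n, p, q):
    return p * q == n and is_prime(p) and is_prime(q)

verify_factoring(15, 3, 5)     # True: 3 * 5 == 15 and both are prime
verify_factoring(15, 3, 7)     # False: 3 * 7 != 15
verify_factoring(3233, 53, 61) # True
```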

In the long term, few problems for which quantum computation offers a dramatic speedup over their classical counterparts will be as easy to verify as Shor’s algorithm. Problems for which there is no obvious solution cannot be readily checked, unless the user is willing to wait days or months for a classical computer to produce a result.

In order to address this problem, researchers, starting with Dorit Aharonov and colleagues at the Hebrew University of Jerusalem, began working on the use of interactive proofs for the problem a little over a decade ago. With interactive proofs, someone who wants to verify whether the system they are using is providing them with correct answers tests it time and again with different requests. If the answers are consistently believable, they can be confident the application server is functioning properly.
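The flavor of such repeated questioning can be sketched classically. This is a toy illustration only, not one of the actual quantum protocols: the verifier mixes in queries whose answers it already knows, so a prover that cheats even occasionally is caught with probability that grows with the number of rounds. All names here are hypothetical.

```python
import random

# Toy interactive verification: test the prover repeatedly with queries
# whose answers the verifier can check. Consistent answers build confidence.
def verify(prover, known_cases, rounds=100):
    for _ in range(rounds):
        x, expected = random.choice(known_cases)
        if prover(x) != expected:
            return False  # caught misbehaving
    return True           # consistently believable answers

honest = lambda x: x * x
# A cheater that answers wrongly about a quarter of the time: the chance
# of surviving all rounds shrinks as (3/4) ** rounds.
cheater = lambda x: x * x + random.choice([0, 0, 0, 1])

known = [(i, i * i) for i in range(10)]
verify(honest, known)  # True
```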

To provide reliable tests, interactive proofs assume the verifier can call on assistance from an oracle: a system that can compute anything immediately. In practice, there is no technology that can deliver such an all-powerful oracle, but for the purposes of checking quantum computing, another quantum computer acting as a ‘prover’ can fulfill the role of oracle. But why would anyone trust that quantum prover, either?

If a prover has been compromised, it might use knowledge of the algorithms it is expected to test to provide believable but false answers. The answer to this is for the verifier to disguise its intent, generally by encrypting the tasks and the data. “It’s harder for a prover to misbehave when performing a certain computation if they don’t know what the computation is,” Gheorghiu says. However, further checks are needed to be certain.

Around the same time Aharonov's work was published, Gheorghiu's Ph.D. advisor at the University of Edinburgh, Elham Kashefi, developed with Anne Broadbent and Joseph Fitzsimons of the University of Waterloo, Ontario, a protocol they call Universal Blind Quantum Computation (UBQC). As well as using encryption to hide the intent of the computational requests supplied to the prover and the target quantum computer, traps are buried in the encrypted computation. These trigger if the server fails to compute correctly, alerting the verifier client to the problem. Because of the many iterations needed to complete a proof, the computational cost is potentially heavy.

Broadbent says she had reservations about the complexity of the protocols developed in the earlier work, and set about creating a simpler system. She opted to work with a technique based on quantum-circuit models, which provide a way of expressing quantum-computing operations using notation similar to classical logic.

As with previous methods, the computations in Broadbent’s technique are encrypted. However, in place of the traps are what she calls ‘gadgets’. These are computational elements made from standard quantum gates placed in the circuit. The gadgets support testing, but do not alter computations if a test is not active.

Because the gadgets are used in combination with encryption, the quantum computer cannot determine whether they are part of the computation or the test. In one form of Broadbent's protocol, the verifier can take advantage of quantum behavior to decide after the event whether a run was a test or an actual computation, instead of specifying this upfront.

“Some of the circuits are larger than what you would want for efficiency, but the overhead is very reasonable in this model: performing a series of tests is pretty cheap,” Broadbent says.

A common factor in almost all the protocols developed so far is that they require the verifier’s own machine be able to prepare or measure quantum states—or do both. But this raises the possibility of a hacker corrupting the device used to prepare quantum states before the protocol even starts. That might be exploited to trick the verifier into accepting incorrect results.

Gheorghiu and Kashefi worked with Petros Wallden, also of the University of Edinburgh, to propose a device-independent verification protocol that removes the need to trust any of the quantum devices used to run the tests. In doing so, they drew on ideas developed by Ben Reichardt of the University of Southern California with Falk Unger and Umesh Vazirani, then based at the University of California, Berkeley. They found the issue of preparing and measuring quantum states can be avoided if the quantum computers performing the application and proving functions are far apart. The distance matters because it imposes a communications barrier enforced by the speed of light: the machines cannot collude effectively to fake results. That property, Gheorghiu and colleagues found, makes it possible for the verifier to avoid handling quantum states at all.


“In most situations we would, ideally, like the verifier to be classical,” Gheorghiu says, because most users will only have access to remote quantum computers over the conventional Internet and not a network that supports quantum communications.

Last year, UC Berkeley Ph.D. student Urmila Mahadev found another approach to enabling a fully classical machine to be used by the verifier without demanding that the prover and application quantum computers are separated by a long distance. Conceptually similar to UBQC, Mahadev’s protocol lays traps that expose misbehavior. To enforce blindness, she used a post-quantum cryptography scheme on the assumption that a quantum computer will be unable to crack it except through brute force. This computational assumption could prove false in the long term: although none are known today, a quantum algorithm may be found that efficiently breaks the encryption and destroys the property of blindness.

“Whether verification can be performed with a classical client and a single quantum prover and no computational assumptions remains open,” Gheorghiu says. “On the other hand, it is true that one can perform verification without any computational assumptions provided the verifier is not completely classical.”

In the current version of Mahadev's protocol, the overhead of verification is higher than in protocols that use quantum-enabled verifiers, though she and other researchers believe optimizations will improve its efficiency. Gheorghiu says the lowest overhead may only come with quantum-enabled verifiers, and the fault tolerance that real-world machines require may make a fully classical verifier difficult to employ.

The interactive-proof protocols developed so far are not expected to work well with today's experimental machines, which produce noisy results. Broadbent says: “The models that we are talking about right now are quite paranoid. Even small noise levels would not work in the sense that the protocols would detect any deviation as malicious. But others are possible where the errors that occur we can assume are not maliciously chosen.”

The nature of the errors that near-term machines generate will, in turn, inform the development of practical verification techniques, Gheorghiu says: “Experiments can tell us what kind of noise we can expect for different kinds of implementations and that, in turn, can tell us how to design our protocols to cope with it.”

*  Further Reading

Gheorghiu, A., Kapourniotis, T., and Kashefi, E.
Verification of Quantum Computation: An Overview of Existing Approaches, Theory of Computing Systems (2018), pp. 1–94. DOI: 10.1007/s00224-018-9872-3

Broadbent, A.
How to Verify a Quantum Computation, Theory of Computing, Vol. 14(11), 2018, pp. 1–37.

Mahadev, U.
Classical Verification of Quantum Computations, 59th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2018), October 2018. arXiv:1804.01082

Aharonov, D., Ben-Or, M., and Eban, E.
Interactive Proofs for Quantum Computations, arXiv preprint (2008), arXiv:0810.5375
