
Communications of the ACM

Research highlights

Technical Perspective: Deciphering Errors to Reduce the Cost of Quantum Computation


Quantum computers may one day upend cryptography, help design new materials and drugs, and accelerate many other computational tasks. A quantum computer's memory is a quantum system, capable of being in a superposition of many different bit strings at once. It can take advantage of quantum interference to run uniquely quantum algorithms that can solve some (but not all) computational problems much faster than a regular classical computer. Experimental efforts to build a quantum computer have taken enormous strides forward in the last decade, leading to today's devices with over 50 quantum bits ("qubits"). Governments and large technology companies such as Google, IBM, and Microsoft, as well as a slew of start-ups, have begun pouring money into the field, each hoping to be the first with a useful quantum computer.

However, many hurdles remain before we have large-scale quantum computers capable of the tasks described above. Whereas hardware errors are rare in classical computers, they will be a significant complication for quantum computers, in part because quantum systems are small and therefore fragile, and in part because the act of observing a quantum system collapses it, destroying the superpositions that distinguish quantum from classical. Even a single atom passing by can interact with a qubit, develop a correlation with it, and thereby eliminate the qubit's quantum coherence.

Consequently, quantum error-correcting codes are essential for building large quantum computers, along with fault-tolerant protocols that describe how to perform computations on encoded qubits. The most popular fault-tolerant protocol is based on a family of quantum codes called "surface codes." Surface codes work by arranging the qubits of the computer in two dimensions and imposing local constraints so the encoded information is spread out and can't be accessed or changed without touching many qubits. Surface codes are an example of a broader class of codes known as "low-density parity check" codes, or quantum LDPC codes for short.

Surface codes have many desirable features: they can be easily laid out in two dimensions, they tolerate high error rates, and local constraints are straightforward to check during a computation. Unfortunately, they also require many extra qubits to work, so it is worthwhile to consider other codes. More general LDPC codes have local constraints like surface codes but with more complex connectivity, and some are much more efficient than surface codes. In particular, a fault-tolerant protocol based on a class of codes known as "quantum expander codes" could in principle reduce the qubit cost of fault tolerance by orders of magnitude.

However, in order for a code family to be actually useful, we need a good way of deciphering the information it gives about the errors in the system. In a well-designed quantum error-correcting code, an error will cause some of the local constraints to be violated. The list of unsatisfied constraints is known as the "error syndrome," from which it is possible to deduce the nature of the error. Possible, but not necessarily easy: determining which error occurred is a computationally hard problem for some codes. Fault tolerance adds a further complication, since the error syndrome itself might be faulty due to imperfect measurements made while the syndrome is being determined.
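To make the idea of a syndrome concrete, here is a toy illustration (not the construction from the paper): syndrome decoding of the classical [7,4] Hamming code. Each row of the parity-check matrix H is one local constraint, and the syndrome is the list of violated constraints, from which the location of a single bit flip can be read off directly.

```python
H = [  # parity-check matrix of the classical [7,4] Hamming code
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Which parity constraints does `word` violate?"""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def decode(word):
    """Correct a single bit flip. With the columns of H ordered as above,
    the syndrome read as a binary number is exactly the (1-based)
    position of the flipped bit."""
    s = syndrome(word)
    pos = s[0] * 1 + s[1] * 2 + s[2] * 4
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

codeword = [1, 0, 1, 1, 0, 1, 0]   # a valid codeword: syndrome is all zeros
assert syndrome(codeword) == [0, 0, 0]

corrupted = codeword.copy()
corrupted[4] ^= 1                  # flip bit 5
assert decode(corrupted) == codeword
```

For this small code, the syndrome points at the error uniquely and cheaply; the difficulty described in the article is that for large quantum codes, recovering the error from the syndrome efficiently is far from automatic.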

Classical LDPC codes have fast syndrome decoding algorithms, but sadly, these algorithms fail for quantum LDPC codes. The reason is that quantum LDPC codes exhibit a uniquely quantum phenomenon known as "degeneracy": multiple different errors can act the same way on the codewords, which confuses the classical algorithms. A new approach is needed, and in the following paper, the authors, building on earlier work by themselves and others, produce an algorithm that can rapidly deduce the error in a quantum expander code, even when the syndrome is partially incorrect.
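Degeneracy can be seen already in Shor's 9-qubit code, a standard textbook example (not one of the expander codes studied in the paper). The sketch below represents each Pauli operator as a pair of binary vectors (x, z); two Paulis commute exactly when their symplectic product is 0, so the syndrome is the vector of symplectic products with the stabilizer generators. Phase-flip errors on qubits 1 and 2 produce identical syndromes, yet their product is itself a stabilizer, so the two errors act identically on every codeword and correcting either one succeeds.

```python
# Minimal sketch of quantum "degeneracy" using Shor's 9-qubit code.
n = 9

def pauli(xs=(), zs=()):
    """Build (x, z) vectors with X on qubits `xs` and Z on qubits `zs` (0-based)."""
    x = [1 if i in xs else 0 for i in range(n)]
    z = [1 if i in zs else 0 for i in range(n)]
    return (x, z)

def symplectic(a, b):
    """0 if Paulis a and b commute, 1 if they anticommute."""
    (ax, az), (bx, bz) = a, b
    return sum(ax[i] * bz[i] + az[i] * bx[i] for i in range(n)) % 2

# Stabilizer generators of the Shor code.
stabilizers = [
    pauli(zs=(0, 1)), pauli(zs=(1, 2)),
    pauli(zs=(3, 4)), pauli(zs=(4, 5)),
    pauli(zs=(6, 7)), pauli(zs=(7, 8)),
    pauli(xs=(0, 1, 2, 3, 4, 5)),
    pauli(xs=(3, 4, 5, 6, 7, 8)),
]

def syndrome(error):
    return [symplectic(error, s) for s in stabilizers]

# Z errors on qubit 0 and on qubit 1 violate exactly the same constraints...
assert syndrome(pauli(zs=(0,))) == syndrome(pauli(zs=(1,)))
# ...and their product, Z0*Z1, is the first stabilizer generator: the two
# errors are physically equivalent, so either correction works. A classical
# decoder, which assumes one syndrome means one error, is confused by this.
```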

The key to making the algorithm work is to consider multiple qubits at a time. Rather than treating each individual qubit separately, the algorithm looks for small groups of qubits that are part of the error; considering sets of qubits as a unit resolves the ambiguity introduced by degeneracy. The authors then use a result about percolation to show that errors appear in only small clusters, meaning many local decisions about errors can be performed independently and even simultaneously. Consequently, not only does the algorithm work but it is highly parallelizable, making it potentially even faster than the algorithms used for syndrome decoding of surface codes.
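The parallelism rests on a simple structural fact, illustrated below with a generic connected-components computation (my own toy sketch, not the authors' decoder): at low error rates, errored qubits on a grid clump into small, well-separated clusters, and each cluster touches its own disjoint set of constraints, so the clusters can be decoded independently and simultaneously.

```python
from collections import deque

def error_clusters(errors):
    """Group errored grid sites (a set of (row, col) pairs) into connected
    clusters under 4-neighbor adjacency, via breadth-first search."""
    remaining = set(errors)
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster, frontier = {seed}, deque([seed])
        while frontier:
            r, c = frontier.popleft()
            for nbr in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nbr in remaining:
                    remaining.discard(nbr)
                    cluster.add(nbr)
                    frontier.append(nbr)
        clusters.append(cluster)
    return clusters

# A sparse error pattern on a 6x6 grid splits into three independent
# clusters; each could be handed to a separate worker.
errors = {(0, 0), (0, 1), (3, 3), (5, 0), (5, 1), (4, 1)}
clusters = error_clusters(errors)
assert len(clusters) == 3
assert max(len(c) for c in clusters) == 3
```

The percolation result cited in the article is what guarantees that, below a threshold error rate, such clusters stay small with high probability, so no single worker's task blows up.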

However, much more work is needed to determine whether expander codes are genuinely useful. We need good codes of reasonable size and better ways of performing fault-tolerant algorithms on encoded qubits. We need to better understand how much error expander codes can tolerate and to deal with their requirement for long-range interactions. If these problems can be solved, expander codes will offer an exciting alternative to surface codes for fault tolerance in a large quantum computer.



Daniel Gottesman is a faculty member at the Perimeter Institute in Waterloo, Ontario, and a Senior Scientist at Quantum Benchmark Inc. in Kitchener, Ontario, Canada.



To view the accompanying paper, visit

Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.

