Architecture and Hardware

More Efficient Fault-Tolerant Quantum Computing

Novel designs reduce the hardware overhead for error correction in simulations.

Spheres with embedded arrows, illustration. Credit: Atdigit

Quantum computing has been making steady progress toward large and reliable systems. Hundreds of qubits (quantum bits) have been integrated into single devices, which have been argued, on the basis of experiments, to outperform classical computing in specialized cases. The fidelity of qubits also has improved, but their high sensitivity to outside influence means they will always be much more error-prone than traditional electronics, and truly useful quantum computing will need ways to correct the inevitable errors.

Quantum error correction uses extra qubits to determine whether and how the data has been disrupted. The most widely studied techniques, however, require hundreds of "physical qubits," or more, for each "logical qubit" that represents information used in the computation. A realistic system for world-changing tasks like cryptography cracking therefore could need devices with tens of millions of physical qubits, which is well beyond current capabilities.

Researchers have known for years that other codes could, in principle, do better by including long-range connections between qubits, but there have been no practical, concrete examples. In August 2023, however, two groups posted detailed, more-achievable schemes. Although the details were different, both simulated an order-of-magnitude reduction in the required overhead for modest-sized devices that may be available in the not-too-distant future.

“It’s promising and it’s exciting that we have this potential alternative, but it’s not clear that it’s going to win out in the end,” said Daniel Gottesman of the University of Maryland, who laid critical theoretical groundwork a decade ago. Implementing high-fidelity long-range connections also will require new hardware techniques.

Error correction

Because quantum information is extremely delicate, the invention of quantum error correction (QEC) in the 1990s was crucial to making quantum computing more plausible. The task is much more difficult than correcting classical bits, because qubits’ information can’t simply be duplicated, and measuring it destroys the powerful quantum “superposition” that simultaneously includes multiple possible states.

QEC schemes require many extra qubits and measure the combined state of many qubits at once to check whether any have been altered. Most use a “stabilizer” combination whose measurement destroys some quantum information but preserves other combinations that embody the logical qubits. Comparing the measurements from various stabilizers reveals which qubits need to be fixed—for example, a qubit that is the only one shared by two stabilizers that both indicate an error.
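The syndrome logic described above can be sketched with a loose classical analogy (not any group's actual construction): the three-bit repetition code, in which two parity checks play the role of stabilizers, and a check pattern that fires both checks implicates the one shared bit.

```python
# Classical sketch of stabilizer-style syndrome decoding for the
# three-bit repetition code. Each "check" compares two neighboring bits;
# a real quantum decoder measures these parities without reading out the
# data qubits themselves.

CHECKS = [(0, 1), (1, 2)]  # each check spans two adjacent bits

def syndrome(bits):
    """Return the parity of each check (1 means the check detects an error)."""
    return tuple(bits[i] ^ bits[j] for i, j in CHECKS)

# Map each possible syndrome to the single bit it implicates.
LOOKUP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only the first check fires -> bit 0
    (1, 1): 1,     # both checks fire -> the shared bit 1
    (0, 1): 2,     # only the second check fires -> bit 2
}

def correct(bits):
    """Flip the bit implicated by the syndrome, if any."""
    flip = LOOKUP[syndrome(bits)]
    if flip is None:
        return tuple(bits)
    fixed = list(bits)
    fixed[flip] ^= 1
    return tuple(fixed)

# A single flip of the middle bit triggers both checks and is corrected.
assert correct((0, 1, 0)) == (0, 0, 0)
```

The lookup table is the classical counterpart of a decoder: it maps each syndrome to the most likely error consistent with it.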

Because experimental error rates are still significant, the systems must deal with multiple errors. Different codes are characterized by their "distance": a code of distance d can detect as many as d−1 errors and correct as many as (d−1)/2 of them (rounded down), so greater distance means more simultaneous errors can be tolerated.
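The correction capacity implied by distance is a standard coding-theory bound, not specific to any one paper, and amounts to one line of arithmetic:

```python
# Standard coding-theory bound: a distance-d code corrects up to
# floor((d - 1) / 2) arbitrary errors on distinct qubits.
def max_correctable_errors(d: int) -> int:
    return (d - 1) // 2

# Distance 3 corrects one error; distance 5, two; distance 7, three.
assert [max_correctable_errors(d) for d in (3, 5, 7)] == [1, 2, 3]
```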

Much research to date has focused on surface codes, which use two-dimensional arrays of qubits in which each stabilizer checks only physically nearby qubits. Theorists proved that surface codes are particularly robust. For example, with idealized models of noise, errors can be corrected reliably when the single-qubit error rate is below a threshold of about 1%, which some experimental systems already are beating.

An important challenge, however, is that surface codes with a satisfactory distance require a large overhead, with hundreds of error-correction qubits for each logical qubit that actually can be used for computation. As a result, although experiments already are demonstrating systems with hundreds of qubits, these are still not big enough to do a reliable calculation on usefully large problems. (For now, researchers also are exploring algorithms whose results are interesting, even if they are degraded by noise.)

Moreover, things are expected to get worse for bigger systems. In particular, larger computations require codes with greater distance, which drives down the encoding rate, the ratio of logical qubits to total physical qubits. "The overhead is quite dramatic; it's quite bad," said Nikolas Breuckmann of the University of Bristol in the U.K., who is exploring alternatives theoretically. "People are very conscious of any opportunity to reduce this overhead."
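The falling encoding rate can be made concrete with a sketch that assumes the commonly cited rotated-surface-code layout of 2d²−1 physical qubits per logical qubit (an illustrative assumption; the articles discussed here do not commit to this layout):

```python
# Encoding rate for a single logical qubit in a rotated surface code of
# distance d, assuming the common layout of d*d data qubits plus
# d*d - 1 measurement qubits (2*d*d - 1 in total).
def surface_code_rate(d: int) -> float:
    return 1.0 / (2 * d * d - 1)

# The rate shrinks as distance grows; high-rate LDPC codes instead aim
# to keep it roughly constant as the system scales.
for d in (3, 11, 25):
    print(d, 2 * d * d - 1, surface_code_rate(d))
```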

LDPC Codes

“We knew that we can do better with more connectivity,” Breuckmann said. Indeed, in 2013, Gottesman showed that long-range connections allow for schemes whose encoding rates do not decrease for bigger systems. “He wrote the very first paper about constant-overhead protocols, which was very inspiring,” Breuckmann said, adding that these results “brought a lot of attention to the field.”

The two new results build on those codes, which are colloquially referred to as Low-Density Parity Check (LDPC) codes (or qLDPC, where “q” stands for quantum). “LDPC just means that the connectivity is not arbitrary, so the checks that you measure can only act on a constant number of qubits, and the connectivity between the qubits is also only constant,” Breuckmann said. “That’s actually true for, I would say, all studied quantum codes,” including surface codes.
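The "low-density" property Breuckmann describes can be illustrated with a toy classical parity-check matrix (hypothetical, for illustration only): every check touches a bounded number of qubits, and every qubit appears in a bounded number of checks, no matter how large the code grows.

```python
# Toy parity-check matrix (hypothetical, not taken from either paper):
# rows are checks, columns are qubits. "Low-density" means every row and
# every column has small, bounded weight regardless of code size.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 1, 1],
]

assert max(sum(row) for row in H) == 3        # each check touches <= 3 qubits
assert max(sum(col) for col in zip(*H)) == 2  # each qubit sits in <= 2 checks

def parity_syndrome(H, e):
    """Parity of each check under error pattern e (1 = check fires)."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

# A flip on qubit 2 fires exactly the checks with a 1 in column 2.
assert parity_syndrome(H, [0, 0, 1, 0, 0, 0]) == [0, 1, 1, 0]
```

In surface codes these matrices are sparse *and* geometrically local; the high-rate LDPC constructions keep the sparsity but allow the 1s in a row to refer to physically distant qubits.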

“It’s kind of a shorthand,” Gottesman agreed. “When people use that phrase, usually what they mean are high-rate LDPC codes, so LDPC codes that don’t need as many extra qubits as a surface code.” To achieve high-rate LDPC codes, researchers “associate the same physical qubits with many different logical qubits, or overlapping sets,” he explained. “You need all the logical qubits to kind of be connected in a relatively short way to all or most of the physical qubits.” Stabilizers that include long-range connections between qubits are critical to achieving such connectivity.

The early high-rate LDPC codes were highly idealized, and “The original protocols that we had were very impractical,” Gottesman said, adding that they seemed to demand quite low error rates. “What these two new papers do is they show that indeed you can do it with relatively high error rates, comparable to what the surface code can tolerate. That’s a big deal.”

Both new papers include detailed circuit-level simulations of fault-tolerant memories enabled by LDPC codes, showing how the error rate for logical qubits depends on the physical error rate. Although the articles (which had not yet undergone peer review at the time of writing) were posted publicly, both groups declined to talk with Communications about the specifics, out of concern it could lead high-impact journals to refuse publication.

The two groups targeted their conceptual circuits at different qubit hardware. One group, from IBM, imagined implementing its design using superconducting qubits. IBM has implemented this technology with more than 1,000 qubits, but the current two-dimensional technology only connects neighboring qubits. Importantly, the researchers designed a circuit that comprises just two planar subnetworks. Such circuits might, for example, be built on opposite sides of a substrate that includes through-wafer connections like those used in some advanced semiconductor devices.

In simulations of a high-rate-LDPC memory with a physical error rate of 0.1%, “12 logical qubits can be preserved for 10 million syndrome cycles using 288 physical qubits in total,” the IBM team wrote. They argued that achieving the same level of error suppression with the surface code would need roughly 14 times as many qubits.
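The quoted figures imply a simple overhead comparison (the 14x factor is the IBM team's claim; the per-logical-qubit numbers below are our arithmetic on those figures):

```python
# Overhead arithmetic from the figures quoted above.
logical, physical = 12, 288

ldpc_per_logical = physical / logical          # 24 physical qubits per logical
surface_total = 14 * physical                  # roughly 4,032 qubits for the
surface_per_logical = surface_total / logical  # same job, ~336 per logical

assert ldpc_per_logical == 24.0
assert surface_per_logical == 336.0
```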

The other paper, finding similar improvements, came from a collaboration of the University of Chicago, Harvard University, the California Institute of Technology, the University of Arizona, and a startup called QuEra Computing, Inc. Their simulated circuit combined a high-rate LDPC memory with a small computational circuit without error correction. "[Fewer] than 3,000 physical qubits are sufficient to obtain over an order of magnitude qubit savings compared to the surface code, and quantum algorithms involving thousands of logical qubits can be performed using less than 10⁵ physical qubits," they stated.

These researchers envision long-range connections between qubits embodied as neutral atoms in laser traps that can quickly drag widely separated atoms into close proximity. "That lends itself quite well to these codes where you need more interconnectivity," Breuckmann observed. Such trapped atoms can have decoherence times of a second or longer, said last author Hengyun "Harry" Zhou, who is a member of Mikhail Lukin's group at Harvard University and also with QuEra, which is exploring technology based in part on work from that group. "Actually, all of the technical tools are things that we have already implemented in the lab," he said.

Fault Tolerance

Designing error correction is relatively straightforward for memories and communication, but less so for computation itself. In the long run, “We need fault tolerance, which goes beyond error correction,” Gottesman said. In particular, because errors can enter at any point in a computation, quantum computers will need designs that correct errors along the way before they spread.

“To build a quantum computer, you really will need fault tolerance as well because the hardware is not nearly as reliable [as classical hardware], and it can’t be nearly as reliable,” Gottesman said. “The individual quantum gates that you’re using to do that computation are themselves imperfect, so they can have errors as well,” so quantum gates and circuits will need new ideas. “There’s still a bunch more theoretical work that needs to be done before we can do that.”

“It still needs to be shown that qLDPC codes are really the way of implementing fault tolerance,” Breuckmann agreed. If it does succeed, then the ability to provide low-error long-range connections could be an important factor for selecting the best qubit platform, which is still actively debated. “Having the possibility of higher connectivity would certainly be a big advantage in that case.”

Further Reading

  • Bravyi, S., Cross, A.W., Gambetta, J.M., Maslov, D., Rall, P., and Yoder, T.J. "High-Threshold and Low-Overhead Fault-Tolerant Quantum Memory" (2023).
  • Xu, Q., Ataides, J.P.B., Pattison, C.A., Raveendran, N., et al. "Constant-Overhead Fault-Tolerant Quantum Computation with Reconfigurable Atom Arrays" (2023).
  • Breuckmann, N.P., and Eberhardt, J.N. "Quantum Low-Density Parity-Check Codes," PRX Quantum 2, 040101 (2021).
