
QIP = PSPACE


The interactive proof system model of computation has been studied extensively in computational complexity theory and theoretical cryptography for more than 25 years, and has driven the development of interesting new techniques and insights in those fields. This work considers the quantum interactive proof system model, which is the classical model’s natural quantum computational analog. An exact characterization of the expressive power of quantum interactive proof systems is obtained: the collection of computational problems having quantum interactive proof systems consists precisely of those problems solvable with an ordinary classical computer using at most a polynomial amount of memory (or QIP = PSPACE in complexity-theoretic terminology). One striking implication of this characterization is that quantum computing provides no increase in computational power whatsoever over classical computing in the context of interactive proof systems.


1. Introduction

The notion of a proof has fundamental importance in the theory of computation. Indeed, the foundational work of Alonzo Church and Alan Turing in the 1930s, which produced the first formal models of computation (λ-calculus and Turing machines), was principally motivated by questions concerning proofs in formal logic. The theory of NP-completeness, developed in the 1970s by Stephen Cook, Leonid Levin, and Richard Karp, provides another example. It is built on the notion of efficient proof verification, and is arguably the most widely applicable discovery ever made in the theory of computation.

This paper is concerned with the potential advantages offered by quantum computation in the setting of proofs, and in particular its advantages when applied to the interactive proof system model of computation. Considered by many to be a cornerstone of modern computational complexity theory, the interactive proof system model was first introduced in the mid-1980s, and its quantum computational variant has been an object of study in quantum computing for more than a decade.

The main result to be presented herein is that quantum computation does not enhance the expressive power of interactive proof systems at all: quantum and classical interactive proof systems are equivalent in power, both coinciding with the complexity class PSPACE of computational problems solvable using an amount of memory scaling polynomially in the length of the input to the problem. This resolves a fundamental question about the quantum interactive proof system model that has been open since its introduction.

*  1.1. Randomness and interaction in proofs

When speaking of proofs, one typically has a traditional notion in mind where, at least in an abstract sense, the proof itself is a string of symbols to be verified by some form of computation. In the theory of NP-completeness, a proof is generally a string of bits certifying that a given object has a property of interest, such as a graph having a three-coloring. In mathematics, proofs appear in journal papers and books, to be verified by mathematicians with interest in the claimed theorems. Although it is seldom done in practice, one imagines that in principle—and undoubtedly through monumental effort—such proofs could be transformed into a purely symbolic form verifiable by a computer, presumably in accordance with axiomatic set theory.

There are, however, other interesting notions of proofs that extend the traditional notion in various ways. For instance, randomness may be used in the verification process to gather statistical evidence for or against a given claim. For example, if a coin is flipped 1,000 times, and heads come up 975 times, one can be reasonably sure that the coin is biased toward heads—and although the claim has not been proved in the sense of formal logic, it would be foolish to hold out any hope that the coin is actually fair. Allowing for interaction in the process of verification, where the proof takes the form not of a static string of symbols but of a process that may receive input and produce output, is another extension. The interactive proof system model, first proposed by Shafi Goldwasser et al.8 and László Babai2 in 1985, combines both extensions, randomness and interaction, into a formal computational model.

One variant of a well-known, often-repeated, and highly informal example illustrating a benefit of interaction and randomness in proofs is as follows. Suppose that you can taste the difference between Coke and Pepsi, and that you wish to convince a person who is skeptical of this claim. You cannot reasonably hope, for a variety of reasons, to succeed in this task through the traditional notion of a proof, but it is easily achieved through the use of interaction and randomness. In particular, you may allow the skeptic to subject you to a “blind taste test,” where you are offered Coke or Pepsi in an unmarked can—and when you are able to repeatedly make the correct identification, over the course of many independent random tests, the skeptic should be convinced that you can taste the difference. The negligible—but nevertheless nonzero—probability that every single identification was made through luck rather than an actual difference in taste is accepted: the formal definition of the model requires a high probability, but not absolute certainty, of correct outcomes.

As a computational model, interactive proof systems are not typically considered for single, isolated statements such as “Coke and Pepsi taste different”—irrespective of that example’s informality. Rather, as is done in the theory of NP-completeness, interactive proof systems are connected to computational decision problems where input strings are to be classified as yes-inputs and no-inputs. A particular decision problem is said to have an interactive proof system if there exists a computationally efficient verification procedure (called the verifier) with two properties that capture the essence of what it means to be a proof:

  1. Completeness. This property represents the requirement that true statements can be proved. Here, the requirement is that for any yes-input string x, there exists a behavior of the entity the verifier interacts with (called the prover) that causes the verifier to believe that x is indeed a yes-input. This situation may be indicated by the verifier outputting 1 (or accept) after interacting with the prover.
  2. Soundness. This property represents the complementary requirement to completeness, which is that false statements cannot be proved. In the current situation, the requirement is that the probability that the verifier will be convinced to output 1 given a no-input is negligible, regardless of the prover’s actions. Instead, the verifier outputs 0 (or reject) with probability very close to 1, indicating the prover’s failure to convince it otherwise.

While the verifier is restricted to be computationally efficient (or more formally to be describable as a polynomial-time probabilistic Turing machine), no such restriction is placed on the prover. These assumptions serve to place an emphasis on efficient verification, as opposed to efficient construction of proofs.

It must be stressed that there is an inherent asymmetry between the completeness and soundness conditions: when the verifier outputs 0 (or reject), it is not necessarily convinced that the input is a no-input, but only that the prover has failed to convince it that the input is a yes-input. This is analogous to the traditional notion of a proof: one would not be convinced that a particular mathematical statement is false by seeing an incorrect proof claiming it is true.

The most fundamental question, from the viewpoint of complexity theory, about the interactive proof system model is: which computational decision problems have interactive proof systems? The answer is known: a decision problem has an interactive proof system if and only if it is solvable by an ordinary computer (or deterministic Turing machine) that requires an amount of memory that scales at most polynomially in its input length. A more succinct expression of this fact is given by the equation IP = PSPACE. In this equation, IP denotes the set of decision problems having interactive proof systems, PSPACE represents the set of decision problems solvable using polynomial memory (or space), and of course the equality expresses the fact that the two sets are equal, meaning that the two descriptions give rise to precisely the same set of problems.

Like all set equalities, there are two subset relations represented by the equation IP = PSPACE. One of the two relations, IP ⊆ PSPACE, is easy to prove: the typical proof involves a fairly straightforward recursive traversal of a game tree whose edges represent messages exchanged between the prover and verifier, which can be performed in a space-efficient way. The other relation, PSPACE ⊆ IP, was proved by Adi Shamir14 in 1990, based on work of Carsten Lund et al.10 It is a landmark result that established the powerful proof technique of arithmetization as a standard tool in computational complexity.

*  1.2. Quantum computation in proofs

The idea of quantum computation was born in the early 1980s when Richard Feynman7 asked a brilliant question: If quantum mechanics is so hard to simulate with a classical computer, why not build a computer based on quantum mechanics to simulate quantum mechanics more directly? Feynman’s ideas on the subject led David Deutsch6 to define the quantum Turing machine model and to investigate its computational power. Driven in large part by Peter Shor’s subsequent discoveries of polynomial-time algorithms for factoring and computing discrete logarithms on a quantum computer,15 quantum computation has developed into an active field of study within theoretical computer science and both theoretical and experimental physics.

Large-scale quantum computers do not currently exist, and it is universally agreed that at the very least their realization will be an enormous technological challenge. However, it must also be appreciated that quantum mechanics is a remarkably accurate theory that has never been refuted—and with the theory suggesting that quantum computing should be possible, scientists are naturally compelled to test the theory by attempting to build a quantum computer. Efforts to do this are underway in many laboratories around the world.

Within the theoretical study of quantum computation, it is natural to consider quantum computational variants of interesting classical models, including those based on the notion of proofs. The quantum interactive proof system model, which was first introduced in 1999,17 represents a natural quantum computational analog to the (classical) interactive proof system model. In simple terms, the quantum model allows the verifier and prover in an interactive proof system to perform quantum computations and exchange quantum information, but is otherwise similar to the classical model.

The potential advantages of quantum computation in the setting of interactive proof systems are not limited to the fact that the verifier is able to perform a potentially wider range of computations. The nature of quantum information is such that it has striking benefits in a variety of information-processing tasks, such as secret key exchange3 and distributed computational tasks allowing limited communication.13 A known benefit of quantum computation in the interactive proof system model is that it allows for a major reduction in the number of messages that must be exchanged: quantum interactive proof systems allowing just three messages to be exchanged between the prover and verifier have the full power of those allowing any polynomial number of messages.9 It is not known if the analogous fact holds for classical interactive proof systems, but it is conjectured not to hold. (It would, in particular, send complexity theorists reeling from the collapse of the polynomial-time hierarchy if it were true.)

It is not difficult to show that quantum interactive proof systems are at least as powerful as classical ones—due, in essence, to the fact that quantum computers can mimic classical computers. This implies PSPACE ⊆ QIP. Unlike the subset relation IP ⊆ PSPACE, however, it is not straightforward to prove QIP ⊆ PSPACE, and prior to the work presented in this paper this relationship was not known. The remainder of this paper is devoted to a presentation of this result.


2. Quantum Proof Systems

This section aims to provide readers with a basic understanding of quantum information and the quantum interactive proof system model, narrowly focused on material that is required for the subsequent sections of this paper. The standard text, Nielsen and Chuang,12 is recommended to those readers interested in a more comprehensive presentation of the much broader field of quantum information and quantum computation.

*  2.1. States and measurements

Quantum information is described in the language of matrices and linear algebra in a way that is reminiscent of probabilistic models such as Markov chains.

Consider a physical device X whose possible states are the binary strings of length k for some fixed choice of a positive integer k. For brevity, one may say that X is a k-bit register, or a k-qubit register in the quantum setting, as a way of suggesting that X is a device used for storing and processing information.

One way to represent one’s knowledge of X in a probabilistic setting, where the states are changed through a randomized process of some sort, is by a vector v of probabilities: the value v[x] represents the probability that X takes the state x for each binary string x of length k. The vector v is therefore a K-dimensional vector for K = 2^k.

In quantum information, the K-dimensional vector v of probabilities is replaced by a K × K matrix ρ with complex number entries, known as a density matrix. (It is traditional in quantum physics to use lower-case Greek letters, often ρ, σ, and ξ, to represent density matrices.) It is reasonable to view the diagonal entries of a density matrix ρ as representing probabilities, so that a “standard measurement” of the register X would yield each possible state x with probability ρ[x, x]. The off-diagonal entries of ρ are not easily connected to classical intuitions, but they do have great significance with respect to their role in calculations. Informally speaking, for distinct choices of binary strings x and y, the off-diagonal entries ρ[x, y] and ρ[y, x] provide information about the degree to which the states x and y are in “superposition” in X, or alternately the degree to which these states could interfere with one another in processes involving X.

Although they are not as intuitive as vectors of probabilities, density matrices are very simple objects in a mathematical sense: they are positive semidefinite matrices whose diagonal entries sum to 1 (i.e., whose trace is 1). A matrix ρ is positive semidefinite if and only if it satisfies (i) the condition that $\rho[x, y] = \overline{\rho[y, x]}$ for all choices of x and y (with $\overline{\alpha}$ denoting the complex conjugate of α), and (ii) the constraint that all of its eigenvalues are nonnegative.
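
As a concrete illustration, the following minimal sketch checks exactly these two conditions, together with the unit-trace requirement, on a small example. It is written in Python with NumPy, which are of course not part of the original article, and the helper name is hypothetical.

```python
import numpy as np

def is_density_matrix(rho, tol=1e-9):
    """Trace-1 positive semidefinite check: condition (i) is the Hermitian
    requirement rho[x, y] == conj(rho[y, x]); condition (ii) is that all
    eigenvalues are nonnegative."""
    hermitian = np.allclose(rho, rho.conj().T)
    nonneg = np.linalg.eigvalsh(rho).min() >= -tol
    return hermitian and nonneg and abs(np.trace(rho).real - 1) < tol

rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])          # a valid one-qubit (K = 2) density matrix
print(is_density_matrix(rho))         # True
print(np.diag(rho).real)              # standard-measurement probabilities [0.5 0.5]
```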

Quantum states of independent registers are represented by tensor products of density matrices. For instance, if X and Y are k-qubit registers independently prepared in the quantum states represented by the K × K density matrices σ and ξ, then the quantum state of the pair (X, Y) is described by the K² × K² density matrix

$$\sigma \otimes \xi = \begin{pmatrix} \sigma[0,0]\,\xi & \cdots & \sigma[0,K-1]\,\xi \\ \vdots & \ddots & \vdots \\ \sigma[K-1,0]\,\xi & \cdots & \sigma[K-1,K-1]\,\xi \end{pmatrix},$$

where the binary strings of length k indexing the entries of σ have been indicated by the integers they represent in binary notation.

Of course, not every state of a pair of registers (X, Y) can be expressed as a tensor product in this way, representing the fact that there may be dependencies between X and Y. Two such examples for the case k = 1 are the following density matrices:

$$\rho_0 = \begin{pmatrix} \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{2} \end{pmatrix} \qquad\text{and}\qquad \rho_1 = \begin{pmatrix} \frac{1}{2} & 0 & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{pmatrix}.$$

The first density matrix ρ0 represents a simple situation in which X and Y are randomly correlated: the two registers take the same state, chosen to be 0 or 1 uniformly at random. The second density matrix ρ1 is similar to ρ0 in the sense that it represents a joint state with a strong correlation between X and Y. It so happens, however, that the two non-zero off-diagonal entries endow ρ1 with the characteristic of entanglement, which is one of the most remarkable features of quantum information. (Quantum teleportation4 provides a well-known example of the use of the entangled state ρ1 as a resource in an information-processing task.)

For every possible quantum state of a pair of registers (X, Y), whether correlated or not, there is a uniquely determined reduced state of the register X that, in essence, would describe the state of X if Y were to suddenly be destroyed or lost. This is analogous to the marginal probability distribution of X in the probabilistic case. In mathematical terms the reduced state is defined by an operation known as the partial trace. If it is the case that the state of the pair (X, Y) is described by a K² × K² density matrix ρ, which may be written as a block matrix of the form

$$\rho = \begin{pmatrix} M_{0,0} & \cdots & M_{0,K-1} \\ \vdots & \ddots & \vdots \\ M_{K-1,0} & \cdots & M_{K-1,K-1} \end{pmatrix}$$

for K × K matrices Mi, j, then the K × K reduced density matrix for the register X is defined as

$$\operatorname{PartialTrace}(\rho) = \begin{pmatrix} \operatorname{Trace}(M_{0,0}) & \cdots & \operatorname{Trace}(M_{0,K-1}) \\ \vdots & \ddots & \vdots \\ \operatorname{Trace}(M_{K-1,0}) & \cdots & \operatorname{Trace}(M_{K-1,K-1}) \end{pmatrix}.$$

For the states ρ0 and ρ1 defined previously, for example, it holds that

$$\operatorname{PartialTrace}(\rho_0) = \operatorname{PartialTrace}(\rho_1) = \begin{pmatrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{pmatrix}.$$

One can consider other variants of this notion, such as one defining the reduced state of Y rather than X or one for the situation when X and Y have different sizes, but the case presented above is sufficient for the needs of this paper.
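
In code, the partial trace amounts to tracing out the K × K blocks M_{i,j} described above. The following sketch (again Python/NumPy, with hypothetical helper names) reproduces the calculation just shown, confirming that ρ0 and ρ1 have identical reduced states even though only ρ1 is entangled.

```python
import numpy as np

def partial_trace(rho, K):
    """Reduced state of X: entry (i, j) is Trace(M_{i,j}) for the K x K
    blocks M_{i,j} of the (K*K) x (K*K) matrix rho."""
    blocks = rho.reshape(K, K, K, K)      # axes (i, a, j, b): blocks[i,a,j,b] == M_{i,j}[a,b]
    return np.trace(blocks, axis1=1, axis2=3)

rho0 = np.array([[0.5, 0, 0, 0  ], [0, 0, 0, 0],
                 [0,   0, 0, 0  ], [0, 0, 0, 0.5]])
rho1 = np.array([[0.5, 0, 0, 0.5], [0, 0, 0, 0],
                 [0,   0, 0, 0  ], [0.5, 0, 0, 0.5]])
print(partial_trace(rho0, 2))   # [[0.5 0. ] [0.  0.5]]
print(partial_trace(rho1, 2))   # identical reduced state, although rho1 is entangled
```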

Now suppose one has a k-qubit register X in one’s possession, and that the state of X is described by the density matrix ρ. Much like the probabilistic case, there is a limit to the amount of information about ρ that can be gained by looking at a single copy of X. Any information must be obtained by a measurement, which will generally result in a probabilistic outcome.

In more precise terms, a measurement of X that results in one of the outcomes 0, 1,…, m – 1 (for some choice of m) is described by a collection of positive semidefinite matrices {Π0, Π1,…, Πm – 1} that satisfy the condition Π0 + … + Πm-1 = 1, where 1 represents the K × K identity matrix. Often the matrices Π0,…, Πm-1 are projection matrices, meaning that in addition to being positive semidefinite their only eigenvalues are 0 and 1. This is not a requirement, but every measurement considered in this paper may be assumed to have this property.

Now, when ρ is measured with respect to the measurement {Π0,…, Πm-1}, each outcome a ∈ {0,…, m – 1} is obtained with probability Trace(Πaρ). The condition that ρ and Πa are positive semidefinite implies that the resulting probabilities are nonnegative, and the conditions that Trace(ρ) = 1 and Π0 + … + Πm-1 = 1 imply that the probabilities sum to 1. According to the simplest definition, the register X is destroyed by the act of measurement, so there is no opportunity to gain further information about ρ unless additional independent copies of it are made available.

One example of a measurement, referred to as the “standard measurement” in passing above, is obtained by taking m = K, associating the possible outcomes 0,…, K – 1 with the length k binary strings with respect to binary notation, and taking Πx to be the matrix with a single 1 in the (x, x) diagonal entry and 0 for all other entries. As suggested before, this measurement results in each outcome x with probability ρ[x, x]. The measurements that will be of interest in this paper have a somewhat different nature, in that they are binary-valued measurements resulting from hypothetical quantum computations performed on registers. They are, nevertheless, still describable within the framework just explained.
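
The rule that outcome a occurs with probability Trace(Πaρ) translates directly into code. The sketch below applies the standard measurement to a small example state; the function name is hypothetical.

```python
import numpy as np

def outcome_probabilities(rho, measurement):
    """Probabilities Trace(Pi_a @ rho) for a measurement {Pi_0, ..., Pi_{m-1}}."""
    return [np.trace(Pi @ rho).real for Pi in measurement]

# The standard measurement for K = 2: Pi_x is 1 at entry (x, x), 0 elsewhere.
standard = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])
print(outcome_probabilities(rho, standard))   # [0.75, 0.25]
```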

One additional component of quantum information theory that has not been mentioned above is the description of transformations of quantum registers according to physical processes (such as computations). These changes are described mathematically by linear mappings known as channels, and are of great interest to the theory. It is, however, not necessary to make use of this notion in this paper, so it is not discussed further.

*  2.2. Quantum interactive proofs simplified

Taken in its most general form, the quantum interactive proof system model can allow for complicated and mathematically unwieldy interactions involving the exchange of quantum messages over the course of many rounds. By the nature of quantum information, such an interaction cannot generally be described by the sort of game tree that describes a classical interaction—the possibility of entanglement among the prover, verifier, and message registers prohibits this.

It is, however, always possible9, 11 to transform a given quantum interactive proof system into one with the following conceptually simple form:

  1. The prover sends the verifier a k-qubit register X, for some choice of a positive integer k—which must be polynomially related to the length of the input string x. Upon receiving X, the verifier sets it aside without interacting with it.
  2. The verifier chooses a bit a ∈ {0, 1}, uniformly at random, and sends a to the prover.
  3. The prover sends the verifier a second k-qubit register Y. The state of Y may of course be dependent on the random bit a, given that the prover learned a before sending Y to the verifier, but its correlation with X is limited by the fact that the prover did not have access to X after seeing a.
  4. The verifier measures the pair (X, Y) with respect to one of two binary-outcome measurements: {Π00, Π01} in case a = 0 and {Π10, Π11} in case a = 1. The measurement outcome is interpreted as the verifier’s output: 1 means the proof is accepted, 0 means it is rejected.

The verifier’s measurements {Π00, Π01} and {Π10, Π11} will, naturally, be dependent on the input string x to the decision problem under consideration. In accordance with the definition of the quantum interactive proof system model, these measurements must be efficiently implementable by a quantum computation to be performed by the verifier.

It is beyond the scope of this paper to describe how a general quantum interactive proof system can be transformed into one having the above form, but it is not overly complex. The transformation is such that the verifier’s measurements can be forced to output 1 with certainty when the input is a yes-input, while the maximum probability for the output 1 can be made arbitrarily close to ½ when the input is a no-input. (More formally speaking, the transformation can guarantee a probability of at most ½ + ε for any fixed choice of a constant ε > 0.)

A common question about quantum interactive proof systems having the above form is this: Why can the prover not simply prepare three registers X, Y0, and Y1, send all three to the verifier in a single message, and allow the verifier to measure (X, Y0) or (X, Y1) depending on its choice of the random bit a? This would seem to eliminate the need for interaction, as the verifier would never send anything to the prover. The reason is that entanglement prevents this from working: in order for the two measurements to result in the output 1 with high probability, the registers X and Y will generally need to be highly entangled. One of the curious features of entanglement, however, is that any single quantum register is highly constrained with respect to the degree of entanglement it may simultaneously share with two or more other registers. (This phenomenon was colorfully named the monogamy of entanglement by Charles Bennett.) Thus, again in the general case, the only way for the prover to cause the verifier to output 1 is to prepare X in a highly entangled state with a register of its own, and then use this entanglement to prepare Y once the random bit a has been received.

There are many strategies that a prover could potentially employ in an attempt to cause a verifier to output 1 in the type of proof system described above—but they can all be accounted for by considering the possible states of the pair (X, Y) that the verifier measures, conditioned on the two possible values of the random bit a. That is, for any two K² × K² density matrices ρ0 and ρ1, one may ask whether it is possible for the prover to follow a strategy that will leave the verifier with the state ρ0 for the pair (X, Y) when the random bit takes the value a = 0, and with the state ρ1 when a = 1; and the set of all possible such choices for ρ0 and ρ1 is simple to characterize. It is precisely the set of all K² × K² density matrices ρ0 and ρ1 for which

$$\operatorname{PartialTrace}(\rho_0) = \operatorname{PartialTrace}(\rho_1). \tag{1}$$

The necessity of this condition follows immediately from the fact that the prover cannot touch X at any point after learning a, while the sufficiency of the condition requires an analysis based on standard mathematical tools of quantum information theory.

It follows that the maximum probability for a prover to cause the verifier to output 1 in the type of proof system described above is the maximum of the quantity

$$\frac{1}{2}\operatorname{Trace}(\Pi_{01}\,\rho_0) + \frac{1}{2}\operatorname{Trace}(\Pi_{11}\,\rho_1), \tag{2}$$

subject to the conditions that ρ0 and ρ1 are K² × K² density matrices satisfying Equation 1.
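
Assuming matrices for the accepting measurement operators are available, this optimization problem is straightforward to state in code. The following sketch evaluates the objective (2) and checks the constraint (1) for a candidate prover strategy; the names Pi01 and Pi11 mirror the article's Π01 and Π11, and the helper names are otherwise hypothetical.

```python
import numpy as np

def reduced_state(rho, K):
    # Partial trace over the second register, as in Section 2.1.
    return np.trace(rho.reshape(K, K, K, K), axis1=1, axis2=3)

def acceptance_probability(rho0, rho1, Pi01, Pi11):
    # Objective (2): the verifier averages over its random challenge bit a.
    return 0.5 * np.trace(Pi01 @ rho0).real + 0.5 * np.trace(Pi11 @ rho1).real

def consistent_strategy(rho0, rho1, K, tol=1e-9):
    # Constraint (1): both conditional states must leave X in the same reduced state.
    return np.allclose(reduced_state(rho0, K), reduced_state(rho1, K), atol=tol)
```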


3. PSPACE and Boolean Circuits

Based on the claims of the previous section, the characterization QIP = PSPACE follows from the existence of a polynomial-space algorithm for approximating the prover’s optimal success probability in a quantum interactive proof system of the highly restricted form described above. The accuracy of this approximation may, in fact, be very coarse: it is only necessary for the algorithm to distinguish an optimal probability of 1 from one very close to ½.

The design of space-bounded algorithms is an unnatural task for most algorithm designers. When one cares only about space-efficiency, and not about time-efficiency, programming techniques that would be ridiculous in a practical situation become useful. For example, rather than storing bits used frequently in a given computation, one may instead opt to recompute and then discard those bits every time they are used. Taking such schemes to an extreme, it becomes possible to implicitly perform computations on numbers and matrices that are themselves exponentially larger than the total memory used for the entire computation.
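
As a toy illustration of this recompute-rather-than-store style (a hypothetical example, not taken from the paper), the following function decides whether a walk of length 2^t connects two vertices of an implicitly given graph, keeping only O(t) recursion frames rather than storing any of the exponentially large Boolean matrix powers involved.

```python
def reachable(adj, K, t, i, j):
    """Is there a walk of length exactly 2**t from vertex i to vertex j in a
    K-vertex graph given implicitly by the function adj? Memory use is O(t)
    recursion frames, even though tabulating the 2**t-th Boolean matrix power
    would take space (and time) exponential in t."""
    if t == 0:
        return adj(i, j)
    # Recompute both halves on demand instead of storing intermediate powers.
    return any(reachable(adj, K, t - 1, i, m) and reachable(adj, K, t - 1, m, j)
               for m in range(K))

# Example: a directed 4-cycle; a walk of length 4 (= 2**2) returns to the start.
print(reachable(lambda i, j: j == (i + 1) % 4, 4, 2, 0, 0))  # True
```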

Fortunately, in the late 1970s, Alan Borodin described a simple way to translate the task of highly space-efficient algorithm design into one that is arguably much more natural and intuitive for algorithm designers.5 For the particular case of PSPACE algorithms, Borodin’s result states that a given decision problem is in PSPACE if and only if it can be computed by a collection of Boolean circuits having the form illustrated in Figure 1. The primary appeal of this reformulation is that it allows one to make use of extensive work on parallel algorithms for performing various computational tasks when designing PSPACE algorithms.

Now, the task at hand is to prove that any decision problem having a quantum interactive proof system can be decided by a family of polynomial-depth circuits. There is a natural two-stage process associated with this task that is illustrated in Figure 2. The first stage of this process turns out to be fairly straightforward, using elementary facts about quantum computations and bounded-depth circuits. The second stage is more difficult, and the algorithm for accomplishing it is representative of the main technical contribution of this work. The remainder of the paper is devoted to a discussion of this algorithm.


4. The Main Algorithm

The algorithm corresponding to the second stage of the computation illustrated in Figure 2 will now be described. As is required to prove QIP = PSPACE, it is a highly parallel algorithm for approximating the value of the optimization problem described at the end of Section 2. This optimization problem is an example of a semidefinite program (or SDP for short): it asks for the maximum value of a linear function of a collection of matrix variables that are subject to a collection of linear and positive semidefinite constraints.

SDPs are well studied and subject to an analytically powerful duality theory that will be useful in the section following this one. There do exist efficient algorithms to approximate solutions to most SDPs, including the theoretically important ellipsoid method and the more practical family of interior point methods. Unfortunately these algorithms do not parallelize, and there is a generally believed complexity-theoretic conjecture (namely NC ≠ P) implying that no other general and efficient algorithm will parallelize either: some SDPs seem to force algorithms that approximate them to be inherently sequential. For this reason, the algorithm to be presented below will, presumably by necessity, exploit the specific nature of the SDP at hand to allow for its parallelizability. It does not approximate solutions to general SDPs, only those arising from the specific type of quantum interactive proof system described in Section 2.

The algorithm employs a method from learning theory and combinatorial optimization sometimes known as the matrix multiplicative weights update method, and in particular makes use of a variant of this technique developed by Manfred Warmuth and Dima Kuzmin16 and Sanjeev Arora and Satyen Kale.1

The algorithm is iterative in nature: for successive values of t = 0, 1, 2,…, the algorithm will generate K² × K² matrices ρ0(t) and ρ1(t) that, roughly speaking, correspond to guesses for ρ0 and ρ1 in the SDP. In addition, the algorithm also generates a K × K matrix σ(t) for each t, which intuitively represents a common “target” state for

$$\operatorname{PartialTrace}\big(\rho_0^{(t)}\big) \quad\text{and}\quad \operatorname{PartialTrace}\big(\rho_1^{(t)}\big).$$

Like many iterative algorithms, there is a balance between the algorithm’s rate of convergence and accuracy. It is essential for the parallelizability of the algorithm that ρ0(t), ρ1(t), and σ(t) converge quickly to choices that reveal a near-optimal solution; but for the sake of the algorithm’s accuracy it must not move too rapidly in any one direction, for fear of overreacting to poor choices that had the appearance of good ones within a small region of the search space.

To aid in this balance, it is helpful at a technical level to introduce two K² × K² penalty matrices

$$P_0 = \Pi_{01} + \sqrt{\alpha}\,\Pi_{00} \qquad\text{and}\qquad P_1 = \Pi_{11} + \sqrt{\alpha}\,\Pi_{10},$$

which have the effect of “inflating” density matrices ρ0 and ρ1 that would not yield a value of the objective function (2) close to 1. With a bit of algebra it can be shown that, for a sufficiently large value of α > 0, the optimal value of the original SDP will be close to the maximum value of

$$\operatorname{Trace}\!\left(\frac{Q_0 + Q_1}{2}\right)$$

over all choices of positive semidefinite matrices Q0 and Q1, subject to the constraint that σ – PartialTrace (P0Q0P0) and σ – PartialTrace (P1Q1P1) are positive semidefinite for some choice of a density matrix σ. (The choice α = 4 happens to provide sufficient accuracy for the problem at hand.)
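
Stated as code, feasibility in this reformulated problem consists of two positive semidefinite checks. The following sketch tests the constraints and evaluates the objective exactly as described above; the helper names are hypothetical.

```python
import numpy as np

def reduced_state(rho, K):
    return np.trace(rho.reshape(K, K, K, K), axis1=1, axis2=3)

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol

def feasible(Q0, Q1, sigma, P0, P1, K):
    """Constraints of the reformulated SDP: sigma - PartialTrace(Pb Qb Pb)
    must be positive semidefinite for b = 0 and b = 1."""
    return (is_psd(sigma - reduced_state(P0 @ Q0 @ P0, K)) and
            is_psd(sigma - reduced_state(P1 @ Q1 @ P1, K)))

def objective(Q0, Q1):
    """The quantity Trace((Q0 + Q1)/2) being maximized."""
    return 0.5 * (np.trace(Q0) + np.trace(Q1)).real
```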

The input to the algorithm is the pair of K² × K² penalty matrices P0 and P1, which effectively specify the matrices Π00, Π01, Π10, and Π11. The algorithm also refers to the constant values α = 4, γ = 4/3, ε = 1/64, and δ = ε/α², which, for the sake of clarity, are referred to by these names rather than their numerical values.

  1. Set
      
      $$\rho_0^{(0)} = \rho_1^{(0)} = \frac{\mathbb{1} \otimes \mathbb{1}}{K^2} \qquad\text{and}\qquad \sigma^{(0)} = \frac{\mathbb{1}}{K},$$
      
      for $\mathbb{1}$ denoting the K × K identity matrix. (These matrices represent completely random quantum states, reflecting the fact that no information is present in the initial guesses for ρ0, ρ1, and σ.)
  2. Set the maximum iteration number of the algorithm to
      
      $$T = \left\lceil \frac{\ln K}{\delta^2} \right\rceil,$$
      
      and repeat the following steps for each t from 0 to T − 1:
      
      (a) Compute the K × K matrices
      
      $$M_0^{(t)} = \gamma\,\sigma^{(t)} - \operatorname{PartialTrace}\big(P_0\,\rho_0^{(t)}P_0\big) \qquad\text{and}\qquad M_1^{(t)} = \gamma\,\sigma^{(t)} - \operatorname{PartialTrace}\big(P_1\,\rho_1^{(t)}P_1\big).$$
      
      (b) Compute the K × K matrices Δ0(t) and Δ1(t) that project onto the negative eigenspaces of M0(t) and M1(t), respectively.
      
      (c) Compute
      
      $$\beta^{(t)} = -\frac{1}{2}\operatorname{Trace}\big(\Delta_0^{(t)}M_0^{(t)}\big) - \frac{1}{2}\operatorname{Trace}\big(\Delta_1^{(t)}M_1^{(t)}\big),$$
      
      and if β(t) ≤ ε, then output 1 and stop.
      
      (d) Set
      
      $$\rho_b^{(t+1)} = \frac{\exp\!\big({-\delta}\sum_{s=0}^{t}P_b\,(\Delta_b^{(s)}\otimes \mathbb{1})\,P_b\big)}{\operatorname{Trace}\exp\!\big({-\delta}\sum_{s=0}^{t}P_b\,(\Delta_b^{(s)}\otimes \mathbb{1})\,P_b\big)} \;\;(b = 0, 1) \qquad\text{and}\qquad \sigma^{(t+1)} = \frac{\exp\!\big(\delta\sum_{s=0}^{t}\frac{1}{2}(\Delta_0^{(s)}+\Delta_1^{(s)})\big)}{\operatorname{Trace}\exp\!\big(\delta\sum_{s=0}^{t}\frac{1}{2}(\Delta_0^{(s)}+\Delta_1^{(s)})\big)}.$$
  3. If the algorithm has not output 1 and stopped during any iteration of step 2(c), then output 0 and stop.

The correctness of the algorithm is certainly not obvious from its description, and is discussed in the section following this one. In addition to the algorithm’s correctness, it is of course also necessary for it to have a highly parallel implementation. As the number of iterations is very small (at most O(log K), which is polynomially related to the problem size n), it suffices to observe that each iteration can itself be implemented by a highly parallel computation. This is possible through the use of known parallel algorithms for various matrix and algebraic problems.
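
For readers who prefer code to prose, the following Python/NumPy sketch assembles the whole loop. To be clear about what is and is not from the article: the overall structure, the constants, and the roles of M, Δ, and β follow the description above, but the precise iteration count, the formula for β(t), and the exponential update rule are reconstructions consistent with Section 5.1 rather than the authors' exact formulas, and the max_iter cap is a convenience added here because T is impractically large even for tiny K.

```python
import numpy as np
from scipy.linalg import expm

def reduced_state(rho, K):
    """Partial trace over the second register of a (K*K) x (K*K) matrix."""
    return np.trace(rho.reshape(K, K, K, K), axis1=1, axis2=3)

def negative_projection(M):
    """Projection onto the span of the eigenvectors of M with negative eigenvalues."""
    w, V = np.linalg.eigh(M)
    neg = V[:, w < 0]
    return neg @ neg.conj().T

def normalized_exp(H):
    """exp(H), rescaled to unit trace so that it is a density matrix."""
    E = expm(H)
    return E / np.trace(E).real

def mmw(P0, P1, K, alpha=4.0, gamma=4.0/3.0, eps=1.0/64.0, max_iter=None):
    delta = eps / alpha**2
    T = max_iter if max_iter is not None else int(np.ceil(np.log(K) / delta**2))
    P, I = [P0, P1], np.eye(K)
    # Step 1: zero exponents give the completely random initial guesses
    # (1 (x) 1)/K^2 for rho_0, rho_1 and 1/K for sigma.
    H_rho = [np.zeros((K * K, K * K)), np.zeros((K * K, K * K))]
    H_sigma = np.zeros((K, K))
    for t in range(T):
        rho = [normalized_exp(H_rho[b]) for b in (0, 1)]
        sigma = normalized_exp(H_sigma)
        # Step 2(a): relaxed constraint matrices.
        M = [gamma * sigma - reduced_state(P[b] @ rho[b] @ P[b], K) for b in (0, 1)]
        # Step 2(b): projections onto the violated (negative) eigenspaces.
        D = [negative_projection(M[b]) for b in (0, 1)]
        # Step 2(c): if the total violation is small, accept.
        beta = -0.5 * (np.trace(D[0] @ M[0]) + np.trace(D[1] @ M[1])).real
        if beta <= eps:
            return 1
        # Step 2(d): shift the weight of rho away from the violating
        # directions, and the weight of sigma toward them.
        for b in (0, 1):
            H_rho[b] -= delta * (P[b] @ np.kron(D[b], I) @ P[b])
        H_sigma += delta * 0.5 * (D[0] + D[1])
    return 0  # Step 3: insufficient progress after T iterations.
```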


5. Analysis of the Algorithm

*  5.1. Intuition behind the algorithm

The SDP described in the previous section asks for the maximum trace of the average (Q0 + Q1)/2, over all positive semidefinite matrices Q0 and Q1 for which σ – PartialTrace (P0Q0P0) and σ – PartialTrace (P1Q1P1) are positive semidefinite—for some choice of a density matrix σ. While the formal analysis of the algorithm indeed makes use of the properties of this SDP, the algorithm itself is more naturally viewed as solving a feasibility problem.

In particular, for every iteration t the algorithm computes matrices ρ0(t), ρ1(t), and σ(t) so that ρ0(t) and ρ1(t) have average trace equal to 1, but may fail to satisfy the constraint that

$$\sigma^{(t)} - \operatorname{PartialTrace}\big(P_b\,\rho_b^{(t)}P_b\big) \tag{3}$$

is positive semidefinite for b = 0, b = 1, or both. Successive iterations attempt to bring these matrices closer and closer to satisfying these constraints. If the constraints are “close enough” to being satisfied, the algorithm outputs 1, concluding that the optimal value of the SDP must be close to 1 by taking Q0 and Q1 close to ρ0(t) and ρ1(t). If the algorithm fails to make sufficient progress toward meeting the constraints, even after T iterations have passed, it outputs 0, concluding that the optimal value of the SDP cannot be close to 1.

Suppose, for a given iteration t, that the algorithm’s current choices of ρ0(t), ρ1(t), and σ(t) are far from satisfying the constraint (3) for b = 0, b = 1, or both. To be more precise, suppose that one of the matrices

$$\gamma\,\sigma^{(t)} - \operatorname{PartialTrace}\big(P_0\,\rho_0^{(t)}P_0\big) \qquad\text{and}\qquad \gamma\,\sigma^{(t)} - \operatorname{PartialTrace}\big(P_1\,\rho_1^{(t)}P_1\big)$$

fails to be positive semidefinite (for γ = 4/3). In this case, the subspace corresponding to the projection Δb(t) onto the negative part of this matrix represents a part of the search space where ρ0(t), ρ1(t), and σ(t) are inadequate: if progress toward meeting the constraints is to be made, σ must be made larger on this subspace while PartialTrace(PbρbPb) must be made smaller.

In both cases, this modification is made by including an additional term in the argument to the matrix exponential that is normalized to produce the next iteration’s choices ρ0(t+1), ρ1(t+1), and σ(t+1). This has the effect of shifting the weight of ρ0(t+1) and ρ1(t+1) away from the parts of ρ0(t) and ρ1(t) that contributed to the constraint violations indicated by Δ0(t) and Δ1(t), respectively, and likewise shifting the weight of σ(t+1) toward these parts of σ(t). (A negative term in the exponential reduces weight, while a positive term increases weight.)

*  5.2. Comments on the formal algorithm analysis

Proving that the algorithm works correctly is, of course, not as simple as the intuitive discussion above might suggest, due in large part to the complicated behavior of the exponential of a sum of matrices. A proper analysis of the algorithm is necessarily technical, and one can be found in the full version of this paper. This extended abstract will not present a detailed formal analysis, but will instead present just a high-level sketch of the analysis, intended only to provide the reader with its basic ideas.

A rigorous analysis of the algorithm makes use of the notion of semidefinite programming duality. Every semidefinite programming problem, stated as a maximization problem typically called the primal problem, has a corresponding dual problem, which is a minimization problem. The methodology for determining the dual problem corresponding to a given primal problem is standard, but will not be discussed here. Each possible value of the dual problem provides an upper bound on the optimal value of the primal problem.

For the particular SDP at hand, the dual problem is to minimize the largest eigenvalue of the average ½ R0 + ½ R1, over all K × K positive semidefinite matrices R0 and R1 obeying the constraint that every eigenvalue of P0(R0 ⊗ 1)P0 and P1(R1 ⊗ 1)P1 is at least 1, or equivalently that the matrices

$$P_0(R_0 \otimes \mathbb{1})P_0 - \mathbb{1}\otimes\mathbb{1} \qquad\text{and}\qquad P_1(R_1 \otimes \mathbb{1})P_1 - \mathbb{1}\otimes\mathbb{1}$$

are both positive semidefinite. (As before, 1 denotes the K × K identity matrix.)
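
These dual constraints and the dual objective are also easy to check numerically. The sketch below does so directly from the description above; the helper names are hypothetical.

```python
import numpy as np

def dual_feasible(R0, R1, P0, P1, K, tol=1e-9):
    """Dual constraints: every eigenvalue of Pb (Rb (x) 1) Pb is at least 1."""
    I, big = np.eye(K), np.eye(K * K)
    return all(np.linalg.eigvalsh(P @ np.kron(R, I) @ P - big).min() >= -tol
               for P, R in ((P0, R0), (P1, R1)))

def dual_value(R0, R1):
    """Largest eigenvalue of (R0 + R1)/2, an upper bound on the primal optimum."""
    return np.linalg.eigvalsh(0.5 * (R0 + R1)).max()
```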

The analysis works by constructing solutions to either the primal problem or dual problem based on the behavior of the algorithm, guaranteeing its correctness under the assumption that the optimal value of the primal problem is either very close to 1 or to ½.

The simpler case is that the algorithm outputs 1. This must happen during some particular iteration t, and for this choice of t one may consider the matrices

$$Q_0 = \frac{\rho_0^{(t)}}{\gamma + 4\varepsilon} \qquad\text{and}\qquad Q_1 = \frac{\rho_1^{(t)}}{\gamma + 4\varepsilon}.$$

For an appropriate choice of σ, which can be constructed from ρ0(t), ρ1(t), σ(t), Δ0(t), Δ1(t), and β(t), it holds that both σ – PartialTrace (P0Q0P0) and σ – PartialTrace (P1Q1P1) are positive semidefinite. The trace of the average of Q0 and Q1 is at least 1/(γ + 4ε) > 5/8. It follows that the optimal value of the primal problem cannot be too close to ½, so it is necessarily close to 1.

The more difficult case is that the algorithm outputs 0. In this case, matrices R0 and R1 constructed from the negative-eigenspace projections Δ0(t) and Δ1(t) accumulated over the algorithm’s T iterations form a solution to the dual problem satisfying the required constraints and achieving a value smaller than 7/8. Establishing this is somewhat technical, but follows from a basic methodology that appeared in previous works (including Warmuth and Kuzmin16 and Arora and Kale1). With a value of at most 7/8 for the dual problem, the same value has been established as an upper bound for the primal problem. It follows that the optimal value for the primal problem is sufficiently far from 1 that it must be close to ½.

There is a small complication in the formal analysis that does not present a major obstacle, but is nevertheless worthy of note. The complication is that the algorithm cannot perform all of the required computations exactly, but must approximate some of them. (In particular, the matrix exponentials and the negative eigenspace projections can only be approximated, albeit with very high accuracy.) A slight modification of the analysis sketched above demonstrates, however, that the algorithm is not particularly sensitive to errors. The steps of the computation may, in fact, be performed with significantly less accuracy than is possible within the resource constraints required of the algorithm.


6. Conclusion

The characterization QIP = PSPACE implies that quantum computation does not provide an increase in the expressive power of interactive proof systems. It is tempting to view this fact as a negative result for quantum computing, but this view is not well justified. What is most interesting about quantum computation is its potential in a standard computational setting, where an algorithm (deterministic, probabilistic, or quantum) receives an input and produces an output in isolation, as opposed to through an interaction with a hypothetical prover. The main result of this paper has no implications for this fundamental question. A more defensible explanation for the equivalence of quantum and classical computing in the interactive proof system model is the model’s vast computational power: all of PSPACE. That such power washes away the difference between quantum and classical computing is, in retrospect, perhaps not unexpected.

One future research direction that is clearly suggested by this paper is: what class of SDPs can be solved approximately by parallel algorithms? We do not have an answer to this question, and believe it is certainly deserving of further investigation. There are, in addition, still many open questions relating to variants of quantum interactive proof systems—the most notable being the multiprover quantum interactive proof system model. This model is, in fact, currently so poorly understood that it is not even known if every problem having a multiprover quantum interactive proof system is necessarily decidable.


Figures

F1 Figure 1. A decision problem is solvable in polynomial space if and only if it can be computed by a family of Boolean circuits, one for each input length n, whose size may be exponential in n but whose depth is polynomial in n. (The circuits must also satisfy a uniformity constraint that limits the computational difficulty of computing each circuit’s description.)

F2 Figure 2. An illustration of the two-stage process for solving any problem in QIP by a bounded-depth Boolean circuit. The circuit C1 transforms the input string into an exponentially large description of the verifier’s measurements {Π00, Π01} and {Π10, Π11}, and the circuit C2 implements a parallel algorithm for approximating the optimal value of the associated optimization problem.

References

    1. Arora, S., Kale, S. A combinatorial, primal-dual approach to semidefinite programs. In Proceedings of the 39th Annual ACM Symposium on Theory of Computing (2007), 227–236.

    2. Babai, L. Trading group theory for randomness. In Proceedings of the 17th Annual ACM Symposium on Theory of Computing (1985), 421–429.

    3. Bennett, C., Brassard, G. Quantum cryptography: Public key distribution and coin tossing. In Proceedings of the IEEE International Conference on Computers, Systems, and Signal Processing (1984), 175–179.

    4. Bennett, C., Brassard, G., Crépeau, C., Jozsa, R., Peres, A., Wootters, W. Teleporting an unknown quantum state via dual classical and EPR channels. Phys. Rev. Lett. 70, 12 (1993), 1895–1899.

    5. Borodin, A. On relating time and space to size and depth. SIAM J. Comput. 6 (1977), 733–744.

    6. Deutsch, D. Quantum theory, the Church–Turing principle and the universal quantum computer. Proc. R. Soc. Lond. A400 (1985), 97–117.

    7. Feynman, R. Simulating physics with computers. Int. J. Theor. Phys. 21, 6/7 (1982), 467–488.

    8. Goldwasser, S., Micali, S., Rackoff, C. The knowledge complexity of interactive proof systems. SIAM J. Comput., 18, 1 (1989), 186–208. A preliminary version appeared in Proceedings of the 17th Annual ACM Symposium on Theory of Computing (1985), 291–304.

    9. Kitaev, A., Watrous, J. Parallelization, amplification, and exponential time simulation of quantum interactive proof systems. In Proceedings of the 32nd Annual ACM Symposium on Theory of Computing (2000), 608–617.

    10. Lund, C., Fortnow, L., Karloff, H., Nisan, N. Algebraic methods for interactive proof systems. J. ACM 39, 4 (1992), 859–868. A preliminary version appeared in Proceedings of the 31st Annual IEEE Symposium on Foundations of Computer Science (1990), 2–10.

    11. Marriott, C., Watrous, J. Quantum Arthur-Merlin games. Comput. Complex. 14, 2 (2005), 122–152.

    12. Nielsen, M., Chuang, I. Quantum Computation and Quantum Information. Cambridge University Press, 2000.

    13. Raz, R. Exponential separation of quantum and classical communication complexity. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing (1999), 358–376.

    14. Shamir, A. IP = PSPACE. J. ACM 39, 4 (1992), 869–877. A preliminary version appeared in Proceedings of the 31st Annual IEEE Symposium on Foundations of Computer Science (1990), 11–15.

    15. Shor, P. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26, 5 (1997), 1484–1509. A preliminary version appeared with the title "Algorithms for quantum computation: discrete logarithms and factoring" in Proceedings of the 35th Annual IEEE Symposium on Foundations of Computer Science (1994), 124–134.

    16. Warmuth, M., Kuzmin, D. Online variance minimization. In Proceedings of the 19th Annual Conference on Learning Theory, volume 4005 of Lecture Notes in Computer Science (Springer, 2006), 514–528.

    17. Watrous, J. PSPACE has constant-round quantum interactive proof systems. In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science (1999), 112–119.

Footnotes

    The original version of this paper was published in the Proceedings of the 42nd ACM Symposium on Theory of Computing, 2010.

    DOI: http://doi.acm.org/10.1145/1859204.1859231
