The second week of August was an exciting week. On Friday, August 6, Vinay Deolalikar announced a claimed proof that P ≠ NP. Slashdotted blogs broke the news on August 7 and 8, and suddenly the whole world was paying attention. Richard Lipton's August 15 blog entry at blog@CACM was viewed by about 10,000 readers within a week. Hundreds of computer scientists and mathematicians, in a massive Web-enabled collaborative effort, dissected the proof in an intense attempt to verify its validity. By the time the New York Times published an article on the topic on August 16, major gaps had been identified, and the excitement was starting to subside. The P vs. NP problem withstood another challenge and remained wide open.
During and following that exciting week, many people asked me to explain the problem and why it is so important to computer science. "If everyone believes that P is different from NP," I was asked, "why is it so important to prove the claim?" The answer, of course, is that believing is not the same as knowing. The conventional "wisdom" can be wrong. While our intuition does tell us that finding solutions ought to be more difficult than checking solutions, which is what the P vs. NP problem is about, intuition can be a poor guide to the truth. Case in point: modern physics.
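The finding-vs.-checking asymmetry is easy to make concrete with SAT, the problem discussed later in this letter. Verifying a proposed truth assignment takes time linear in the size of the formula, while the obvious way to find one enumerates up to 2^n assignments. A minimal sketch (the clause encoding and example formula are illustrative, not from the original text):

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is a nonzero int,
# positive for a variable, negative for its negation.
# Example: (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]

def check(formula, assignment):
    """Verify a candidate solution: time linear in the formula size."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def find(formula, n):
    """Brute-force search: tries up to 2**n assignments."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(formula, assignment):
            return assignment
    return None

print(find(formula, 3))  # -> {1: False, 2: False, 3: True}
```

Whether the exponential gap between `find` and `check` is inherent is, of course, exactly what the P vs. NP question asks.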
While the P vs. NP quandary is a central problem in computer science, we must remember that a resolution of the problem may have limited practical impact. It is conceivable that P = NP, but the polynomial-time algorithms yielded by a proof of the equality are completely impractical, due to a very large degree of the polynomial or a very large multiplicative constant; after all, (10n)^1000 is a polynomial! Similarly, it is conceivable that P ≠ NP, but NP problems can be solved by algorithms with running time bounded by n^(log log log n), a bound that is not polynomial but incredibly well behaved.
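To see just how well behaved a hypothetical n^(log log log n) bound would be, one can plug in astronomically large inputs and watch the exponent barely move. A quick numeric check (base-2 logarithms, purely illustrative):

```python
import math

def log2(x):
    return math.log(x, 2)

def exponent(n):
    """The exponent in the hypothetical n**(log log log n) bound."""
    return log2(log2(log2(n)))

# Even for inputs of size 2**256, the exponent is about 3,
# i.e., roughly cubic running time.
for n in [2**16, 2**64, 2**256]:
    print(f"n = 2^{n.bit_length() - 1}: exponent = {exponent(n):.2f}")
```

So an n^(log log log n) algorithm for NP problems would, in practice, behave like a low-degree polynomial on any input humanity will ever encounter, even though it would prove P ≠ NP.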
Even more significant, I believe, is the fact that computational complexity theory sheds limited light on the behavior of algorithms in the real world. Take, for example, the Boolean Satisfiability Problem (SAT), which is the canonical NP-complete problem. When I was a graduate student, SAT was a "scary" problem, not to be touched with a 10-foot pole. Garey and Johnson's classical textbook showed a long sad line of programmers who had failed to solve NP-complete problems. Guess what? These programmers have been busy! The August 2009 issue of Communications contained an article by Sharad Malik and Lintao Zhang (p. 76) in which they described SAT's journey from theoretical hardness to practical success. Today's SAT solvers, which enjoy wide industrial usage, routinely solve SAT instances with over one million variables. How can a scary NP-complete problem be so easy? What is going on?
The answer is that one must read complexity-theoretic claims carefully. Classical NP-completeness theory is about worst-case complexity.
Indeed, SAT does seem hard in the worst case. There are SAT instances with a few hundred variables that cannot be solved by any extant SAT solver. "So what?" shrugs the practitioner, "these are artificial problems." Somehow, industrial SAT instances are quite amenable to current SAT-solving technology, but we have no good theory to explain this phenomenon. There is a branch of complexity theory that studies average-case complexity, but this study also seems to shed little light on practical SAT solving. How to design good algorithms is one of the most fundamental questions in computer science, but complexity theory offers only very limited guidelines for algorithm design.
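The solvers behind this practical success build on the classic DPLL procedure (unit propagation plus case splitting), which conflict-driven clause learning and industrial heuristics then extend. A bare-bones sketch of DPLL, with none of those refinements, assuming the same integer-literal clause encoding used above:

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None.

    clauses: list of clauses; a literal is a nonzero int,
    negative meaning the negation of the variable abs(lit)."""
    if assignment is None:
        assignment = {}

    # Simplify the formula under the current partial assignment:
    # drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        kept = []
        satisfied = False
        for lit in clause:
            value = assignment.get(abs(lit))
            if value is None:
                kept.append(lit)
            elif value == (lit > 0):
                satisfied = True
                break
        if satisfied:
            continue
        if not kept:
            return None          # empty clause: conflict
        simplified.append(kept)

    if not simplified:
        return assignment        # all clauses satisfied

    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})

    # Split on the first unassigned variable, trying both values.
    lit = simplified[0][0]
    for value in (lit > 0, not (lit > 0)):
        result = dpll(simplified, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```

In the worst case this search is still exponential, which is consistent with the hard artificial instances mentioned above; the empirical surprise is how rarely industrial instances trigger that worst case.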
An old cliché asks what the difference is between theory and practice, and answers that "in theory, they are not that different, but in practice, they are quite different." This seems to apply to the theory and practice of SAT and similar problems. My point here is not to criticize complexity theory. It is a beautiful theory that has yielded deep insights over the last 50 years, as well as posed fundamental, tantalizing problems, such as the P vs. NP problem. But an important role of theory is to shed light on practice, and there we have large gaps. We need, I believe, a richer and broader complexity theory, a theory that would explain both the difficulty and the easiness of problems like SAT. More theory, please!
Moshe Y. Vardi, EDITOR-IN-CHIEF
©2010 ACM 0001-0782/10/1100 $10.00
The following letter was published in the Letters to the Editor in the February 2011 CACM (http://cacm.acm.org/magazines/2011/2/104382).
Regarding Moshe Y. Vardi's view of computational complexity in his Editor's Letter "On P, NP, and Computational Complexity" (Nov. 2010), I'd like to add that the goal of computational complexity is to explore the potential and limitations of efficient computation. While P vs. NP is a central pivot in that direction, the field is not reducible to that question alone; nevertheless, my comments here are limited to P vs. NP.
P vs. NP refers to the relative difficulty of finding solutions to computational problems in comparison to checking the correctness of solutions to these problems. Common sense suggests that finding solutions is more difficult than checking their correctness, and it is widely believed that P is different from NP. Vardi advocated the study of P vs. NP, saying that knowing is different from believing and warning that beliefs are sometimes wrong.
The ability to prove a central result goes hand in hand with obtaining a much deeper understanding of the main issues at the core of a field. Thus, a proof that P is different from NP is most likely to lead to a better understanding of efficient computation, and such a theoretical understanding is bound to have a significant effect on computer practice. Furthermore, even ideas developed along the way, in attempts to address P vs. NP, influence computer practice; see, for example, SAT solvers.
This does not dispute the claim that there is a gap between theory and practice; theory is not supposed to replace but rather inform practice. One should not underestimate the value of good advice or good theory; neither should one overestimate it. Real-life problems are solved in practice, but good practice benefits greatly from good theory.
One should also realize that the specific formulation of the P vs. NP question (in terms of polynomial running time) is merely the simplest formulation of a more abstract question. Ditto with respect to the focus on worst-case complexity. In either case, the current formulation should be viewed as a first approximation, and it makes sense to study and understand it before moving forward.
Unfortunately, we lack good theoretical answers to most natural questions regarding efficient computation not because we ask the wrong questions but because answering is so difficult.
Despite our limited understanding compared to the questions, we have made significant progress in terms of what we knew several decades ago. Moreover, this theoretical progress has influenced computer practice (such as in cryptography). It makes sense that most of computer science deals with actually doing the best it can at the moment, developing the best computational tools given the current understanding of efficient computation, rather than waiting for sufficient progress in some ambitious but distant project. It also makes sense that theoretical computer science (TCS) helps meet today's practical challenges. But it is crucial for a particular aspect of TCS, complexity theory, to devote itself to understanding the possibilities and limitations of efficient computation.