Last Sunday I planned to relax and read a book when I made the mistake of checking my email—never check your email on a Sunday. Among the usual messages was one from the eminent logician Harvey Friedman. It contained an attachment from Vinay Deolalikar, a researcher at Hewlett-Packard Labs in Palo Alto. The attachment was a 100+ page paper that looked serious, was serious, and had the elegant and short title "P≠NP." In addition, the email contained a quote from Steve Cook on Deolalikar's paper, which said,
"this looks like a serious effort."
Steve is a Turing Award winner, honored in large part for formulating the very same P≠NP problem. He—a logician at heart—is very careful, and for him to use the word "serious" made it clear that I would be neither relaxing nor reading any book that Sunday. I would be working hard to try to figure out whether this really was a serious effort or not.
I write a blog, along with about 126 million people who do the same. Mine is on complexity theory, and so this claim was something that I felt I needed to write about. There are several other blogs on complexity theory: the best and most famous are Scott Aaronson's blog, Lance Fortnow and Bill Gasarch's blog, and Terence Tao's blog. Even though I knew they would be discussing Deolalikar's paper, I felt I could not avoid getting involved—my blog after all has "P=NP" in its title.
My Sunday was gone, and as it turned out, so was the rest of the week. I kept reading the paper, sending and getting emails, and writing posts: all in an effort to report on what the theory community was doing to understand the claimed proof. Was the famous P≠NP problem actually solved? What did this mean for the field? And who cares anyway?
Perhaps the biggest surprise was that a lot of people do care. One lesson we learned is that hundreds of thousands of people were interested, hundreds were willing to write thoughtful comments, and a number of superstar mathematicians and theorists were willing to work hard to get to the truth of what Vinay Deolalikar claimed.
What Is P≠NP Anyway?
I would guess that many have heard of the P≠NP question, but I would also guess that many may not really know what it is about and why it is so important. I think almost everyone in the world can recognize the equation
E = mc².
After all, it's on T-shirts and is the subject of cartoons in The New Yorker. I love the one where a scientist has written on a blackboard:
E = ma²
E = mb²
E = …
He is thinking hard and is about to write the correct equation, since "c" is the next letter in the alphabet. That is not how Albert Einstein found his famous equation. Nor is it how Steve Cook and Richard Karp—also a Turing awardee—discovered the P≠NP question.
So what does the cryptic "P≠NP" sequence of symbols really mean and why should you care about it? Let's agree to abbreviate the statement as Q from now on—I do this to help my slow typing, and also to make the discussion less technical. I will not explain what P is nor what NP is—that can be found on Wikipedia, for example.
§
Whenever we are faced with a computational problem, we usually care how fast the algorithm that solves it performs. Not always, but most of the time, time is of the essence. If an application is too slow, it will not be used, it will not sell, and that is bad. So we are almost always concerned with the performance of our algorithms. This is true from apps that run on smartphones to codes that run on supercomputers—performance is a key issue.
After working hard and getting the best algorithm we can find, a natural thought is: Did we find the best possible algorithm? Or is there a better algorithm? This is the core issue of computing that is captured by Q. This statement asserts: the "obvious" algorithms for many important problems are essentially the best.
Imagine, if you can, that you are the first person ever asked to write an algorithm to search a long list of records. You decide on the algorithm that tries each record in turn and stops when the desired record is found, or when the list is exhausted. This is a pretty "obvious" algorithm, but you ask yourself: Is it the best possible algorithm? The answer is no. If you can arrange the records into a balanced binary search tree ahead of time, then the search time is reduced from linear in the number of records to logarithmic in the number of records. This is a huge improvement. If you were the first to realize this, you should write up a patent quickly.
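To make the difference concrete, here is a minimal Python sketch of the two approaches; a sorted array probed by bisection plays the role of the balanced binary search tree, and the record values are invented for illustration.

```python
import bisect

def linear_search(records, target):
    # The "obvious" algorithm: try each record in turn.
    # Worst case: all n records are examined.
    for record in records:
        if record == target:
            return True
    return False

def sorted_search(sorted_records, target):
    # Requires arranging the records ahead of time; each probe
    # halves the remaining range, so only about log2(n) probes.
    i = bisect.bisect_left(sorted_records, target)
    return i < len(sorted_records) and sorted_records[i] == target

records = [42, 7, 19, 3, 88, 61]
print(linear_search(records, 19))          # True, after up to 6 probes
print(sorted_search(sorted(records), 19))  # True, after about 3 probes
```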
You might continue and ask again about the binary search tree algorithm: Is it the best possible algorithm? The answer again is no. If you can use hashing, you can further reduce the search time to a constant: a hash-based algorithm runs in time independent of the number of records. You should really run out quickly and patent this.
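Again a minimal sketch, this time using Python's built-in hash-based set; expected constant-time lookup is a property of hashing in general, not of any particular library.

```python
def hash_search(record_set, target):
    # Python sets are hash-based: a membership test takes
    # expected O(1) time, independent of the number of records.
    return target in record_set

records = [42, 7, 19, 3, 88, 61]
record_set = set(records)           # one-time preprocessing
print(hash_search(record_set, 19))  # True, in expected constant time
```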
§
With 40 years' development since Cook and Karp, the community has strengthened P≠NP into stronger statements that articulate common beliefs about lower bounds. Not wishing to be technical here, we offer Q as an umbrella statement for these beliefs about computation. The beliefs concern, among others:
Problems that arise in optimization in many industrial settings.
Problems that arise in cryptography and security.
Problems that arise in simulations of all kinds: from protein folding, to simulations of complex physical systems, to simulations of mathematical systems.
Problems …
The key point is:
If Q is true, then there will never be algorithms for these problems that are much better than the obvious algorithms.
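To see what "obvious algorithm" means here, consider Boolean satisfiability (SAT), the problem at the heart of Cook's work. The following Python sketch, with an encoding and example formula of my own invention, is the obvious algorithm: try all 2ⁿ truth assignments. If Q is true, nothing much better will ever be found.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    # The "obvious" algorithm: try all 2**n truth assignments.
    # Each clause is a list of ints: +i means variable i, and
    # -i means its negation; a clause is satisfied when any of
    # its literals is true under the current assignment.
    for bits in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return bits   # a satisfying assignment
    return None           # no assignment works: unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
# -> (False, False, True)
```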
Why Care About Q?
The strong majority of complexity theorists believe that Q is true. They believe that there will be no surprisingly brilliant algorithm discovered in the future. They believe that the obvious algorithms are essentially the best. I do not agree—but that is another story, for another time.
An interesting question is: Why should we be excited about a proof of something that the majority of theorists already think is true? Who today would be excited by a "proof" that the Earth is round? Indeed, you do believe the Earth is round, don't you?
Yet the claimed answer of Deolalikar generated an immense amount of interest. Why? My guess is that there are two parts to the answer. No matter how sure we are about the answer to a question, we still want to know. One of the most human instincts is the need for closure: the need to really know something "for sure." One of the greatest mathematicians of all time, David Hilbert, once said, "We must know," in reference to other mathematical questions.
The other reason is that we hope a proof of Q will allow us to better understand the nature of algorithms. For example, one of the paradoxes of modern cryptography is that we talk about provable security, yet all modern security rests on assumptions that we have no idea how to prove. If Q were false, then there could be huge problems in cryptography. A proof of Q would not solve the cryptographers' dilemma—but it would be comforting.
Why Is Proving Q So Hard?
The reason Q is so hard to resolve is really simple: it is hard to prove a negative. Consider the famous equation E = mc² again, which follows from the theory of Special Relativity. We know this equation is right: there are myriad ways to "prove" it. From physical experiments, to the existence of nuclear power plants, to the existence of atomic bombs, we now have proof that Einstein was right. Perhaps it is unfortunate that he was right, but there is little doubt about this great equation.
Yet other predictions that flow from his theory of Special Relativity are less certain. The theory predicts that nothing can accelerate beyond the speed of light. This is not my area of expertise, but that is my understanding. Still, there are theories and thoughts in physics of particles that "could" perhaps go faster than the speed of light. Note the fundamental difference:
The equation E = mc² makes a positive prediction.
The limit that we cannot send signals faster than the speed of light is a negative prediction.
That is the crux of why Q is so hard: it is a negative prediction. It claims that no programmer, today or a thousand years hence, will ever be so clever as to discover a way to vastly beat the obvious algorithms. This is a pretty sweeping prediction. No one, no how, not ever, no way, can do something. I often point out that science is filled with negative predictions that were wrong. I will leave out my favorite list and move on to this week's events. But beware of people bearing negative predictions—apologies to Virgil.
§
In the rest of this discussion I will try to explain what happened as the events unfolded this last week. The Internet has changed the way proofs can be examined: what once would have taken months or longer can now be done in days, if the occasion warrants. From Sunday to Friday there were many hectic exchanges of email, posts by the bloggers, and comments from readers—it was an unprecedented time. I will try, from my viewpoint, to give high-level insight into what happened. See the wikis and the blogs for the details—especially to get the full story, and not one filtered through my viewpoint.
I am trying to explain all that happened, yet at the same time limit the amount of technical jargon and keep the symbols to a minimum. I hope this does not come across as "talking down" in any way; I just want this explanation to be readable by as wide an audience as possible. At least that is my goal. Those in the know will see the gaps in what I say, and they should be able to fill in the details I leave out. I plan eventually to write up a much more technical version of all that happened and is happening.
Act I: A "Proof" Is Announced
The proof of Deolalikar was sent out last weekend to a select group. Quickly, the proof, with additional comments, moved around the net via email. I decided to look at the paper and make a short post about the claimed proof. Greg Baker of Simon Fraser University made the first post on the proof, and he included a copy of the paper.
I noticed a couple of things right away about the paper. It was long and lacked detail in many places, but it seemed quite interesting and serious. The proof makes an interesting connection between two very different areas of theory. Deolalikar used a special way to define efficient algorithms that is based on logic; this method of defining efficient computation does not use machines at all and comes from an area called "finite model theory." His proof also used results from statistical physics.
Two things struck me immediately. I had never seen these two areas connected in any attempted proof of Q. Also, finite model theory has had a number of tremendous successes before. Without going into details, one of the principles of mathematical proofs is that there are often multiple ways to define something, and many times one definition, even though equivalent to another, is the right way to prove something. Deolalikar's use of this logical theory seemed potentially very powerful.
I alerted several confidants to the paper and received feedback from them on some potential issues with it, including from my chief blog consultant, Ken Regan of the University at Buffalo (SUNY), who co-wrote three later posts. While I was writing the post and reading the paper, I also asked Deolalikar for permission to link to his paper. There has since been discussion of whether or not he really wanted his paper to be public last Sunday. I finally got approval directly from him, and my post appeared late on Sunday.
Of course, in parallel there were many other posts. Some were pure announcements, and others tried to make some sense of this great event—a serious claim of Q. Aaronson created quite a stir with his post, in which he made a large one-way bet against Deolalikar's proof. There has been lots of discussion of whether this was a "nasty" thing to do, or whether it was just a dramatic way for him to make it clear that he doubted the proof. I will let you decide for yourself.
Act II: Enter the Heavy Hitters
Two kinds of heavy hitters started to get involved very quickly. One group consisted of top mathematicians and theorists who started to work on trying to understand the paper of Deolalikar in detail. A few worked with him via private email that I was cc'ed on, and most worked by commenting on various blogs. A wiki was quickly set up to document the main issues as they came up in studying the paper.
The other group of heavy hitters came from the press. Both Internet and traditional press called on all of us. They wanted to know: What was the Deolalikar proof about? Was it correct? And what did it mean? Top journals like Nature were involved, as were more popular publications such as Forbes and The New York Times. For example, I wound up talking on background with John Markoff of the Times, and Ken did likewise with Lee Gomes of Forbes.
The work of the first group, who were trying to figure out the proof, started to pay off. It became clear that the Deolalikar proof had the following rough strategy. He had a statement that I will call D. He thought he could prove two things:
If Q is false, then D is true.
But D contradicts facts—known to the experts—about certain complex physical systems.
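In symbols, the shape of the argument is a simple modus tollens; the notation below is mine, not Deolalikar's.

```latex
% Premise 1: if Q is false, then D is true:            \neg Q \Rightarrow D
% Premise 2: D is false (it contradicts known physics): \neg D
% Conclusion, by modus tollens: Q is true.
\[
  \bigl(\lnot Q \Rightarrow D\bigr) \;\wedge\; \lnot D
  \;\Longrightarrow\; Q
\]
```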
Clearly, if Deolalikar could demonstrate the above, his proof would be correct; he would pass Go and eventually collect his Clay Millennium Prize of $1 million—and perhaps more.
Note that even proofs of huge questions can, at the right distance, have a simple structure. His reasoning above is impeccable. The difficulty is that by this time people had started to have trouble with both of his statements—and trouble with even one would be fatal.
Act III: Flaws
As the week moved toward the weekend, we started to realize that there were two main potential flaws in Deolalikar's claimed proof. The first flaw has to do with his use of the logical definition of efficient algorithms. I think this is a major issue, but I must admit that many think either it can be avoided or it is not a flaw at all.
The more serious flaw concerns Deolalikar's use of results from statistical physics. This involves the statement D that I mentioned already. It is the issue that has attracted hundreds of comments and the creation, by Terence Tao, of a whole wiki dedicated just to this issue. And it is the most interesting potential flaw, since even if the proof is incorrect, we all seem to feel that this part raises a number of intriguing questions. One important point: while a proof is eventually either correct or not, many incorrect proofs over the years have introduced important new concepts to mathematics. It is quite possible that Deolalikar has done this. Time will tell.
And on …
It is Sunday again, and I feel like Bill Murray in Groundhog Day. My day is gone, and I am still not reading my book. Again I am writing a post about Deolalikar's proof—actually two posts; the other is a technical one for my blog. I still do not know whether the proof is correct, but I believe I have learned a lot this week. I learned a potentially new approach to Q from Deolalikar, I learned that many, many people care about Q, and I learned that the Internet may not replace referees yet, but it can raise many important questions in a positive and constructive manner. I also decided—not learned—that next Sunday I will not be writing a post on this proof. At least that is my plan.
§
The author, Richard J. Lipton of the College of Computing at Georgia Tech, has a blog on complexity theory and mathematics. Selected posts from its first year, 2009, are due out this autumn as a book published by Springer-Verlag. He would also like to thank Ken Regan for many helpful comments on this post.