
The Moral Hazard of Complexity-Theoretic Assumptions

By Moshe Y. Vardi, Communications Editor-in-Chief

In November 2015, the computing world was abuzz with the news that László Babai had proved that the Graph-Isomorphism Problem, the long-standing open problem of checking whether two given graphs are isomorphic, can be solved in quasi-polynomial time. (While polynomial means n raised to a fixed degree, quasi-polynomial means n raised to a poly-logarithmic degree.) The mainstream media noticed; Chicago Magazine wrote, "Computer Scientists and Mathematicians Are Stunned by This Chicago Professor’s New Proof."
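In symbols (a standard way of writing the two notions, not Babai’s exact bound):

\[ \text{polynomial: } n^{c} \text{ for a fixed constant } c; \qquad \text{quasi-polynomial: } n^{(\log n)^{c}} = 2^{(\log n)^{c+1}}, \]

the latter sitting strictly between polynomial and exponential growth.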

If Babai’s result holds up under scrutiny, it is likely to become one of the most celebrated results in theoretical computer science of the past several decades. Graph isomorphism has long tantalized researchers: it is not known to be solvable in polynomial time, yet there are good reasons to believe it is not NP-complete. The new algorithm provides further evidence that it is not!

As excited as I was about this news, I was quite annoyed by reports in the media that "A new algorithm efficiently solves the graph-isomorphism problem." Computational-complexity theory classifies computational problems according to their inherent difficulty. The theory relies on some fundamental abstractions, including that it is useful to classify algorithms by their worst-case complexity, since a worst-case bound gives us a universal performance guarantee, and that polynomial functions display moderate growth. The complexity class PTIME, often abbreviated as P, consists of the problems that can be solved, in the worst case, in polynomial time. This class is now conventionally regarded as the class of efficiently solvable problems. But the identification of P with the class of efficiently solvable problems is just a mathematically convenient abstraction. In practice, polynomials of degree larger than four grow quite fast, and it is unrealistic to consider algorithms with high-degree polynomial bounds efficient. To consider quasi-polynomial-time algorithms efficient simply ignores the reality of algorithmic engineering.
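To make this concrete, here is a back-of-the-envelope calculation (my own illustration, assuming a hypothetical machine that performs one billion operations per second) of how long various running-time bounds amount to on an input of size one million:

import math

OPS_PER_SECOND = 1e9          # hypothetical machine: one billion operations per second
SECONDS_PER_YEAR = 3.15e7

n = 1_000_000                 # a modest input size by today's standards
bounds = {
    "n^2": n ** 2,
    "n^4": n ** 4,
    "n^10": n ** 10,
    "n^(log2 n)": n ** math.log2(n),   # a quasi-polynomial bound
}
for name, operations in bounds.items():
    years = operations / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{name:>12}: roughly {years:.1e} years")

Already the n^4 bound corresponds to tens of millions of years at this input size, and the quasi-polynomial bound is astronomically worse.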

The slide from considering polynomial-time algorithms efficient to considering quasi-polynomial-time algorithms efficient illustrates what I would like to call the "moral hazard of complexity-theoretic assumptions." In economics, moral hazard occurs when one party takes more risks because another party bears the cost of those risks. Here, I use the term to suggest that as one gets used to the "risk" of making complexity-theoretic assumptions, it becomes easier to make ever stronger assumptions. The slide from polynomial time to quasi-polynomial time as the standard of efficiency is, I believe, one instance of this moral hazard in action.

Another example of such a hazardous slide can be seen in a news article from August 2015 with the title "For 40 Years, Computer Scientists Looked for a Solution that Doesn’t Exist." The story refers to a paper by Arturs Backurs and Piotr Indyk, "Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)." What is SETH? The conventional wisdom is that P is different from NP, and P≠NP is now taken as a standard assumption, even though the belief in it is built mostly on the questionable identification of P with efficient solvability. In some cases, however, the P≠NP assumption is not strong enough. The Strong Exponential-Time Hypothesis (SETH) is a stronger assumption: roughly, it asserts that Boolean Satisfiability (SAT) on n variables cannot be solved in time 2^((1−ε)n) for any fixed ε > 0, that is, in strongly subexponential time (see the Backurs-Indyk paper for the precise statement).
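For context, the quadratic running time at stake is the one achieved by the textbook dynamic-programming algorithm for edit distance, sketched below (a standard Wagner-Fischer implementation in Python, not code from the Backurs-Indyk paper). Their result says that, assuming SETH, no algorithm can improve on this quadratic behavior by more than a sub-polynomial factor.

def edit_distance(a: str, b: str) -> int:
    # Minimum number of insertions, deletions, and substitutions turning a into b,
    # computed by the classic dynamic program in O(len(a) * len(b)) time.
    m, n = len(a), len(b)
    prev = list(range(n + 1))               # row for the empty prefix of a
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # delete a[i-1]
                          curr[j - 1] + 1,      # insert b[j-1]
                          prev[j - 1] + cost)   # substitute (or match)
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))       # prints 3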

Proving that SETH implies edit distance cannot be computed in strongly subquadratic time is already a very nice result. But the title of the paper suggests that one should not expect SETH to be false, and that is how the result was broadly interpreted. SETH, however, is a much stronger assumption than P≠NP. The evidence for it is that so far no one has been able to come up with a strongly subexponential algorithm for SAT. Yet most progress in complexity theory over the past 40 years has come through the discovery of clever new algorithms, such as Khachiyan’s ellipsoid algorithm for linear programming, Shor’s quantum polynomial-time algorithm for factoring, and now Babai’s quasi-polynomial-time algorithm for graph isomorphism. If I had to bet, I would sooner bet on further progress in upper bounds than on progress in lower bounds.

The bottom line is that complexity-theoretic assumptions are mathematical abstractions. They need to be refined further to bring them into better alignment with real-world tractability. As I have written in the past, there is a growing gap between the theory and the practice of complexity and algorithms. Bridging that gap is an important research challenge!


Moshe Y. Vardi, EDITOR-IN-CHIEF
