On June 16, 1902, British philosopher Bertrand Russell sent a letter to Gottlob Frege, a German logician, in which he argued, by using what became known as "Russell's Paradox," that Frege's logical system was inconsistent. The letter launched a "Foundational Crisis" in mathematics, triggering an almost anguished search for proper foundations for mathematics. In 1921, David Hilbert, the preeminent German mathematician, launched a research program aimed at disposing of "the foundational questions once and for all." Hilbert's Program failed; in 1931, Austrian logician Kurt Gödel proved two incompleteness theorems that demonstrated the futility of Hilbert's Program.
One element in Hilbert's Program was the mechanization of mathematics: "Once a logical formalism is established one can expect that a systematic, so-to-say computational, treatment of logic formulas is possible, which would somewhat correspond to the theory of equations in algebra." In 1928, Hilbert and Ackermann posed the "Entscheidungsproblem" (Decision Problem), which asked if there is an algorithm for checking whether a given formula in (first-order) logic is valid; that is, necessarily true. In 1936–1937, Alonzo Church, an American logician, and Alan Turing, a British logician, proved independently that the Decision Problem for first-order logic is unsolvable; there is no algorithm that checks the validity of logical formulas. The Church-Turing Theorem can be viewed as the birth of theoretical computer science. To prove the theorem, Church and Turing introduced computational models (recursive functions and Turing machines, respectively) and proved that the Halting Problem (checking whether a given recursive function or Turing machine yields an output on a given input) is unsolvable.
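The diagonalization at the heart of the Church-Turing result can be sketched as code. The function `halts` below is the hypothetical universal checker whose existence the argument refutes; nothing here is from the original article, it is only an illustration of the standard proof idea.

```python
def halts(prog, arg):
    """Hypothetical universal halting checker: returns True if program
    `prog` halts on input `arg`. The point of the proof is that no such
    function can exist, so this sketch deliberately has no body."""
    raise NotImplementedError("no such algorithm can exist")

def diagonal(prog):
    """Do the opposite of what `halts` predicts for `prog` run on itself."""
    if halts(prog, prog):
        while True:       # loop forever if halts() says prog halts
            pass
    return "halted"

# Feeding `diagonal` its own description yields the contradiction:
# diagonal(diagonal) halts if and only if halts(diagonal, diagonal)
# is False, i.e. if and only if diagonal(diagonal) does NOT halt.
```

Any candidate implementation of `halts` must therefore give a wrong answer on at least one input, which is exactly the unsolvability of the Halting Problem.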
The unsolvability of the Halting Problem, proved just as Konrad Zuse in Germany and John Atanasoff and Clifford Berry in the U.S. were embarking on the construction of their digital computers (the Z3 and the Atanasoff-Berry Computer), meant that computer science was born with a knowledge of the inherent limitation of mechanical computation. While Hilbert believed that "every mathematical problem is necessarily capable of strict resolution," we know that the unsolvable is a barrier that cannot be breached. When I encountered unsolvability as a fresh graduate student, it seemed to me an insurmountable wall. Much of my research over the years was dedicated to delineating the boundary between the solvable and the unsolvable.
It is quite remarkable, therefore, that the May 2011 issue of Communications included an article by Byron Cook, Andreas Podelski, and Andrey Rybalchenko, titled "Proving Program Termination" (p. 88), in which they argued that "in contrast to popular belief, proving termination is not always impossible." Surely they got it wrong! The Halting Problem (termination is the same as halting) is unsolvable! Of course, Cook et al. do not really claim to have solved the Halting Problem. What they describe in the article is a new method for proving termination of programs. The method itself is not guaranteed to terminate; if it were, this would contradict the Church-Turing Theorem. What Cook et al. illustrate is that the method is remarkably effective in practice and can handle a large number of real-life programs. In fact, a software tool called Terminator, which implements their method, has been able to find some very subtle termination errors in Microsoft software.
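A central ingredient of such termination proofs is the ranking function: a map from program states to non-negative values that strictly decreases at every step. The sketch below is a hypothetical illustration of that idea only; the real Terminator tool works on program transition relations and transition invariants, not on Python closures, and the helper name is invented here.

```python
def check_ranking_function(rank, step, state):
    """Follow one execution, checking that `rank` maps every visited
    state to a non-negative value and strictly decreases at each step.
    A rank with these properties certifies that the execution terminates:
    a non-negative value cannot decrease forever."""
    while state is not None:          # `step` returns None when the loop exits
        r = rank(state)
        assert r >= 0, "rank must be bounded below"
        state = step(state)
        if state is not None:
            assert rank(state) < r, "rank must strictly decrease"
    return True

# Example: repeatedly do x -= 3 while the result stays positive;
# rank(x) = x works, since x is positive and shrinks on every step.
step = lambda x: x - 3 if x - 3 > 0 else None
assert check_ranking_function(lambda x: x, step, 100)
```

Finding such a rank automatically is the hard part, and it is exactly where any tool must sometimes fail, give up, or diverge.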
I believe this noteworthy progress in proving program termination ought to force us to reconsider the meaning of unsolvability. In my November 2010 editorial, "On P, NP, and Computational Complexity," I pointed out that NP-complete problems, such as Boolean Satisfiability, do not seem as intractable today as they seemed in the early 1970s, with industrial SAT solvers performing impressively in practice. "Proving Program Termination" shows that unsolvable problems may not be as unsolvable as we once thought. In theory, unsolvability does impose a rigid barrier on computability, but it is less clear how significant this barrier is in practice. Unlike Collatz's Problem, described in the article by Cook et al., most real-life programs, if they terminate, do so for rather simple reasons, because programmers almost never conceive of very deep and sophisticated reasons for termination. Therefore, it should not be shocking that a tool such as Terminator can prove termination for such programs.
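The Collatz iteration mentioned above shows how short a program with genuinely deep termination behavior can be. No one has proved that the loop below terminates for every positive integer, which is exactly the kind of sophisticated termination argument real programs rarely need.

```python
def collatz_steps(n):
    """Iterate the Collatz rule (n/2 if even, 3n+1 if odd) until
    reaching 1, returning the number of steps taken. Termination for
    all positive n is verified by computation far beyond any practical
    range, but a general proof remains open."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # the famously long trajectory of 27: 111 steps
```

A termination prover pointed at this loop will fail to find a simple ranking argument, and that is no indictment of the tool.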
Ultimately, software development is an engineering activity, not a mathematical activity. Engineering design and analysis techniques do not provide mathematical guarantees; they provide confidence. We do not need to solve the Halting Problem; we just need to be able to reason successfully about termination of real-life programs. It is time to give up our "unsolvability phobia." It is time to solve the unsolvable.
Moshe Y. Vardi, EDITOR-IN-CHIEF
©2011 ACM 0001-0782/11/0700 $10.00
When I learned about the Halting Problem, I came to a similar conclusion: one should not overreact regarding its implications for real-world programming.
The theorem covers all possible programs, which includes some very weird cases.
The existence of such wonderful tools as LINT (a static analysis tool for the C programming language), and of compiler warnings and errors in general, shows that it is still useful to create such analysis programs.
Thanks to Church and Turing, I know that these tools might fail on some programs not only because of internal programming errors within the tools but for principled reasons. However, experience has shown me that they work for the vast majority of the programs I usually feed them, so they are very helpful.
My gut feeling is that I would need to come up with a very crazy case to drive these tools nuts.
However, it need not be that way. Russell's contradiction was not that weird a logical formula. Maybe there exist very simple programs that would send the tools I mentioned into an infinite loop.
So we have two opposing pillars: the hard mathematical fact on one side and real-world experience on the other. It would be nice to get a bit more rigorous while still staying relevant to the real world.
Is there a way to quantify the success rate of a given analysis tool (say, that it will work for 99.99999% of "normal" programs)?
What makes a program written by an average developer "normal"?
And so on.. :-)
Thanks for the article!
Thanks for the interesting article, Moshe.
I have been doing static program analysis for a long time now. Something that many people don't realize is that virtually all interesting decision problems in this domain are undecidable, but nevertheless static analysis can be extremely useful in many cases.
I do not quite understand your statement about the sentence "in contrast to popular belief, proving termination is not always impossible," though. To me, this sentence is true: it is not always impossible, or in other words, it is often possible. Surely I can literally prove termination for many (even real-world) programs. In fact, for any fixed set of programs one can write an algorithm that proves termination for all of those programs. The problem, though, is that for any such algorithm one will always be able to find another program that does terminate but for which the algorithm fails to show termination.
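The point above can be sketched as code: for a fixed finite set of programs, a correct (if useless) termination prover is just a lookup table. The table entries below are assumptions for illustration only.

```python
# A "termination prover" for a fixed finite set of programs: a lookup
# table of verdicts we are assumed to have established by hand.
KNOWN = {
    "while x > 0: x -= 1": True,    # terminates for any initial integer x
    "while True: pass":    False,   # never terminates
}

def proves_termination(source):
    """Return True/False for programs in the table, 'unknown' otherwise.
    Diagonalization guarantees that some terminating program always
    lands in the 'unknown' bucket, whatever fixed table we choose."""
    return KNOWN.get(source, "unknown")

assert proves_termination("while True: pass") is False
assert proves_termination("print('hi')") == "unknown"
```

Undecidability lives entirely in that third verdict: no algorithm can shrink the "unknown" bucket to nothing.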
If I am not mistaken, this is what undecidability is really all about: for any algorithm that decides the undecidable property for a fixed (but not necessarily finite) set of programs, one can construct at least one other program for which the algorithm will give the wrong answer. The same holds for termination analysis, too, and hence I can see nothing wrong with Cook et al.'s statement.
I must say your note caught my attention first of all because of the title. In fact, together with the article you are referring to, published in the May issue (p. 88), it echoes the title of my paper "Cataloguing the Unfindable" (submitted for publication to a peer-reviewed journal in November, and published in a preliminary self-archived draft as a working paper in an open repository in January 2011; it is having incredible influence and impact beyond my imagination in several academic and professional contexts).
Speaking of the unsolvable and the incomplete, I find it appalling that neither you nor the authors you mention have considered the role and importance of Kurt Gödel.
Brunella Longo, 4.8.2011
I feel that there is no such thing as "unsolvable." The capability of a problem not to get solved, or rather, the capability of a paradox or a conjecture to occur, is itself a solution to that problem.
The following letter was published in the Letters to the Editor of the September 2011 CACM (http://cacm.acm.org/magazines/2011/9/122792).
Moshe Y. Vardi's Editor's Letter "Solving the Unsolvable" (July 2011) raised an important point that we should reconsider the meaning of unsolvability, especially in terms of its practical application. Even though a problem (such as the Halting Problem) may be theoretically unsolvable, we should, perhaps, still try to solve it.
The proof of undecidability is based on the possibility of self-application; that is, a program cannot look at itself and decide if it is itself stuck in a loop; from a practical point of view, this situation is not relevant. Why even write such a program? The proof does not say I cannot write a server program that looks at running applications to determine if any of them is in a loop.
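As a sketch of what such a monitoring program could do, consider a deterministic process whose states the monitor can observe and compare (an assumption for illustration; real applications have huge state spaces, and a non-repeating unbounded state defeats this check). If a state ever repeats, the process is provably stuck in a loop, and Floyd's tortoise-and-hare algorithm detects the repeat in constant memory.

```python
def detect_cycle(step, start):
    """Floyd's tortoise-and-hare over the state sequence of a
    deterministic process. `step` maps a state to the next state and
    returns None when the process halts. Returns True if a state
    repeats (the process loops forever), False if the process halts."""
    slow = fast = start
    while True:
        slow = step(slow)           # tortoise: one step
        fast = step(fast)           # hare: two steps
        if fast is None:
            return False            # the process halted
        fast = step(fast)
        if fast is None:
            return False
        if slow == fast:
            return True             # state repeated: caught in a loop

# A process cycling through 5 states vs. a countdown that halts:
print(detect_cycle(lambda x: (x + 1) % 5, 0))            # cycles
print(detect_cycle(lambda x: x - 1 if x > 0 else None, 10))  # halts
```

This is a semi-practical middle ground: it decides looping only for processes whose states eventually repeat, which is far weaker than the Halting Problem yet often useful.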
The same reliance on self-application applies to the Post Correspondence Problem (PCP), a string-matching problem also theoretically unsolvable. The proof does not say PCP is undecidable for any practical problem, only for one using self-application. However, the proof does say if I try to simulate a Turing Machine program that looks to see if it is itself in a loop, then, as in the Halting Problem, PCP is theoretically unsolvable. But from a string-matching point of view, this potential insight about unsolvability is again hardly relevant to the programmer. Perhaps, for all cases of practical interest, PCP is indeed solvable.
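For concreteness, PCP can be stated as code: given pairs of strings, find an index sequence whose top and bottom concatenations agree. The brute-force search below finds a match whenever one exists (the problem is semi-decidable); undecidability means no depth bound suffices for every instance. The particular instance is a standard textbook example.

```python
from itertools import product

def pcp_search(pairs, max_len=6):
    """Try all index sequences up to length max_len for a PCP match,
    i.e. a sequence i1..ik with top(i1)+...+top(ik) == bot(i1)+...+bot(ik)."""
    for k in range(1, max_len + 1):
        for seq in product(range(len(pairs)), repeat=k):
            top = "".join(pairs[i][0] for i in seq)
            bot = "".join(pairs[i][1] for i in seq)
            if top == bot:
                return list(seq)
    return None  # no match within the bound; says nothing in general

# Instance with the match a+b+ca+a+abc = ab+ca+a+ab+c = "abcaaabc":
pairs = [("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]
print(pcp_search(pairs))  # → [1, 0, 2, 1, 3]
```

For any instance arising from a practical string-matching task, such a bounded search may well settle the question; the unsolvability bites only on instances encoding self-application.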
The same point applies to the many other theorems that relate to the unsolvability of certain problems. It may be the problems are very difficult to solve; likewise, it may be very difficult to devise a solution for a reasonable sub-problem or solve a sub-problem in polynomial time. In any case, the question of unsolvability might simply be a red herring.
I do not agree that unsolvability is a "red herring"; it is a fundamental limit on computability. We do not have an algorithm for program termination. My point was that we should take a sober view of unsolvability, recognizing that many unsolvable problems can, in practice, be solved.
Moshe Y. Vardi
The paper linked below proposes a solution to the Halting Problem by recognizing and rejecting all of the Turing machine descriptions that would otherwise show that halting is undecidable.
Using the Peter Linz notational conventions for specifying Turing machine descriptions, the sequence of state transitions of Turing machine H on its inputs is seen to be infinitely recursive long before it ever reaches its paradoxical final states. This is a brand-new insight, never seen before.
The Prolog predicate unify_with_occurs_check/2 detects and rejects such infinitely recursive sequences (Clocksin and Mellish 2003).
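The occurs check invoked here can be sketched in Python (an illustration, not the ISO Prolog implementation): unifying a variable X with a term that contains X, such as X = f(X), would create an infinite term, so unification with the occurs check rejects it.

```python
def occurs(var, term):
    """Return True if variable `var` occurs inside `term`.
    Terms are modeled as strings (variables/atoms) or tuples
    (functor, arg1, arg2, ...) for compound terms."""
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg) for arg in term[1:])
    return False

# X = f(X) fails the occurs check; X = f(Y) passes it.
assert occurs("X", ("f", "X"))
assert not occurs("X", ("f", "Y"))
```

Plain Prolog unification skips this check for speed, which is why the ISO standard provides unify_with_occurs_check/2 as a separate, sound predicate.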
Clocksin, W.F. and Mellish, C.S. (2003). Programming in Prolog: Using the ISO Standard, Fifth Edition. Chapter 10, "The Relation of Prolog to Logic," p. 254.