A comment on the ACM Risks Forum (comp.risks) provides — as in the last article — a good starting point for reflecting on techniques of software quality.
In the latest issue of RISKS, Henry Baker comments on an online article by David Wheeler about "how to prevent the next Heartbleed", referring to the recently uncovered planetary-scale Internet vulnerability.
Both articles advocate the use of run-time assertion monitoring (Wheeler with some reservations) and both achieve the remarkable feat of not citing the one environment, Eiffel, in which run-time assertion monitoring has been routinely practiced for decades. Whether using Eiffel would have avoided the particular bug that led to Heartbleed can only be surmised; but for the users who daily apply techniques of Design by Contract (for a recent paper documenting practical usage, see the Estler et al. reference below), this is no speculation: run-time assertion monitoring catches bugs left and right.
Baker inexplicably opposes run-time assertion monitoring to proofs. He writes:
Unfortunately, proving that [a] software system faithfully implements the model is undecidable for essentially every interesting system.
This means that it is hopeless to expect any language or compiler or type-checking system to be able to a priori guarantee faithfulness for all inputs.
Formal analysis can't be used to prove the program correct,
I am afraid Mr. Baker should refresh his understanding of undecidability (or perhaps improve the phrasing of his comments, if that is the problem). What is impossible is to devise a general mechanism that could prove an arbitrary contract-annotated (assertion-equipped) program correct. That does not prevent the mechanical proof of specific programs. The many researchers — and, increasingly, practitioners — using proof tools to perform mechanical verification of ever more ambitious programs would be surprised to learn that their work is impossible. Two examples of modern proof tools are SPARK from Altran Praxis, which has been used to prove the correctness of deployed life-critical systems, and Microsoft Research's Boogie, on which the AutoProof environment for Eiffel relies.
Indeed, static verification and dynamic assertion monitoring are complementary. Sometimes proof is possible (always, in today's technology, after a significant human effort to guide the proof process). Sometimes it fails. In the latter case, dynamic assertion monitoring is the solution; as Mr. Baker writes, one should
document every assumption with aggressive run-time assertions.
He is right on target. This approach is what my articles and books have advocated for a long time and what Eiffel users apply. We are not just talking of assertions sprinkled here and there through the code — in the style of C or Java "asserts" or JUnit assertions — but contracts. The difference is that these contracts are intricately woven into the fabric of the code, appearing not just as consistency checks but as elements of specification associated with the structural elements of the design: routines (preconditions, postconditions), loops (loop invariants and variants), classes (class invariants). As a result, they are an integral part of the software construction process, from requirements through design and documentation to implementation, maintenance and verification. I am writing "verification" and not just "testing" because verification takes many forms, some static, some dynamic.
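To make the distinction concrete, here is a minimal sketch in Python (a language chosen for illustration only; Eiffel expresses these clauses natively and lets monitoring be enabled or disabled per class). The `BoundedStack` class is a hypothetical example, with plain `assert` statements standing in for `require`, `ensure` and `invariant` clauses:

```python
# Sketch: Eiffel-style contracts emulated in Python with plain assertions.
# In Eiffel these would be require / ensure / invariant clauses checked by
# the runtime; here they are written out by hand for illustration.

class BoundedStack:
    """A stack of fixed capacity, with its contracts made explicit."""

    def __init__(self, capacity: int):
        assert capacity > 0                          # precondition
        self.capacity = capacity
        self.items = []
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant: the element count stays within [0, capacity].
        assert 0 <= len(self.items) <= self.capacity

    def push(self, x):
        assert len(self.items) < self.capacity       # precondition: not full
        old_count = len(self.items)
        self.items.append(x)
        assert len(self.items) == old_count + 1      # postcondition
        assert self.items[-1] == x                   # postcondition
        self._check_invariant()

    def pop(self):
        assert len(self.items) > 0                   # precondition: not empty
        old_count = len(self.items)
        top = self.items.pop()
        assert len(self.items) == old_count - 1      # postcondition
        self._check_invariant()
        return top
```

The point is structural: every routine states what it requires and what it guarantees, and the invariant is re-checked after each state change, so the checks evolve with the design rather than being sprinkled after the fact. (Python's `assert` is stripped under the `-O` flag; a production system would use a contract library or checks that cannot be disabled inadvertently.)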
One of the reasons one should not use just occasional assertions but contracts integrated in the structure of the code and the software construction process is the importance of change in software development. When programs are updated, as will always happen, it is hard to ensure that developers will always insert new checks and update old ones as needed. When preconditions, postconditions, class invariants and other structural elements are an integral part of the software, it is far easier to enforce the discipline of updating specification and implementation together.
In dominant Eiffel practice, contracts are mostly used (in their verification role) for dynamic monitoring, exactly in the way suggested by Mr. Baker. The day is getting closer when they will be routinely combined with static proof tools; for an example of how such integration works, see the references on EVE, the Eiffel Verification Environment, beginning with the original presentation cited below.
Wheeler also mentions run-time assertion verification, but states that
This approach does have some weaknesses when it comes to countering Heartbleed. Neither the original developer nor its reviewer realized that checking the request packet length value was important; since the length check was not included, it is unlikely that the developer would have remembered to add assertions to check for it.
Here too the difference between ordinary run-time checks and contracts is essential. What people miss when they have not actually practiced Design by Contract is that we are not just inserting a few sporadic sanity checks: we are enabling programmers to develop their design and code from explicitly stated specifications. The approach often leads to uncovering assumptions that, as in the Heartbleed case, are critical to the correctness of the program but risk being overlooked.
We then enforce these assumptions, for example through dynamic monitoring.
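As a simplified illustration (this is not the actual OpenSSL code, and the function below is hypothetical), consider a heartbeat-style echo routine whose critical assumption, namely that the claimed payload length must describe the data actually received, is written down as an explicit precondition rather than left implicit:

```python
# Simplified sketch, not the actual OpenSSL code: a heartbeat-style echo
# whose length assumption is an explicit, monitored precondition.

def heartbeat_response(payload: bytes, claimed_length: int) -> bytes:
    # Precondition: the length field must describe the payload we received.
    # This is precisely the assumption the Heartbleed code failed to check.
    assert 0 <= claimed_length <= len(payload)
    reply = payload[:claimed_length]
    # Postcondition: the reply never exceeds what the peer actually sent.
    assert len(reply) <= len(payload)
    return reply
```

With monitoring enabled, a malicious request claiming tens of thousands of bytes for a one-byte payload trips the precondition immediately, instead of silently leaking adjacent memory.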
 Henry Baker: Heartbleed and Formal Methods, in Risks Forum 27.89, 7 May 2014, ed. Peter G. Neumann, available here.
 David Wheeler: How to Prevent the Next Heartbleed, revised 11 May 2014, available here.
 H.-Christian Estler, Carlo A. Furia, Martin Nordio, Marco Piccioni, and Bertrand Meyer: Contracts in Practice, to appear (this week) in FM 2014, Proceedings of 19th International Symposium on Formal Methods, Singapore, May 2014, draft available here.
 Julian Tschannen, Carlo A. Furia, Martin Nordio, and Bertrand Meyer: Usable Verification of Object-Oriented Programs by Combining Static and Dynamic Techniques, in SEFM 2011, Software Engineering and Formal Methods, Montevideo, 14-18 November 2011, Lecture Notes in Computer Science, Springer Verlag, 2011, with a version available here.
Mr. Meyer, thanks for your post. I agree with your interpretation of Mr. Baker's comments, but for different reasons. In fact, I would have expected the problem to be better framed by an author on the subject such as yourself.
If the software build itself can be thought of as a massive state machine, with a large but finite number of intended states, we still can't call it a Turing machine. It does not necessarily have a finite number of inputs (this includes module interfaces as well). Consequently, we can't know whether any stateful branch will terminate as expected. Essentially, Mr. Baker has tried to reduce an analysis of checking systems to the halting problem. He's sort of correct: we can't know its behavior, but because we have a universe of unplanned inputs at the front or between the modular interfaces, not only because of failed determinism. Heartbleed is the worst case of this: arriving at an accepted state, but because of inputs that were not defined for that state machine.
The argument for contracts is to reduce the risk of these unplanned/out-of-band inputs so that the software build can indeed be more like a Turing machine, with predictable outcomes. Contracts can't cure the semantic impedance mismatch of modular interfaces, but they can provide a formal process for addressing it. They surely can cure the spurious inputs that turn your build into something less than a massive (modal) Turing machine. If the build were, say, like a plumbing installation, contracts ensure that the joints are welded to some standard, particularly since the unwieldy bush of intertwining pipes of drinking water will ultimately be submersed in poison, and it's more about poison getting in than water getting out. Contracts will not ensure that you are sending hot and cold water to the desired location.