
Software Verification Turns Mainstream

Bertrand Meyer


(Warning: plug coming.)

One of the most remarkable developments of recent years is the slow but steady progress of software verification. Along the way there have been many naysayers, starting with the famous DeMillo-Lipton-Perlis paper. Practitioners and researchers have largely ignored those who said verification would never work, and have instead patiently built verification methods and tools that are increasingly useful and scalable.

A welcome development is unification. For a long time the only verification technique practiced in industry was testing, while the dominant verification techniques in research were proofs and other static methods (abstract interpretation, model checking). Increasingly, we understand that the various approaches are complementary; the problem is hard enough to require help from all quarters. A typical example is the old saw that tests can only prove a program incorrect: it is in the end not an indictment of testing but great praise, as the result (proving the presence of a bug) is a significant achievement, saving you, for example, the effort of trying to prove that the program is correct! Conversely, proofs are great when they succeed; but they can fail (after all, we are dealing with undecidable theories), and they can only prove what has been explicitly specified. In practice we need both.
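The asymmetry is worth seeing concretely. As a minimal sketch (not from the article), here is a routine annotated with a contract-style postcondition, the kind of explicit specification that both testing and proof tools can target; the routine and its bug are invented for illustration. A single failing input proves the presence of a bug, while any number of passing inputs proves nothing about the rest:

```python
def isqrt(n: int) -> int:
    """Integer square root of a non-negative n; deliberately buggy."""
    r = 0
    while (r + 1) * (r + 1) < n:   # bug: the comparison should be <= n
        r += 1
    # Postcondition (the explicit specification): r*r <= n < (r+1)*(r+1)
    assert r * r <= n < (r + 1) * (r + 1), f"contract violated for n={n}"
    return r

# Testing proves incorrectness: isqrt(4) stops at r=1 and violates the
# postcondition (1 <= 4 < 4 is false), so the assertion fires -- one
# counterexample is a definitive result.
# Passing cases such as isqrt(5) == 2 demonstrate nothing about other
# inputs; only a proof over all n could establish correctness.
```

Note that the dynamic check (a test run) and a static proof attempt both work off the same postcondition; without that explicit specification, neither approach has anything to verify.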

Another advance is the integration of verification into mainstream development. It used to be that verification required a special development process, very special tools, and restricted programming languages. Increasingly, we will see verification integrated within a standard modern process, a modern IDE, and modern languages. This is the idea behind our VAMOC development [1].

That was not the real plug. The 2011 edition of the LASER summer school on software engineering (5-11 September, in a magnificent venue on the island of Elba off the coast of Tuscany) is devoted to Practical Tools for Software Verification, with the emphasis on “practical”. The roster of speakers is spectacular, from Ed Clarke on model checking to Patrick Cousot on abstract interpretation, Rustan Leino on the tools developed at Microsoft Research, Patrice Godefroid on model checking and testing, César Muñoz from NASA on applications of the PVS prover, Christine Paulin on the Coq system, and Andrei Voronkov on the Vampire prover. This is an exceptional opportunity and I hope that many readers can join us.

The LASER school, and the many initiatives resulting (under the aegis of the NSF and other bodies) from Tony Hoare’s seminal “Verification Grand Challenge” [2] admonition, are just steps in a process (fueled by society’s insatiable demand for ever more ambitious software that must function correctly) that slowly but surely is making software verification mainstream.


[1] Bertrand Meyer: Verification As a Matter Of Course, earlier blog entry.

[2] C.A.R. Hoare: The Verifying Compiler: A Grand Challenge for Computing Research, Journal of the ACM, vol. 50, no. 1, pp. 63-69, 2003.

