
Communications of the ACM

BLOG@CACM

Software Verification Turns Mainstream


Bertrand Meyer

(Warning: plug coming.)

One of the most remarkable developments of recent years is the slow but steady progress of software verification. Along the way there have been many naysayers, starting with the famous De Millo-Lipton-Perlis paper. Practitioners and researchers have largely ignored those who said verification would never work, and have instead patiently built verification methods and tools that are increasingly useful and scalable.

A welcome development is unification. For a long time the only verification technique practiced in industry was testing; the dominant verification techniques in research were proofs and other static methods (abstract interpretation, model checking). Increasingly, we understand that the various approaches are complementary; the problem is hard enough to require help from all quarters. A typical example is the old saw that tests can only prove a program incorrect: in the end this is not an indictment of testing but great praise, since the result (proving the presence of a bug) is a significant achievement, saving you, for example, the effort of trying to prove that the program is correct! Conversely, proofs are great when they succeed; but they can fail (after all, we are dealing with undecidable theories), and they can only prove what has been explicitly specified. In practice we need both.
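The complementarity can be made concrete with a small sketch (my own illustration, not from the article): a program can satisfy a weak, explicitly stated postcondition and still be wrong, so "proving" only that spec misses the bug, whereas a single failing test definitively proves the bug's presence.

```python
def clamp(x, lo, hi):
    """Intended behavior: return x limited to the range [lo, hi]."""
    # Deliberately buggy: the two boundary returns are swapped.
    if x < lo:
        return hi   # bug: should return lo
    if x > hi:
        return lo   # bug: should return hi
    return x

def postcondition(result, lo, hi):
    # A partial specification: it only requires the result to lie
    # in [lo, hi], which even the buggy program satisfies.
    return lo <= result <= hi

result = clamp(-5, 0, 10)
print(postcondition(result, 0, 10))  # True: the weak spec holds
print(result == 0)                   # False: this test exposes the bug
```

Verifying the stated postcondition succeeds, yet the program is incorrect; the concrete test case proves incorrectness immediately. This is the sense in which a failing test is an achievement, and in which proofs deliver only as much as the specification demands.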

Another advance is the integration of verification into mainstream development. It used to be that verification required a special development process, very special tools, and restricted programming languages. Increasingly, we will see verification integrated within a standard modern process, a modern IDE, and modern languages. This is the idea behind our VAMOC development [1].

That was not the real plug. The 2011 edition of the LASER summer school on software engineering (5-11 September, in a magnificent venue on the island of Elba off the coast of Tuscany) is devoted to Practical Tools for Software Verification, with the emphasis on "practical". The roster of speakers is spectacular, from Ed Clarke on model checking to Patrick Cousot on abstract interpretation, Rustan Leino on the tools developed at Microsoft Research, Patrice Godefroid on model checking and testing, César Muñoz from NASA on applications of the PVS prover, Christine Paulin on the Coq system, and Andrei Voronkov on the Vampire prover. This is an exceptional opportunity and I hope that many readers can join us; see http://se.ethz.ch/laser for details.

The LASER school, and the many initiatives resulting (under the aegis of the NSF and other bodies) from Tony Hoare's seminal "Verification Grand Challenge" admonition [2], are just steps in a process (fueled by society's insatiable demand for ever more ambitious software that must function correctly) that slowly but surely is making software verification mainstream.

References

[1] B. Meyer: Verification As a Matter Of Course, an earlier entry on this blog.

[2] C.A.R. Hoare: The verifying compiler: A grand challenge for computing research, J. ACM, vol. 50, no. 1, pp. 63-69, 2003.



Comments



Mark Wireman

I agree that verification of systems is becoming more of a reality. Historically, however, automated verification has tended toward real-time systems rather than business or functional systems. In real-time systems, a significant amount of time and effort goes into the requirements and design process, where no detail is left undocumented. The converse is true for business applications, where validation is more difficult because requirements, specifications, and design are largely ad hoc, with most of that work occurring during development and testing. The Prover, Coq, and Vampire systems are primarily oriented toward real-time systems.


Anonymous

Verification, translated into a process of inspecting work products and artifacts, is becoming popular. Metrics have been introduced that can save 10% of project cost: a process metric, Depth of Inspection (DI), and a people metric, Inspection Performance Metric (IPM) (see ASQ Software Quality Professional, 2010 and March 2011). This yields a deep inspection process that acts as a vital operation inhibiting the further propagation of defects, like plugging the leaks of a dam from the water side rather than the front side; the front-side approach is what we do in today's testing mode. Models using a state-space approach are being published, and industry is curious about these developments because outsourcing clients can check the depth of inspection a provider achieved on past projects and estimate, from the projected personnel pattern, how much inspection depth (verification) they will get on newly outsourced work.

