
Technical Perspective: Proving Programs Continuous


Proving a program’s correctness is usually an all-or-nothing game. Either a program is correct with respect to its specification or it is not. If our proof succeeds, we have 100% correctness; if our proof does not succeed, we have nothing. Formal correctness proofs are difficult, because one must symbolically cover the entire range of possible inputs—and the slightest gap in the input leaves us with a gap in the proof.

But what if it turned out the risk posed by leaving a gap is actually small? This, of course, is the assumption of testing: If I tested some function with a sample of values, and it works correctly for this sample, I have reasons to assume it will also work for similar values. This is something I can assume if the behavior of my function is continuous—if it computes the correct square root for 10, 100, and 1,000, it should also do the right thing for any value in between.
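To make this intuition concrete, here is a minimal Python sketch; the wrapper my_sqrt, the tolerances, and the perturbation sizes are illustrative assumptions rather than anything from the text. It first checks the three sampled inputs against the square-root specification, then checks that nudging each input changes the output by no more than the nudge.

```python
import math

# Hypothetical function under test; here it simply wraps math.sqrt.
def my_sqrt(x: float) -> float:
    return math.sqrt(x)

# Check the three sampled inputs from the text against the specification.
for x in (10.0, 100.0, 1000.0):
    assert abs(my_sqrt(x) ** 2 - x) < 1e-9 * x

# The continuity assumption behind testing: near each sample, a small input
# perturbation changes the output only slightly (for x >= 10 the square root
# has slope below 1), so the passing samples lend confidence to nearby inputs.
for x in (10.0, 100.0, 1000.0):
    for eps in (1e-3, 1e-2, 1e-1):
        assert abs(my_sqrt(x + eps) - my_sqrt(x)) <= eps
```

The second loop only succeeds because the square root is gently sloped for inputs of 10 and above; a discontinuous function would offer no such guarantee, which is exactly why continuity matters for testing.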

One may think this is a dangerous assumption: Simply because my program has worked well for these three values, why should it work for any other? A program is free to do anything; since it need not obey mathematical or physical laws, it can behave in an entirely different way for any new value. This logical view is true in principle. In real life, however, programmers prefer abstractions that are easy to understand and to reason about. The reason testing works in practice is that programmers naturally strive toward continuity.


While the intuitive idea is easy to grasp, the concept of continuity has so far largely evaded formal treatment; in particular, it was not possible to automatically reason about continuity in the presence of loops. This is where the work of Swarat Chaudhuri, Sumit Gulwani, and Roberto Lublinerman comes into play. Their framework can formally verify that a program is continuous, proving that small changes to its input cause only small changes to its output. They show that several classic programs, such as sorting and shortest-path computations, are continuous—or even Lipschitz continuous, meaning that perturbations to a function’s input cause at most proportional changes to its output. Such a function can also be declared robust: it behaves predictably even when its inputs are uncertain or erroneous.
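As a concrete illustration, the following Python sketch empirically probes the kind of guarantee the paper proves for sorting, namely that it is Lipschitz continuous. The array length, the perturbation bound DELTA, and the Lipschitz constant of 1 used in the assertion are assumptions of this sketch, and a randomized check is of course no substitute for the symbolic proof the framework provides.

```python
import random

def linf(a, b):
    """L-infinity distance between two equal-length sequences."""
    return max(abs(x - y) for x, y in zip(a, b))

random.seed(0)
DELTA = 0.5  # per-element input perturbation bound (an assumption of this sketch)

for _ in range(1000):
    xs = [random.uniform(-100.0, 100.0) for _ in range(20)]
    # Perturb every element of the input by at most DELTA.
    ys = [x + random.uniform(-DELTA, DELTA) for x in xs]
    # Sorting is 1-Lipschitz in the L-infinity norm: the sorted outputs
    # move by no more than the inputs were moved.
    assert linf(sorted(xs), sorted(ys)) <= DELTA + 1e-9

print("sorted outputs stayed within the perturbation bound on all trials")
```

A verification framework like the one described in the paper establishes such a bound once, symbolically, for all inputs; the sketch merely samples it, which is precisely the gap between testing and proving that this article opens with.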

Being able to formally reason about continuity and robustness lets us see programs as driven not only by logic, but also by analytical calculus; and this view can be very helpful for understanding why programs generally tend to work well even if only coarsely tested. This work also bridges the gap between programs and control theory, allowing for ample cross-fertilization between the fields; indeed, one can imagine mathematical optimization being applied to program code just as programming concepts are being adopted by control theory. So, should we treat programs as driven by logic, by calculus, or by both? I encourage you to read the following paper to see the manifold connections between logic and calculus in computer programs.
