Research Highlights

Technical Perspective: Compiling What to How


If a programmer is lucky, her program requires no coding: She specifies the desired property and an existing algorithm computes a result that meets her property. Examples include finding a string with regular expressions, solving systems of linear equations, and determining whether a Boolean formula is satisfiable. Such declarative programming reuses algorithms that we might call solvers. Whenever our problem is expressible as a property understood by a solver, we are free to say what should be computed without saying how to compute it.
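The regular-expression case illustrates the what/how split: the pattern states what a match looks like, and the regex engine supplies the how. A minimal sketch in Scala:

```scala
// Declarative: the pattern says *what* a date looks like;
// the regex engine decides *how* to find it.
val date = raw"(\d{4})-(\d{2})-(\d{2})".r

def parseDate(s: String): Option[(String, String, String)] = s match {
  case date(y, m, d) => Some((y, m, d))
  case _             => None
}
```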

Not all problems lend themselves to declarative programming, of course. For some problems it may be faster and more natural to write the program rather than its specification. Large programs, like document editors, will never be fully specified. Then there are undecidable properties for which it is impossible to build a solver.

Given these limitations, should we even contemplate declarative programming? The following paper by Viktor Kuncak et al. integrates declarative programming into a general-purpose language, allowing one to escape the host language when a subproblem can be solved declaratively.

Their first contribution is language constructs that make the Scala host language more powerful without making declarative properties obtrusive. For example, their case statement almost hides the fact that a constraint is being solved at runtime: the code decides whether i is even or odd by determining whether a solution exists to the equation 2*j=i or to 2*j+1=i, yet it closely resembles standard pattern matching:
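In the paper's extended Scala (a sketch: arithmetic patterns in case clauses are not standard Scala), the example reads roughly as:

```scala
// Extended Scala, not standard: each pattern is a
// linear-arithmetic constraint, solved for the fresh
// variable j at runtime.
i match {
  case 2 * j     => println("even: i == 2 * " + j)
  case 2 * j + 1 => println("odd:  i == 2 * " + j + " + 1")
}
```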


In declarative programming, it is easy to write buggy properties—those that permit either too many or no satisfying results. As the second contribution, the paper develops debugging support for both kinds of errors. When a specification is under-constrained, there exists an input that permits multiple legal results. To help determine whether this freedom constitutes undesirable non-determinism, the compiler looks for such an input. If it exists, the compiler computes two distinct results, which might hint at what constraint is missing in the property. When a specification is over-constrained, there is an input that permits no legal result. The compiler computes the set of such inputs, and describes the set symbolically with a predicate. Both warnings are delivered to the programmer before the program is executed, reducing the need for testing.
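The under-constraint check can be sketched as a search for one input admitting two distinct legal results. The following brute-force Scala sketch (all names hypothetical; the compiler does this symbolically, not by enumeration) flags a division specification that omits the remainder bound:

```scala
// Brute-force sketch of the under-constraint check: find an
// input for which the specification permits two distinct results.
def findAmbiguity(
    spec: (Int, (Int, Int)) => Boolean,  // spec(input, result)
    inputs: Range,
    results: Seq[(Int, Int)]
): Option[(Int, (Int, Int), (Int, Int))] =
  (for {
    i  <- inputs.iterator
    r1 <- results.iterator if spec(i, r1)
    r2 <- results.iterator if spec(i, r2) && r1 != r2
  } yield (i, r1, r2)).nextOption()

// Division by 2 with the bound 0 <= r < 2 missing: under-constrained.
val loose = (i: Int, qr: (Int, Int)) => 2 * qr._1 + qr._2 == i
// With the bound restored, each input has a unique result.
val tight = (i: Int, qr: (Int, Int)) =>
  2 * qr._1 + qr._2 == i && qr._2 >= 0 && qr._2 < 2
```

Running `findAmbiguity(loose, ...)` over a small domain produces a witness input with two results, much like the hint the compiler gives the programmer.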

The third contribution compiles the specification into efficient solver code. General solvers are less efficient than programs handwritten for a given problem for at least two reasons: solvers may need to backtrack during exploration of the solution space; and they exhibit interpretation overhead of reading the property and maintaining the solution. To reduce this overhead, the authors describe a methodology for “pre-solving” the specification at compile time. The compiler analyzes the property and outputs code that does little more than what fundamentally cannot be moved to compile time—the propagation of values that are available only at runtime.
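For the even/odd constraint above, pre-solved code might look like the following Scala sketch (my illustration, not the compiler's actual output): no search, no interpretation, only runtime value propagation plus a precondition check.

```scala
// Sketch of "pre-solved" code for the constraint 2 * j == i:
// the analysis happened at compile time; what remains is a
// runtime feasibility check and the propagation of i.
def solveEven(i: Int): Int = {
  require(i % 2 == 0, s"no j satisfies 2 * j == $i")
  i / 2  // the precomputed witness: j = i / 2, no backtracking
}
```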

To enable compilation of specifications, the authors observe that constraint solvers are often based on quantifier elimination, which solves a constraint system by eliminating variables one at a time and then computing their values in the reverse order, as in linear-equation solvers based on variable elimination. Their methodology therefore asks the quantifier-elimination procedure to also produce a witness function, which computes the eliminated variable from the remaining variables; the efficient solver is then constructed by chaining the witness functions.
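Chained witness functions can be sketched concretely for integer division by 2, that is, for the constraints 2*q + r == i with 0 <= r < 2 (my worked example, assuming this is representative of the approach):

```scala
// Eliminating r yields the witness r = i mod 2; eliminating q
// then yields q = (i - r) / 2. The solver replays the witnesses
// in reverse elimination order to construct the values.
def witnessR(i: Int): Int = Math.floorMod(i, 2) // correct for i < 0
def witnessQ(i: Int, r: Int): Int = (i - r) / 2

def solve(i: Int): (Int, Int) = {
  val r = witnessR(i)     // eliminated last, computed first
  val q = witnessQ(i, r)
  (q, r)
}
```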

The paper closes with an encouragement to extend declarative programming beyond linear arithmetic constraints. The authors show that they can declaratively compute sets, too, using cardinality constraints. Their methodology generalizes and should help create a compiler for any constraint language that comes with a quantifier-elimination decision procedure. Recent advances in solvers, motivated by software verification, could thus translate into more expressive declarative programming.
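To give a flavor of the set examples, a balanced split of a set s might be requested roughly as follows (a hedged sketch in the authors' declarative style; the exact syntax of their construct is an assumption here, not the paper's verbatim code):

```scala
// Not standard Scala: a choose-style construct returning values
// that satisfy the constraints, here splitting s into disjoint,
// nearly equal halves using only cardinality (size) constraints.
val (a, b) = choose((a: Set[Int], b: Set[Int]) =>
  (a union b) == s && (a intersect b).isEmpty &&
  a.size - b.size <= 1 && b.size - a.size <= 1)
```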

Computing was born declarative, millennia ago, with Babylonian algorithms for equation solving [2] and the Chinese precursor to the Gaussian method. These algorithms allowed us to solve problems by thinking about what to compute, freeing us from the operational how. More solvers were developed later; in fact, six of the 10 best algorithms of the 20th century, among them Simplex, are solvers [1]. These algorithms can be viewed as efficient interpreters for convenient Turing-incomplete programming languages. I predict that as we identify more such restricted languages and integrate them into general-purpose (Turing-complete) languages, we will make programming more productive and programs more reliable.
