BLOG@CACM

Mental Self-Check

Posted by Bertrand Meyer

Some people in insane asylums think those outside are the ones who are crazy; the rest of the world disagrees.

If you are just inside or just outside, it is difficult to adjudicate the question.

Programming with Eiffel and Design by Contract is the privilege of a minority, who think that the rest of the world is crazy. I, for example, have no idea how it is even possible to program otherwise and keep one’s sanity; but maybe I am the insane one. This article might serve as a mental self-check.

In short: using a powerful static type system and Design by Contract radically changes the nature of software development, which becomes a much more predictable and enjoyable endeavor, worthy of being called “software engineering”.

Let me take an example from a program I am writing now. I have several data structures, fairly complex. They serve to represent a multigraph, which is like a directed graph but with labeled edges and the possibility of having any number of edges (not just zero or one), with different labels, between two given nodes. The figure below shows a multigraph; the nodes represent people and houses, and the edges represent relations between them, for example that a person is another person’s boss or a house is a person’s residence.

 

[Figure: A multigraph]

 

Assume that we have a way to represent each node by a unique integer; a simplified view of the above example looks like this, showing the node numbers (26 and 57, arbitrary examples) and the tag (abbreviated from spouse to s) for only one edge:

 

[Figure: A multigraph (partial view)]

A data structure that represents this concept is a “triple table”: an array of triples, each containing a node identifier x, a tag t, and another node y. A typical triple table could be this one:

[Figure: A triple table]
See how the highlighted triple in triple_table represents the highlighted edge in the graphical illustration of the multigraph. The triple_table is declared as follows:

triple_table: HASH_SET [TRIPLE]

where HASH_SET (a variant of HASH_TABLE) is a library class and TRIPLE a class (not from the library, but written for this application) describing objects with three components: source, tag and target. A HASH_SET is an implementation of sets where each element, here each triple, serves as its own key.
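For concreteness, a minimal sketch of such a TRIPLE class might look as follows (an illustration rather than the actual class from the program: it assumes integer tags, as the structures shown later suggest, and leaves out the setter commands and the hashing support that HASH_SET requires from its elements):

    class TRIPLE

    create
        make

    feature -- Initialization

        make (a_source, a_tag, a_target: INTEGER)
                -- Set up triple with the given source, tag and target.
            do
                source := a_source
                tag := a_tag
                target := a_target
            ensure
                source_set: source = a_source
                tag_set: tag = a_tag
                target_set: target = a_target
            end

    feature -- Access

        source: INTEGER
                -- Node at which the edge starts.

        tag: INTEGER
                -- Label of the edge.

        target: INTEGER
                -- Node at which the edge ends.

    end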

These type declarations already help us a lot towards getting things right. For example, to find out if there is a triple with specific values in the table, you will use the test

             triple_table.has (create {TRIPLE}.make (a_source, a_tag, a_target))

Everything here needs to have a specific type. The first compilation attempt will nip many errors in the bud, flagging them simply as type errors.
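To see the mechanism at work, consider a made-up fragment (it assumes the usual set insertion feature extend, with a_source, a_tag and a_target integer values as above); the commented-out lines would never make it past compilation:

        triple_table.extend (create {TRIPLE}.make (a_source, a_tag, a_target))
                -- Fine: the argument is a TRIPLE, as the set expects.

        -- triple_table.extend (a_source)
                -- Rejected at compile time: an INTEGER is not a TRIPLE.
        -- triple_table.has (a_tag)
                -- Rejected for the same reason.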

Why anyone would use a dynamically typed language (for anything beyond a 10-line script) and deprive himself or herself of this marvelous bug-killing mechanism passes my understanding. But I have to remember I am inside the asylum.

It turns out that my program needs to access the structure in many different ways. A common technique in the design of efficient algorithms is to trade some space for time by providing several, possibly redundant, representations of the same information. Here it is convenient to have another data structure, triples_from, such that triples_from [i] is the list of entries in triple_table representing edges starting with i. Here is triples_from [26] for the example:

 

[Figure: Obtaining the list of triples with a given source]

triples_from [26] gives us, as illustrated, the list of entries for which the first element (the source) is 26.

The type situation gets even more exciting now: the declaration of our new structure is

    triples_from: ARRAY [LIST [TRIPLE]]
            — Triples starting from a given object. Indexed by object numbers.

triples_from is an array of lists of triples. We can do lots of things with it, like iterate on the whole structure and at each step iterate on the corresponding list. Also, we can mess up even more than before! With more complex data structures come more sources of errors; what if you mistakenly use a list as if it were a list element? Without static typing, such common mistakes will have to be detected and fixed during testing, promising a few pleasant nights of debugging. But static typing is again here to help us. When I access an element of the last triple of a list in the table, I have to write something like

             (triples_from [i]).last.tag

where any mixup is immediately flagged as a type mismatch: an integer where a list was expected, using a structure as an array when it is a list…
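For instance, the nested iteration mentioned above could read as follows (a sketch, not code from the actual program; it prints every edge, using the convention, also visible in the contract clauses below, in which the loop variable denotes the element itself and @ gives access to the cursor):

        across triples_from as l loop
                -- l: the list of triples whose source is the current array index.
            across l as p loop
                    -- p: one triple, that is to say one edge of the multigraph.
                print (p.source.out + " -" + p.tag.out + "-> " + p.target.out + "%N")
            end
        end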

With an untyped language, your run-time structure looks not much better than a spaghetti plate (only less appetizing).

As soon as anything goes wrong, you are reduced to pulling one spaghetti strand after the other to try to understand what is going on. Static typing gives structure to the data and catches at compile time many errors that could have required hours or days of spaghetti debugging.

For my multigraphs, I need more modes of access. In fact, I need efficient access not only from the source of triples, but also from the other two components, tag and target. There is indeed a table triples_to, which I do not illustrate here since it is similar to triples_from; and triples_with, which gives us the list of triples with a given tag:

 

[Figure: Triples with a given tag (multigraph edges with a given label)]

Even that is not the last word. I also need fast access to the “successors” of a given node. Here is the declaration of that structure:

triple_successors: ARRAY [HASH_TABLE [LIST [TRIPLE], INTEGER]]

Now we are talking business: an array of hash tables of lists of triples, each list being associated with an integer.

At this point, using an untyped language amounts (in the modest view of the asylum inmate) to professional malpractice. A typical access to this structure (to set the value of one of the triple components) is something like

triple_successors [i].item (j).i_th (k).set_target (l)

where so many things can go wrong that you need help. You can only be confident that what you get is what you thought if a powerful type system (with inheritance, genericity etc.) supports it and the associated compiler exorcises it. Without such a type system, programming looks less like engineering than like a succession of Hail Marys.
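With the types in place, even updates to this nested structure remain under control. Recording an existing triple p under its source and tag might look like this (a hypothetical, void-safe fragment; it assumes the inner hash table and its lists have already been created):

        if attached triple_successors [p.source].item (p.tag) as successor_list then
                -- successor_list: the triples sharing p's source and tag.
            successor_list.extend (p)
        end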

Type checks are essential, but they are only the first step. There are still many ways to make errors that will produce inconsistencies in the data structures. In my example, the several complementary data structures described above are different facets of the same information (or, for each of them, of part of that information). They must be compatible!

The corresponding class has dozens of Design by Contract clauses specifying their role: preconditions and postconditions stating the properties of individual operations, as well as an extensive class invariant — as of today, over 20 clauses —  which states how the various structures relate to each other. Here is one of the clauses, a three-level  “for all” property:

     successors_consistent:
        ∀ t: triple_successors ¦          -- t: hash table of lists of triples, indexed by (integer) tags
            ∀ l: t ¦                      -- l: list of triples
                                          -- (@l.key is a key in the hash table:
                                          -- an integer representing a tag.)
                ∀ p: l ¦                  -- p: triple
                    p.tag = @l.key and p.source = @t.cursor_index

(The comments are not specific to this article but taken from the original, as part of good documentation practice to help the reader understand what is going on. Also, note that this is the actual class text; Eiffel uses Unicode symbols such as ∀ and ¦, which the EiffelStudio IDE will produce for you, although you can also use keyword syntax if you prefer.)

This example relies on the property that quantification as available in Eiffel goes a bit beyond the usual quantification (with ∀ or ∃) of predicate calculus. In mathematics it is enough to be able to write things like

         ∀ x: L ¦ some_property (x)

expressing that all elements of a list or other structure L satisfy a given property. In programming, you may also need to access, for successive elements x of the structure, things like its position in the list or, if L is a hash table, the key associated with the element x. The Eiffel notation @x, in such a clause, denotes an abstract cursor, from which we can obtain things like the position and the key (respectively @x.cursor_index and @x.key). For example,  if L is the list [1, 4, 9, 16, 25] it satisfies the property

          ∀ x: L ¦ x = (@x.cursor_index) ^ 2

stating that every element is the square of its index in the list.

The 3-level-deep invariant clause shown above, which I repeat here for convenience (without the comments)

       ∀ t: triple_successors ¦   ∀ l: t ¦  ∀ p: l ¦   p.tag = @l.key and p.source = @t.cursor_index

simply expresses the following. Consider every element of triple_successors, which is a hash table (we always know the exact types from the type declarations); then consider every element of that hash table, which is a list of triples, recorded in the hash table under an integer key (@l.key); consider finally every triple in that list. Then the relevant components of that triple (its tag and its source) satisfy the consistency property shown, connecting them to the key of the hash table element and to the position of the hash table in the overall array of hash tables.

It is simply impossible in my experience to write such non-trivial software, particularly software with sophisticated data structures, and get it right, without this kind of semantic specification of its properties.
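As an illustration of what such specifications look like for individual operations, here is a hypothetical sketch (not text from the actual class; is_valid_node is an assumed helper query) of the contracts for an operation that adds an edge:

    add_edge (a_source, a_tag, a_target: INTEGER)
            -- Record an edge from a_source to a_target, labeled a_tag.
        require
            source_exists: is_valid_node (a_source)
            target_exists: is_valid_node (a_target)
        do
                -- Update triple_table, triples_from, triples_to,
                -- triples_with and triple_successors (body omitted in this sketch).
        ensure
            edge_recorded: triple_table.has (create {TRIPLE}.make (a_source, a_tag, a_target))
            one_more_from_source: triples_from [a_source].count = old triples_from [a_source].count + 1
        end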

It is not hard, by the way, to write these Design by Contract properties; certainly much less hard than debugging your code if something goes wrong. (Debugging, that is to say trying to find out what is wrong with a program on the basis of its failed execution, is just about the hardest intellectual challenge that exists in the world. It is all the more absurd to claim, like some do, that Design by Contract is hard. Design by Contract is easy to learn and apply. Debugging requires enormous intellectual effort.)

Writing such invariant properties is not just a matter of documenting what you are doing, or what you think you are doing. It is also a way of checking, with the help of powerful software tools, that you are actually doing it! When developing Eiffel programs and running sample cases, I turn on contract monitoring as provided by EiffelStudio. This mechanism checks every single contract clause, in every single execution (during development, not during operation once everything has been verified). In a realistic run of my current program, this property implies millions of individual checks.

Every time any part of the software modifies any part of the structure, the mechanism evaluates every one of the consistency properties I patiently accumulate throughout the development. (Not at the end, of course, but throughout. Like anyone inside or outside of the asylum in 2022, I practice the agile style: add a bit to the code — including, in my case, all the contract elements — devise a new test, run all the existing tests and the new one immediately, don’t proceed until everything runs correctly.)

Any mistake, however specific or marginal, has a high chance of being caught by one of these individual checks. Conversely, once the regression test suite has been run with these millions of individual consistency checks with no violation, the likelihood that significant bugs remain is low. There is a world of difference between a “successful test run” in traditional development, which tells you nothing (one case passed — so what?) and a successful test run in this environment, which tells us not only that the result was expected, but that the entire computation verified every single consistency constraint that had been deemed relevant.

This mode of development is so obvious, so clearly a consequence of merely using the word  “engineering” in “software engineering,” that I have no clue why there is still anyone not doing it. The approach obviously requires Eiffel (the hundreds of attempts over decades to simulate Design by Contract in languages and environments not devised for it have all failed miserably) since the ideas must be supported at all levels:

  • The method starts applying Design by Contract techniques (along with the full type system, inheritance, and other fundamental mechanisms) right from requirements and throughout design and implementation.
  • The language has full support for the contract mechanisms. Note in particular that while you can to some extent simulate preconditions and postconditions in a language not equipped with Design by Contract constructs, such a solution does not work for class invariants, which have to be part of the language. Class invariants are critical elements of the approach, as illustrated by the above example. Also, preconditions and postconditions have a special semantics in the context of inheritance (respectively weakening and strengthening, to permit safe polymorphism and dynamic binding), which requires language and compiler support. Exceptions are also closely connected with contracts.
  • The environment, EiffelStudio, provides mechanisms to enable and disable contract monitoring at run time. Typically you enable all checking during development, as I am doing now, and then once you are convinced of the correctness of your code, you turn it off for deployment and operation. (Some checks might remain on, as a software-fault-tolerance mechanism, but that is for another discussion.)

When I am deprived of these mechanisms, I feel that producing software is pretty much what it must have been in 1960: possible in the end, but at the price of haphazard efforts. You write some code and then you find that it does not quite work. Then you spend the rest of the process in a quest of unpredictable duration to find out what could possibly be wrong, using various debugging insights and Sherlock Holmes-like feats of reasoning. Then you attempt to fix the problem and move on to the next one. How can we flatter ourselves with words like “Software Engineering” with such a process? Engineering, to me, implies doing something with the knowledge of what we are doing. (I know that with this business of what “software engineering” should mean, I am repeating myself. But that is tolerated of children and madmen.)

Knowing what we do, in software, means writing the precise semantic properties associated with the various elements.

You will still mess up along the way, because we all do, but the process is radically different. It becomes predictable since you find the cause, not just the symptoms. A bug is a discrepancy between what the code should do and what it does. How can we be serious about rooting out the bugs and getting our software right unless we specify at least the essentials of the behaviors we expect?

The change of perspective once you move to this rational form of software development treated as actual engineering is staggering. It is almost as if it were a different discipline. But of course, that experience comes from someone inside the asylum and may be a total delusion. Tell me please: am I the crazy one?

Bertrand Meyer is professor and provost at the Schaffhausen Institute of Technology (Switzerland), a visiting professor at Innopolis University, and chief technology officer of Eiffel Software (Goleta, CA).
