Often, you will be told that programming languages do not matter much. What actually matters more is not clear; maybe tools, maybe methodology, maybe process. It is a pretty general rule that people arguing that language does not matter are simply trying to justify their use of bad languages.
Let us come back to the Apple bug of a few weeks ago. Only a few weeks; the world has already moved to Heartbleed, but that is not a reason to sweep away the memory of the Apple bug and the language design that it reflects.
In late February, users of iPhones, iPads and iPods were enjoined to upgrade their devices immediately because "an attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS." The bug was traced [1] to code of the following form:
if (error_of_first_kind)
    goto fail;
if (error_of_second_kind)
    goto fail;
if (error_of_third_kind)
    goto fail;
if (error_of_fourth_kind)
    goto fail;
if (error_of_fifth_kind)
    goto fail;
    goto fail;
if (error_of_sixth_kind)
    goto fail;
The_truly_important_code_handling_non_erroneous_case
In other words: just a duplicated line! (The extra line is the second of the two consecutive goto fail instructions above.) But the excess goto lies outside the scope of the preceding "if", so it is executed unconditionally: every execution goes directly to the "fail" label, so that The_truly_important_code_handling_non_erroneous_case never gets executed.
Critics have focused their ire on the goto instruction, but it is of little relevance. What matters, language-wise, is the C/C++/Java/C# convention for delimiting the scope of conditional instructions, loops and other kinds of composite structures. Every component of such structures in these languages is syntactically a single instruction, so that:
- If you want the branch to consist of an atomic instruction, you write that instruction by itself, as in
      if (c) a = b;
- If you want a sequence of instructions, you write it as a compound, enclosed by the ever so beautiful braces:
      if (c) {a = b; x = y;}
Although elegant in principle (after all, it comes from Algol), this convention is disastrous from a software engineering perspective because software engineering means understanding that programs change. One day, a branch of a conditional or loop has one atomic instruction; sometime later, a maintainer realizes that the corresponding case requires more sophisticated treatment, and adds an instruction, but fails to add the braces.
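To make the trap concrete, here is a minimal C sketch (the flag and variable names are invented for illustration): the branch starts out as a single unbraced instruction, a maintainer later adds a second line with matching indentation, and that line silently escapes the conditional, the same structure as the duplicated goto above.

#include <stdio.h>

int main(void)
{
    int needs_retry = 0;   /* hypothetical condition, false in this run */
    int attempts = 0;
    int logged = 0;

    /* Originally:  if (needs_retry) attempts = attempts + 1;   -- fine as one instruction. */
    /* After maintenance, a second line is added but the braces are not: */
    if (needs_retry)
        attempts = attempts + 1;
        logged = 1;   /* indented like the branch, but executed unconditionally */

    printf("attempts = %d, logged = %d\n", attempts, logged);   /* prints 0 and 1 */
    return 0;
}

Most compilers accept this silently, though some newer ones can warn about the misleading indentation; either way, the language itself places the added line outside the branch. With branches that are sequences by construction, as in the Eiffel form below, the added line could only land inside the branch.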
The proper language solution is to do away with the notion of compound instruction as a separate concept, and simply to expect every branch of a composite instruction to be a sequence, which may contain several instructions, just one, or none at all. In Eiffel, you will write
if c then
    x := y
end

or

if c then
    a := b
    x := y
else
    u := v
end

or

from i := 1 until c loop
    a := b
    i := i + 1
end

or

across my_list as l loop
    l.add (x)
end
and so on. This syntax also gets rid of all the noise that permeates languages retaining C’s nineteen-sixties conventions: parentheses around the conditions, semicolons for instructions on different lines; these small distractions accumulate into big impediments to program readability.
With such a modern language design, the Apple bug could not have arisen. A duplicated line is either:
- A keyword such as end, immediately caught as a syntax error.
- An actual instruction such as an assignment, whose duplication causes either no effect or an effect limited to the particular case covered by the branch, rather than catastrophically disrupting all cases, as in the Apple bug.
Some people, however, find it hard to accept the obvious responsibility of language design. Take this comment derisively entitled "the goto squirrel" by Dennis Hamilton in the ACM Risks forum [2]:
It is amazing to me that, once the specific defect is disclosed (and the diff of the actual change has also been published), the discussion has devolved into one of coding style and whose code is better. I remember similar distractions around the Ariane 501 defect too, although in that case there was nothing wrong with the code—the error was that it was being run when it wasn’t needed and it was not simulation tested with new launch parameters under the mistaken assumption that if the code worked for Ariane 4, it should work for Ariane 5.
It is not about the code. It is not about the code. It is not about goto. It is not about coming up with ways to avoid introducing this particular defect by writing the code differently.
Such certainty! While repeating a wrong statement ("it is not about the code") may not be as catastrophic as repeating an instruction was in the code under discussion, the repetition does not make the statement right. Of course "it" is about the code. Given that if the code had been different the catastrophe would not have happened, one needs some gall to state that it is not about the code—and just as much gall, given that the catastrophe would also not have happened if the programming language had been different, to state that it is not about the programming language.
When Mr. Hamilton dismisses as "distractions" the explanations pointing to programming-related causes for the Ariane-5 disaster, I assume he has in mind the analysis which I published at the time with Jean-Marc Jézéquel [3], which explained in detail how the core issue was the absence of proper specifications (contracts). At that time too, we heard dismissive comments; according to one of the critics, the programming aspects did not count, since the whole thing was really a social problem: the French engineers in Toulouse did not communicate properly with their colleagues in England! What is great with such folk explanations is that they sound just right and please people because they reinforce existing stereotypes, and are by nature impossible to refute, just as they are impossible to prove. And they avoid raising the important but disturbing questions: were the teams using the right programming language, the right specification method (contracts, as we suggested), appropriate tools? In both the Ariane-5 and Apple cases, they were not.
If you want to be considered polite, you are not supposed to point out that the use of programming languages designed for the PDP-8 or some other long-gone machine is an invitation to disaster. The more terrible the programming language people use, and the more they know it is terrible (even if they will not admit it), the more scandalized they will be that you point out that it is, indeed, terrible. It is as if you had said something about their weight or the pimples on their cheeks. Such reactions do not make the comment less true. The expression of outrage is particularly inappropriate when technical choices are not just matters for technical argument, but have catastrophic consequences on society.
The usual excuse, in response to language criticisms, is that better tools, better quality control (the main recommendation of the Ariane-5 inquiry committee back in 1997), better methodology would also have avoided the problem. Indeed, a number of the other comments in the comp.risks discussion that includes Hamilton’s dismissal of code [2] point in this direction, noting for example that static analyzers could have detected the code duplication and the unreachable instructions. These observations are all true, but they change nothing about the role of programming languages and coding issues. One of the basic lessons from the study of software and other industrial disasters—see for example the work of Nancy Leveson—is that a disaster results from a combination of causes. This property is in fact easy to understand: a disaster coming from a single cause would most likely have been avoided. Consider the hypothetical example of a disastrous flaw in Amazon’s transaction processing. It seems from various sources that Amazon processes something like 300 transactions a second. Now let us assume three independent factors, each occurring with a probability of a thousandth (10⁻³), which could contribute to a failure. Then:
- It is impossible that one of the factors could cause failure just by itself: that would mean a transaction failing roughly every three seconds, which would be caught even in the most trivial unit testing. No one but the developer would ever know about it.
- If two of the factors together cause failure, failures will occur about once every million transactions, that is, roughly once an hour at this rate. Any reasonable testing will discover the problem before a release is ever deployed.
- If all three factors are required, the probability is 10⁻⁹, meaning that a failure will occur only about once every billion transactions, roughly once a month at this rate. Only in that case will a real problem exist: a flaw that goes undetected for a long time, during which everything seems normal, until disaster strikes.
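As a back-of-the-envelope check of these figures, here is a small C sketch; the 300-transactions-per-second rate and the 10⁻³ per-factor probability are the assumptions stated above, and the factors are taken to be independent:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double rate = 300.0;   /* assumed transactions per second */
    const double p = 1e-3;       /* assumed probability of each contributing factor */

    for (int factors = 1; factors <= 3; factors++) {
        double p_failure = pow(p, factors);            /* all factors must occur together */
        double seconds = 1.0 / (rate * p_failure);     /* mean time between failures */
        printf("%d factor(s): one failure every %.0f seconds (about %.1f days)\n",
               factors, seconds, seconds / 86400.0);
    }
    return 0;
}

The three cases come out at about three seconds, about an hour, and a bit over a month respectively, which is why only the last one can slip past testing and lie in wait.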
These observations explain why post-mortem examinations of catastrophes always point to a seemingly impossible combination of unfortunate circumstances. The archduke went to Sarajevo and he insisted on seeing the wounded and someone forgot to tell the drivers about the prudent decision to bypass the announced itinerary and the convoy stalled and the assassin saw it and he hit Franz-Ferdinand right in the neck and there was nationalistic resentment in various countries and the system of alliances required countries to declare war [4]. Same thing for industrial accidents. Same thing for the Apple bug: obviously, there were no good code reviews and no static analysis tools applied and no good management; and, obviously, a bad programming language that blows up innocent mistakes into disasters of planetary import.
So much for the accepted wisdom, heard again and again in software engineering circles, that code does not matter, syntax does not count, typos are caught right away, and that all we should care about is process or agility or requirements or some other high-sounding concern more respectable than programming. Code? Programming languages? Did we not take care of those years ago? "I remember similar distractions."
There is a positive conclusion to the "and" nature (in probabilistic terms, the multiplicative nature) of causes necessary to produce a catastrophe in practice: it suffices to get rid of one of the operands of the "and" to falsify its result, hence avoiding the catastrophe. When people tell you that code does not matter or that language does not matter, just understand the comment for what it really means, "I am ashamed of the programming language and techniques I use but do not want to admit it so I prefer to blame problems on the rest of the world", and make the correct deduction: use a good programming language.
References
[1] Paul Ducklin: Anatomy of a "goto fail" – Apple’s SSL bug explained, plus an unofficial patch for OS X!, Naked Security blog (Sophos), 24 February 2014, available here.
[2] Dennis E. Hamilton: The Goto Squirrel, ACM Risks Forum, 28 February 2014, available here.
[3] Jean-Marc Jézéquel and Bertrand Meyer: Design by Contract: The Lessons of Ariane, in Computer (IEEE), vol. 30, no. 1, January 1997, pages 129-130, available online here and, with reader responses here.
[4] http://en.wikipedia.org/wiki/Archduke_Franz_Ferdinand_of_Austria#Assassination.