The computing field is becoming open to controversy. There are mild and ongoing wars about programming languages, software development approaches, operating systems, vendor allegiance, and even how code should be indented and commented. And all of this, I believe, is extremely healthy for the field. The computing field is finally reaching the level of maturity where flat earths can be questioned and bad theory can be rejected.
It’s about time. For too long in our field, there has been a sense of smug self-righteousness among too many of our people. The truth, as the case may have been, was either the possession of a particular branch of the academic world, or a particular collection of powerful practitioners, or a leading vendor, or someone else who tolerated no opposition. Woe be unto those who challenged such truth.
In honor of this newfound maturity, I want to look back into the long-ago past of our field, and bring up an unpleasant chapter to illustrate the kinds of truths and challenges that used to cause volcanic eruptions in our field. People were called names. Ancestors were accused of deviant behavior. Intolerance commanded the landscape. It was not a pretty time.
Come with me back to 1977. Computer science and information systems are relatively new academic disciplines; software engineering has not yet joined them. There are mainframe computers and a few minis, but microcomputers remain off in the future. Software practice has been around for over 20 years, and the field is maturing, albeit more slowly than many would like. IBM is the dominant vendor in spite of a (largely failed) antitrust action against it, and software has only recently been a separately priced item; there is no real software industry to speak of.
Computer science is still struggling to define its identity. The first academic program was formed in the early 1960s at Purdue University, but half a decade later there were still few others. The mathematical heritage of computer science still has a powerful hold on the newly emerging field. The notion of formal, mathematical approaches to developing software is flourishing after its introduction shortly after the academic field itself came into being. But hardly anyone in software’s practice is paying any attention to these Johnny-come-lately academics with their fancy mathematical ideas. The stage is set for the massive explosion to come.
Perhaps the first excursion into advocating formal methods for the software masses came with the notion of formal verification, proof of correctness, which first appeared in the academic software literature in the late 1960s and early 1970s. Academics not only explored the possibilities of these mathematical notions, but they unabashedly advocated that practitioners begin using them. Given the total disinterest among those very same practitioners, what came next—the Proof of Correctness Wars—might have been predictable.
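To ground the term for modern readers: "proof of correctness" meant annotating a program with a precondition, a postcondition, and loop invariants, then arguing mathematically that they always hold. The following minimal sketch is my own illustration, not drawn from the papers discussed here; it shows the Floyd-Hoare style of annotation applied to a trivial summation routine, with the proof obligations written as runtime assertions.

```python
# Illustrative sketch (not from the sources above): a Floyd-Hoare-style
# annotated program. A hand proof would argue that the invariant holds on
# loop entry, is preserved by each iteration, and implies the postcondition
# on exit; here the assertions merely check those claims at runtime.

def sum_to(n: int) -> int:
    """Return 0 + 1 + ... + n.

    Precondition:  n >= 0
    Postcondition: result == n * (n + 1) // 2
    """
    assert n >= 0  # precondition
    total, i = 0, 0
    while i < n:
        # Loop invariant: total == i * (i + 1) // 2, with 0 <= i <= n
        assert total == i * (i + 1) // 2
        i += 1
        total += i
    # On exit i == n, so the invariant yields the postcondition.
    assert total == n * (n + 1) // 2
    return total
```

Scaling this kind of hand proof from a five-line loop to a real system, by hand and with other people reading and checking the argument, was precisely the discipline whose feasibility practitioners doubted.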
Except that it was from within the academic community that the opening shot of the war was fired. Three of the leading academics of their time—Richard A. DeMillo, Richard J. Lipton, and Alan J. Perlis—had the nerve to issue an open challenge to the proof-of-correctness approach. Appearing at one of the leading splinter conferences of the time, the ACM Symposium on Principles of Programming Languages, they presented a paper ("Social Processes and Proofs of Theorems and Programs," 1977) taking the position that proof of correctness was unlikely to ever succeed in the software practitioner community. Why? Because whereas mathematicians who performed proofs had an elaborate social structure that questioned and refined proposed mathematical proofs before their final publication, there was, the authors said, nothing comparable in the software field. They went on to point out that programmers simply didn’t like to read one another’s code, and since doing so was a prerequisite for using proof of correctness, analogies to mathematical proof approaches were doomed to fail. They even called this fact "the death knell for [formal] verification."
The outcry was immediate. The first published response appeared in ACM’s then-feisty newsletter Software Engineering Notes, which, because of its unrefereed nature, could quickly publish contributions. Noted computer scientist Edsger W. Dijkstra picked up the gauntlet the paper had thrown down. Writing in the April 1978 issue, Dijkstra called the paper "A Political Pamphlet from the Middle Ages," and referred to it as "very ugly," saying the authors had "gone much further than the usual practitioners’ backlash," and left no doubt the defenders of proofs were prepared to sally forth with guns blazing.
This was by no means the end of the uproar. Practitioners themselves, outraged by Dijkstra’s intemperate response, jumped into the fray. In Communications (Nov. 1979), frustrated pragmatists hailed the antiproof paper in a flurry of letters to the editor. "The best I have read in a computer publication and one of the best I have read anywhere" summarized the view of most letter writers, although one said it more succinctly: "marvelous, marvelous, marvelous." And perhaps the final word—one might have thought at the time—came from the letter writer who said, "It was time somebody said it—loud and clear—the formal approach to software verification does not work now and probably never will."
Time—almost a decade—passed. The whole formal methods controversy became a kind of fault line between academic theory and its practice. Academe continued to publish and advocate the approaches. The British government even mandated the use of formal approaches for certain kinds of (Ministry of Defence, embedded real-time) software. Practitioners grudgingly used the approach when they were required to, but otherwise ignored the whole thing.
Researchers who tried to evaluate the formal approaches in a practitioner setting (and there were surprisingly few attempts to do so) had difficulty finding anyone willing or able to participate. In one classic study, the evaluation of the Cleanroom approach, which includes formal verification as one of its tenets, researchers were forced to substitute "rigorous inspections" when no practitioners in the study were able to use the approach.
And then the guns resumed firing. Once again, the challenge came from academe. But this time it was from a very different source. The article was written by James H. Fetzer of the University of Minnesota, Duluth, whose area of expertise was not computing but philosophy and humanities. It was published in Communications ("Program Verification—The Very Idea," Sept. 1988), and its opposition to the formal approaches was perhaps even more unequivocal than the decade-earlier salvo.
This time, the academic response was broader—and more vitriolic. Ten formal-methods advocates, writing in concert, prepared a response published in Communications (Mar. 1989). It accused Fetzer of "distortion," "gross misunderstanding," "misrepresentation," "misinformation," and finally of being "ill-informed, irresponsible, and dangerous." Fetzer responded in kind, accusing his adversaries of a "complete failure to understand the issues," "intellectual deficiencies," "inexcusable intolerance and insufferable self-righteousness," and described their behavior as that of "religious zealots and ideological fanatics." Then he sort of went off the deep end and suggested they "volunteer to accompany … missiles on future flights in order to demonstrate … program verification under dynamic circumstances." The gunfire was not over, as you might imagine. More vitriol was spilled in the subsequent issue of Communications (Apr. 1989). And then, gradually, it all died down again. The Proof of Correctness Wars had, for all practical purposes, ended.
As you can see, things got extremely ugly. Why? I would like to believe it was because we, as a field, way back those 20-something years ago, had no safety valve for disagreement, no mechanism to explore differences of opinion. And what a pity that was, because those differences are both inevitable and necessary for the maturing of a field. Without that safety valve, whenever disagreement did surface, it exploded in huge—and, as we have seen, deeply troubling—ways.
It is interesting to juxtapose way-back-when with today. In one sense, the use of formal methods hasn’t much changed. Practice continues its indifference toward, and perhaps disdain for, those formal approaches so loved and advocated by academe.
But there is now a healthier outlet, I would assert, for this disagreement. Academics have toned down their rhetoric and their advocacy, admitting that there are reasons why practitioners avoid the approaches. Practitioners concede the value of the theory, while they continue to challenge its workability in practice. Literature appears in the professional journals with mixed academic/industry authorship, describing one or more uses of the formal approaches in an industrial setting. There is still an almost total absence of evaluative research findings on the matter, but there is at least growing dialogue.
Perhaps, one day, there will be another DeMillo-Lipton-Perlis or Fetzer explosion on this subject. But I would be willing to bet it is much less likely now than it was back then, when the Proof of Correctness Wars exploded, in that millennium gone by.