Practical Programmer

‘Silver Bullet’ Milestones in Software History

In this second installment of a three-part series, the author traces the succession of interconnected creative milestones in software history.

In my March column, I discussed the earliest creative milestone in software history: the development of business applications software and an application-specific computer called LEO, built by the J. Lyons Company in England in 1951. It was an astonishing achievement, and, as I wrote in that column, it deserves to be celebrated as one of the leading events in computing history.

But the J. Lyons work was only the beginning of a long chain of creative milestones in computing and software history. It’s the other events in that chain that I want to discuss here.

Why pore over the dusty history of how software got to where it is today? After all, what’s exciting about the computing field is what big tricks we can make it do today, not what (by comparison) small tricks we could make it do back then.

Just this. There’s a lot of forgetting, and a lot of "never knew that," in our field today. People discuss the history of computing and frequently get it wrong. And then others reference those erroneous discussions and compound the errors, producing citation-supported accounts that are also wrong.

And there’s another factor. I was recently writing an article on some concepts that remain very relevant to me, like the structured methodologies and CASE tools, and a young colleague pointed out that many of today’s readers may not know what those are, and that, worse yet, any reference to them would date the article I was writing. I had to agree, of course. Events that happened back in the 1970s and 1980s, like the structured approaches and CASE tools, are essentially off the radar screen of today’s programmers (many of whom may very well have been born at about that time).

But at the same time, I had a second reaction. These are pieces of history that we shouldn’t forget. Not just because they represent important lessons learned, although it’s hard to overstate the importance of that factor. But also because a lot of what we think we know today is either directly or indirectly traceable to those historic milestones. They represent a rich history, I would assert, of material on which we will continue to build the future of the software field.

So allow me to take you on a quick tour of software’s historic creative milestones.

First, back in the time of the J. Lyons Company, came bare computers that were coded in machine language. And machine language was a creation of computing hardware people: extremely useful—all of a computer’s tricks could be performed using it—but definitely human-unfriendly. (Think of writing programs in all-numeric operations and data addresses, and think further of those numerics being not in decimal but in bi-quinary or octal or hexadecimal or one of the other number systems that computers used back then.)

Fortunately, that era passed fairly quickly. First came assembly languages, and instead of coding in numerics we now coded operations and addresses in symbolics. From a milestone point of view, this was a fairly mild one, since it wasn’t that much of a leap forward in our ability to build software. But from a creative point of view, assembly language marked the beginning of thinking about the use of computers in symbols instead of numbers. And most of the remainder of software’s milestones are based on that notion.
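
To make that shift from numbers to symbols concrete, here is a toy sketch, written in Python purely for readability rather than in any period language; the mnemonics, the two-field instruction encoding, and the little demo program are all invented for illustration, not drawn from any real machine. It mimics what an early two-pass assembler did: record the addresses of symbolic labels, then translate mnemonics and labels into numeric words.

```python
# Toy illustration of what an early assembler did: translate symbolic
# mnemonics and labels into the numeric words the machine actually ran.
# The instruction set and encoding here are invented for illustration.

OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0x0}

def assemble(source):
    """Two-pass toy assembler: pass 1 records label addresses,
    pass 2 emits numeric (opcode, address) words."""
    lines = [ln.split("#")[0].strip() for ln in source.splitlines()]
    lines = [ln for ln in lines if ln]

    # Pass 1: one word per line, so the line index is the address;
    # remember where each label lands.
    symbols, program = {}, []
    for addr, ln in enumerate(lines):
        if ":" in ln:                        # "label: instruction"
            label, ln = (p.strip() for p in ln.split(":", 1))
            symbols[label] = addr
        program.append(ln)

    # Pass 2: replace mnemonics and labels with numbers.
    words = []
    for ln in program:
        parts = ln.split()
        opcode = OPCODES[parts[0]]
        operand = symbols[parts[1]] if len(parts) > 1 else 0
        words.append((opcode << 8) | operand)   # toy format: opcode, address
    return words

demo = """
start: LOAD  x
       ADD   y
       STORE z
       HALT
x:     HALT   # pretend these last three cells hold data
y:     HALT
z:     HALT
"""
print([hex(word) for word in assemble(demo)])   # symbolic in, numeric out
```

The point is not the details but the relief: the programmer writes LOAD and x, and the machine still gets its numbers.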


One of the most fundamental ideas in the software field emerged at about the same time as assembly language. We began to create software in modules: separable pieces of software, each performing a task, that could be invoked by the program proper. This mid-1950s idea remains, to this day, arguably the most important advancement in software history. Oh, it was expanded by generations of software practitioners to come, and by academics a decade or so later, but the idea of building software from (reusable) separate parts is one that many people find exciting—and think they’re discovering afresh—today.
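
A minimal sketch of the idea, in Python with invented names (a hypothetical payroll module and a trivially small program proper): one separable piece of software performs one task, and the main program simply invokes it wherever that task is needed.

```python
# payroll_rules: a hypothetical, separable module -- one task, reusable
# by any "program proper" that needs it. (Shown in one file for brevity;
# in practice it would live in its own module and be imported.)

def gross_pay(hours_worked, hourly_rate, overtime_threshold=40):
    """Compute gross pay, paying time-and-a-half beyond the threshold."""
    regular_hours = min(hours_worked, overtime_threshold)
    overtime_hours = max(hours_worked - overtime_threshold, 0)
    return regular_hours * hourly_rate + overtime_hours * hourly_rate * 1.5


# The program proper: it knows nothing about overtime rules; it just
# invokes the module's service.
if __name__ == "__main__":
    for hours, rate in [(38, 20.0), (45, 20.0)]:
        print(f"{hours} hours at ${rate}/hr -> ${gross_pay(hours, rate):.2f}")
```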

Surprisingly quickly after assembly language and modular programming came two of the most important milestones in software history: the advent of the high-order programming language and the operating system. Both came along in the mid-to-late 1950s (I can’t accurately recall which appeared first in the computing practitioner shops where I worked at the time). But these were perhaps the most profound tool-based breakthroughs in the entire history of the field. Tools called compilers translated programs written in a high-order language into machine language. The first such language was Fortran, and it remains (after various transmogrifications) in use today. Following quickly on its heels came COBOL, which also remains in use today. These languages were application-domain focused: Fortran spoke the language of scientists and engineers; COBOL spoke the language of business analysts.

Operating systems, as we know to this day, are tools that allow programmers to forget the mundane hardware-focused aspects of writing applications software and concentrate on the application problem to be solved. In a sense, operating systems were a unified collection of hardware-specific modules, providing the services required by the computer hardware in question. We were so grateful for the creation of the operating system back in those days that it was difficult to foresee a future (such as the current climate) in which operating systems would be a source of controversy and even divisiveness.

The 1950s rolled into the 1960s, and those creative leaps became ever more solidly embedded in the software field. New high-order languages emerged. Newer and better operating systems came along. The rate of these changes was amazingly rapid, but they were evolutionary, not revolutionary. Evolution involved creating programming languages that were domain independent (merging Fortran and COBOL into PL/I, for example) and even creating computer hardware that also straddled domains (most early computers had been either scientific or business oriented). New operating systems, of course, supported that more diverse hardware.

It was late in the 1960s that the academic computing disciplines came into being. First came Computer Science, and shortly thereafter Information Systems (Software Engineering didn’t come along as such until more than a decade later). I suspect that most readers will be surprised by how late in the evolution of the software field these disciplines appeared (note that 15 years had passed since the J. Lyons Company developed LEO). There was a lot of software being written before the denizens of ivy-covered halls began to ponder how best to do it.

At about the same time, some of the most profound applications systems were being developed. We tried—and largely failed—to build complicated and integrated Management Information Systems. (Commercial vendors like SAP and PeopleSoft began working on similar systems at about the same time, and more than a decade later became famously successful at it.) Reservations systems for airlines were successfully implemented. The operating system for the IBM System/360 was one of the largest software projects of all time, and it did what it was supposed to do. Huge and astonishingly complex space-related and weather-prediction systems were developed. We had thought, back in the 1950s and 1960s, that the applications we built then were complex and impressive. Little did we know what was to come!

As applications became larger and more complex, systems and tools and concepts to attack that complexity became the focus of the field. First, in the 1970s, came structured programming. It consisted of a collection of methodological do’s and don’ts about how to create software in a "structured" way. There were two amazing things about structured programming. The first was that it was hyped as a breakthrough in our ability to build software, and it was accepted and used by almost all programmers in almost all practitioner organizations. The second was that no research was ever performed to demonstrate that the claimed and hyped value existed. Studies conducted a decade later examined the evaluative literature on structured programming and found "equivocal" results that by no means supported the original hyped claims. It is important to note that, although the hype was clearly excessive, most people agree today that the structured approaches were beneficial, and in fact most of today’s programs are in some sense "structured."
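
For readers who never met the term: the heart of the discipline was to build all control flow from a handful of single-entry, single-exit constructs, namely sequence, selection, and iteration, rather than from arbitrary jumps. A minimal sketch follows (Python, with an invented grading example; it illustrates the constructs, not any particular 1970s methodology):

```python
# "Structured" control flow: only sequence, selection (if/else), and
# iteration (for/while), each with one entry and one exit -- no jumps.

def classify_scores(scores, passing_mark=60):
    passed, failed = [], []              # sequence
    for score in scores:                 # iteration
        if score >= passing_mark:        # selection
            passed.append(score)
        else:
            failed.append(score)
    return passed, failed                # single exit point

print(classify_scores([55, 72, 90, 40]))   # ([72, 90], [55, 40])
```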

The field was by no means finished with hyped approaches. Tools and languages were envisioned as techniques for "automating" the field of software, such that anyone—not just professional programmers—could do that job. CASE tools (computer-aided software engineering) and 4GLs (fourth-generation languages) were the technologies that would make that possible in the 1980s. Never mind that many CASE tools were purchased, then put aside and ignored (the standing joke of the time was that they became "shelfware"). Never mind that practitioners and academics had differing opinions on what 4GLs were (to practitioners, they were languages that generated reports from databases; to academics, they were non-procedural languages wherein the programmer specified what was to be done, but not the order in which to do it). It was no coincidence that at about this time Fred Brooks published his historically important "No Silver Bullet" article, in which he took the position that most breakthroughs in the software field had already happened, and that there quite likely wouldn’t be anything as exciting in the future as the languages and operating systems of the 1960s had been.
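
To make the 4GL distinction concrete, here is a hedged sketch using Python's built-in sqlite3 module (the table, its columns, and the data are invented): the procedural version spells out how to accumulate a report step by step, while the declarative query, in the spirit the academics had in mind, states only what result is wanted and leaves the order of operations to the system.

```python
import sqlite3

# Invented example data: a tiny orders table in an in-memory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("Acme", 120.0), ("Acme", 80.0), ("Globex", 45.0)])

# Procedural style: say HOW to build the report, one step at a time.
totals = {}
for customer, amount in db.execute("SELECT customer, amount FROM orders"):
    totals[customer] = totals.get(customer, 0.0) + amount
print(sorted(totals.items()))

# Declarative, 4GL-flavored style: say WHAT is wanted, not the order
# in which to compute it.
print(db.execute("SELECT customer, SUM(amount) FROM orders "
                 "GROUP BY customer ORDER BY customer").fetchall())
```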

And what happened to CASE and 4GL? My suspicion is that we still use them to this day, but the terms themselves have fallen into such disfavor that we rarely see them. And certainly, the hyped benefits were never achieved.

Structured programming was not the end of the notion of methodology as a breakthrough. Object-oriented (OO) approaches became all the rage, and the same kinds of benefits were claimed for them that were claimed for the structured approaches. It is difficult to be objective today about OO, since there has been no next big thing to replace it. But there are mixed claims about how much the OO approaches are used (some claim they are ubiquitous, but the studies I have seen show the penetration in computing organizations is less than 50%).

There are also mixed claims about the OO benefits—it is supposed to be a novice-friendly approach, but studies have shown that novices do better with functionally focused rather than object-focused approaches. It is supposed to facilitate reuse (recall the earlier discussion of modular programming), and studies have shown that for some application domains fully 70% of the code can be reused instead of written afresh (but similar studies have shown nearly the same benefits from non-OO approaches). It is also supposed to allow the creation of software in solution objects with a direct correlation to the objects in the problem to be solved, yet the OO field has now embraced "use cases," a decidedly functionally focused approach to defining system requirements. Once again, as the hype of OO dies away, we are beginning to see clearly that it is a good, but not the best, approach to building software.
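
As a toy illustration of that tension (Python; the library-lending class and the use-case function are invented, not drawn from any real system): the class is a solution object that mirrors a problem-domain object, while the use case reads as a functional, step-by-step scenario that merely drives it.

```python
# A solution object mirroring a problem-domain object (a lendable book
# copy), alongside a "use case" expressed as a functional scenario.

class BookCopy:
    """Problem-domain object: a physical copy that can be lent and returned."""
    def __init__(self, title):
        self.title = title
        self.on_loan = False

    def lend(self):
        if self.on_loan:
            raise ValueError(f"{self.title} is already on loan")
        self.on_loan = True

    def give_back(self):
        self.on_loan = False


def borrow_book_use_case(copy):
    """A use case: a step-by-step scenario, functional in flavor."""
    copy.lend()           # step 1: the member borrows the copy
    return copy.on_loan   # step 2: the system records the loan


print(borrow_book_use_case(BookCopy("The Mythical Man-Month")))   # True
```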


What remains in the collection of creative milestones in software history? Today’s contributions: the agile software development approaches and open source software. There are those in each of these camps who take the position that these are, indeed, magnificent creative milestones in the world of software development. And there are others who see the agile methodologies as appropriate only to small applications of a limited number of kinds, and open source as an emotional blip appreciated more by its fanatical supporters than by the software world in general.

We haven’t seen the last of software’s creative milestones, of course. Predicting what they might be could be a stimulating exercise. But my suspicion is that the reality of what is likely coming down the pike in the decade ahead will be vastly different from what all the prognosticators in the world might choose it to be. ’Twas always thus with creative milestones!
