Developing the Future

Software is arguably the world’s most important industry. The presence of software has made possible many new businesses and is responsible for increased efficiencies in most traditional businesses. Software, both directly and indirectly through the domains it automates, connects people as well as serves, entertains, educates, protects, heals, and nourishes. Although there are only about 12 million software professionals worldwide, their reach is far, for the software industry is a prime force behind the recent global economic renaissance.

That being said, there are three harsh realities that our industry faces: developing complex software of quality is wickedly hard, it’s not getting any easier, and there is a very real shortage of skilled men and women to do the work. In the face of such pressure, there are pragmatically only two things a development organization can do. First, the best way to reduce the risk of software development is to not develop any new software at all. That’s why the reuse of assets—from code to models to pure intellectual property—is an essential practice of every successful development organization. Second, the team must reduce the friction of development. Since there is an essential complexity to most interesting software, we can’t expect to reduce that complexity; we can only seek to manage it. That means raising the levels of abstraction in the components we create and eliminating the points of friction in the team and in its processes. With a team of one, there’s very little friction but there are also very few resources, certainly insufficient for all but the most trivial projects. As the team grows, resources increase, but so do the points of friction. Ultimately, that’s where the engineering of software becomes essential: managing requirements, iteratively and incrementally growing a system’s architecture, controlling change, and testing continuously are all practices that help to raise the level of abstraction and reduce friction within the project.

And yet, conventional wisdom suggests quite the opposite. Talk to all-too many development teams, especially in the Web domain, and you’ll find individuals wanting to engineer good software whose good intentions are sabotaged by the scarcity of time and resources, causing them to build just-barely-good-enough software. In such shops, you’ll find the attitude that software is disposable, structured teams are unnecessary, simple tools are sufficient, well-defined processes are burdensome, object-orientation is dead, reuse is dead, formal techniques are inconsequential, and, in short, software engineering is irrelevant.


As a professional software developer, I find that attitude irresponsible; as a human who lives in a world filled with software stuffed in all its interstitial spaces, I find that attitude frightening, for, increasingly, so much of what I do is invisibly touched by software of dubious quality. Gerald Kotonya wrote to me to say “No one doubts that software engineering has made great strides since its inception in the late 1960s. However, we are still delivering buggy and unreliable systems. Worse still, we have users believing it is normal for software to be buggy.”

Throughout my career, I’ve been a watcher of projects. I’m fascinated by the different ways that teams organize themselves and the different tools they use to do their work. From low ceremony processes such as you’ll find in the open source community, to high ceremony processes found in many large command-and-control systems, it’s clear there’s quite a diversity of practice, but it’s also clear that we still have a long way to go in making the engineering of software real.

In 2000, I spoke on the future of software and software engineering at the International Conference on Software Engineering in Limerick, Ireland. Alan Kay once said “the best way to predict the future is to invent it,” and so, to get a clearer picture of the future of computing, I polled about 700 professionals, from Turing Award winners to CTOs and a lot of people in between. Among the responses I received, I was surprised to see so many address the issue of process. Dick Fairley stated it perhaps most eloquently: “I see a bifurcation coming; or perhaps recognition of the bifurcation that has already occurred. On the one hand shoot-from-the-hip, Internet-time entrepreneurship will continue to flourish. On the other hand, certification of software engineers, as legislated in Texas and now being formalized by the IEEE Computer Society, will also gain ground. So we have the Athenians (entrepreneurs) and the Spartans (disciplined software engineers).”

This split between the Athenians and the Spartans is perhaps most clear among the so-called dot-coms. As Peter Sawyer wrote: “For me, one of the biggest challenges is figuring out how quality software can be reconciled with limited resources and very dynamic markets. In other words, how small start-ups can evolve both themselves and their products in the face of the need to get their products established and ahead of the pack. [For many such organizations], fairly basic good practice—specification documents, reviews, test plans—is sacrificed to short-term exigencies by time-to-market and other constraints.” Sawyer goes on to observe that “Maybe we need to accept that competent start-ups can achieve acceptable quality while they remain ‘product vision’-driven. If we accept that, we can look at how to help them to incrementally acquire the quality practices that are an overhead in the short term, but pay off as the company gets bigger, and as the product evolves and expands into multiple products and variants.”

This is perhaps one reason why the processes of the open source community and those of the extreme programming community have gained some traction. As Tim O’Reilly observed, “Open source software is important partly because it represents the first stage of something that will eventually become far more widespread: shared projects carried out over the Net by people who are geographically and organizationally independent.”

A lot of interesting software has been and will continue to be built by fresh teams who require processes very different than mature teams who must, as Sawyer says, manage multiple products and variants. Eric Brewer suggested to me that “we need lighter-weight processes that the users actually believe in, rather than tools/processes that they feel forced to use. This is the biggest failing of software engineering tools/processes by far.” Derek Coleman also observed, “There is a move away from software processes that are hierarchical and management driven (that is, the high ceremony ones that are done to you). The trend is to cooperative styles of development where management dictate is replaced by ethical considerations of community membership. Open source is the best known of these but inner source variants are becoming common.” Finally, Tom DeMarco wrote to say, “I think the strongest trend at work today is toward light methodology. In this class I include Booch Lite, Kent Beck’s XP, SCRUM, DSDM, Crystal Methods (by Alistair Cockburn), and Jim Highsmith’s Adaptive Development. Probably the most prominent of these is XP (extreme programming). It is the best of a new breed of approaches that thumb their noses at CMM and all other forms of fundamentalism. The focus has to be on building skills into the people, not building regimentation into the process. It requires no great leap of intellect to see that everything has to go faster in this age. You don’t go faster with bigger methodology; to get faster we have to move toward much lighter methodology. The CMM makes us wonder ‘What else shall we add?’ while the light methodologists are always asking ‘What can we take away?’”

In the future, will we thus see radical improvements in the process of engineering software? Al Aho is doubtful, and I vehemently agree with him: “Since software involves people, process and technology, we’ve made some advances in the last two areas, but the first remains constant. So I believe it is unlikely we’ll have the Moore-like curves for software productivity and quality that we see in silicon, optics, and wireless technologies.” As for people, everywhere I go, project members lament the lack of skilled developers to carry out their work. As Brian Reid pointed out, “When I was a young teenager in the early 1960s, there weren’t many programmers in the world. Now everybody needs to be a programmer.” In a way, this is what happened to the public telephone network: every user became an operator. Thus, it’s likely that we’ll see a growth of user-programmers in our domain. Yet, as Barry Boehm observed, “In 10 years, the world will have 100 million user-programmers—and 44% of their programs will continue to have mission-critical bugs. We sorely need the equivalent of seat belts and air bags for user-programmers.”

At least for the foreseeable future, then, we’ll likely see the dichotomy of high ceremony and low ceremony processes and an ultimate reconciliation of the two. We’ll also likely see a growth of user-programmers and a maturation of the software engineer.

Predicting the future is always a suspect activity, of course: you can either extrapolate from present events to the near future (in which case it’s difficult to go wrong) or you can make huge leaps that project to the distant future (in which case everyone who read your original projections will be dead, and so no one will be around to complain if you were wrong). In this case, I’m perhaps guilty of repeating the past. In commenting on my ICSE presentation, Dave Parnas observed that my “concluding statements could have been made at the first Software Engineering conferences in 1967 and 1970.” He went on to say “The only reason that we need to make such statements after all these years is that so many people are offering ‘easy solutions’ to hard problems.”

There are no easy solutions to the tough problems of building complex software of quality. But then, finding those solutions is what the future of software engineering is all about. In particular, I’m encouraged by the Software Engineering Body of Knowledge (SWEBOK) effort, found at

UF1 Figure. The “MC Escher Hub”: In the year 3001, people will live in a mind-boggling 3D universe; our skills at adapting to this multidimensional world will be one of the most notable changes in our evolution. —Jean François Podevin, illustrator
