Practical Programmer

Looking Into the Challenges of Complex IT Projects

Investigating the implications of developing conclusions based on faulty fundamental premises.

In 2004, the British Royal Academy of Engineering and the British Computer Society conducted a study and produced the report The Challenges of Complex IT Projects [2], which contains some important findings directly relevant to software research and practice. In this column, I would like to present, then comment on, those findings.

The study’s premise was basically one of a "software crisis." The report’s executive summary found it "alarming" that "significant numbers of complex software and IT projects still fail to deliver key benefits on time and to target cost and specification." To explore that issue, it gathered a body of evidence "collected … from more than 70 individuals, encompassing senior directors, managers, project managers, and software engineers."

The study’s conclusions can be expressed quite succinctly:

  • The levels of professionalism in software engineering are very low.
  • Education in the U.K. is not producing qualified software practitioners.
  • Project management is poorly understood.
  • Risk management is seldom applied.
  • The role of the systems architect is not appreciated.
  • There is an urgent need to promote "best practice."
  • Basic research into complexity is needed.

Now, what is my practitioner’s reaction to that premise and those conclusions?

First of all, I have spoken and written on many occasions about my disbelief in the "software crisis" view of our field. Sure, there are "horror stories of colossal IT project failures," as the report states (I have retold a number of those stories myself), but there are grounds for serious disagreement over how common that problem is. The report cites two U.K. studies showing success rates of only 16% and 6%, and mentions the now-questioned Standish report success rates of 16%–34% (see my August column for more on the Standish report). But it fails to mention more recent findings; Cutter, for example, found in a 2005 study that 60% of surveyed companies reported only a 10% rate of severe project failure [1].

With respect to the survey population of "70 individuals…," it is difficult to find fault. The report names the participants, and although there is a bias toward upper management types and academics (approximately 14% of the participants were academics), the population the survey examined should have been well qualified to present valid opinions on the status of software’s practice.

Now for those seven conclusions…

Is the level of professionalism of software engineers "very low"? This is a key conclusion of the study. To be honest, I know of no research study that has even examined this issue, and certainly I know of none that supports this conclusion. My own experience is that the software engineers I deal with are knowledgeable and competent. I know that many of my consulting colleagues do indeed see low professionalism, but I've always attributed that to the "psychiatrist syndrome": if you deal with people who have problems, you'll tend to believe that everyone has those problems. I believe the report has no justification for this conclusion. (It is interesting to note that the report recommends a form of professional certification to solve this problem. That, of course, is a deeply controversial issue in our field.)

Is education producing poorly qualified software professionals? I can't comment on U.K. education, of course, but I do see a serious discrepancy between what the typical computer science department teaches and what the typical software professional needs to know. Lethbridge has done some excellent studies of this issue, and the results tend to support the report's concern.

Is project management poorly understood in the software field? I have to confess that I haven’t a strong belief in this area. I’ve worked for wonderful and awful project managers in my time, but if I were called upon to decide what made some wonderful and some awful, the differences I would describe wouldn’t fall into the category of things that can be taught. I suspect that there are wonderful and awful project managers in other fields as well, and therefore I have a difficult time seeing this as a special software concern.

How about risk management? Now here there really is some nice data. In a KPMG longitudinal study of severely failing projects, the authors found that none of the projects in trouble had used risk management [3]. It is easy for me to conclude that the British report is "right on" in this matter.

Should the "systems architect" have a key role? This conclusion of the report makes me a bit nervous. It is difficult to see anything wrong with naming and authorizing a key role for a systems architect, and yet the field of software architecture seems terribly immature to me, and I'm not sure I can conclude that a systems architect could guarantee dodging severe problems as they arose.

And what of "best practice"? This is a topic that also makes me nervous. I deeply believe in what I prefer to call "best of practice" techniques and methods. But "best practice" all too often seems drawn from textbooks and research studies rather than from what has been demonstrated to work successfully in the field. I have received a lot of reports from colleagues, for example, who find it nearly impossible to interpret and apply some of the standards that describe how to build successful software. My suspicion is that those standards suffer from the same thing that all too many "best practices" suffer from: they are based on what someone thinks ought to work rather than on what someone has discovered actually works. "Reality is the murder of a beautiful theory by a gang of ugly facts" is one of my favorite sayings.

And finally, how about basic research into complexity? Here's one where I can happily shout "hooray" and come out in largely unqualified support. There is far too little examination of the realities of software practice, where problems tower dramatically over the toy problems typically examined by academe, and where professionals bring a collection of skills and knowledge far more profound than those of the typical academic student team. But I have a fear here as well. If "basic research" into "complexity" bogs down in the same old research that explores minuscule problems in depth, then the goal of this conclusion will be subverted and the result will tend to be valueless.

So where do I stand on balance? I deeply believe in the need to study The Challenges of Complex IT Projects. I think the sponsoring organizations showed a disappointing tendency to base their work on demonstrably faulty premises. But I think their conclusions, if implemented, are likely to do more good than harm. In the software field circa 2006, I’m afraid, it doesn’t get a whole lot better than that.

References

    1. Cutter Consortium. Some new data in the project failure statistics wars. The Software Practitioner (Nov. 2005).

    2. The Royal Academy of Engineering and The British Computer Society. The Challenges of Complex IT Projects, 2004; www.bcs.org/server.php?show=conWebDoc.1167.

    3. Runaway projects: Cause and effects. Software World (U.K.), 1989, 1995.
