It will come as no surprise to most of us in the software profession that it is difficult to determine what the state of software’s practice is. By contrast, it is easy to determine the state of the art. There are few sources to tell us about the state of the practice (what practitioners actually do), whereas there are many conferences and journals that tell us about the state of the art (what theorists believe that practitioners should do).
In this column, I will describe some steps I recently took to overcome that problem, one that has concerned me for some time. I like to tell people that "my head is in the theory of software engineering, but my heart is in its practice." I deeply sympathize with the practitioner, and I think practice is in general far more successful than advocates of the "software crisis" point of view give it credit for.
Given all of that, I set out a while ago to try to gain a better understanding of this elusive state of the practice. To do that, I began by asking the IEEE Software editorial board, of which I am a member, whether they would be willing to publish a special issue devoted to this very topic. After we sorted out the difference between describing and prescribing, and I made it clear that the issue I had in mind would be purely descriptive, the board agreed to pursue the matter, and I agreed to be the guest editor for the issue.
The biggest problem with the special issue was getting the authors of contributions and the reviewers of those contributions to all row in the same direction. All too often there was a struggle with that same distinction of describing vs. prescribing (far too many people assume that the role of publications is to prescribe, and far too few even realize that it is possible to describe, especially for things related to practice). But we finally overcame that problem, thanks to a lot of very nicely done research that genuinely sought to describe, through surveys, case studies, and "been-there, done-that" experience reports, what that state of the practice really is. The issue was published in November 2003.
I will briefly summarize the findings of that special issue here, because I think the researchers who studied practice and wrote the articles we published demonstrated a good understanding of what practice, circa 2000–2003, is all about. I hope this sampling whets your appetite enough to cause you to go look at that special issue for a more in-depth understanding or to do some follow-up research on today’s state of the practice, circa 2007.
The issue included a collection of invited articles from well-known writers about software’s practice. Capers Jones set the stage nicely with an article that emphasized the diversity of practice, listing six very different types of systems (military, hardware support, commercial (for lease/sale), outsourced, management information, and end user), and then defined the types of processes used and tasks performed for each of those types of systems, and for each of six very different sizes of projects. The point Jones was making, that it is difficult to define a single state of the practice in the face of this diversity, was nicely reinforced by an invited "Loyal Opposition" column written by Elaine Weyuker, who took the position that there really is no such thing as a single state of the practice.
Michael Cusumano and several colleagues presented the findings of their international study of practice, providing survey data on the prevalence of various project approaches. The practices used, ranked from most to least common, were design reviews (88% of projects), functional specifications (85%), regression testing (83%), code reviews (79%), and beta testing (73%), down to pair testing (41%) and pair programming (35%). Most software was being built for workstation use (66%), with smaller shares for PC (15%) and mainframe (12%) applications.
Donald Reifer concluded the series of invited articles with an analysis of the similarities and differences between the state of the art and the state of the practice, identifying, sometimes surprisingly, gaps in both states that need filling (for example, the state of the art needs to look more into "short-life systems such as those on the Internet").
The issue also included several contributed articles, which were equally exciting. Timothy Lethbridge and his colleagues studied the usage of software documentation in practice, obtaining sometimes-counterintuitive findings about the usefulness of various documents. This usefulness was measured in terms of "correlation between a document type’s perceived accuracy and its consultation frequency," and the authors found the highest scores for testing, low-level design, and requirements documents; the lowest scores were for specifications. The authors concluded "the closer you get to the real code, the more accurate the document must be for software engineers to use it."
The beginning of the life cycle, requirements engineering, was studied by Colin Neill and a colleague, who found considerable diversity in the modeling notations used to describe requirements: 51% were informal, 27% semiformal, and only 15% formal. Their conclusions included "formal methods are rarely used, ad hoc approaches do not impact product quality, the waterfall life cycle is still popular, object orientation is not a dominant approach, and the perception among practitioners is that failure is infrequent."
Marcus Ciolkowski and colleagues examined the state of the practice in software reviews, and found much less frequent usage of that approach than many might expect. Regular usage of reviews was 40% for requirements/design and 30% for code. These reviews were conducted in a meeting setting 40% of the time. Their conclusion? Reviews are "conducted regularly but unsystematically."
Andreas Birk and colleagues looked at product-line engineering and found it being used "more and more frequently" on "more and more complex projects." "Organizations," they found, "tend to avoid establishing a dedicated organizational restructuring toward separate product-line tasks." Rather, they found, the companies tended to use a shorter-term "task force effort," with the products so developed later evolving into project usage.
Particular kinds of projects are especially interesting, and Bas Graaf and colleagues looked at the state of the practice for embedded software. They found that such work is often hardware-driven (the product involved is usually a hardware device, such as a CD player, and the software is seen as an enabler of the hardware), involves many very different stakeholders, and is "bottom-up" (because it is rare to start afresh with a new project; usually an existing system is tailored to meet new needs, which also accounts for why most tools, being top-down in nature, are not used on embedded projects). Requirements specifications, consistent with the other studies mentioned in this column, are almost always in informal natural language, with any diagrams using an ad hoc variant of UML. Requirements tracing in such applications is "important," the authors say, but difficult, due to the inherent complexity of requirements interactions.
And finally, in perhaps the most thorough research among the articles in the special issue, Richard Baskerville and colleagues used case studies, surveys, and "discovery colloquia" to come to grips with what they called "Internet-speed software," considering how and whether it differed from more traditional project approaches. They found that Internet-speed projects released new versions frequently, used many tools, embedded customers in the development team, made a strong effort to use a stable underlying architecture across projects, employed lots of reuse (with "components" and "wrappers"), and rarely considered maintenance (it was more common to throw away and rebuild than to revise). These projects tended to use "just enough process to be effective," "tailored methodologies daily," and in general favored agile-like approaches. Their conclusion? Internet-speed projects are indeed different from traditional ones.
There you have it: a nice collection of findings on the state of the practice of software engineering, obtained by researchers who were as curious as I was about just what practitioners are doing. Note the distinct lack of unanimity in the findings; as Capers Jones and Elaine Weyuker especially told us, there is huge diversity in both the nature of software projects and the methods of attack used on them. I think more work is needed to help us all keep up to date with the state of software’s practice, but in my opinion the studies appearing in the special issue serve as a good starting point.