In software engineering, the traditional description of the software life cycle is based on an underlying model, commonly referred to as the “waterfall” model (e.g., [4]). This model initially attempts to discretize the identifiable activities within the software development process as a linear series of actions, each of which must be completed before the next is commenced. Further refinements to this model recognize that such completion is seldom absolute and that iteration back to a previous stage is likely. Various authors’ descriptions of this model differ in the level of detail at which the software-building process is viewed. At the most general level, three phases of the life cycle are generally agreed upon: 1) analysis, 2) design and 3) construction/implementation (e.g., [36], p. 262; [42]) (Figure 1(a)). The analysis phase extends from the initiation of the project through user-needs analysis and a feasibility study (cf. [15]); the design phase covers the various concepts of system design, broad design, logical design, detailed design, program design and physical design. Following the design stage(s), the computer program is written; the program is tested, in terms of verification, validation and sensitivity testing; and, when found acceptable, it is put into use and then maintained well into the future.

In the more detailed description of the life cycle a number of subdivisions are identified (Figure 1(b)). The number of these subdivisions varies between authors. In general, the problem is first defined and an analysis of the requirements of current and future users is undertaken, usually by direct and indirect questioning and iterative discussion. A feasibility study should be included in this stage. Following this, a user requirements definition and a software requirements specification (SRS) [15] are written. The user requirements definition is expressed in the language of the users so that it can be agreed upon by both the software engineer and the software user. The software requirements specification is written in the language of the programmer and details the precise requirements of the system. These two stages comprise an answer to the question of WHAT? (viz. problem definition). The user-needs analysis stage and the examination of the solution space are still within the overall phase of analysis, but they are beginning to move toward not only problem decomposition but also the highlighting of concepts likely to be of use in the subsequent system design, thus beginning to answer the question HOW? On the other hand, Davis [15] notes that this division into “what” and “how” can be subject to individual perception, giving six different what/how interpretations of an example telephone system. At this requirements stage, however, the domain of interest is still very much that of the problem space. Not until we move from (real-world) systems analysis to (software) systems design do we move from the problem space to the solution space (Figure 2). It is important to observe the occurrence and location of this interface. As noted by Booch [6], this provides a useful framework in object-oriented analysis and design.

The design stage is perhaps the most loosely defined, since it is a phase of progressive decomposition toward more and more detail (e.g., [41]) and is essentially a creative, not a mechanistic, process [42]. Consequently, systems design may also be referred to as “broad design” and program design as “detailed design” [20]. Brookes et al.
[9] refer to these phases as “logical design” and “physical design.” In the traditional life cycle these two design stages can become both blurred and iterative; but in the object-oriented life cycle the boundary becomes even more indistinct.

The software life cycle, as described above, is frequently implemented based on a view of the world interpreted in terms of a functional decomposition; that is, the primary question addressed by the systems analysis and design is WHAT does the system do, viz. what is its function? Functional design, and the functional decomposition techniques used to achieve it, is based on the interpretation of the problem space and its translation into the solution space as an interdependent set of functions or procedures. The final system is seen as a set of procedures which, apparently secondarily, operate on data.

Functional decomposition is also a top-down analysis and design methodology. Although the two are not synonymous, most of the recently published systems analysis and design methods exhibit both characteristics (e.g., [14, 17]), and some also add a real-time component (e.g., [44]). Top-down design does impose some discipline on the systems analyst and program designer; yet it can be criticized as being too restrictive to support contemporary software engineering designs. Meyer [29] summarizes the flaws in top-down system design as follows:

1. top-down design takes no account of evolutionary changes;
2. in top-down design, the system is characterized by a single function (a questionable concept);
3. top-down design is based on a functional mindset, and consequently the data structure aspect is often completely neglected;
4. top-down design does not encourage reusability.

(See also the discussion in [41], p. 352 et seq.)
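To make this contrast concrete, the short Python sketch below sets a purely functional decomposition, in which free-standing procedures operate on passive data, beside an object-oriented formulation that bundles the data structure with its operations. The payroll example and all of its names are hypothetical and do not come from the original text; it is offered only as a minimal illustration of Meyer’s third and fourth points.

```python
# Hypothetical sketch (not from the original article): a tiny payroll report,
# first decomposed functionally, then expressed as an object that owns its data.

# --- Functional decomposition: the system is a set of procedures ---
def read_records(path):
    """Step 1: parse 'name,hours,rate' lines into plain tuples."""
    with open(path) as f:
        return [tuple(line.strip().split(",")) for line in f if line.strip()]

def compute_pay(records):
    """Step 2: derive pay; the record layout is implicit in every procedure."""
    return [(name, float(hours) * float(rate)) for name, hours, rate in records]

def print_report(pay):
    """Step 3: output; a change to the record format touches all three steps."""
    for name, amount in pay:
        print(f"{name}: {amount:.2f}")

# --- Object-oriented counterpart: data structure and operations together ---
class PayrollRecord:
    """The data structure is named explicitly and carries its own behaviour."""
    def __init__(self, name, hours, rate):
        self.name, self.hours, self.rate = name, float(hours), float(rate)

    def pay(self):
        return self.hours * self.rate

    def report_line(self):
        return f"{self.name}: {self.pay():.2f}"

if __name__ == "__main__":
    # Same data processed both ways, without touching the file system.
    raw = [("Ada", "40", "25.0"), ("Grace", "38", "30.0")]
    print_report(compute_pay(raw))
    for rec in (PayrollRecord(*r) for r in raw):
        print(rec.report_line())
```

Adding, say, an overtime field to the record ripples through every procedure in the functional version, whereas in the object-oriented version the change is confined to PayrollRecord, which can then be reused wherever the data structure itself is needed.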