
Forum

Extending and Shrinking UML

I was delighted to see the many positions taken on upgrading from the stopgap solution UML 1 to a well-thought-out successor ("What UML Should Be," Nov. 2002).

The notion that a large system should be addressed through several rounds of refinement and recursive applications of UML is very much to the point, though something tricky is also going on. The input at the top level in round one is essentially unformalized, while at the subsequent recursive invocations (to elaborate subsystems and subcomponents) developers have the luxury of formalized input. Hence, it is prudent to distinguish between these two settings.

Though the situation at the top level warrants more scrutiny, it is often glossed over when formulating UML requirements. The sponsor is a key stakeholder at the top level of a development project, bringing to the party a requirements document driven by business needs, in short, a description of what the target system is supposed to do.

The developer’s role when interacting with sponsors is quite different from the role played when design and implementation actions are being planned or supported. The former requires figuring out exactly what the task is about; the latter involves the best maneuvers in the solution space. The interplay between these roles can be intricate, especially when the task is ill-defined. Therefore, throwaway prototyping might be worthwhile.

However, figuring out what a sponsor wants, detecting gaps in the business requirements, and challenging the sponsor (after establishing trust) about the context, future needs, apparent contradictions, and more are all early activities in which UML plays a role because it offers a different perspective on business requirements.

The following extreme case must be avoided. Suppose the sponsor and the developer agree on what is to be done, and the developer then delivers the result. The sponsor proclaims, "Indeed, you have done what I told you, but it’s not what I want." Though an early prototype would have prevented such disappointment, a developer can do better, for example, by playing devil’s advocate and (while building the initial model) engaging the sponsor in a dialogue that reveals hidden positions, desires, and assumptions. The developer must be able to switch smoothly between the problem-description space and the solution space.

The entirety of UML is not required for dealing with problem understanding. A subset of UML—call it UML-A (for UML OO Analysis)—should be used to "rewrite" the unformalized requirements document into a formal version, without committing to how the system would operate. UML-A should be simple enough that the sponsor helps validate that the model captures the intent expressed by the requirements; for example, the sponsor should be able to confirm that use cases formulated in plain English (and optionally captured in diagrams) are faithfully represented in interaction diagrams and scenario diagrams.

Extending a UML-A output model to address design issues and committing to how the target system will operate is another story.

The executability of a UML model is very handy. High-level executability opens the door for iterated smart compilation in conjunction with meaning-preserving transformations, ultimately leading to (semi)automatic coding. Thus, the code would be guaranteed to correspond to the high-level executable model.

How can a developer ascertain that a high-level executable model is really OK? Validation can be achieved by showing that an executable model satisfies a declarative behavior specification. This entails ensuring that the behavior descriptions of a UML-A model go beyond use-case descriptions.

Declarative behavior specifications use pre- and post-conditions of operations in state-transition diagrams. But classical pre- and post-conditions are not sufficient for dealing with today’s software. Declarative semantics are also needed for process and thread creation and destruction, for timing constraints, and for (a)synchronous send-and-forget primitives. Recursive application of UML to lower-level modules also requires declarative semantics for pointer manipulations to guard against memory leaks and dangling pointers.

UML allows the formulation of constraints that can be used for pre- and post-conditions and loop invariants, but these are not sufficient for the concepts described earlier. This insufficiency reflects a flawed "engineering" approach to software development rather than an applied-logic approach. The lack of off-the-shelf semantics for the whole range of such advanced concepts is no justification for their omission from UML.
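
To make the two preceding paragraphs concrete, here is a minimal illustrative sketch (mine, not de Champeaux’s): a toy executable state-transition model of a hypothetical Valve, written in Java, whose operations are checked at run time against a declarative specification given as pre- and post-conditions. The Valve, its states, and all identifiers are invented for illustration only.

    // A minimal sketch: an executable state-transition model checked at run
    // time against a declarative specification (pre- and post-conditions).
    // The Valve example and all names are hypothetical.
    public final class ValveModel {

        enum State { CLOSED, OPEN }

        private State state = State.CLOSED;

        // Operation "open" of the state-transition diagram.
        // Declarative spec:  pre: state = CLOSED   post: state = OPEN
        void open() {
            require(state == State.CLOSED, "pre-condition of open() violated");
            state = State.OPEN;   // the executable transition
            require(state == State.OPEN, "post-condition of open() violated");
        }

        // Operation "close".
        // Declarative spec:  pre: state = OPEN     post: state = CLOSED
        void close() {
            require(state == State.OPEN, "pre-condition of close() violated");
            state = State.CLOSED;
            require(state == State.CLOSED, "post-condition of close() violated");
        }

        private static void require(boolean condition, String message) {
            if (!condition) throw new IllegalStateException(message);
        }

        // Validation in the sense of the letter: execute the model and confirm
        // that every exercised transition satisfies its declarative spec.
        public static void main(String[] args) {
            ValveModel valve = new ValveModel();
            valve.open();
            valve.close();
            System.out.println("All exercised transitions met their pre/post-conditions.");
            // Note: classical pre/post-conditions like these say nothing about
            // thread creation/destruction, timing constraints, or asynchronous
            // send-and-forget messaging; those need additional declarative semantics.
        }
    }

Even in this toy form, the gap noted above is visible: the contracts constrain the state before and after each operation but say nothing about thread lifecycle, timing, or asynchronous message delivery.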

The complaint of "yet another extension" following the addition of declarative semantics to UML can be countered by raising the level of rigor of UML to force the demarcation of a core fragment. The bar would be high for additional extensions because their semantics would have to be formulated to maintain the overall consistency of all submodels.

UML without declarative semantics is like quicksand, an unsuitable foundation for capturing the requirements of the next generation of safety-critical systems.

UML-A needs to be demarcated as a subset of UML that captures what a target system is supposed to do and that can be appreciated by nontechnical stakeholders. Also needed is the ability to associate declarative semantics and state-transition diagrams in order to rigorously double-check an executable model for safety-critical applications.

Dennis de Champeaux
San Jose, CA

Give Meaning to the Status of Software Engineer

The special section on licensing software engineers (Nov. 2002) did an excellent job raising the question of whether they should be licensed, but none of the articles provided an answer.

Knight and Leveson came closest, though they addressed only whether software developers should be licensed by existing licensing bodies, using the same criteria as in other areas of engineering. The established procedures are problematic for many engineering disciplines; each difficulty they cited applies equally to all of them. None of their arguments claimed that licensing is not needed, only that the existing procedures aren’t the right ones.

Even the popular argument that computer science changes too quickly to allow licensing could be made for the other disciplines as well. Since starting my working life in electrical engineering, I have seen more fundamental changes in that field than in computer science. However, my electrical engineering education remains valuable because it stressed fundamental science and mathematics. The same approach can be applied in software engineering.

The problems caused by rapidly changing technology are actually a subset of the problems caused by the fact that nobody can be familiar with all technologies at once. Educators should teach fundamentals, illustrating them with example technologies. Graduates stay current by learning about new technology as needed. Licensing bodies should examine candidates for licenses accordingly. Unfortunately, many licensing bodies in traditional engineering fields simply don’t examine candidates this way; none do so for software engineers.

The other articles (including mine) explored particular jurisdictions. For example, Bagert and I each wrote with the implicit assumption that licensing is needed, leaving arguments in support of that position to other articles. McCalla gave another view of the same story but assumed that no evaluation of basic preparation is possible. His arguments would apply equally to disciplines like medicine where licensing is successful.

The licensing discussion should continue; we haven’t addressed the core issues. I hope all the authors agree that the present situation, in which anyone can identify him- or herself as a software engineer and work on critical software, can be improved. Equally undesirable is the fact that many of the licensed engineers developing critical software do not know or apply the relevant computer science and mathematics.

The term "software engineer" is therefore meaningless; only an expert can differentiate people with the minimal qualifications from those who are truly not qualified. The next round of discussions should include proposals for improving this situation rather than dismissing the procedures now in place.

David Lorge Parnas
Limerick, Ireland

The Art of the Paper Review

I enjoyed Palsberg and Baxter’s "Teaching Reviewing to Graduate Students" (Dec. 2002). Their experience sounds very much like my own teaching reviewing (using similar techniques) to graduate students at Stanford University.

I’d like to offer an additional idea. With the help of several authors, I obtained copies of major conference papers in their original form (the version read by the conference program committees). Sharing them with students was a wonderful experience, as the students could see how the papers evolved while the authors responded to the reviews. In a few cases, the authors still had the original conference reviews and allowed me to share those with the students as well.

Craig Partridge
Cambridge, MA

No Waiting for Enterprise-Integration Products

As a professional working for a software company in the enterprise integration arena (Ascential Software, Westborough, MA), I have some comments about Fernando Berzal et al.’s Technical Opinion ("Component-based Data Mining Frameworks," Dec. 2002). Although its arguments were valid conceptually, it ignored tools that are commercially available today, including Ascential’s DataStage.

These tools provide for component-based development of highly complex and scalable data-integration solutions by means of intuitive graphical environments; components include RDBMS access, aggregation, sort, merge, transformation, and join. Most DataStage-based implementations involve no programming at all, though they are easily adapted to include elaborate custom coding. DataStage and similar products are also referred to as extract-transform-and-load tools. In combination with OLAP tools from a variety of vendors, they are a realization of the concepts, as well as the functionality, proposed by the authors.

The column seemed displaced in time, ignoring sophisticated solutions available today from a number of software vendors.

Julio Lerm
Chicago, IL

Mathematical Rigor for the Social Sciences

Felipe Castel’s Viewpoint ("Theory, Theory on the Wall," Dec. 2002) reminds me of John von Neumann and Oskar Morgenstern’s efforts (described in their 1964 book Theory of Games and Economic Behavior) to subject economics, and indeed all the social sciences, to the rigor of mathematics; they also tried to make economic concepts (such as utility) measurable. Such rigor and measurement were necessary for the success of the physical sciences. Progress in the social sciences is not possible unless they are subjected to the same rigor.

Oladokun Olajoyegbe
Lagos, Nigeria
