September 1990 - Vol. 33 No. 9


Features

Opinion

From Washington: EC directives aim for market harmony

The 1992 unification plans for the 12-nation European Community (EC) have surely been among the most dissected blueprints of the year. Politicians ponder trade agreements, economists refigure potential revenues, and media attention is unrelenting. Absorbing it all are those U.S. high tech industries hoping to make a lasting impression on what the economists, politicians and media all predict will be a $4 trillion market of over 300 million consumers.

Harmony has become the slogan adopted for the project. The EC Commission has spent the past five years identifying and implementing a program of close to 300 directives and regulations that would allow for the free movement of consumer products within the community. The goal of these directives, in addition to promoting European commerce and fair business competition, is to eliminate possible physical, technical or fiscal trade barriers. The result, they hope, is one harmonious marketplace.

Watching every note of this orchestration has been the U.S. Department of Commerce (DoC). Working with an internal program that includes the participation of senior officials from the International Trade Administration, the DoC has examined over 185 of the adopted or proposed directives issued by the EC Commission. Moreover, it consulted with trade associations and industry representatives to explore how these directives relate to current U.S. business practices and determine how they might affect future EC business dealings.

The DoC has published the results of its analyses in a three-volume series that examines directives for a rainbow of products and businesses. The recently released EC 1992: A Commerce Department Analysis of European Directives, Volume 3 features the Department's final roundup of EC directives—primarily those stipulating technical requirements, nine of which pertain to computers and telecommunications.

Much attention has focused on how the high tech industries, particularly computers, software, telecommunications, and information technology, will be affected by EC standards, certification and testing issues. (See Communications, April 1990 and July 1990.) According to Myles Denny-Brown, an international economist and coordinator for EC 92 activities for information technology industries at the DoC, the high tech industry regards EC market potential with cautious optimism. “I believe (industry) feels there is the possibility for some real liberalization there,” he says. “But there is also the possibility of some restrictiveness.”

Denny-Brown points out that standards and procurement issues are particularly important to building a competitive environment that would allow market growth to take off the way it should. (See sidebar.)
Opinion

Personal computing: compuvision or teleputer

Historically, the computer and communication industries have been separate, although both worked with electronically encoded information and shared similar technology. The regulations that kept computing and communication apart began to crumble in 1956 when the FCC ruled that Hush-a-Phone could attach equipment to the AT&T network under special circumstances. In 1959 the FCC opened a portion of the microwave spectrum, and the 1968 Carterfone case allowed all kinds of equipment to be tied to the network. Today, IBM is in the communication business and AT&T markets computers, but many feel that the distinctions among the computing, communication, news, and entertainment industries will blur or disappear. The question is, who will lead the charge—the computer companies, entertainment companies, toy companies, or phone companies? To put it another way, will the home computer swallow up the television set or will the television set become a computer in disguise? Not surprisingly, computer companies and entertainment companies have different answers to this question, and the debate was brought into focus by announcements at the Fifth Microsoft International CD-ROM Conference and Exposition. Let us look at the question of the home computer versus the smart TV and then come back to some of the other interesting announcements at the show and the current state of multimedia applications.
Research and Advances

Introduction—object-oriented design

Object-oriented, a buzzword of the late 1980s, has evolved into an accepted technology that has recognized benefits for the software development process. In its progression from a purely procedural approach, software development reached a data-driven—object-based—approach, and has grown beyond that to the object-oriented approach. The impact of the object-oriented approach is not limited to the design portion of the software development life cycle—its effects are evident at every phase. One of the strengths of the object model is that it provides a unifying element that is common to every phase of the life cycle. This uniformity provides a smooth transition from one phase to the next.

The article by Henderson-Sellers and Edwards presents a revision of the traditional life cycle based on the object-oriented approach. It discusses the unique view of the design process and describes how it works: the process takes a specified problem and decomposes it. The resulting product forms the framework for a computer-based solution to the problem. Object-oriented techniques begin this decomposition process in the analysis phase and carry it on into the design phase. A modeling paradigm is used for the decomposition process: the top layer of an object-oriented system is a model of the real-life situation for which the software system is being created. The underlying layers provide the implementation of this model.

The “pieces” produced by object-oriented techniques are as unique as the design perspective. Their obvious similarities to and subtle differences from abstract data type (ADT) technology have led to much discussion of objects and classes in terms of ADTs. The unique coupling of data and behavior in object-oriented components provides much more than a syntactic distinction from the usual ADT. Added to the modeling approach, it produces a recognizably different approach to systems development.

The term object-oriented is defined differently by different people. Many professionals agree with the basics of Wegner's definition [1] that object-oriented includes three concepts: objects, classes, and class inheritance. Some would add a variety of other requirements, including such concepts as polymorphism, dynamic binding, and encapsulation. The article by Korson and McGregor provides an overview of these concepts, leaving room for the reader to decide which to include and which to exclude; it also provides an overview of the basic concepts of object-oriented design.

The production of software in an increasingly competitive environment is making reuse a priority of software professionals. The popularity of the object-oriented technique is due, in part, to its support for reuse. Two important factors influence reuse: first, it is necessary to have a set of high-quality components that are worth reusing; second, the components must be well defined, easy to integrate, and efficient. Meyer's article presents some experiences in developing the classes for the Eiffel library; it also discusses characteristics of the library. For components to be reused, the designer must have the means to locate a component which models an entity in the current problem. Not only must the component be located, but often necessary supporting pieces must be found as well. The article by Gibbs et al. provides information on the management of classes and the software components of the object-oriented paradigm.

Tom DeMarco, in a recent interview [2], declared parallel computing to be the emerging new paradigm. According to DeMarco, object-oriented techniques will be an integral partner in this emergence. DeMarco observed that designing with objects preserves the natural parallelism in a problem. Agha addresses models for parallel objects, presenting an overview of the problem and focusing on the actor model as a possible solution. He presents examples of design issues when using the actor model. He also considers a basic reflective design architecture.

The importance of well-defined and well-managed abstractions in the software development process is discussed in the articles by Meyer and by Gibbs et al. as they explore what has come to be called the software base, the set of software components from which future products will be built.

The unique components developed by object-oriented methods are characterized by an interface that is separate from the implementation of that behavior. The designer is free to concentrate on modeling the problem at hand, either by developing specific classes or by locating and reusing existing classes that model some subset of the needed behavior. Meyer focuses on what he quotes McIlroy as terming a “software components subindustry” [3], presenting a case study of the development of the Eiffel libraries.

The final article in this special issue, by Henderson-Sellers and Edwards, discusses modifications to the traditional life cycle supported by the object-oriented approach. The modified life cycle recognizes the iterative nature of the development process and incorporates that characteristic into its model.

Wirfs-Brock and Johnson present a sampling of current research into several aspects of object-oriented design. Their survey includes efforts to improve reusability through design technique and paradigm-specific tools. The works are representative of the broad spectrum of research activity currently under way.

We would like to thank the authors in this special issue for their hours of work, both in developing their own articles and in evaluating and commenting upon the other articles.
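As a concrete touchstone for the terminology used throughout this issue, the following sketch shows Wegner's three ingredients (objects, classes, and class inheritance) together with dynamic binding. It is our own minimal illustration, written in C++ simply because that language appears elsewhere in the issue; the class names are invented for the example and do not come from any of the articles.

#include <iostream>

// A class couples data (the balance) with behavior (the operations on it).
class Account {
public:
    explicit Account(double balance) : balance_(balance) {}
    virtual ~Account() {}
    virtual const char* kind() const { return "plain account"; }
    void credit(double amount) { balance_ += amount; }
    double balance() const { return balance_; }
protected:
    double balance_;
};

// Class inheritance: SavingsAccount is a specialization of Account.
class SavingsAccount : public Account {
public:
    SavingsAccount(double balance, double rate) : Account(balance), rate_(rate) {}
    const char* kind() const { return "savings account"; }   // overrides the base version
    void addInterest() { credit(balance_ * rate_); }
private:
    double rate_;
};

int main() {
    SavingsAccount s(100.0, 0.05);   // an object: an instance of a class
    s.credit(50.0);
    s.addInterest();

    Account* a = &s;                 // polymorphism: viewed through the base interface
    std::cout << a->kind() << ": " << a->balance() << '\n';
    return 0;                        // dynamic binding picks the SavingsAccount kind();
}                                    // prints "savings account: 157.5"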
Research and Advances

Understanding object-oriented: a unifying paradigm

The need to develop and maintain large complex software systems in a competitive and dynamic environment has driven interest in new approaches to software design and development. The problems with the classical waterfall model have been cataloged in almost every software engineering text [19, 23]. In response, alternative models such as the spiral [2] and fountain [9] have been proposed. Problems with traditional development using the classical life cycle include no iteration, no emphasis on reuse, and no unifying model to integrate the phases. The difference in point of view between following data flows in structured analysis and building hierarchies of tasks in structured design has always been a major problem [4]. Each system is built from scratch and maintenance costs account for a notoriously large share of total system costs. The object-oriented paradigm addresses each of these issues.

A look at the object-oriented software life cycle, as described by Meyer [5], Coad and Yourdon [4], and Henderson-Sellers and Edwards [9], identifies the three traditional activities of analysis, design, and implementation. However, each of the referenced descriptions eliminates the distinct boundaries between the phases. The primary reason for this blurring of boundaries is that the items of interest in each phase are the same: objects. Objects and the relationships between objects are identified in both the analysis and design phases. Objects and relationships identified and documented in the analysis phase serve not only as input to the design phase, but as an initial layer in the design. This continuity provides for a much more seamless interface between the phases. Analysts, designers and programmers are working with a common set of items upon which to build.

A second reason for the blurring of these boundaries is that the object-oriented development process is iterative. Henderson-Sellers and Edwards further refine this idea by replacing the waterfall model of software development with a fountain model. Development reaches a high level only to fall back to a previous level to begin the climb once again.

As an example of the blurring of the traditional boundaries of the life cycle phases, Coad and Yourdon recommend that classification relationships between objects be captured and documented during the object-oriented analysis (OOA) phase. This classification will be directly reflected in the class inheritance structure developed in the design and in the code. This classification is in no way required in order to document the system requirements. In other words, Coad and Yourdon are recommending a traditional design activity in the analysis phase.

The blurring of the traditional design and implementation phases has been fueled by the development of encapsulation and abstraction mechanisms in object-oriented and object-based languages. For example, Meyer claims [14] that Eiffel is both a design and an implementation language. He goes on to say that software design is sometimes mistakenly viewed as an activity totally secluded from actual implementation. From his point of view, much is to be gained from an approach that integrates both activities within the same conceptual framework.

The object-oriented design paradigm is the next logical step in a progression that has led from a purely procedural approach to an object-based approach and now to the object-oriented approach. The progression has resulted from a gradual shift in point of view in the development process. The procedural design paradigm utilizes functional decomposition to specify the tasks to be completed in order to solve a problem. The object-based approach, typified by the techniques of Yourdon, Jackson and Booch, gives more attention to data specifications than the procedural approach but still utilizes functional decomposition to develop the architecture of a system. The object-oriented approach goes beyond the object-based technique in the emphasis given to data by utilizing the relationships between objects as a fundamental part of the system architecture.

The goal in designing individual software components is to represent a concept in what will eventually be an executable form. The Abstract Data Type (ADT) is the object-based paradigm's technique for capturing this conceptual information. The class is the object-oriented paradigm's conceptual modeling tool. The design pieces resulting from the object-oriented design technique represent a tighter coupling of data and functionality than traditional ADTs. These artifacts of the design process, used in conjunction with a modeling-based decomposition approach, yield a paradigm, a technique, which is very natural and flexible. It is natural in the sense that the design pieces are closely identified with the real-world concepts which they model. It is flexible in the sense of quickly adapting to changes in the problem specifications.

Object-oriented remains a term which is interpreted differently by different people. Before presenting an overview of a set of techniques for the design process, we will give our perspective so the reader may judge the techniques in terms of those definitions. Briefly, we adapt Wegner's [27] definition for object-oriented languages to object-oriented design. The pieces of the design are objects which are grouped into classes for specification purposes. In addition to traditional dependencies between data elements, an inheritance relation between classes is used to express specializations and generalizations of the concepts represented by the classes.

As natural and flexible as the object-oriented technique is, it is still possible to produce a bad design when using it. We will consider a number of general design criteria and will discuss how the object-oriented approach assists the designer in meeting these criteria. We will refer to a number of design guidelines developed specifically for the object-oriented design paradigm and will discuss how these properties reinforce the concepts of good design.

The paradigm sprang from language, has matured into design, and has recently moved into analysis. The blurring of boundaries between these phases has led us to include topics in this article that are outside the realm of design, but which we consider important to understanding the design process. Since the paradigm sprang from language, we define the concepts basic to object-oriented programming in the following section.
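To make the contrast between an abstract data type and a class concrete, here is a hedged C++ sketch of our own (the CircleADT, Shape and Circle names are invented, not drawn from the article): the first fragment is an object-based ADT, data plus operations with no inheritance relation; the second couples data and behavior in a class and uses inheritance to express the specialization relationship.

#include <iostream>

const double kPi = 3.14159265358979;

// Object-based style: an abstract data type, i.e., data plus a set of
// operations, but with no inheritance relation between types.
struct CircleADT { double radius; };
double circle_area(const CircleADT& c) { return kPi * c.radius * c.radius; }

// Object-oriented style: the class couples data and behavior, and the
// inheritance relation expresses "a Circle is a (specialized) Shape".
class Shape {
public:
    virtual ~Shape() {}
    virtual double area() const = 0;     // behavior is part of the type itself
};

class Circle : public Shape {
public:
    explicit Circle(double r) : radius_(r) {}
    double area() const { return kPi * radius_ * radius_; }
private:
    double radius_;
};

int main() {
    CircleADT c1 = { 1.0 };
    Circle    c2(1.0);
    std::cout << circle_area(c1) << " " << c2.area() << '\n';   // both print ~3.14159
    return 0;
}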
Research and Advances

Implementation benefits of C++ language mechanisms

C++ was designed by Bjarne Stroustrup at AT&T Bell Laboratories in the early 1980s as an extension to the C language, providing data abstraction and object-oriented programming facilities. C++ provides a natural syntactic extension to C, incorporating the class construct from Simula. A design principle was to remain compatible and comparable with C in terms of syntax, performance and portability. Another goal was to define an object-oriented language that significantly increased the amount of static type checking provided, with user-defined types (classes) and built-in types being part of a single unified type system obeying identical scope, allocation and naming rules. These aims have been achieved, providing some underlying reasons why C++ has become so prevalent in the industry. The approach has allowed a straightforward evolution from existing C-based applications to the new facilities offered by C++, providing an easy transition for both software systems and programmers. The facilities described are based on Release 2.0 of the language, the version on which the ANSI and ISO standardization of C++ is being based.
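As a small, hedged illustration of the kind of facility described above (our own fragment, not an example taken from the article), the following C++ code defines a user-defined type that participates in the type system much like a built-in type, with misuse rejected at compile time.

#include <iostream>

// A user-defined type (class) that is statically checked and behaves like a
// built-in arithmetic type; its representation is hidden (data abstraction).
class Complex {
public:
    Complex(double re = 0.0, double im = 0.0) : re_(re), im_(im) {}
    Complex operator+(const Complex& other) const {
        return Complex(re_ + other.re_, im_ + other.im_);
    }
    double re() const { return re_; }
    double im() const { return im_; }
private:
    double re_, im_;
};

int main() {
    Complex a(1.0, 2.0), b(3.0, 4.0);
    Complex c = a + b;                 // used like a built-in type
    // c = a + "text";                 // would be rejected by the compiler
    std::cout << c.re() << "+" << c.im() << "i\n";   // prints 4+6i
    return 0;
}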
Research and Advances

Trellis: turning designs into programs

When designing an object-oriented program, there are several goals to achieve: The program should accurately model the real-world objects to be represented. This leads to a program that is easier to understand and therefore simpler to maintain. Code that implements the model accurately must also be robust. Inconsistent models should be detected at design time, not diagnosed as a run-time error at a customer's installation. The programming environment must allow for convenient exploration of the application; it should foster a reuse of existing types of objects, thus reducing the scale of the design. It should also facilitate changes to the program as the application design is updated. The Trellis programming system is an integrated language and environment that provides many of the mechanisms needed to design and implement object-oriented programs. The remainder of this sidebar discusses how Trellis helps attain the goals outlined above.
Research and Advances

Lessons from the design of the Eiffel libraries

The nature of programming is changing. Most of the software engineering literature still takes for granted a world of individual projects, where the sole aim is to produce specific software systems in response to particular requirements, little attention being paid to each system's relationship to previous or subsequent efforts. This implicit model seems unlikely to allow drastic improvements in software quality and productivity. Such order-of-magnitude advances will require a process of industrialization, not unlike what happened in those disciplines which have been successful at establishing a production process based on the reuse of quality-standardized components. This implies a shift to a “new culture” [14] whose emphasis is not on projects but instead on components.

The need for such a shift was cogently expressed more than 20 years ago by Doug McIlroy in his contribution, entitled Mass-Produced Software Components [10], to the now-famous first conference on software engineering: “Software production today appears in the scale of industrialization somewhere below the more backward construction industries. I think its proper place is considerably higher, and would like to investigate the prospects for mass-production techniques in software. [...] My thesis is that the software industry is weakly founded [in part because of] the absence of a software components subindustry [...] A components industry could be immensely successful.”

Although reuse has enjoyed modest successes since this statement was made, by all objective criteria McIlroy's prophecy has not been fulfilled yet; many technical and non-technical issues had to be addressed before reuse could become a reality on the scale he foresaw. (See [1] and [20] for a survey of current work on reuse.) One important development was needed to make this possible: the coming of age of object-oriented technology, which provides the best known basis for reusable software construction. (That the founding document of object-oriented methods, the initial description of Simula 67, was roughly contemporary with McIlroy's paper tends to confirm a somewhat pessimistic version of Redwine and Riddle's contention [18] that “it takes on the order of 15 to 20 years to mature a technology to the point that it can be popularized to the technical community at large.”) Much of the current excitement about object-oriented software construction derives from the growing realization that the shift is now technically possible.

This article presents the concerted efforts which have been made to advance the cause of component-based software development in the Eiffel environment [12, 17] through the construction of the Basic Eiffel Libraries. After a brief overview of the libraries, this article reviews the major language techniques that have made them possible (with more background about Eiffel being provided by the sidebar entitled “Major Eiffel Techniques”); it then discusses design issues for libraries of reusable components, the use of inheritance hierarchies, the indexing problem, and planned developments.
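Eiffel syntax is not reproduced here; instead, the following hedged C++ analogue of our own suggests the flavor of a reusable library component of the kind the article describes: a generic container whose operation states a checked precondition, loosely in the spirit of Eiffel's require clauses (the Stack name and its interface are invented for illustration).

#include <cassert>
#include <iostream>
#include <vector>

// Genericity: one reusable component serves many element types.
template <typename T>
class Stack {
public:
    bool empty() const { return items_.empty(); }
    void push(const T& x) { items_.push_back(x); }
    T pop() {
        assert(!empty());             // checked precondition ("require not empty")
        T top = items_.back();
        items_.pop_back();
        return top;
    }
private:
    std::vector<T> items_;
};

int main() {
    Stack<int> s;
    s.push(1);
    s.push(2);
    std::cout << s.pop() << '\n';     // prints 2
    return 0;
}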
Research and Advances

Class management for software communities

Object-oriented programming may engender an approach to software development characterized by the large-scale reuse of object classes. Large-scale reuse is the use of a class not just by its original developers, but by other developers who may be from other organizations, and may use the classes over a long period of time. Our hypothesis is that the successful dissemination and reuse of classes requires a well-organized community of developers who are ready to share ideas, methods, tools and code. Furthermore, these communities should be supported by software information systems which manage and provide access to class collections. In the following sections we motivate the need for software communities and software information systems. The bulk of this article discusses various issues associated with managing the very large class collections produced and used by these communities.
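As a hedged sketch of the kind of software information system the article motivates (the class and keyword names below are invented purely for illustration, and a real system would manage far richer descriptions, versions and dependencies), the following C++ fragment shows a minimal catalogue that lets a developer locate classes by keyword.

#include <iostream>
#include <map>
#include <set>
#include <string>

// A keyword index over a class collection: the lookup idea at the core of a
// (much larger) software information system.
class ClassCatalogue {
public:
    void add(const std::string& className, const std::set<std::string>& keywords) {
        for (const std::string& k : keywords)
            index_[k].insert(className);
    }
    std::set<std::string> lookup(const std::string& keyword) const {
        auto it = index_.find(keyword);
        return it == index_.end() ? std::set<std::string>() : it->second;
    }
private:
    std::map<std::string, std::set<std::string>> index_;   // keyword -> class names
};

int main() {
    ClassCatalogue catalogue;
    catalogue.add("SortedList", {"container", "ordered"});
    catalogue.add("HashTable",  {"container", "lookup"});
    for (const std::string& name : catalogue.lookup("container"))
        std::cout << name << '\n';        // prints HashTable, then SortedList
    return 0;
}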
Research and Advances

Surveying current research in object-oriented design

The state of object-oriented design is evolving rapidly. This survey describes what are currently thought to be the key ideas. Although it is necessarily incomplete, it contains both academic and industrial efforts and describes work in both the United States and Europe. It ignores well-known ideas, like those of Coad and Meyer [34], in favor of less widely known projects.

Research in object-oriented design can be divided many ways. Some research is focused on describing a design process. Some is focused on finding rules for good designs. A third approach is to build tools to support design. Most of the research described in this article does all three.

We first present work from Alan Snyder at Hewlett-Packard on developing a common framework for object-oriented terminology. The goal of this effort is to develop and communicate a corporate-wide common language for specifying and communicating about objects.

We next look into the research activity at Hewlett-Packard, led by Dennis de Champeaux. De Champeaux is developing a model for object-based analysis. His current research focuses on the use of a trigger-based model for inter-object communications and development of a top-down approach to analysis using ensembles.

We then survey two research activities that prescribe the design process. Rebecca Wirfs-Brock from Tektronix has been developing an object-oriented design method that focuses on object responsibilities and collaborations. The method includes graphical tools for improving encapsulation and understanding patterns of object communication. Trygve Reenskaug at the Center for Industriforskning in Oslo, Norway, has been developing an object-oriented design method that focuses on roles, synthesis, and structuring. The method, called Object-Oriented Role Analysis, Synthesis and Structuring, is based on first modeling small sub-problems, and then combining small models into larger ones in a controlled manner using both inheritance (synthesis) and run-time binding (structuring).

We then present investigations by Ralph Johnson at the University of Illinois at Urbana-Champaign into object-oriented frameworks and the reuse of large-scale designs. A framework is a high-level design or application architecture and consists of a suite of classes that are specifically designed to be refined and used as a group. Past work has focused on describing frameworks and how they are developed. Current work includes the design of tools to make it easier to design frameworks.

Finally, we present some results from the research group in object-oriented software engineering at Northeastern University, led by Karl Lieberherr. They have been working on object-oriented Computer Assisted Software Engineering (CASE) technology, called the Demeter system, which generates language-specific class definitions from language-independent class dictionaries. The Demeter system includes tools for checking design rules and for implementing a design.
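To illustrate the framework idea mentioned above, here is a small hedged C++ sketch of our own (not taken from any of the surveyed systems; the Application and DrawingEditor names are invented): a base class fixes the overall flow of control and application-specific subclasses refine its hook operations, so that a suite of classes is reused as a group.

#include <iostream>

// The framework supplies the skeleton of the application.
class Application {
public:
    virtual ~Application() {}
    void run() {                           // fixed flow of control
        initialize();
        processEvents();
        shutdown();
    }
protected:                                 // hooks refined by each application
    virtual void initialize()    { std::cout << "default init\n"; }
    virtual void processEvents() = 0;
    virtual void shutdown()      { std::cout << "default shutdown\n"; }
};

// A refinement of the framework for one particular application.
class DrawingEditor : public Application {
protected:
    void processEvents() { std::cout << "handling drawing events\n"; }
};

int main() {
    DrawingEditor editor;
    editor.run();      // prints: default init / handling drawing events / default shutdown
    return 0;
}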
Research and Advances

Concurrent object-oriented programming

Three significant trends have underscored the central role of concurrency in computing. First, there is increased use of interacting processes by individual users, for example, application programs running on X windows. Second, workstation networks have become a cost-effective mechanism for resource sharing and distributed problem solving. For example, loosely coupled problems, such as finding all the factors of large numbers, have been solved by utilizing idle cycles on networks of hundreds of workstations. A loosely coupled problem is one which can be easily partitioned into many smaller subproblems so that interactions between the subproblems are quite limited. Finally, multiprocessor technology has advanced to the point of providing supercomputing power at a fraction of the traditional cost.

At the same time, software engineering considerations such as the need for data abstraction to promote program modularity underlie the rapid acceptance of object-oriented programming methodology. By separating the specification of what is done (the abstraction) from how it is done (the implementation), the concept of objects provides the modularity necessary for programming in the large. It turns out that concurrency is a natural consequence of the concept of objects. In fact Simula, the first object-oriented language, simulated a simple form of concurrency using coroutines on conventional architectures. Current development of concurrent object-oriented programming (COOP) is providing a solid software foundation for concurrent computing on multiprocessors. Future generation computing systems are likely to be based on the foundations being developed by this emerging software technology.

The goal of this article is to discuss the foundations and methodology of COOP. Concurrency refers to the potentially parallel execution of parts of a computation. In a concurrent computation, the components of a program may be executed sequentially, or they may be executed in parallel. Concurrency provides us with the flexibility to interleave the execution of components of a program on a single processor, or to distribute it among several processors. Concurrency abstracts away some of the details in an execution, allowing us to concentrate on conceptual issues without having to be concerned with a particular order of execution which may result from the quirks of a given system.

Objects can be defined as entities which encapsulate data and operations into a single computational unit. Object models differ in how the internal behavior of objects is specified. Further, models of concurrent computation based on objects must specify how the objects interact, and different design concerns have led to different models of communication between objects. Object-oriented programming builds on the concepts of objects by supporting patterns of reuse and classification, for example, through the use of inheritance which allows all instances of a particular class to share the same method.

In the following section, we outline some common patterns of concurrent problem solving. These patterns can be easily expressed in terms of the rich variety of structures provided by COOP. In particular, we discuss the actor model as a framework for concurrent systems and some concepts which are useful in building actor systems. We will then describe some other models of objects and their relation to the actor model, along with novel techniques for supporting reusability and modularity in concurrent object-oriented programming. The last section briefly outlines some major ongoing projects in COOP.

It is important to note that the actor languages give special emphasis to developing flexible program structures which simplify reasoning about programs. By reasoning we do not narrowly restrict ourselves to the problem of program verification—an important program of research whose direct practical utility has yet to be established. Rather, our interest is in the ability to understand the properties of software because of clarity in the structure of the code. Such an understanding may be gained by reasoning either informally or formally about programs. The ease with which we can carry out such reasoning is aided by two factors: modularity in code, which is the result of the ability to separate design concerns, and the ability to abstract program structures which occur repeatedly. In particular, because of their flexible structure, actor languages are particularly well-suited to rapid prototyping applications.
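The actor model itself is language-neutral; as a hedged illustration only (not the notation of any actor language, and with invented message names), the following C++ sketch shows the core idea: an actor encapsulates state, receives asynchronous messages through a mailbox, and processes them one at a time on its own thread.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// An actor: private state (count_), a mailbox of pending messages, and a
// single thread that processes messages one at a time. Senders never touch
// the state directly; they only enqueue messages.
class CounterActor {
public:
    CounterActor() : count_(0), worker_(&CounterActor::runLoop, this) {}
    ~CounterActor() { send("stop"); worker_.join(); }

    void send(const std::string& msg) {              // asynchronous message send
        std::lock_guard<std::mutex> lock(mtx_);
        mailbox_.push(msg);
        cv_.notify_one();
    }

private:
    void runLoop() {                                 // the actor's behavior
        while (true) {
            std::string msg;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [this] { return !mailbox_.empty(); });
                msg = mailbox_.front();
                mailbox_.pop();
            }
            if (msg == "stop") break;
            if (msg == "increment") ++count_;        // state changed only by the actor
            if (msg == "report") std::cout << "count = " << count_ << '\n';
        }
    }

    int count_;
    std::queue<std::string> mailbox_;
    std::mutex mtx_;
    std::condition_variable cv_;
    std::thread worker_;                             // declared last: starts only after
};                                                   // the mailbox and locks are ready

int main() {
    CounterActor counter;
    counter.send("increment");
    counter.send("increment");
    counter.send("report");                          // eventually prints count = 2
    return 0;
}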
Research and Advances

The object-oriented systems life cycle

In software engineering, the traditional description of the software life cycle is based on an underlying model, commonly referred to as the “waterfall” model (e.g., [4]). This model initially attempts to discretize the identifiable activities within the software development process as a linear series of actions, each of which must be completed before the next is commenced. Further refinements to this model appreciate that such completion is seldom absolute and that iteration back to a previous stage is likely. Various authors' descriptions of this model relate to the detailed level at which the software building process is viewed. At the most general level, three phases of the life cycle are generally agreed upon: (1) analysis, (2) design and (3) construction/implementation (e.g., [36], p. 262; [42]) (Figure 1(a)). The analysis phase covers from the initiation of the project through to user-needs analysis and feasibility study (cf. [15]); the design phase covers the various concepts of system design, broad design, logical design, detailed design, program design and physical design. Following from the design stage(s), the computer program is written and tested, in terms of verification, validation and sensitivity testing, and, when found acceptable, put into use and then maintained well into the future.

In the more detailed description of the life cycle a number of subdivisions are identified (Figure 1(b)). The number of these subdivisions varies between authors. In general, the problem is first defined and an analysis of the requirements of current and future users undertaken, usually by direct and indirect questioning and iterative discussion. Included in this stage should be a feasibility study. Following this, a user requirements definition and a software requirements specification (SRS) [15] are written. The user requirements definition is in the language of the users so that it can be agreed upon by both the software engineer and the software user. The software requirements specification is written in the language of the programmer and details the precise requirements of the system. These two stages comprise an answer to the question of WHAT? (viz. problem definition). The user-needs analysis stage and examination of the solution space are still within the overall phase of analysis but are beginning to move toward not only problem decomposition, but also highlighting concepts which are likely to be of use in the subsequent system design; thus beginning to answer the question HOW? On the other hand, Davis [15] notes that this division into “what” and “how” can be subject to individual perception, giving six different what/how interpretations of an example telephone system. At this requirements stage, however, the domain of interest is still very much that of the problem space. Not until we move from (real-world) systems analysis to (software) systems design do we move from the problem space to the solution space (Figure 2). It is important to observe the occurrence and location of this interface. As noted by Booch [6], this provides a useful framework in object-oriented analysis and design.

The design stage is perhaps the most loosely defined, since it is a phase of progressive decomposition toward more and more detail (e.g., [41]) and is essentially a creative, not a mechanistic, process [42]. Consequently, systems design may also be referred to as “broad design” and program design as “detailed design” [20]. Brookes et al. [9] refer to these phases as “logical design” and “physical design.” In the traditional life cycle these two design stages can become both blurred and iterative; but in the object-oriented life cycle the boundary becomes even more indistinct.

The software life cycle, as described above, is frequently implemented based on a view of the world interpreted in terms of a functional decomposition; that is, the primary question addressed by the systems analysis and design is WHAT does the system do, viz. what is its function? Functional design, and the functional decomposition techniques used to achieve it, is based on the interpretation of the problem space and its translation to solution space as an interdependent set of functions or procedures. The final system is seen as a set of procedures which, apparently secondarily, operate on data.

Functional decomposition is also a top-down analysis and design methodology. Although the two are not synonymous, most of the recently published systems analysis and design methods exhibit both characteristics (e.g., [14, 17]) and some also add a real-time component (e.g., [44]). Top-down design does impose some discipline on the systems analyst and program designer; yet it can be criticized as being too restrictive to support contemporary software engineering designs. Meyer [29] summarizes the flaws in top-down system design as follows: (1) top-down design takes no account of evolutionary changes; (2) in top-down design, the system is characterized by a single function—a questionable concept; (3) top-down design is based on a functional mindset, and consequently the data structure aspect is often completely neglected; (4) top-down design does not encourage reusability. (See also discussion in [41], p. 352 et seq.)
Research and Advances

The 1988–89 Taulbee survey report

This report describes the results of a survey of the Forsythe list of computing departments, completed in December 1989. The survey concerns the production and employment of Ph.D.s that graduated in 1988-89 and the faculty of Ph.D.-granting computing departments during the academic year 1989-90. All 129 Computer Science (CS) departments (117 U.S. and 12 Canadian) participated. In addition, 29 of 32 departments offering the Ph.D. in Computer Engineering (CE) were included. Throughout this report, CE statistics are reported separately so that comparisons with previous years can be made for CS, but the intention is to merge all statistics for CS and CE in a few more years.

Some highlights from the survey are: The 129 CS departments produced 625 Ph.D.s, an increase of 8 percent over the previous year; 336 were Americans, 35 Canadians, and 248 (40 percent) foreign (6 were unknown). Of the 625, 309 (49 percent) stayed in academia, 181 (29 percent) went to industry, 24 (4 percent) to government, and 56 (9 percent) overseas; 7 were self-employed; and 9 were unemployed (39 were unknown). A total of 1,215 students passed their Ph.D. qualifying exam in CS departments, an increase of 9 percent over 1987-88. No Afro-Americans, 6 Hispanics, and 87 women (14 percent) received Ph.D.s this year. The 129 CS departments have 2,550 faculty members, an increase of 123, or almost 1 per department. There are 938 assistant, 718 associate, and 894 full professors. The increase came entirely in the associate professor range. The 129 CS departments reported hiring 204 faculty and losing 161 (to retirement, death, other universities, graduate school, and non-academic positions). Only 9 assistant professors in the 158 CS and CE departments are Afro-American, 24 Hispanic, and 103 (9 percent) female. Only 2 associate professors are Afro-American, 8 Hispanic, and 74 (8 percent) female. Only 5 full professors are Afro-American, 8 Hispanic, and 33 (3 percent) female.

The growth in Ph.D. production to 625 is less than what was expected (650-700). Still, a growth of almost 50 Ph.D.s is substantial, and it will mean an easier time for departments that are trying to hire and a harder time for the new Ph.D.s. There is still a large market. The new Ph.D.s, however, cannot all expect to be placed in the older, established departments, and more will take positions in the newer departments and in the non-Ph.D.-granting departments.

Growth of Ph.D. production seems to have slowed enough that overproduction does not seem to be a problem in the near future. There will not be enough retirements, however, to offset new Ph.D. production for ten years. (In the 158 departments, 22 faculty members died or retired last year.) We believe that many of the new Ph.D.s would benefit from a year or two as a postdoc, and perhaps it is time for the NSF to institute such a program in computer science and engineering.

The percentage of CS Ph.D.s given to foreign students remained about the same at 40 percent. In CE, the percentage was much higher, at 65 percent.

The field continues to be far too young, a problem that only time is solving. CS continues to have more assistant professors than full professors, which puts an added burden on the older people, but there was substantial growth this year in the number of associate professors (as assistant professors were promoted). But the ratio of assistant to full professors in CS has not changed appreciably in four years. As we have mentioned in previous Taulbee Reports, no other field, as far as we know, has this problem. In fact, most scientific fields are 80 to 90 percent tenured in many universities. In CS, this problem is more severe in the newer and lower-ranked departments. In fact, the top 24 departments now have 223 assistant, 176 associate, and 290 full professors. The CE departments have far more full professors than assistant professors, mainly because many are older EE departments offering CE degrees.

As we have indicated, Afro-Americans and Hispanics simply are not entering computer science and engineering. It is high time that we did something about it, and we hope the CRB will take the lead in introducing programs to encourage more participation from these minorities.

There was a slight growth in the percentage of female Ph.D.s in CS, from 10 to 14 percent. Still, there are far too few women in our field, and our record of retention of women on the faculty is abysmal. There are only 33 female full professors in the 158 CS and CE Ph.D.-granting departments! Again, we hope the CRB will help introduce programs to encourage more women to enter computing and to remain in academia over the years. The signs are that the NSF is interested in this problem as well.
Opinion

Inside risks: a few old coincidences

Computer Puns Considered Harmful: Presented here are two old examples of harmful input sequences that might be called computer puns. Each has a double meaning, depending upon context.

Xerox PARC's pioneering WYSIWYG editor BRAVO [1] had a lurking danger. In edit mode, BRAVO interpreted the sequence edit as “Everything Deleted Insert t,” which did exactly that—transformed a large file into the letter ‘t’ without blinking. After the first two characters, it was still possible to undo the ‘ed,’ but once the ‘i’ was typed the only remaining fallback was to replay the recorded keystroke log from the beginning of the editing session (except for ‘edit’) against the still-unaltered original file.

A similar example was reported by Norman Cohen of SofTech: he had been entering text using the University of Maryland line editor on the Univac 1100 for an hour or two, when he entered two lines that resulted in the entire file being wiped out. The first line contained exactly 80 characters (demarcated by a final carriage return); the second line began with the word “about.” Cohen said: “Because the first line was exactly 80 characters long, the terminal handler inserted its own CR just before mine, but I started typing the second line before the generated CR reached the terminal. When I finished entering the second line, a series of queued output lines poured out of the terminal. It seems that, having received the CR generated by the terminal handler, the editor interpreted my CR as a request to return from input mode to edit mode. In edit mode, the editor processed the second line by interpreting the first three letters as an abbreviation for abort and refused to be bothered by the rest of the line. Had the editing session been interrupted by a system crash, an autosave feature would have saved all but the last 0-20 lines I had entered. However, the editor treated the abort request as a deliberate action on my part, and nothing was saved.”

Two Wrongs Make a Right (Sometimes): A somewhat obscure wiring fault remained undetected for many years in the Harvard Mark I. Each decimal memory register consisted of 23 ten-position stepping switches (plus a sign switch). Registers were used dually as memory locations and as adders. The wires into (and out of) the least significant two digits of the last register were crossed, so that the least significant position was actually the second-to-least position and vice versa with respect to memory. No problems arose for many years, during which that register was fortuitously used only for memory in the computation of tables of Bessel functions of the nth kind; the read-in error corrected itself on read-out. The problem finally manifested itself on the n+1st tables, when that register was used as an adder and a carry went in the wrong direction. This was detected only because it was standard practice in those days to difference the resulting tables by hand (using very old adding machines). Things have changed and we have learned a lot; however, similar problems continue to arise, often in new guises.

Discussion: Today's systems have comparable dangers lurking, with even more global effects. In user interfaces, we have all experienced a slight error in a command having devastating consequences. In software, commands typed in one window or in one directory may have radically different effects in other contexts. Programs are often not written carefully enough to be independent of environmental irregularities and less-than-perfect users. Search paths provide all sorts of opportunities for similar computer puns (including triggering of Trojan horses). Accidental deletion is still quite common, although we now have undelete operations. In hardware, various flaws in chip designs have persisted into delivery.

Many of you will have similar tales to tell. Please contribute them.

Conclusions: Designers of human interfaces should spend much more time anticipating human foibles. Crosschecking and backups are ancient techniques, but still essential. Computers do not generally appreciate puns.
