Forum

The Other Side of Embedding the Internet

At face value, the special section "Embedding the Internet" (May 2000) presents an exciting new direction for the semiconductor industry and a bold challenge to those who produce software. For semiconductor vendors, there is the opportunity to create a new kind of product with a potential market thousands of times larger than for today’s chips. Software developers and system integrators would have the opportunity to transform the world with systems based on the new components. Who could say no to such an exciting possibility?

Beneath the surface, however, I see the proposed new direction leading us on a path of extreme danger. Dangers include nearly total destruction of human privacy, the likelihood of serious damage to the natural environment and the constructed world of human artifacts, and irretrievable loss of opportunity for progress through human-centered software. The harmful consequences of embedding the Internet are likely to be realized much sooner than those envisioned by Bill Joy in his recent analysis (Wired, Apr. 2000) of the expected future convergence of robotics, genetic engineering, and nanotechnology. I view the ACM Code of Ethics and Professional Conduct and the ACM/IEEE-CS Software Engineering Code of Ethics and Professional Responsibility as imposing an obligation on both the proponents of embedding the Internet and IT leaders generally to acknowledge these dangers and forge a safer path.

Who really wants to be continuously detected and tracked by sensors in every street light, while traveling through a city? The expectation that an instrumented environment will destroy human privacy is almost trivially obvious, since the network will embrace all types of fixed and movable human artifacts, will be able to sense individuals and their externally recognizable activities in public and private spaces, and will capture human actions and events, at a minute level of detail, as personally identifiable data. Furthermore, even thoughts are at risk of being captured by the system, based on the work of Kevin Warwick and his colleagues (Proceedings of the IEEE, Feb. 1999, and Wired, Feb. 2000).

Embedding the Internet envisions a ubiquitous network of communicating sensors, processors, storage, and actuators, operating in the real world and hosting largely autonomous systems. This vision is truly a prescription for building Big Brother ("From the President," Feb. 2000): a Big Brother that totally surrounds us with the power of automated sensing and control, greatly augmenting the system of surveillance first envisioned fictionally by George Orwell. The mechanism of control input will be the "supervisory interface" described in "Proactive Computing" (May 2000, p. 43). I foresee that this interface will be designed to accept legislative, bureaucratic, and organizational policy inputs that will be processed preferentially by the system to exert direct control over the instrumented environment and the human activities occurring therein. We should expect this system, like most other large-scale software, to have little or no flexibility for most individuals using it or affected by it. At the level of day-to-day activity, there is likely to be minimal allowance for individual differences and negligible provision for direct human interaction, accommodation, and conflict resolution. Those who control input to the supervisory interface will constitute an ultimate ruling class or individual, with overwhelming power.

The intended instrumented environment may never reach the state of a fully working Big Brother system, because it is likely to exhibit dangerously unpredictable and damaging behavior well before completion. As described, the proposed network of embedded nodes will be open, expandable, and universal and will also host systems functioning autonomously in real time while acting directly in the physical world. Since no such network has ever been built, the required analysis of risk must be performed by extrapolating from the characteristics of existing systems.

Real-time control systems that act directly and autonomously in the physical world are built and tested according to a relatively complete behavioral specification, because predictable behavior within a known acceptable margin of uncertainty is required. Information systems that deliver output to humans are allowed to possess a much higher probability of unexpected behavior, because human judgment and control buffer the output to prevent serious or catastrophic harm. Systems and multisystems that combine real-time and information system characteristics, such as automatic teller machine networks, unattended point-of-sale systems, and production automation/supervision systems, achieve acceptable predictability of behavior through a combination of the following measures: bounding the scope of data and functionality within the real-time domain of real effects; limiting and controlling interaction between real-time and non-real-time subsystems; and accepting a risk of malfunction that is relatively well understood.
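
To make the interface-limiting measure concrete, consider the following minimal sketch in Python. It is purely illustrative, with invented names (ActuatorGateway, Command, VALVE_LIMITS) rather than anything drawn from a real system: a gateway confines what an unpredictable non-real-time subsystem may ask a real-time controller to do.

```python
# Hypothetical sketch: a validation gateway between a non-real-time
# information system and a real-time actuator. All names are invented
# for illustration.

from dataclasses import dataclass

# The real-time domain accepts only a fixed, bounded command vocabulary.
VALVE_LIMITS = {"valve_a": (0.0, 100.0), "valve_b": (0.0, 50.0)}

@dataclass
class Command:
    target: str      # which actuator
    setpoint: float  # requested position, in percent open

class ActuatorGateway:
    """Gate between the information system and the real-time controller.

    Commands outside the bounded vocabulary are rejected rather than
    passed through, so unexpected information-system behavior cannot
    produce unbounded physical effects.
    """

    def validate(self, cmd: Command) -> bool:
        limits = VALVE_LIMITS.get(cmd.target)
        if limits is None:
            return False                    # unknown actuator: reject
        low, high = limits
        return low <= cmd.setpoint <= high  # out-of-range value: reject

    def dispatch(self, cmd: Command) -> str:
        if not self.validate(cmd):
            return f"REJECTED: {cmd}"
        # A real system would hand off to the real-time controller here.
        return f"ACCEPTED: {cmd}"

gateway = ActuatorGateway()
print(gateway.dispatch(Command("valve_a", 42.0)))   # within bounds
print(gateway.dispatch(Command("valve_a", 250.0)))  # out of range: rejected
print(gateway.dispatch(Command("reactor", 1.0)))    # unknown target: rejected
```

The design choice is the point: the non-real-time side can be arbitrarily complex and unpredictable, yet its reachable physical effects remain bounded by the gateway's fixed vocabulary.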

The proposed universal instrumented environment will combine, in ways that are unlimited and uncontrolled, the greater probability of unanticipated behavior typical of information systems with direct impact on the physical world, which experience shows demands predictable behavior. On what basis can anyone ask society to accept such a dangerously high level of risk of serious and irreversible damage to individuals, to human institutions (social, political, and economic), and to the natural and constructed human environment? At the very least, those who propose to redirect whole occupations and industries in a venture of such overwhelming risk should be required to accept responsibility for participation in an extended period of limited prototyping, system modeling and simulation, and independent risk assessment. These activities ought to precede any major commitment of resources or other irrevocable decision.

Embedding the Internet proposes a dramatic and irreversible escalation of technology push that would foreclose the possibility of reconnecting with and beginning to satisfy the human needs of real users. I make this assertion because human-centered automation has only recently become possible. When IT resources were scarce, people were forced to conform to the imperatives of their machines. As IT resources have become more abundant, only some of the expanded processor/communication/storage power has been directed to serve the user interface and human purposes, and over time much potential human-factors progress has been sacrificed in favor of additional expanded functionality and network scope. Today, most software remains inflexible, bloated, bug-ridden, difficult to install, difficult to configure, costly to maintain, and open to abuse. The human-centered attributes of privacy, safety, and ease-of-use continue to elude us, largely because they are treated as second-class requirements.

The need to confront the dangers of embedding the Internet creates a compelling reason for software researchers and developers to declare independence from the hardware technology push. Achieving the goal of truly human-centered software requires at least two changes: (1) Elevate human-centered attributes, such as safety, privacy, and ease-of-use, to the status of mandatory requirements, equal in importance to function and performance; and (2) Establish independent means for verifying human-centered software properties. Such means must be independent of both developers and government. The emergence of abundant IT resources now makes the building of human-centered automation truly possible. Why should the software community abandon this goal now that it can be a reality?

The dangers posed by embedding the Internet are not new to human understanding. Imaginative thinkers long ago foresaw a time when IT would threaten human life in the exact ways now emerging, in the form of autonomous systems (robots) and universal surveillance. Is it ironic or predictable that the era first able to create truly dangerous IT is also the first era that seems reluctant to acknowledge the danger? Substantial recognition of such danger in the current era, if it occurs, will set up a major conflict with ongoing trends in computer-based occupations and industries.

Robert Levine
Sierra Vista, AZ

A Cure for Lost Programming Companionship

Williams and Kessler (May 2000, p. 108) make explicit much of what I have believed for years. I am a successful programmer working solo, but in many assignments I have missed the companionship of other programmers working on the same project. The advantage of a high-walled cubicle is mainly that one can snooze without being noticed; there is no corresponding advantage to the low-walled cubicle, which nevertheless cuts one off from common consideration of a program unless someone walks around. There was an article in Datamation in the early 1980s about the design of programming areas, but no one with design responsibilities seems to have taken it to heart; perhaps Williams and Kessler can lead us somewhere good.

What I miss in the article is the importance of paper. It is not a good idea to program directly on the screen. One should first mark up the previous compilation listing; this gives more time for reflection, encourages transformation of the entire compilation unit, and prevents certain omissions. I realize that compilation units often have tremendously hefty printouts, but that is partly a result of, rather than an excuse for, immediate on-screen work. Consideration of the listing frequently allows programmers to recognize and delete unused variables and procedure sequences, which should be done in combination with work on the problem at hand. One partner should wield the red pen while the other watches; at editing time, the watcher should become the writer. At the next turnaround (or on the alternating project) the roles should be reversed.

John A. Wills
San Francisco CA

Licensing Software Developers

Computer programming, the argument goes, is not an art but an engineering activity whose correct practice produces "safe and reliable products" (see Nancy Mead’s letter, "Forum," May 2000, p. 11).

Unfortunately, too many lawyers, politicians, and well-intentioned laypersons too easily accept such a proposition, which is demonstrably false if only because the very terms "safe" and "reliable" have not been defined in any reasonable and general way.

Thus, while the ACM Council’s decision not to endorse the licensing of software engineers is controversial, it seems entirely justified. Those whose sentiments tend in the other direction should not be taken seriously until the key terms of the debate can be defined in at least a legally (if not mathematically) sustainable manner.

Having been involved in either programming or specifying nontrivial systems for many years, I have concluded that "correct" software for nontrivial systems cannot be obtained at any acceptable cost. Exhaustive testing is the only commonly used means for assuring correctness, but is feasible only for relatively trivial systems.

Other means of software assurance (for example, probabilistic generation of test cases) generally reduce only the likelihood of bugs. Because the cost of assurance is a vicious function of both the size of a system and the degree of "correctness" desired, every commonly used approach rapidly becomes prohibitively expensive. For any defensible definition of "correct," few if any of today’s larger software systems, expensive as they are, can be viewed as correct. The only exceptions may be some systems (or parts of some systems) that can be proved correct through use of formal methods.
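
A small sketch in Python makes the scale problem behind these claims concrete. The function under test (an invented saturating adder) is illustrative only:

```python
# Exhaustive versus probabilistic testing: a toy illustration.
import random

U64_MAX = 2**64 - 1

def saturating_add(a: int, b: int, cap: int) -> int:
    """Add two values, clamping the result to cap."""
    return min(a + b, cap)

# Exhaustive testing is feasible only when the input domain is tiny:
# two 8-bit inputs give 256 * 256 = 65,536 cases.
for a in range(256):
    for b in range(256):
        r = saturating_add(a, b, 255)
        assert r == 255 or r == a + b  # exact sum unless clamped

# Two 64-bit inputs give 2**128 cases; at a billion tests per second,
# exhausting them would take on the order of 10**22 years. Probabilistic
# generation samples the domain instead, reducing (never eliminating)
# the likelihood of an undetected bug.
random.seed(0)
for _ in range(100_000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    r = saturating_add(a, b, U64_MAX)
    assert r == U64_MAX or r == a + b

print("65,536 exhaustive cases and 100,000 sampled cases passed")
```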

Formal methods offer promise in enabling system developers to create provably correct software modules, but few developers have the necessary training and temperament to use these methods. Moreover, formal methods are still largely ad hoc in application, because the field is still in its infancy and lacks the tools and standards needed for broad and effective use by practitioners.
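
As a minimal illustration of what such a guarantee looks like, here is a sketch in Lean 4 (assuming its standard split and omega tactics; the clamp function is invented for the example). Unlike any test suite, the theorem covers every possible input:

```lean
-- Hypothetical sketch: a clamping function and a machine-checked
-- proof that its output never exceeds the upper bound.
def clamp (lo hi x : Int) : Int :=
  if x < lo then lo else if x > hi then hi else x

theorem clamp_le (lo hi x : Int) (h : lo ≤ hi) :
    clamp lo hi x ≤ hi := by
  unfold clamp
  split
  · omega            -- x < lo: the result is lo, and lo ≤ hi
  · split <;> omega  -- the result is hi or x itself; both are ≤ hi
```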

Software engineers are often overly confident in their own ability to produce bug-free code, but if backed into a corner they will invariably decline to stake their credibility on the correctness of any nontrivial code they have produced and fielded.

Legislatures have been known to enact bad laws. At this stage, ACM can hardly be held to account if states start passing bad laws for licensing software developers. Having ACM involved in the messy legislative process is unlikely to produce any good laws until, once again, the key terms of the debate (for example, the terms "safe" and "reliable") are defined in a usable and legally, if not mathematically, acceptable way.

I am skeptical about the possibility that consensus could gel around any definitions for the key terms of the debate. If we cannot agree even on the definitions, we may be forced into a hard-to-swallow admission. It just may be, after all, that software development is largely an art, not purely an engineering practice.

James L. Rash
Upper Marlboro, MD

Please address all Forum correspondence to the Editor, Communications, 1515 Broadway, New York, NY 10036; email: crawfordd@acm.org.
