Practice

N-fold inspection: a requirements analysis technique

N-fold inspection uses traditional inspections of the user requirements document (URD) but replicates the inspection activities using N independent teams. A pilot study was conducted to explore the usefulness of N-fold inspection during requirements analysis. A comparison with other development techniques reveals that N-fold inspection is a cost-effective method for finding faults in the URD and may be a valid technique in the development of mission-critical software systems.
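
The abstract gives no arithmetic for why replication pays off, but the intuition can be sketched: if each team independently detects a given URD fault with probability p, then N teams together miss it with probability (1 - p)^N. A minimal Python sketch of this model follows; the probability used is hypothetical, not a figure from the pilot study.

    # Expected benefit of N-fold inspection, assuming each of N independent
    # teams finds a given URD fault with probability p. The value p = 0.35
    # is a hypothetical illustration, not a result from the study.
    def detection_rate(p: float, n: int) -> float:
        """Probability that at least one of n independent teams finds the fault."""
        return 1.0 - (1.0 - p) ** n

    for n in range(1, 6):
        print(f"N={n}: {detection_rate(0.35, n):.2f}")

Under this simple independence assumption the marginal gain of each added team shrinks, which is why a cost-effectiveness comparison of the kind the study performs is the relevant question.
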
Research and Advances

An experimental analysis of the performance of fourth generation tools on PCs

The performance of several Fourth Generation Language (4GL) tools is analyzed empirically and compared with that of equivalent programs written in the third-generation COBOL programming language. A set of performance benchmarks consisting of thirteen separate functions is presented, covering the simulation of relational algebra operators, access to records in the database, and database updates. This set serves as a baseline for comparing the various 4GL systems.
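
The thirteen benchmark functions are not enumerated in this abstract. As an illustration only, the Python sketch below times one operation of the kind described, a relational selection, against SQLite, which stands in here for the 4GL and COBOL systems actually measured.

    import sqlite3
    import time

    # Minimal benchmark sketch: time a relational selection over a
    # populated table. SQLite is a stand-in; the study measured
    # commercial 4GL tools and equivalent COBOL programs.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
    con.executemany(
        "INSERT INTO emp (dept, salary) VALUES (?, ?)",
        [("D%d" % (i % 10), 1000.0 + i) for i in range(100000)],
    )
    con.commit()

    start = time.perf_counter()
    rows = con.execute("SELECT * FROM emp WHERE dept = 'D3'").fetchall()
    elapsed = time.perf_counter() - start
    print(f"selection returned {len(rows)} rows in {elapsed:.4f}s")
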
Research and Advances

HDTV and the computer industry

In the 1940s, television and radio combinations like the one on the left were the precursors of today's "entertainment centers." About 25 years later, black-and-white television gave way to color. The next leap, high-resolution television coupled with CD-quality stereo, may be coming soon.

Last April, in the deep hours of the night, NBC, Channel 4 in New York City, announced a test: viewers would witness the first broadcast of the Sarnoff Research Center's Advanced Compatible Television system. The program of choice: the Saint Patrick's Day Parade. Researchers at the Sarnoff Center in Princeton, NJ, watched the signal, which emanated from WNBC at the World Trade Center in New York, on experimental high-definition monitors. The Sarnoff group, along with ABC and General Electric's consumer electronics division, has invested $60 million to develop a system that has twice the lines of regular TV screens (1050), scans at the same rate (29.97 frames per second), and enhances the current standard signal. In a planned two-step approach to advanced television, the second phase would provide greater detail to those who bought wide-screen monitors, but it would still be compatible with today's TVs.

The Sarnoff system is one of 20 advanced television research projects under way in this country, including those of the Zenith Electronics Group in cooperation with AT&T Microelectronics, the Massachusetts Institute of Technology's Advanced Television Research Program, the New York Institute of Technology, the Del-Ray Group, and North American Philips. Many have applied for matching funds from DARPA, which has earmarked $30 million for the next three years, half slated for transmission research and half for display technology. As of August 1989, five companies had been granted money for display technology. Newco Inc. of San Jose, CA; Raychem Corporation of Menlo Park, CA; Texas Instruments Inc. of Dallas, TX; and Projectavision Inc. of New York will receive money for projection display systems. Photonics Technology Inc. of Northwood, OH, was selected for flat-panel display technology. More money may be forthcoming, as pressure is exerted to create systems to compete with or supplant high-definition television (HDTV) as offered by Japan's national broadcasting organization Nippon Hoso Kyokai (NHK) and Europe's Eureka systems.
Research and Advances

Computer accessibility for federal workers with disabilities: it’s the law

In 1986, Congress passed Public Law 99-506, the "Rehabilitation Act Amendments of 1986." This law, amending the famous Rehabilitation Act of 1973, contains a small section titled "Electronic Equipment Accessibility," Section 508, which may have significant impact on the design of computer systems and their accessibility to workers with disabilities. The bill became law when it was signed by then-President Reagan on October 21, 1986.

The purpose of this article is to inform concerned computer professionals of Section 508, outline the guidelines and regulations pursuant to the law, describe some of the reaction to the guidelines and regulations, and describe some of the challenges for the future in meeting the computer accessibility needs of users with disabilities.

Section 508 was developed because it was realized that government offices were rapidly changing into electronic offices, with microcomputers on every desk. In order for persons with disabilities to keep their jobs or gain new employment in the government, Congress decided it was necessary to make provisions to guarantee accessibility to microcomputers and other electronic office equipment.

The driving principle behind Section 508 can be found in Section 504 of the Rehabilitation Act of 1973, which states: "No otherwise qualified handicapped individual in the United States . . . shall, solely by reason of his handicap, be excluded from the participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance." It should be stated at the outset that the scope of Section 508 is not as broad as that of Section 504. In particular, Section 508 applies only to direct purchases by the federal government, not to purchases made by all programs receiving government funding.

Section 508 does not specify what the guidelines should be, nor does it delineate a philosophy on which to base them. A committee established by the National Institute on Disability and Rehabilitation Research (NIDRR) and the General Services Administration (GSA), in consultation with the electronics industry, rehabilitation engineers, and disabled computer professionals, worked for a year developing the philosophy and guidelines that will significantly affect the purchase of electronic office equipment, including computers and software, by the federal government, the largest computer customer in the world.
Research and Advances

Computing, research, and war: if knowledge is power, where is responsibility?

In the United States, artificial intelligence (AI) research is mainly a story about military support for the development of promising technologies. Since the late 1950s and early 1960s, AI research has received most of its support from the military research establishment [37, 55].1 Not until the 1980s, however, did the military connect this research to specific objectives and products. In 1983, the $600-million Strategic Computing Program (SCP) created three applications for "'pulling' the technology-generation process by creating carefully selected technology interactions with challenging military applications" [16]. These applications, an autonomous land vehicle, a pilot's associate, and a battle management system, explicitly connect the three armed services to further AI developments [29, 51, 53]. The Defense Science Board Task Force on the "Military Applications of New-Generation Computer Technologies" recommended warfare simulation, electronic warfare, ballistic missile defense, and logistics management as also promising a high military payoff [18].

In his 1983 "Star Wars" speech, President Reagan enjoined "the scientific community, . . . those who gave us nuclear weapons, . . . to give us the means of rendering these nuclear weapons impotent and obsolete" [43]. As in the Manhattan and hydrogen bomb projects, AI researchers, and more generally computer scientists, are expected to play major parts in this quest for a defensive shield against ballistic missiles. Computing specialists such as John von Neumann played a supportive role by setting up the computations necessary for these engineering feats, with human "computers" for the atom bomb [10]2 and with ENIAC and other early computers for the hydrogen bomb [9]. The "Star Wars" project challenges computer scientists to design an intelligent system that finds and destroys targets, essentially in real time and without human intervention.

The interdependence of the military and computer science rarely surfaces during our education as computer practitioners, researchers, and teachers. Where might information concerning these important military applications enter into computer science and AI education? Where do students receive information concerning the important role they may play in weapon systems development? One of our students recently remarked that "as a computer science major, I did not realize the magnitude of the ramifications of advancing technology for the military . . . . In a field so dominated by the DoD, I will have to think seriously about what I am willing and not willing to do—and what lies in between those two poles."3

As researchers and educators, the authors wish to encourage colleagues and students to reflect upon present and historical interactions between computer science as an academic discipline and profession, and military projects and funding. As computer professionals, we lay claim to specialized knowledge and employ that knowledge in society as developers of computing technologies. Thus, we exercise power. Recognizing that as professionals we wield power, we must also recognize that we have responsibilities to society. To act responsibly does not mean that computer professionals should advocate a complete separation between computer science and military missions. However, we should openly examine the interrelationships between the military and the discipline and practice of computing.
To act responsibly does not mean that computer scientists and practitioners should eschew support or employment from the military, although some are justified in taking such a stance.4 To act responsibly requires attention to the social and political context in which one is embedded; it requires reflection upon individual and professional practice; it requires open debate. The lack of attention to issues of responsibility in the typical computer science curriculum strikes us as a grave professional omission.

With this article, we hope to add material to the dialogue on appropriate computing applications and their limits. We also hope to provoke reflection on computing fundamentals and practice at the individual, professional, and disciplinary levels, as well as to prod government institutions, professional societies, and industry to support in-depth research on the issues we raise here. Reflection requires information and discussion. Academic computer science departments rarely support serious consideration of even general issues under the rubric of the social and ethical implications of computing. Unlike any other U.S. computer science department, Information and Computer Science (ICS) at UC Irvine has an active research program in the social implications of computing (Computers, Organizations, Policy and Society, or CORPS). Even within CORPS, research that addresses the interactions between the military and computer science is difficult to pursue, not because individuals aren't interested, but because they are not able to find professional or academic support.

The authors' interests in these issues arose from personal concerns over the dependence of military systems upon complex technology, and the possible grave outcomes of this fragile relationship. CORPS provided a supportive intellectual environment that allowed us to pursue our interests. In 1987, we developed and taught an undergraduate course designed to inform students about military applications and their limits, and to allow dialogue on professional responsibilities. In general, little monetary support is available for research that considers these issues, and it was only through support from the Institute on Global Conflict and Cooperation and campus instructional funds that we were able to develop and teach the course.

Few researchers or educators can devote time and energy to pursuing the social and ethical implications of their work and profession in addition to their "mainstream" research. Since the discipline of computer science does not consider these reflections serious "mainstream" research, those who choose to pursue these vital questions have difficulty finding employment and advancing through the academic ranks. Growing concern over these issues and interest by computer scientists, as evidenced by the group Computer Professionals for Social Responsibility [38], individuals such as David Parnas [39], and this article, may lead to future research support and academic recognition.

For now, as concerned professionals, we offer the following reviews. They pose many more questions than answers. This article exemplifies the interdisciplinary investigations that are required as precursors to serious analysis of computing use in these applications. We hope that our reviews generate discussion and debate. In the first section, we present the course rationale and content, as well as student responses.
In the sections following the course description, we consider three applications (smart weapons, battle management, and war game simulations) that are generating research and development funds and that have controversial implications for military uses of computing. We start with smart weapons, that is, weapons that can destroy targets with minimal human intervention. Next we look at battle management systems designed to coordinate and assess the use of resources and people in warfare. Finally, we turn to war gaming as a means for evaluating weapon performance and strategies for war fighting. In each case, we describe the state of the technology, its current and potential uses, and its implications for the conduct of war.
Research and Advances

The potential of artificial intelligence to help solve the crisis in our legal system

"The laws that govern affluent clients and large institutions are numerous, intricate, and applied by highly sophisticated practitioners. In this section of society, rules proliferate, lawsuits abound, and the cost of legal services grows much faster than the cost of living. For the bulk of the population, however, the situation is very different. Access to the courts may be open in principle. In practice, however, most people find their legal rights severely compromised by the cost of legal services, the baffling complications of existing rules and procedures, and the long, frustrating delays involved in bringing proceedings to a conclusion . . . There is far too much law for those who can afford it and far too little for those who cannot. No one can be satisfied with this state of affairs." (Derek Bok [5])

The American legal system1 is widely viewed as being in a state of crisis, plagued by excessive costs, long delays, and inconsistency, leading to a growing lack of public confidence. One reason for this is the vast amount of information that must be collected and integrated in order for the legal system to function properly. In many traditional areas of law, evolving legal doctrines have led to uncertainty and increased litigation at a high cost to both individuals and society. And in discretionary areas such as sentencing, alimony awards, and welfare administration, evidence has shown a high degree of inconsistency in legal decision making, leading to public dissatisfaction and a growing demand for "determinate" rules.

In this article, we consider the potential of artificial intelligence to contribute to a fairer and more efficient legal system. First, using the example of a middle-income home buyer who was misled by the statements of a real estate broker, we show how a predictive expert system could help each side assess its legal position. If expert systems were reasonably accurate predictors, some disputes that are now resolved by costly litigation could be settled voluntarily, and many others could be settled more quickly. We then consider the process of discretionary decision making, using the example of a judge sentencing a criminal. We describe how diagnostic expert systems developed in the medical domain could be adapted to criminal sentencing, and describe a process by which this technology could be used: first to build a consensus on sentencing norms, and then to make those norms accessible.

In the ideal case, legal decisions are made after lengthy study and debate, recorded in published justifications, and later scrutinized in depth by other legal experts. In contrast to this ideal, most day-to-day legal decisions are made by municipal and state court judges, police officers, prosecuting attorneys, insurance claims adjusters, welfare administrators, social workers, and lawyers advising their clients on whether to settle or litigate. These decisions must often be made under severe pressures of limited time, money, and information. Expert systems can provide decision makers with tools to better understand, evaluate, and disseminate their decisions. At the same time, it is important to reiterate that expert systems should not and cannot replace human judgment in the legal decision making process.
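
The expert systems themselves are not given in this abstract. As a purely illustrative sketch of the predictive kind described for the home-buyer example, the toy Python rules below score a misrepresentation claim; the factors, weights, and threshold are invented for illustration and are not drawn from the article or from any real legal expert system.

    # Toy rule-based predictor in the spirit of the systems described
    # above. Factors, weights, and the 0.5 threshold are hypothetical.
    RULES = [
        ("broker made a false statement of fact", 0.40),
        ("buyer reasonably relied on the statement", 0.30),
        ("buyer suffered measurable financial loss", 0.20),
        ("statement was mere opinion or 'puffing'", -0.35),
    ]

    def assess(facts: set) -> float:
        """Sum the weights of the rules whose factors are present."""
        return sum(weight for factor, weight in RULES if factor in facts)

    facts = {
        "broker made a false statement of fact",
        "buyer reasonably relied on the statement",
        "buyer suffered measurable financial loss",
    }
    score = assess(facts)
    print(f"score {score:.2f}: " + ("strong claim; favor settling" if score > 0.5 else "weak claim"))

A real predictive system would of course ground such rules in case law and validated outcome data; the point here is only the shape of the computation.
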
Research and Advances

Using a relational system on Wall Street: the good, the bad, the ugly, and the ideal

Developers of a Wall Street financial application were able to exploit a relational DBMS to advantage for some data management tasks (the good). For others, the relational system was not helpful (the bad), or could be pressed into service only by means of major or minor contortions (the ugly). The authors identify database constructs that would have simplified developing the application (the ideal).
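
The abstract does not name the constructs involved. One frequently cited "ugly" case for relational systems in financial applications is time-series access, for example an as-of lookup (the latest price at or before a given time), which classic SQL expresses only through a correlated subquery. A minimal Python/SQLite sketch, with a hypothetical schema:

    import sqlite3

    # Hypothetical price table; the as-of lookup below illustrates the
    # kind of contortion the abstract may have in mind, not the authors'
    # actual application.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE price (ticker TEXT, t INTEGER, px REAL)")
    con.executemany("INSERT INTO price VALUES (?, ?, ?)", [
        ("IBM", 1, 120.50), ("IBM", 3, 121.00), ("IBM", 7, 119.75),
    ])

    row = con.execute(
        """SELECT px FROM price
           WHERE ticker = ? AND t = (SELECT MAX(t) FROM price
                                     WHERE ticker = ? AND t <= ?)""",
        ("IBM", "IBM", 5),
    ).fetchone()
    print(row)  # (121.0): latest IBM price at or before t = 5
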
