August 1989 - Vol. 32 No. 8

Features

Research and Advances

Introduction—computing and social responsibilities

The three articles in this special section deal with computing applications that affect how people are treated in their social roles: as litigants, as employees, or as participants or bystanders in warfare. We believe that applications like these obligate their builders to consider their duties, not only to their employers or customers, but to all the people who will be affected when the systems are placed into service. These duties are the social responsibilities of computer professionals. Authors Donald Berman and Carole Hafner address the social responsibility of creating a timely, affordable and just legal system. Jack Beusmans and Kären Wieckert address the social responsibilities of educators, researchers and applications programmers to consider whether their contributions to weapons technology are a positive contribution to society. Richard Ladner addresses responsibilities that society has for people with disabilities.
Research and Advances

The potential of artificial intelligence to help solve the crisis in our legal system

The laws that govern affluent clients and large institutions are numerous, intricate and applied by highly sophisticated practitioners. In this section of society, rules proliferate, lawsuits abound, and the cost of legal services grows much faster than the cost of living. For the bulk of the population, however, the situation is very different. Access to the courts may be open in principle. In practice, however, most people find their legal rights severely compromised by the cost of legal services, the baffling complications of existing rules and procedures, and the long, frustrating delays involved in bringing proceedings to a conclusion . . . There is far too much law for those who can afford it and far too little for those who cannot. No one can be satisfied with this state of affairs.

Derek Bok [5]

The American legal system is widely viewed as being in a state of crisis, plagued by excessive costs, long delays, and inconsistency, leading to a growing lack of public confidence. One reason for this is the vast amount of information that must be collected and integrated in order for the legal system to function properly. In many traditional areas of law, evolving legal doctrines have led to uncertainty and increased litigation at a high cost to both individuals and society. And in discretionary areas such as sentencing, alimony awards, and welfare administration, evidence has shown a high degree of inconsistency in legal decision making, leading to public dissatisfaction and a growing demand for "determinate" rules. In this article, we consider the potential of artificial intelligence to contribute to a fairer and more efficient legal system. First, using the example of a middle income home buyer who was misled by the statements of a real estate broker, we show how a predictive expert system could help each side assess its legal position.
If expert systems were reasonably accurate predictors, some disputes could be voluntarily settled that are now resolved by costly litigation, and many others could be settled more quickly. We then consider the process of discretionary decision making, using the example of a judge sentencing a criminal. We describe how diagnostic expert systems developed in the medical domain could be adapted to criminal sentencing, and describe a process by which this technology could be used—first to build a consensus on sentencing norms, and then to make those norms accessible. In the ideal case, legal decisions are made after lengthy study and debate, recorded in published justifications, and later scrutinized in depth by other legal experts. In contrast to this ideal, most day-to-day legal decisions are made by municipal and state court judges, police officers, prosecuting attorneys, insurance claims adjusters, welfare administrators, social workers, and lawyers advising their clients on whether to settle or litigate. These decisions must often be made under severe pressures of limited time, money, and information. Expert systems can provide decision makers with tools to better understand, evaluate and disseminate their decisions. At the same time, it is important to reiterate that expert systems should not and cannot replace human judgement in the legal decision making process.
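The predictive expert system described above can be hinted at, in modern terms, as a rule-based scorer over the facts of a case. The sketch below is purely illustrative: the factor names, weights, and scoring scheme are invented for this example and are not taken from the authors' system.

```python
# Minimal sketch of a rule-based predictor for a buyer's
# misrepresentation claim against a real estate broker.
# All factors and weights here are hypothetical illustrations.

def predict_claim(facts):
    """Return a crude 0..1 score for the likelihood of prevailing.

    `facts` maps factor names to booleans; the score is a weighted
    sum over the factors that are present.
    """
    rules = [
        ("broker_made_false_statement", 0.4),
        ("buyer_relied_on_statement",   0.3),
        ("buyer_suffered_damages",      0.2),
        ("statement_was_material",      0.1),
    ]
    return sum(weight for name, weight in rules if facts.get(name))

facts = {
    "broker_made_false_statement": True,
    "buyer_relied_on_statement": True,
    "buyer_suffered_damages": True,
    "statement_was_material": False,
}
print(round(predict_claim(facts), 2))  # 0.9
```

Even a toy scorer like this makes the article's point concrete: if both sides can run the same predictor over agreed facts, the expected outcome becomes common knowledge, which is what would enable early voluntary settlement.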
Research and Advances

Computing, research, and war: if knowledge is power, where is responsibility?

In the United States, artificial intelligence (AI) research is mainly a story about military support for the development of promising technologies. Since the late 1950s and early 1960s, AI research has received most of its support from the military research establishment [37, 55].1 Not until the 1980s, however, has the military connected this research to specific objectives and products. In 1983, the $600-million Strategic Computing Program (SCP) created three applications for "'pulling' the technology-generation process by creating carefully selected technology interactions with challenging military applications" [16]. These applications, an autonomous land vehicle, a pilot's associate, and a battle management system, explicitly connect the three armed services to further AI developments [29, 51, 53]. The Defense Science Board Task Force on the "Military Applications of New-Generation Computer Technologies" recommended warfare simulation, electronic warfare, ballistic missile defense and logistics management as also promising a high military payoff [18]. In his 1983 "Star Wars" speech, President Reagan enjoined "the scientific community, . . . those who gave us nuclear weapons, . . . to give us the means of rendering these nuclear weapons impotent and obsolete" [43]. As in the Manhattan and hydrogen bomb projects, AI researchers and more generally computer scientists are expected to play major parts in this quest for a defensive shield against ballistic missiles. Computing specialists such as John von Neumann played a supportive role by setting up the computations necessary for these engineering feats—with human "computers" for the atom bomb [10]2 and with ENIAC and other early computers for the hydrogen bomb [9]. The "Star Wars" project challenges computer scientists to design an intelligent system that finds and destroys targets—basically in real-time and without human intervention.
The interdependence of the military and computer science rarely surfaces during our education as computer practitioners, researchers, and teachers. Where might information concerning these important military applications enter into computer science and AI education? Where do students receive information concerning the important role they may play in weapon systems development? One of our students recently remarked that "as a computer science major, I did not realize the magnitude of the ramifications of advancing technology for the military . . . . In a field so dominated by the DoD, I will have to think seriously about what I am willing and not willing to do—and what lies in between those two poles."3 As researchers and educators, the authors wish to encourage colleagues and students to reflect upon present and historical interactions between computer science as an academic discipline and profession, and military projects and funding. As computer professionals, we lay claim to specialized knowledge and employ that knowledge in society as developers of computing technologies. Thus, we exercise power. Recognizing that as professionals we wield power, we must also recognize that we have responsibilities to society. To act responsibly does not mean that computer professionals should advocate a complete separation between computer science and military missions. However, we should openly examine the inter-relationships between the military and the discipline and practice of computing. To act responsibly does not mean that computer scientists and practitioners should eschew support or employment from the military, although some are justified in taking such a stance.4 To act responsibly requires attention to the social and political context in which one is embedded; it requires reflection upon individual and professional practice; it requires open debate.
The lack of attention to issues of responsibility in the typical computer science curriculum strikes us as a grave professional omission. With this article, we hope to add material to the dialogue on appropriate computing applications and their limits. We also hope to provoke reflections on computing fundamentals and practice at the individual, professional, and disciplinary levels, as well as prodding government institutions, professional societies, and industry to support in-depth research on the issues we raise here. Reflection requires information and discussion. Academic computer science departments rarely support serious consideration of even general issues under the rubric of the social and ethical implications of computing. Unlike any other U.S. computer science department, Information and Computer Science (ICS) at UC Irvine has an active research program in the social implications of computing (Computers, Organizations, Policy and Society—CORPS). Even within CORPS, research that addresses the interactions between the military and computer science is difficult to pursue—not because individuals aren't interested, but because they are not able to find professional or academic support. The authors' interests in these issues arose from personal concerns over the dependence of military systems upon complex technology, and the possible grave outcomes of this fragile relationship. CORPS provided a supportive intellectual environment that allowed us to pursue our interests. In 1987, we developed and taught an undergraduate course designed to inform students about military applications and their limits, and allow dialogue on professional responsibilities. In general, little monetary support is available for research that considers these issues, and it is only through support from the Institute on Global Conflict and Cooperation and campus instructional funds that we were able to develop and teach the course. 
Few researchers or educators can devote time and/or energy to pursue the social and ethical implications of their work and profession, in addition to their "mainstream" research. Since the discipline of computer science does not consider these reflections serious "mainstream" research, those who choose to pursue these vital questions have difficulties finding employment and/or advancing through the academic ranks. Growing concern over these issues and interest by computer scientists, as evidenced by the group Computer Professionals for Social Responsibility [38], individuals such as David Parnas [39], and this article, may lead to future research support and academic recognition. For now, as concerned professionals, we offer the following reviews. They pose many more questions than answers. This article exemplifies the interdisciplinary investigations which are required as precursors to serious analysis of computing use in these applications. We hope that our reviews generate discussion and debate. In the first section, we present the course rationale and content, as well as student responses. In the sections following the course description, we consider three applications—smart weapons, battle management, and war game simulations—that are generating research and development funds and that have controversial implications for military uses of computing. We start with smart weapons, that is, the development of weapons that can destroy targets with minimal human intervention. Next we look at battle management systems designed to coordinate and assess the use of resources and people in warfare. Finally, we turn to war gaming as a means for evaluating weapon performance and strategies for war fighting. In each case, we describe the state of technology, its current and potential uses and its implications for the conduct of war.
Research and Advances

Computer accessibility for federal workers with disabilities: it’s the law

In 1986, Congress passed Public Law 99-506, the "Rehabilitation Act Amendments of 1986." This law, amending the famous Rehabilitation Act of 1973, contains a small section, titled "Electronic Equipment Accessibility," Section 508, which may have significant impact on the design of computer systems and their accessibility by workers with disabilities. The bill became law when it was signed by former President Reagan on October 21, 1986. The purpose of this article is to inform concerned computer professionals of Section 508, outline the guidelines and regulations pursuant to the law, describe some of the reaction to the guidelines and regulations, and describe some of the challenges for the future in meeting the computer accessibility needs of users with disabilities. Section 508 was developed because it was realized that government offices were rapidly changing into electronic offices with microcomputers on every desk. In order for persons with disabilities to keep their jobs or gain new employment in the government, Congress decided it was necessary to make provisions to guarantee accessibility to microcomputers and other electronic office equipment. The driving principle behind Section 508 can be found in Section 504 of the Rehabilitation Act of 1973 which states: No otherwise qualified handicapped individual in the United States . . . shall, solely by reason of his handicap, be excluded from the participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving Federal financial assistance. It should be stated off the top that the scope of Section 508 is not as broad as Section 504. In particular, Section 508 only applies to direct purchases by the federal government and not to purchases made by all programs receiving government funding. Section 508 does not specify what the guidelines should be nor does it delineate a philosophy on which to base the guidelines. 
A committee established by the National Institute on Disability and Rehabilitation Research (NIDRR) and the General Services Administration (GSA), in consultation with the electronics industry, rehabilitation engineers, and disabled computer professionals, worked for a year developing the philosophy and guidelines which will significantly affect the purchase of electronic office equipment, including computers and software, by the federal government, the largest computer customer in the world.
Research and Advances

Using a relational system on Wall Street: the good, the bad, the ugly, and the ideal

Developers of a Wall Street financial application were able to exploit a relational DBMS to advantage for some data management tasks (the good). For others, the relational system was not helpful (the bad), or could be pressed into service only by means of major or minor contortions (the ugly). The authors identify database constructs that would have simplified developing the application (the ideal).
Research and Advances

A graphics interface for linear programming

We describe the interface to a software system that assists users in the process of formulating linear programming models. The main idea is to introduce a new representation that allows modelers to depict their problems in a graphical rather than mathematical form. This representation is described in detail together with a number of other interface design principles that we believe will aid modelers—including hierarchical decomposition, multiple model representations, alternative formulation approaches, the use of model templates, and database and model management facilities. These features are illustrated using the output of a prototype system formulating a realistic LP problem.
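The abstract's central idea, letting modelers build a structured representation that the system then turns into conventional mathematical form, can be illustrated with a small sketch. The product-mix problem, the dictionary layout, and all names below are invented for this example and do not come from the prototype system described in the article.

```python
# Hypothetical structured form of a linear program, of the kind a
# graphical modeling front-end might maintain internally, plus a
# routine rendering it in conventional algebraic notation.

lp_model = {
    "objective": {"sense": "max", "coeffs": {"chairs": 30, "tables": 50}},
    "constraints": [
        {"name": "wood",  "coeffs": {"chairs": 2, "tables": 4}, "op": "<=", "rhs": 40},
        {"name": "labor", "coeffs": {"chairs": 1, "tables": 2}, "op": "<=", "rhs": 18},
    ],
    "bounds": {"chairs": (0, None), "tables": (0, None)},  # nonnegative vars
}

def to_algebra(model):
    """Render the structured model as conventional algebraic notation."""
    obj = " + ".join(f"{c}*{v}" for v, c in model["objective"]["coeffs"].items())
    lines = [f'{model["objective"]["sense"]} {obj}']
    for con in model["constraints"]:
        lhs = " + ".join(f"{c}*{v}" for v, c in con["coeffs"].items())
        lines.append(f'{con["name"]}: {lhs} {con["op"]} {con["rhs"]}')
    return "\n".join(lines)

print(to_algebra(lp_model))
```

The point of such a separation is the one the abstract makes: the modeler manipulates the structured (or graphical) form, while the mathematical statement of the LP is generated mechanically from it, so multiple representations of the same model can coexist.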
