Viewpoint

Designing AI Systems that Obey Our Laws and Values

Calling for research on automatic oversight for artificial intelligence systems.

Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge and to responding to the potential risks associated with increasingly autonomous AI systems.a These AI oversight systems serve to verify that operational systems do not stray unduly from the guidelines set by their programmers, and to bring them back into compliance if they do stray. The introduction of such second-order, oversight systems is not meant to suggest strict, powerful, or rigid (from here on 'strong') controls. Operational systems need a great degree of latitude in order to apply what they learn from additional data mining and experience, and to be able to render at least semi-autonomous decisions (more about this later). However, all operational systems need some boundaries, both to avoid violating the law and to adhere to ethical norms. Developing such oversight systems, AI Guardians, is a major new mission for the AI community.

All societies throughout history have had oversight systems. Workers have supervisors; businesses have accountants; schoolteachers have principals. That is, all these systems have hierarchies in the sense that the first line operators are subject to oversight by a second layer and are expected to respond to corrective signals from the overseers. (These, in turn, are expected to take into account suggestions or even demands by the first line to change their modes of oversight). John Perry Barlow, in his famous "Declaration of the Independence of Cyberspace" in 1996, described the burgeoning online world as one that would be governed by a social contract formed among its users.b


Terra Incognita

AI systems not only need some kind of oversight, but this oversight must be provided, at least in part, not by mortals but by a new kind of AI system: oversight systems. AI needs to be guided by AI.c

One reason is that AI operational systems are learning systems. These systems do not stop collecting data once they are launched; instead, continued data mining and experience are used to improve their performance. These AI systems may hence stray considerably from the guidelines their programmers initially gave them. But no mortal can monitor these changes, let alone in real time, and determine whether they are legal and ethical.

Second, AI systems are becoming highly opaque, "black boxes" to human beings. Jenna Burrell from the School of Information at UC-Berkeley distinguishes three ways that algorithms become opaque: intentional opacity, for example with proprietary algorithms that a government or corporation wants to keep secret; technical illiteracy, where the complexity and function of algorithms is beyond the public's comprehension; and scale of application, where machine learning and/or the number of different programmers involved renders an algorithm opaque even to the programmers themselves.1


Finally, AI-guided systems have increasing autonomy in the sense that they make numerous choices "on their own."5 That is, these instruments, using complex algorithms, respond to environmental inputs independently.6 They may even act in defiance of the guidelines the original programmers installed. A simple example is automatic emergency braking systems,3 which stop cars without human input in response to perceived dangers.7 Consumers complain of many false alarms, of sudden stops that are dangerous to other cars,4 and of brakes that force cars to proceed in a straight line even when the driver tries to steer them elsewhere.

For all these reasons, AI oversight systems are needed. We call them AI Guardians. A simple dictionary definition of a guardian is: "a person who guards, protects, or preserves."d This definition captures well the thesis that oversight systems need not be strong, which would inhibit the innovative and creative development of operational AI systems, but cannot be avoided. Indeed, a major mission for AI is to develop such AI oversight systems in the near future. We next describe the different kinds of AI Guardians; later we turn to whose duty it is to develop these oversight systems, to whom they are to report their findings, and whose values they are to heed.


Different Kinds of AI Guardians

Interrogator: After a series of crashes of drones manufactured by one corporation, another corporation that purchased several hundred of those drones is likely to try to determine the cause of the crashes. Were they intentional (for example, caused by workers opposed to the use of drones)? Unwitting flaws in the design of the particular brand of drones? Flaws in the AI operational system that serves as the drone's 'brain'? For the reasons already discussed, no human agent is able to provide a definitive answer to these questions. One would need to design and employ an interrogator AI system to answer them.

In recent years, several incidents show the need for such interrogation. In 2015, a team of researchers from Carnegie Mellon University and the International Computer Science Institute found that Google was more likely to display ads for high-paying executive jobs to users that its algorithm believed to be men than to women.e Google stated that there was no intentional discrimination but that the effect was due to advertisers’ preferences.f

In 2014, Facebook conducted a study unbeknownst to its users wherein its algorithms manipulated users' posts to remove "emotional content" in order to gauge reactions from the posters' friends.g Facebook later apologized for not informing its users about the experiment. Twitter recently deleted 125,000 accounts, stating that these included only accounts linked to the Islamic State. If a committee of the board of these corporations or an outside group sought to verify these various claims, they would need an AI interrogator.

Auditor: Wendell Wallach, a scholar at Yale's Interdisciplinary Center for Bioethics, points out that "in hospitals, APACHE medical systems help determine the best treatments for patients in intensive care units—often those who are at the edge of death." Though the doctor may seem to have autonomy, Wallach adds, "it could be very difficult in certain situations to go against the machine—particularly in a litigious society."h Hospitals are sure to seek audits of such decisions, and they cannot do so without an AI auditing system.

Monitor: Because self-driving cars are programmed to learn and change, they need a particular kind of AI Guardian program—an AI Monitor—to come along for the ride to ensure the autonomous car's learning does not lead it to violate the law, for example, by learning from the fact that old-fashioned cars violate the speed limit and emulating this behavior.
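
To make the monitor idea concrete, the following is a minimal sketch (in Python) of a guardian that bounds what a learning driving policy may do; the class, the legal limit value, and the override hook are hypothetical illustrations, not drawn from any actual vehicle software.

```python
class SpeedMonitor:
    """AI Guardian sketch: keeps a learning driving policy within the legal speed limit."""

    def __init__(self, legal_limit_mph):
        self.legal_limit_mph = legal_limit_mph

    def review(self, proposed_speed_mph):
        """Return a lawful speed, clamping any proposal that exceeds the limit.

        The operational system may have 'learned' higher speeds from watching
        other cars; the monitor does not retrain it, it only bounds its output.
        """
        return min(proposed_speed_mph, self.legal_limit_mph)


# Hypothetical use: the driving policy proposes 78 mph on a 65-mph road.
monitor = SpeedMonitor(legal_limit_mph=65)
safe_speed = monitor.review(proposed_speed_mph=78)  # -> 65
```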

Enforcer: In rare situations, an AI Guardian may help enforce a regulation or law. For instance, if the computers of a military contractor are repeatedly hacked, an AI enforcer may alert the contractor that it needs to shore up its cyber defenses. If such alerts are ignored, the AI enforcer's task will be to alert the contractor's 'clients' or to suspend its clearance.
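
A toy sketch of such escalation logic, with invented thresholds and responses, might look like this:

```python
class Enforcer:
    """AI Guardian sketch: escalates when repeated security failures go unaddressed."""

    def __init__(self, warn_after=3, escalate_after=5):
        self.breaches = 0
        self.warn_after = warn_after          # hypothetical threshold
        self.escalate_after = escalate_after  # hypothetical threshold

    def record_breach(self):
        """Register one confirmed intrusion and return the recommended action."""
        self.breaches += 1
        if self.breaches >= self.escalate_after:
            return "notify the contractor's clients or suspend its clearance"
        if self.breaches >= self.warn_after:
            return "alert the contractor to shore up its cyber defenses"
        return "log the incident only"
```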

Ethics bots: AI operational systems must not only abide by the law but also heed the moral norms of society. Thus, driverless cars need to be told whether they should drive at whatever speed the law allows, or in ways that conserve fuel to help protect the environment, or stay in the slower lanes if children are in the car, and whether they should wake up a passenger in the back seat if they "see" an accident.

Several ideas have been suggested as to where AI systems may get their ethical bearings. In a previous publication, we showed that asking each user of these instruments to input his or her ethical preferences is impractical, and that drawing on what the community holds as ethical is equally problematic. We suggested that instead one might draw on ethics bots.2

An ethics bot is an AI program that analyzes many thousands of items of information, not only information publicly available on the Internet but also information gleaned from a person's own computers, about the acts of a particular individual that reveal that person's moral preferences, and then uses these preferences to guide AI operational systems (for instruments used by individuals, such as driverless cars).

Essentially, what ethics bots do for moral choices is similar to what AI programs do when they ferret out consumer preferences and target advertising accordingly.i In this case, though, the bots are used to guide instruments that are owned and operated by the person, in line with that person's values, rather than by some marketing company or political campaign. For instance, such an ethics bot may instruct a person's financial program to invest only in socially responsible corporations, and in particular green ones, and to make an annual donation to the Sierra Club, based on the bot's reading of the person's past behavior.
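
As a rough illustration only, the sketch below shows one way such an ethics bot could be structured; the preference labels, the way past acts are counted, and the investment-screening rule are all invented for this example.

```python
from collections import Counter

class EthicsBot:
    """Sketch: infers a person's moral preferences from past acts and applies them."""

    def __init__(self, past_acts):
        # past_acts might look like ["donated_to_sierra_club", "bought_hybrid_car", ...]
        self.preferences = Counter(past_acts)

    def prefers(self, value, threshold=2):
        """Treat a value as a preference once it recurs often enough in past behavior."""
        return self.preferences[value] >= threshold

    def screen_investments(self, candidates):
        """Keep only candidate investments consistent with the inferred preferences."""
        if self.prefers("environmental_giving"):
            return [c for c in candidates if c.get("green")]
        return candidates


# Hypothetical use: guide a personal investment program.
bot = EthicsBot(["environmental_giving", "environmental_giving", "volunteering"])
picks = bot.screen_investments([
    {"name": "WindCo", "green": True},
    {"name": "CoalCo", "green": False},
])  # -> only WindCo
```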

In short, there is no reason for the digital world to become nearly as hierarchical as the non-digital one. However, the growing AI realm is overdue for some level of guidance to ensure AI operational systems will act legally and observe the moral values of those who own and operate them.

It is not necessarily the case that AI guardians are more intelligent than the systems they oversee. Rather, the guardians need to be sufficiently capable and intelligent that they are not outwitted or short-circuited by the systems they are overseeing. Consider, for example, an electrical circuit breaker in a home: it is far less sophisticated than the full electrical system (and associated appliances) but it is quite reliable, and can be "tripped" by a person in an emergency.

AI researchers can work toward this vision in at least three ways. First, they can attempt to formalize our laws and values, following an approach akin to the work on formalizing the notion of "harm."8 Second, they can build datasets of ethical and legal conundrums labeled with the desired outcomes, and provide these as grist for machine learning algorithms. Finally, they can build "AI operating systems" that facilitate off switches, as in the work on "safely interruptible agents" in reinforcement learning.j Our main point is that we need to put AI Guardians on the research agenda for the field.
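
The off-switch direction, in particular, lends itself to a simple sketch: a guardian wrapper that can interrupt an operational agent at any step. The agent interface and the behavior on interruption below are assumptions made for illustration, not the mechanism of the cited work on safely interruptible agents.

```python
class GuardedAgent:
    """Sketch: wraps an operational AI agent with a human-controllable off switch."""

    def __init__(self, agent):
        self.agent = agent
        self.interrupted = False

    def interrupt(self):
        """Trip the switch, much like a household circuit breaker."""
        self.interrupted = True

    def step(self, observation):
        """Run one step of the operational agent unless it has been interrupted."""
        if self.interrupted:
            return None  # take no action; control reverts to humans
        return self.agent.act(observation)  # 'act' is an assumed agent method
```

The point of the sketch is the architecture rather than the code: the wrapper sits between the agent and the world, so the agent cannot act around it.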


Who Will Guard the AI Guardians?

There are two parts to this question. One aspect concerns who will decide which AI oversight systems will be mobilized to keep the operational ones in check. Some oversight systems will be introduced by the programmers of the software involved, at the behest of the owners and users of the particular technologies. For example, those who manufacture driverless cars and those who use them will seek to ensure that their cars do not learn to speed ever more; this is a concern because the cars' operational systems (which, to reiterate, are learning systems) will note that many traditional cars on the road violate the speed limits. Other AI oversight systems will be employed by courts and law enforcement authorities, for instance to determine who or what is liable for accidents and whether there was intent.

Ethics bots are a unique AI Guardian from this perspective. They are to heed the values of the user, not those of the owner, the programmer, or the government. This point calls for some explanation. Communities have two kinds of social and moral values. One kind includes values the community holds to be of particular importance, whose implementation hence cannot be left to individual choice; heeding them is enforced by coercive means, by the law. These values include bans on murder, rape, theft, and so on. In the AI world, heeding these is the subject of the variety of AI Guardians outlined earlier. The second kind concerns moral choices the community holds can be left to each person to decide whether or not to follow. These include whether to donate an organ, give to charity, volunteer, and so on. In the AI world, these are implemented by ethics bots.

The question of who will guard the guardians arises. Humans should have the ultimate say about the roles and actions of both the AI operational and AI oversight systems; indeed, all these systems should have an on/off switch. None of them should be completely autonomous. Ultimately, however smart a technology may become, it is still a tool to serve human purposes. Given that those who build and employ these technologies are to be held responsible for their programming and use, these same people should serve as the ultimate authority over the design, operation, and oversight of AI.

References

    1. Burrell, J. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016).

    2. Etzioni, A. and Etzioni, O. AI assisted ethics. Ethics and Information Technology 18, 2 (2016), 149–156; http://bit.ly/28Yymx0

    3. Knapman, C. Auto-braking: A quantum leap for road safety. The Telegraph, (Aug. 14, 2012); http://bit.ly/2917jog.

    4. Limer, E. Automatic brakes are stopping for no good reason. Popular Mechanics, (June 19, 2015); http://bit.ly/28XVSxP.

    5. Mayer-Schönberger, V. and Cukier, K. Big Data: A Revolution That Will Transform How We Live, Work, and Think. 2014, 16–17.

    6. New algorithm lets autonomous robots divvy up assembly tasks on the fly. Science Daily, (May 27, 2015); http://bit.ly/1FFCIjX.

    7. Phelan, M. Automatic braking coming, but not all systems are equal. Detroit Free Press, (Jan. 1, 2016); http://on.freep.com/2917nnZ.

    8. Weld, D. and Etzioni, O. The First Law of Robotics (a call to arms). In Proceedings of AAAI '94. AAAI, 1994; http://bit.ly/292kpSK
