
Communications of the ACM

Viewpoint

Designing AI Systems that Obey Our Laws and Values



Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge, and to responding to the potential risks associated with increasingly autonomous AI systems.a These oversight systems serve to verify that operational systems do not stray unduly from the guidelines of their programmers, and to bring them back into compliance if they do stray. The introduction of such second-order oversight systems is not meant to suggest strict, powerful, or rigid (from here on, 'strong') controls. Operational systems need a great degree of latitude in order to follow the lessons of their learning from additional data mining and experience, and to be able to render at least semi-autonomous decisions (more about this later). However, all operational systems need some boundaries, both to avoid violating the law and to adhere to ethical norms. Developing such oversight systems, AI Guardians, is a major new mission for the AI community.

All societies throughout history have had oversight systems. Workers have supervisors; businesses have accountants; schoolteachers have principals. That is, all these systems are hierarchical in the sense that first-line operators are subject to oversight by a second layer and are expected to respond to corrective signals from the overseers. (These overseers, in turn, are expected to take into account suggestions or even demands by the first line to change their modes of oversight.) John Perry Barlow, in his famous 1996 "Declaration of the Independence of Cyberspace," described the burgeoning online world as one that would be governed by a social contract formed among its users.b


Terra Incognita

AI systems not only need some kind of oversight, but this oversight must be provided, at least in part, not by mortals but by a new kind of AI system: the oversight ones. AI needs to be guided by AI.c

One reason is that AI operational systems are learning systems. These systems do not stop collecting data once they are launched; instead, continued data mining and experience are used to improve their performance. These AI systems may hence stray considerably from the guidelines their programmers initially gave them. But no mortal can monitor these changes, let alone in real time, and determine whether they are legal and ethical.

Second, AI systems are becoming highly opaque, "black boxes" to human beings. Jenna Burrell from the School of Information at UC-Berkeley distinguishes three ways that algorithms become opaque: Intentional opacity, for example with proprietary algorithms that a government or corporation wants to keep secret; Technical illiteracy, where the complexity and function of algorithms is beyond the public's comprehension; and Scale of application, where either "machine learning" and/or the number of different programmers involved renders an algorithm opaque even to the programmers.1


Finally, AI-guided systems have increasing autonomy in the sense they make numerous choices "on their own."5 That is, these instruments, using complex algorithms, respond to environmental inputs independently.6 They may even act in defiance of the guidelines the original programmers installed. A simple example is automatic emergency braking systems,3 which stop cars without human input in response to perceived dangers.7 Consumers complain of many false alarms, sudden stops that are dangerous to other cars,4 and that these brakes force cars to proceed in a straight line even if the driver tries to steer them elsewhere.

For all these reasons, AI oversight systems are needed. We call them AI Guardians. A simple dictionary definition of a guardian is "a person who guards, protects, or preserves."d This definition captures well the thesis that oversight systems need not be strong (strong oversight would inhibit the innovative and creative development of operational AI systems) but cannot be avoided. Indeed, a major mission for AI is to develop such AI oversight systems in the near future. We next describe whose duty it is to develop these oversight systems, to whom they are to report their findings, and whose values they are to heed.


Different Kinds of AI Guardians

Interrogator: After a series of crashes of drones manufactured by one corporation, another corporation that purchased several hundred drones is likely to try to determine the cause of the crashes. Were they intentional (for example, caused by workers opposed to the use of drones)? Unwitting flaws in the design of the particular brand of drones? Flaws in the AI operational system that serves as the drone's 'brain'? For reasons already discussed, no human agent is able to provide a definitive answer to these questions. One would need to design and employ an interrogator AI system to answer them.

In recent years, several incidents show the need for such interrogation. In 2015, a team of researchers from Carnegie Mellon University and the International Computer Science Institute found that Google was more likely to display ads for high-paying executive jobs to users that its algorithm believed to be men than to women.e Google stated that there was no intentional discrimination but that the effect was due to advertisers' preferences.f

In 2014, Facebook conducted a study unbeknownst to its users wherein its algorithms manipulated users' posts to remove "emotional content" in order to gauge reactions from the posters' friends.g Facebook later apologized for not informing its users about the experiment. Twitter recently deleted 125,000 accounts, stating that these included only accounts that were linked to the Islamic State. If a committee of the board of these corporations, or an outside group, sought to verify these various claims, they would need an AI monitoring system.

Auditor: Wendell Wallach, a scholar at Yale's Interdisciplinary Center for Bioethics, points out that "in hospitals, APACHE medical systems help determine the best treatments for patients in intensive care units," often those who are at the edge of death. Wallach adds that, though the doctor may seem to have autonomy, "it could be very difficult in certain situations to go against the machine, particularly in a litigious society."h Hospitals are sure to seek audits of such decisions, and they cannot do so without an AI auditing system.

Monitor: Because self-driving cars are programmed to learn and change, they need a particular kind of AI Guardian program, an AI Monitor, to come along for the ride to ensure the autonomous car's learning does not lead it to violate the law, for example, learning from the fact that old-fashioned cars violate the speed limit and emulating this behavior.
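The monitor's core check can be sketched in a few lines. The function name, the speed limit, and the assumption that the car's operational system proposes a speed for review are all hypothetical, chosen only to illustrate a guardian overriding learned behavior:

```python
# Hypothetical sketch of an AI Monitor riding along with a learning
# operational system. Names and limits are illustrative, not a real API.

SPEED_LIMIT_KPH = 100  # legal bound the monitor enforces

def monitor_speed(proposed_speed_kph: float) -> float:
    """Clamp the operational system's proposed speed to the legal limit,
    even if its learned policy (e.g., imitating other cars) suggests more."""
    if proposed_speed_kph > SPEED_LIMIT_KPH:
        # Corrective signal: override the learned behavior.
        return SPEED_LIMIT_KPH
    return proposed_speed_kph
```

The monitor never plans routes or drives; it only bounds the first-line system's output, mirroring the division of labor between operational and oversight systems described here.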

Enforcer: In rare situations, an AI Guardian may help enforce a regulation or law. For instance, if the computers of a military contractor are repeatedly hacked, an AI enforcer may alert the contractor that it needs to shore up its cyber defenses. If such alerts are ignored, the AI enforcer's task will be to alert the contractor's clients or to suspend its clearance.

Ethics bots: AI operational systems must not only abide by the law but also heed the moral norms of society. Thus, driverless cars need to be told whether they should drive at whatever speed the law allows, or in ways that conserve fuel to help protect the environment, or stay in the slower lanes if children are in the car. And whether they should wake up a passenger in the back seat if they "see" an accident.

Several ideas have been suggested as to where AI systems may get their ethical bearings. In a previous publication, we showed that asking each user of these instruments to input his or her ethical preferences is impractical, and that drawing on what the community holds as ethical is equally problematic. We suggested that instead one might draw on ethics bots.2

An ethics bot is an AI program that analyzes many thousands of items of information (not only information publicly available on the Internet but also information gleaned from a person's own computers) about the acts of a particular individual that reveal that person's moral preferences. It then uses these preferences to guide AI operational systems, for instruments used by individuals, such as driverless cars.

Essentially, what ethics bots do for moral choices is similar to what AI programs do when they ferret out consumer preferences and target advertising accordingly.i In this case, though, the bots are used to guide instruments that are owned and operated by the person, in line with that person's values, rather than by some marketing company or political campaign. For instance, such an ethics bot may instruct a person's financial program to invest only in socially responsible corporations, in particular green ones, and to make an annual donation to the Sierra Club, based on the bot's reading of the person's past behavior.
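A toy sketch of this two-step pattern (infer a preference from past acts, then let it guide a choice) follows. The acts, the "green" label, and the threshold are invented for illustration and stand in for the large-scale analysis an actual ethics bot would perform:

```python
# Illustrative ethics-bot sketch: infer a moral preference from records of
# a person's past acts, then use it to guide an operational system's choice.
# The data, labels, and threshold are hypothetical.

past_acts = [
    {"act": "donation", "target": "Sierra Club"},
    {"act": "purchase", "target": "solar panels"},
    {"act": "donation", "target": "Sierra Club"},
]

def infers_green_preference(acts, threshold=2):
    """Count environmentally oriented acts; at or above the threshold,
    treat 'green' as a revealed moral preference of this person."""
    green = sum(1 for a in acts if a["target"] in ("Sierra Club", "solar panels"))
    return green >= threshold

def pick_investment(acts, candidates):
    """Guide a financial program: prefer green corporations when the bot
    has inferred a green preference from past behavior."""
    if infers_green_preference(acts):
        green_picks = [c for c in candidates if c["green"]]
        if green_picks:
            return green_picks[0]["name"]
    return candidates[0]["name"]
```

The key design point is that the preference is mined from the person's own record rather than dictated by the programmer, which is what distinguishes ethics bots from the other guardians described here.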

In short, there is no reason for the digital world to become nearly as hierarchical as the non-digital one. However, the growing AI realm is overdue for some level of guidance to ensure AI operational systems will act legally and observe the moral values of those who own and operate them.

It is not necessarily the case that AI guardians are more intelligent than the systems they oversee. Rather, the guardians need to be sufficiently capable and intelligent that they are not outwitted or short-circuited by the systems they are overseeing. Consider, for example, an electrical circuit breaker in a home: it is far less sophisticated than the full electrical system (and associated appliances) but it is quite reliable, and can be "tripped" by a person in an emergency.
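The circuit-breaker analogy can be made concrete with a deliberately simple sketch. The class name, anomaly threshold, and trip logic are hypothetical; the point is only that a guardian can be far less sophisticated than what it oversees yet still reliably trip, automatically or at a person's command:

```python
# Circuit-breaker-style guardian sketch: far simpler than the system it
# oversees, but reliable, and trippable by a person. Names are hypothetical.

class GuardianBreaker:
    def __init__(self, max_anomalies: int = 3):
        self.anomalies = 0
        self.max_anomalies = max_anomalies
        self.tripped = False

    def report_anomaly(self) -> None:
        """Record one anomalous act by the overseen system; trip the
        breaker automatically once the threshold is reached."""
        self.anomalies += 1
        if self.anomalies >= self.max_anomalies:
            self.tripped = True

    def manual_trip(self) -> None:
        """A person can always flip the switch, as with a home breaker."""
        self.tripped = True

    def allows(self) -> bool:
        """The overseen system may act only while the breaker holds."""
        return not self.tripped
```

Like a household breaker, this guardian knows nothing about the appliances behind it; it only cuts power when the pattern of faults crosses a simple line.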

AI researchers can work toward this vision in at least three ways. First, they can attempt to formalize our laws and values following an approach akin to that outlined in the work on formalizing the notion of "harm."8 Second, researchers can build datasets of ethical and legal conundrums labeled with desired outcomes, and provide these as grist for machine learning algorithms. Finally, researchers can build "AI operating systems" that facilitate off switches, as in the work on "safely interruptible agents" in reinforcement learning.j Our main point is that we need to put AI guardians on the research agenda for the field.
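The second direction, labeled datasets of conundrums, might look like the following minimal sketch. The features, labels, and the trivial nearest-neighbor stand-in for a real learning algorithm are all invented for illustration:

```python
# Sketch of a labeled dataset of (scenario features, desired outcome) pairs
# as grist for learning. A one-nearest-neighbor lookup stands in for a real
# ML algorithm; the scenarios and labels are invented.

# Features: (speeding?, harm to others?, consent obtained?)
dataset = [
    ((1, 0, 0), "permitted"),
    ((1, 1, 0), "forbidden"),
    ((0, 1, 0), "forbidden"),
    ((0, 0, 1), "permitted"),
]

def predict(features):
    """Return the desired outcome of the closest labeled conundrum,
    measured by Hamming distance over the feature tuple."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(dataset, key=lambda ex: dist(ex[0], features))[1]
```

A real effort would need far richer scenario encodings and far more labeled cases, but the workflow (curate labeled conundrums, then generalize from them) is the one proposed here.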


Who Will Guard the AI Guardians?

There are two parts to this question. One aspect concerns who will decide which AI oversight systems will be mobilized to keep the operational ones in check. Some oversight systems will be introduced by the programmers of the software involved, at the behest of the owners and users of the particular technologies. For example, those who manufacture driverless cars and those who use them will seek to ensure that their cars do not learn to speed. This is a concern because the cars' operational systems (which, to reiterate, are learning systems) will note that many traditional cars on the road violate the speed limits. Other AI oversight systems will be employed by courts and law enforcement authorities, for instance, to determine who or what is liable for accidents, and whether or not there was intent.

Ethics bots are a unique AI Guardian from this perspective. They are to heed the values of the user, not those of the owner, the programmer, or the government. This point calls for some explanation. Communities have two kinds of social and moral values. One kind includes values the community holds to be of particular importance, so that their implementation cannot be left to individual choice; heeding them is hence enforced by coercive means, by the law. These values include bans on murder, rape, theft, and so on. In the AI world, heeding these is the subject of the variety of AI Guardians outlined earlier. The second kind concerns moral choices the community holds it can leave to each person to decide whether or not to follow. These values include whether or not to donate an organ, give to charity, volunteer, and so on. They are implemented in the AI world by ethics bots.

The question of who will guard the guardians arises. Humans should have the ultimate say about the roles and actions of both the AI operational and AI oversight systems; indeed, all these systems should have an on and off switch. None of them should be completely autonomous. Ultimately, however smart a technology may become, it is still a tool to serve human purposes. Given that those who build and employ these technologies are to be held responsible for their programming and use, these same people should serve as the ultimate authority over the design, operation, and oversight of AI.


References

1. Burrell, J. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016).

2. Etzioni, A. and Etzioni, O. AI assisted ethics. Ethics and Information Technology 18, 2 (2016), 149–156; http://bit.ly/28Yymx0

3. Kapnan, C. Auto-braking: A quantum leap for road safety. The Telegraph, (Aug. 14, 2012); http://bit.ly/2917jog.

4. Limer, E. Automatic brakes are stopping for no good reason. Popular Mechanics, (June 19, 2015); http://bit.ly/28XVSxP.

5. Mayer-Schönberger, V. and Cukier, K. Big Data: A Revolution That Will Transform How We Live, Work, and Think. 2014, 16–17.

6. New algorithm lets autonomous robots divvy up assembly tasks on the fly. Science Daily, (May 27, 2015); http://bit.ly/1FFCIjX.

7. Phelan, M. Automatic braking coming, but not all systems are equal. Detroit Free Press, (Jan. 1, 2016); http://on.freep.com/2917nnZ.

8. Weld, D. and Etzioni, O. The First Law of Robotics (a call to arms). In Proceedings of AAAI '94. AAAI, 1994; http://bit.ly/292kpSK


Authors

Amitai Etzioni (etzioni@gwu.edu) is a University Professor of Sociology at The George Washington University, Washington, D.C.

Oren Etzioni (orene@allenai.org) is CEO of the Allen Institute for Artificial Intelligence, Seattle, WA, and a Professor of Computer Science at the University of Washington.


Footnotes

a. See T.G. Dietterich and E.J. Horvitz, "Rise of Concerns About AI: Reflections and Directions," Commun. ACM 58, 10 (Oct. 2015), 38–40, for an in-depth discussion of the various risks.

b. See http://bit.ly/1KavIVC

c. See D. Weld and O. Etzioni, "The First Law of Robotics (a call to arms),"8 for an early attempt to formalize a solution to this problem.

d. See http://bit.ly/28Y3gGG

e. See http://bit.ly/28Xy0pT

f. See http://bit.ly/292qE9h

g. See http://bit.ly/23KzDS3

h. See http://bit.ly/1N4b5WY

i. Ted Cruz's campaign in Iowa relied on psychological profiles to determine the best ways to canvass individual voters in the state. T. Hamburger, "Cruz campaign credits psychological data and analytics for its rising success," The Washington Post (Dec. 13, 2015); http://wapo.st/1NYgFto.

j. See http://bit.ly/1RVnTA1


Copyright held by authors.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.


Comments


David Parnas

What would you need to change in this article if you replaced "AI" with "Computer System"?


Oren Etzioni

David's question/comment is a fair one, and the answer is that it's a continuum. Computer systems raise the same issues, but the gradually increasing autonomy, adaptability, and unpredictability of AI systems make this issue particularly pressing in this context.

