
Point: Should AI Technology Be Regulated? Yes, and Here's How

Considering the difficult technical and sociological issues affecting the regulation of artificial intelligence research and applications.

Government regulation is necessary to prevent harm. But regulation is also a blunt and slow-moving instrument that is easily subject to political interference and distortion. When applied to fast-moving fields like AI, misplaced regulations have the potential to stifle innovation and derail the enormous potential benefits that AI can bring in vehicle safety, improved productivity, and much more. We certainly do not want rules hastily cobbled together as a knee-jerk response to a popular outcry against AI stoked by alarmists such as Elon Musk (who has urged U.S. governors to regulate AI “before it’s too late”).

To address this conundrum, I propose a middle way: that we avoid regulating AI research, but move to regulate AI applications in arenas such as transportation, medicine, politics, and entertainment. This approach not only balances the benefits of research with the potential harms of AI systems, but is also more practical. It hits the happy medium between too little and too much regulation.


Regulation Is a Tricky Thing

AI research is now being conducted globally, by every country and every leading technology company. Russian President Vladimir Putin has said “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” The AI train has left the station; AI research will continue unabated and the U.S. must keep up with other nations or suffer economically and security-wise as a result.

A problem with regulating AI is that it is difficult to define what AI is. AI used to be chess-playing machines; now it is integrated into our social media, our cars, our medical devices, and more. The technology is progressing so fast, and gets integrated into our lives so quickly, that the line between dumb and smart machines is inevitably fuzzy.

Even the concept of “harm” is difficult to put into an algorithm. Self-driving cars have the potential to sharply reduce highway accidents, but AI will also cause some accidents, and it’s easier to fear the AI-generated accidents than the human-generated ones. “Don’t stab people” seems pretty clear. But what about giving children vaccinations? That’s stabbing people. Or let’s say I ask my intelligent agent to reduce my hard disk utilization by 20%. Without common sense, the AI might delete my not-yet-backed-up Ph.D. thesis. The Murphy’s Law of AI is that when you give it a goal, it will pursue that goal literally, whether or not you like the implications of how it achieves it (see the Sorcerer’s Apprentice). AI has little common sense when it comes to defining vague concepts such as “harm,” as co-author Daniel Weld and I first discussed in 1994.a
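To make the disk-cleanup example concrete, here is a minimal, purely hypothetical Python sketch of a literal-minded agent given the goal “reduce utilization by 20%.” It optimizes exactly the stated objective and nothing else; every name and threshold in it is invented for illustration.

```python
import os

def naive_cleanup_agent(directory, target_reduction=0.20):
    """Free disk space by deleting the largest files until usage drops by target_reduction.

    The agent optimizes only the stated goal; it has no notion of which
    files are irreplaceable (a thesis is just another large file to it).
    """
    files = []
    for root, _, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            try:
                files.append((os.path.getsize(path), path))
            except OSError:
                continue  # skip files we cannot stat

    total = sum(size for size, _ in files)
    bytes_to_free = total * target_reduction

    freed = 0
    for size, path in sorted(files, reverse=True):  # largest files first
        if freed >= bytes_to_free:
            break
        os.remove(path)  # no backup check, no "is this important?" check
        freed += size
```

The point is not that anyone would deploy such code, but that “do what I said” and “do what I meant” diverge the moment the objective is underspecified.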

But given that regulation is difficult, yet entirely necessary, what are the broad precepts we should use to thread the needle between too much, and not enough, regulation? I suggest five broad guidelines for regulating AI applications.b Existing regulatory bodies, such as the Federal Trade Commission, the SEC, Homeland Security, and others, can use these guidelines to focus their efforts to ensure that AI applications do not harm humans.


Five Guidelines for Regulating AI Applications

The first place to start is to set up regulations against AI-enabled weaponry and cyberweapons. Here is where I agree with Musk: In a letter to the United Nations, Musk and other technology leaders said, “Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” So as a start, we should not create AI-enabled killing machines. The first regulatory principle is: “Don’t weaponize AI.”




Now that the worst case is handled, let’s look at how to regulate the more benign uses of AI.

The next guideline is that an AI is subject to the full gamut of laws that apply to its human operator. You can’t claim, like a kid to his teacher, that the dog ate your homework. Saying “the AI did it” has to mean that you, as the owner, operator, or builder of the AI, did it. You are the responsible party who must ensure your AI does not hurt anyone, and if it does, you bear the fault. Sometimes the owner of the AI will be at fault, and sometimes the manufacturer, but there is a well-developed body of existing law to handle these cases.

The third is that an AI shall clearly disclose that it is not human. This means Twitter chat bots, poker bots, and others must identify themselves as machines, not people. This is particularly important now that we have seen the ability of political bots to comment on news articles and generate propaganda and political discord.c
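For illustration only, disclosure can be as simple as a mandatory preamble attached to every machine-generated message. The Python wrapper below is a hypothetical sketch and does not refer to any real bot framework or platform API.

```python
DISCLOSURE = "[Automated account] This message was generated by software, not a person."

class DisclosingBot:
    """Wraps any text-generating function so every reply identifies itself as machine-made."""

    def __init__(self, generate_reply):
        self._generate_reply = generate_reply

    def reply(self, prompt):
        # The disclosure is prepended unconditionally; the bot cannot opt out of it.
        return f"{DISCLOSURE}\n{self._generate_reply(prompt)}"

# Usage with a stand-in generator:
bot = DisclosingBot(lambda prompt: f"Here is an opinion about {prompt}.")
print(bot.reply("the latest news article"))
```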

The fourth precept is that AI shall not retain or disclose confidential information without explicit prior approval from the source. This is a privacy necessity, which will protect us from others misusing the data collected from our smart devices, including Amazon Echo, Google Home, and smart TVs. Even seemingly innocuous house-cleaning robots create maps that could potentially be sold. This suggestion is a fairly radical departure from the current state of U.S. data policy, and would require some kind of new legislation to enact, but the privacy issues will only grow, and a more stringent privacy policy will become necessary to protect people and their information from bad actors.
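One way to read “explicit prior approval from the source” is as a consent check that must pass before any retention or disclosure occurs. The Python sketch below is hypothetical; the consent registry, purposes, and method names are invented to illustrate the precept, not drawn from any actual product.

```python
class ConsentError(PermissionError):
    """Raised when data would be retained or shared without prior approval."""

class PrivacyGate:
    """Tracks per-user, per-purpose consent and refuses any use that lacks it."""

    def __init__(self):
        self._approvals = set()  # (user_id, purpose) pairs granted explicitly

    def grant(self, user_id, purpose):
        self._approvals.add((user_id, purpose))

    def require(self, user_id, purpose):
        if (user_id, purpose) not in self._approvals:
            raise ConsentError(f"No prior approval from {user_id} for '{purpose}'")

    def store_recording(self, user_id, audio):
        self.require(user_id, "retain_audio")
        # ...persist the recording only after the check passes...

    def share_floor_map(self, user_id, map_data):
        self.require(user_id, "share_floor_map")
        # ...disclose to a third party only after the check passes...

gate = PrivacyGate()
gate.grant("alice", "retain_audio")
gate.store_recording("alice", b"...")  # allowed: approval was granted
# gate.share_floor_map("alice", {})    # would raise ConsentError: no approval
```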

And the fifth and last general rule of AI application regulation is that AI must not increase any bias that already exists in our systems. Today, AI uses data to predict the future. If the data says (in a hypothetical example) that white people default on loans at a rate of 60%, compared with only 20% for people of color, that race information becomes important to the algorithm. Unfortunately, predictive algorithms generalize from such patterns, which reinforces them. The AI uses the data to protect the underwriters, but in effect it institutionalizes bias in the underwriting process and produces a morally reprehensible result. There are mathematical methods to ensure algorithms do not introduce extra bias; regulations must ensure those methods are used.
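In the spirit of the author's hypothetical loan example, here is a minimal Python sketch of one check an auditor or regulator might run: compute approval rates per group and apply the common “four-fifths” disparate-impact screen. The groups, numbers, and threshold are stand-ins for illustration, not a prescribed method.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates, threshold=0.8):
    """Disparate-impact screen: the lowest approval rate must be at least 80% of the highest."""
    lowest, highest = min(rates.values()), max(rates.values())
    return lowest >= threshold * highest

# Hypothetical audit of a loan model's decisions (invented data):
decisions = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
             [("group_b", True)] * 75 + [("group_b", False)] * 25)
rates = approval_rates(decisions)
print(rates)                           # {'group_a': 0.4, 'group_b': 0.75}
print(passes_four_fifths_rule(rates))  # False -> flag the model for review
```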

A related issue here is that AI, in all its forms (robotic, autonomous systems, embedded algorithms), must be accountable, interpretable, and transparent so that people can understand the decisions machines make. Predictive algorithms can be used by states to calculate future risk posed by inmates and have been used in sentencing decisions in court trials. AI and algorithms are used in decisions about who has access to public services and who undergoes extra scrutiny by law enforcement. All of these applications pose thorny questions about human rights, systemic bias, and perpetuating inequities.

This brings up one of the thorniest issues in AI regulation: It is not just a technological issue, with a technological fix, but a sociological issue that requires ethicists and others to bring their expertise to bear.




AI, particularly deep learning and machine reading, is really about big data. And data will always bear the marks of its history. When Google is training its algorithm to identify something, it looks to human history, held in those data sets. So if we are going to try to use that data to train a system, to make recommendations or to make autonomous decisions, we need to be deeply aware of how that history has worked and if we as a society want that outcome to continue. That’s much bigger than a purely technical question.

These five areas (no killing, responsibility, transparency, privacy, and bias) cover the ways in which AI, left unchecked, will cause us no end of harm. So it’s up to us to check it.


The Practical Application of Regulations

So how would regulations on AI technologies work? Just like all the other regulations and laws we have in place today to protect us from exploding air bags in cars, E. coli in our meat, and sexual predators in our workplaces. Instead of creating a new, single AI regulatory body, which would probably be unworkable, regulations should be embedded into existing regulatory infrastructure. Regulatory bodies will enact ordinances, or legislators will enact laws to protect us from the negative impacts of AI in applications.

Let’s look at this in action. Say I have a driverless car that gets into an accident. If it’s my car, I am immediately considered responsible. There may be technological defects that caused the accident, in which case the manufacturer shares responsibility in proportion to the defect’s role in the accident. So driverless cars will be subject to the same laws as people, overseen by the Federal Motor Vehicle Safety Standards and motor vehicle driving laws.
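Once a court or insurer has fixed that percentage, apportioning damages is simple arithmetic. The Python sketch below is purely illustrative, with hypothetical numbers.

```python
def apportion_damages(total_damages, manufacturer_share):
    """Split liability between the owner/operator and the manufacturer.

    manufacturer_share is the fraction of fault attributed to a defect (0.0 to 1.0),
    as determined by whatever process existing product-liability law already provides.
    """
    if not 0.0 <= manufacturer_share <= 1.0:
        raise ValueError("manufacturer_share must be between 0 and 1")
    return {
        "manufacturer": total_damages * manufacturer_share,
        "owner_or_operator": total_damages * (1.0 - manufacturer_share),
    }

# Hypothetical: $50,000 in damages, defect judged 30% responsible for the accident.
print(apportion_damages(50_000, 0.30))
# {'manufacturer': 15000.0, 'owner_or_operator': 35000.0}
```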

Some might ask: But what about the trolley problem? How do we program the car to choose between hitting several people and killing only the driver? That’s not an engineering problem, but a philosophical thought experiment. In reality, driverless cars will reduce the number of people hurt or killed in accidents; the edge cases where someone gets hurt because of a choice made by an algorithm are a small percentage of the whole. Look at Waymo, Google’s autonomous driving division. It has logged over two million miles on U.S. streets and has been at fault in only one accident, giving its cars by far the lowest at-fault rate of any driver class on the road: approximately 10 times lower than people aged 60–69 and 40 times lower than new drivers.

Now, there will probably be AI applications introduced in the future that cause harm but for which no regulatory body yet exists. It’s up to us as a culture to identify those applications as early as possible and to identify the regulatory agency to take them on. Part of that will require us to shift the frame through which we look at regulations, from onerous bureaucracy to protector of well-being. We must recognize that regulations have a purpose: to protect humans and society from harm. One place to start having these conversations is through organizations such as the Partnership on AI, where Microsoft, Apple, and other leading AI research organizations, such as the Allen Institute for Artificial Intelligence, are collaborating to formulate best practices on AI technologies and to serve as an open platform for discussion and engagement about AI and its influences on people and society. The AI Now Institute at New York University and the Berkman Klein Center at Harvard University are also working on developing ethical guidelines for AI.

The difficulty of regulating AI does not absolve us from our responsibility to control AI applications. Not to do so would be, well, unintelligent.

Figure. Watch the author discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/point-counterpoint-on-ai-regulation

