
Building Morality Into Machines

Considering which options are more right, and less wrong.
As technology grows increasingly autonomous, human decision-making will be replaced with machine algorithms.

As autonomous vehicles, robots, and drones take shape and replace human decision-making with machine algorithms, these devices must react to events in real time. In most cases, removing human input will reduce risks and save lives. For example, in the U.S. alone, about 92 people are killed every day in motor vehicle collisions; that number could drop to near zero with autonomous vehicles.

However, a reduction in fatalities does not mean these machines will consistently make the right decision at the right time, all the time. Complex and sometimes unpredictable events require programming and algorithms that can produce a desired, or at least predictable, response. What happens if an autonomous car skids on ice and must suddenly choose between hitting a group of schoolchildren and hitting a school bus? What happens if an autonomous drone has to choose between crashing into a crowd of people, or smashing into a building and raining debris on passersby below?

"It’s a topic that automakers and others are beginning to examine," says Bryce Pilz, clinical assistant professor at the University of Michigan Law School and co-lead of the Legal, Liability & Insurance working group at the school’s Mobility Transformation Center.

Artificial intelligence will be a crucial piece of the puzzle. Brad Stertz, director of Government Affairs for Audi of America, says, "Machine learning will be crucial as it provides a crowdsourcing capability, of sorts, to help identify and predict how certain actions by pedestrians, other cars, changing traffic, law officials, and others play out."

Kill the Driver

In the past, there was not a strong need to have technology ethics embedded into machines, because most systems had merely automated tasks, or operated under direct human control, explains John P. Sullins, a professor of philosophy at the Center for Ethics Value and Society at Sonoma State University in California. As autonomous machines step, roll, and fly into our lives, however, there is a growing need to draw on elements of psychology, sociology, philosophy, anthropology, theology, actuarial science, political policy, and law to guide actions and machine behavior. This translates into hierarchies or scoring systems that sort through real-world events and guide automated decision-making.  

The challenges of encoding moral judgments are enormous, and the stakes are potentially high:

  • Is one life ever worth more than another?
  • Should an autonomous vehicle swerve to avoid a dog, or hit the dog to avoid hitting another vehicle or pedestrian?
  • Is the probability of injury or death a factor?
  • Is there a way to quantify risks and probabilities?

A team of researchers from the Toulouse School of Economics, the Massachusetts Institute of Technology (MIT), and the University of Oregon found, in a study reported last October, that more than 75% of those they surveyed supported the concept of an autonomous vehicle sacrificing a passenger to save the lives of 10 people; around 50% supported self-sacrifice when saving just one person.

On the other hand, researchers have also found people might not want to buy a vehicle that is programmed to allow them to die under certain conditions.

Azim Shariff, an assistant professor of psychology at the University of Oregon (and part of the Toulouse/MIT/Oregon research team), says it is important to consider technology and ethics broadly, and ultimately to "balance risks, safety, and tradeoffs" as autonomous systems take shape. For example, if people perceive a machine might act against their best interests, they may not use an autonomous vehicle. "If fewer people buy or use self-driving cars or other systems that might sacrifice their owners, then more people will die because conventional vehicles and human error account for far more injuries and fatalities," he says.

Future Tense

While there are no clear or obvious answers, researchers and manufacturers are beginning to put a microscope to the topic. "We have a range of groups exploring these issues. …The key factors are the development of algorithms that can understand a wide range of variables presented at any given moment," Audi’s Stertz explains.  

A recent patent filing from Google offers a glimpse into how an automated system might rank various practical, ethical, and legal considerations based on "risk magnitude." It suggests developers and engineers would rely on a scoring system to rank risks versus benefits. The machine, in sensing its environment, would evaluate conditions, rank them based on potential risk, and act accordingly. For example, a self-driving car would consider the possibility of a collision with a police car or fire truck to be riskier than collisions with other types of vehicles, or with a pedestrian.

Similarly, if the autonomous vehicle is stopped at a traffic light and has to move to "see" the road because another vehicle is blocking its view, the system would analyze various actions and make a decision. According to the patent filing, getting side-swiped by a truck scores 5,000 on the risk-magnitude scale, getting rear-ended scores 10,000, getting hit by an oncoming vehicle scores 20,000, and hitting a pedestrian ranks highest at 100,000.
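
To make the idea concrete, here is a minimal sketch in Python of how such a risk-magnitude scheme might be expressed. The magnitudes mirror the figures quoted from the patent filing, but the event names, probability estimates, and expected-risk decision rule are illustrative assumptions, not the patent's actual method.

```python
# Illustrative sketch of a risk-magnitude scoring scheme, loosely modeled on
# the figures quoted from the patent filing. Event names, probabilities, and
# the expected-risk decision rule are assumptions for illustration only.

# Risk magnitudes for bad outcomes (values as quoted above).
RISK_MAGNITUDE = {
    "side_swiped_by_truck": 5_000,
    "rear_ended": 10_000,
    "hit_by_oncoming_vehicle": 20_000,
    "hit_pedestrian": 100_000,
}

def expected_risk(outcome_probabilities):
    """Weight each possible bad outcome by its estimated probability."""
    return sum(RISK_MAGNITUDE[outcome] * p
               for outcome, p in outcome_probabilities.items())

def choose_action(candidate_actions):
    """Pick the candidate maneuver with the lowest expected risk."""
    return min(candidate_actions,
               key=lambda action: expected_risk(action["outcomes"]))

# Example: creep forward to see past a vehicle blocking the view,
# versus staying put at the light (hypothetical probabilities).
candidates = [
    {"name": "creep_forward",
     "outcomes": {"hit_by_oncoming_vehicle": 0.01, "side_swiped_by_truck": 0.02}},
    {"name": "stay_put",
     "outcomes": {"rear_ended": 0.005}},
]

best = choose_action(candidates)
print(best["name"], expected_risk(best["outcomes"]))  # -> stay_put 50.0
```

In this toy example, staying put carries the lower expected risk because the outcome it risks is both less likely and less costly; a real system would weigh far more variables, as Stertz notes above.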

Pilz says reaching agreement about machine ethics and scoring systems will require input from a diverse array of sources, adding that total consensus will almost certainly not occur. "Based on new technology or changing views, it will be necessary to reexamine things from time to time," he says. It will also be necessary to overlay whatever consensus does emerge with laws, regulations, and liability rules.

Vivek Wadhwa, a fellow at the Arthur & Toni Rembe Rock Center for Corporate Governance at Stanford University (and a faculty member and advisor at Singularity University), says there is ultimately no way for engineers and developers to embed programming for every possible event. Over time, machine learning will help systems learn which actions are best and tweak their algorithms accordingly.
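
As a loose illustration of that point, the sketch below nudges a stored risk estimate toward outcomes observed over repeated trips. The update rule, learning rate, and numbers are assumptions made for illustration, not any manufacturer's actual method.

```python
# Illustrative sketch only: a trivial feedback rule that moves an action's
# learned risk estimate toward observed outcomes over time. The update rule,
# learning rate, and values are assumptions, not a real automaker's method.

LEARNING_RATE = 0.05

def update_risk_estimate(current_estimate, observed_cost):
    """Move the stored risk estimate a small step toward the observed cost."""
    return current_estimate + LEARNING_RATE * (observed_cost - current_estimate)

# Example: an action initially scored 300 turns out, over repeated
# observations, to cause little harm (observed cost 50).
estimate = 300.0
for _ in range(20):
    estimate = update_risk_estimate(estimate, 50.0)
print(round(estimate, 1))  # drifts from 300 toward 50
```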

The ultimate goal is to "build systems that eliminate getting into these situations in the first place," Wadhwa says.

Samuel Greengard is an author and journalist based in West Linn, OR.
