
Communications of the ACM

ACM TechNews

MIT's 'Moral Machine' Crowdsources Decisions About Autonomous Driving, but Experts Call It Misguided


Should the autonomous vehicle injure five pedestrians (left) or swerve and kill its four occupants?

A "Moral Machine" platform developed by the Massachusetts Institute of Technology Media Lab allows the public to offer their opinions on some of the ethical decisions an autonomous vehicle could have to make.

Credit: MIT

The Massachusetts Institute of Technology's (MIT) Media Lab has developed a "Moral Machine" platform that enables the public to voice their opinions on what kind of ethical "decisions" autonomous vehicles should be programmed to make.

The Moral Machine presents participants with a "moral dilemma" and asks them to choose a preferred outcome; they can then compare their decisions with those of others and discuss them online.

MIT professor Iyad Rahwan says the platform is designed to "further our scientific understanding of how people think about machine morality."

However, skeptics such as Gartner analyst Michael Ramsey see flaws in this model, because panicked human drivers typically do not and cannot make the kinds of moral choices the platform presents. "The most likely scenario is that the car will be programmed to avoid a collision, without regard to 'whom to save,'" Ramsey says.

Moreover, University of Southern California professor Jeffrey Miller believes the platform encompasses too few scenarios with autonomous vehicles. "There are more mundane decisions about breaking the law in order to be safe...[that] happen with frequency with human drivers," Miller notes.

Rahwan says the platform has collected 14 million decisions from 2 million participants, data that will help build a global picture of machine ethics and highlight cross-cultural differences.

From TechRepublic

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA

