
Communications of the ACM

ACM TechNews

Can a Machine Learn Morality?


Artist's representation of ethical decision making.

The Delphi system is an effort to address what some see as a major problem in modern artificial intelligence systems: they can be as flawed as the people who create them.

Credit: Pete Sharp

Morality is a thorny issue for machines, as scientists learned in testing Delphi, a system programmed by the Allen Institute for Artificial Intelligence (AI) to make moral judgments.

The neural network analyzed more than 1.7 million ethical judgments made by humans to establish a morality baseline for itself, and people generally agreed with its decisions when it was released to the open Internet.

Some, however, have found Delphi to be inconsistent, illogical, and insulting, highlighting how AI systems reflect the bias, arbitrariness, and worldview of their creators.

Delphi's developers hope to build a universally applicable ethical framework for AI, but as Zeerak Talat at Canada's Simon Fraser University observed, "We can't make machines liable for actions. They are not unguided. There are always people directing them and using them."

From The New York Times


Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA
