Communications of the ACM

ACM TechNews

AIs Could Debate Whether a Smart Assistant Should Snitch on You


Can machines have morality?

Credit: dpa picture alliance/Alamy

Researchers at the University of Bergen in Norway propose that since ethical behavior is not consistent across societies or individuals, artificial intelligence (AI) systems should be flexible, allowing them to be geared to better reflect local law and the preferences of the owner.

The researchers presented the idea at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 2019) in Hawaii last month.

The team proposes that several AI agents debate the options in an ethical dilemma before a decision is made.

The moral AIs each represent one of the stakeholders, and each has priorities according to whom it represents: obeying the law, operating safely, or preserving individual autonomy.

The system maps out the various arguments from each stakeholder, noting which ones conflict with each other.

The conflicting demands are removed, and the system decides on a course of action based on the remaining instructions.
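The article does not give the researchers' actual algorithm, but the filtering step it describes can be sketched roughly as follows. The stakeholder agents, their arguments, and the conflict pairs below are hypothetical examples, not details from the paper:

```python
# Illustrative sketch only: the stakeholders, arguments, and conflict
# pairs are hypothetical, and the real system is more sophisticated.

def decide(arguments, conflicts):
    """Drop every argument involved in a conflict; act on what remains."""
    dropped = {arg_id for pair in conflicts for arg_id in pair}
    return {aid: recommendation
            for aid, (stakeholder, recommendation) in arguments.items()
            if aid not in dropped}

# A hypothetical smart-assistant dilemma: should it report its owner?
arguments = {
    "A1": ("lawfulness", "report the suspected activity"),
    "A2": ("safety",     "avoid actions that could provoke a confrontation"),
    "A3": ("autonomy",   "respect the owner's preference for privacy"),
    "A4": ("safety",     "keep an internal log for later review"),
}

# Reporting (A1) clashes with both the safety (A2) and autonomy (A3)
# arguments, so all three are struck out.
conflicts = [("A1", "A2"), ("A1", "A3")]

print(decide(arguments, conflicts))
# Only the uncontested argument survives:
# {'A4': 'keep an internal log for later review'}
```

In this toy version every argument touched by a conflict is discarded symmetrically; a real argumentation framework would instead weigh or rank the arguments rather than simply deleting both sides.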

From New Scientist


Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA
