Communications of the ACM

ACM TechNews

Digital Genies


It's important that such systems understand humans, lest they inadvertently harm their creators.

University of California, Berkeley professor Stuart Russell stresses that artificial intelligence must understand fundamental human values.

Credit: Andrea Danti/Thinkstock

In an interview, University of California, Berkeley professor Stuart Russell emphasizes the need to ensure artificial intelligence (AI) understands fundamental human values, a task he says is fraught with uncertainty.

"What we want is that the machine learns the values it's supposed to be optimizing as it goes along, and explicitly acknowledges its own uncertainty about what those values are," says Russell, recipient in 2005 of the ACM Karl V. Karlstrom Outstanding Educator Award.

He notes the addition of uncertainty actually makes the AI safer, because it allows the system to be corrected instead of single-mindedly pursuing its goals. "We've tended to assume that when we're dealing with objectives, the human just knows and they put it into the machine and that's it," Russell says. "But I think the important point here is that just isn't true. What the human says is usually related to the true objectives, but is often wrong."

Russell says the AI should only act in instances in which it is quite sure it has understood human values well enough to take the right action. "It needs to have enough evidence that it knows that one action is clearly better than some other action," he says. "Before then, its main activity should just be to find out."
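The decision rule Russell describes can be sketched in code. The following is our own minimal illustration (not code from the article or from Russell's work): the agent maintains several hypotheses about the human's reward function, acts only when one action is at least as good as every alternative under every hypothesis, and otherwise defers so it can "find out" more. The action names and reward values are made up for the example.

```python
# Hypothetical sketch of "act only when clearly better under uncertainty":
# the agent holds multiple candidate reward functions rather than one
# fixed objective, and commits to an action only when that action is
# no worse than every alternative under every candidate.

def choose_or_defer(actions, reward_hypotheses, margin=0.0):
    """Return ('act', a) if some action a beats every other action by at
    least `margin` under every hypothesized reward function; otherwise
    return ('defer', None), i.e. the agent should gather more evidence."""
    for a in actions:
        if all(
            all(r(a) >= r(b) + margin for b in actions if b != a)
            for r in reward_hypotheses
        ):
            return ("act", a)
    return ("defer", None)

# Two invented hypotheses about what the human values: speed vs. safety.
hyp_speed  = lambda a: {"fast": 1.0, "careful": 0.4, "wait": 0.0}[a]
hyp_safety = lambda a: {"fast": -1.0, "careful": 0.8, "wait": 0.0}[a]

# The hypotheses disagree about "fast" vs. "careful", so the agent defers.
print(choose_or_defer(["fast", "careful", "wait"], [hyp_speed, hyp_safety]))

# Both hypotheses prefer "careful" to "wait", so the agent acts.
print(choose_or_defer(["careful", "wait"], [hyp_speed, hyp_safety]))
```

Under this rule, disagreement among the hypotheses blocks action, which mirrors the safety property Russell highlights: an uncertain agent remains correctable rather than racing ahead on a possibly wrong objective.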

From Slate
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA
