Communications of the ACM

ACM Opinion

Does AI Have an Alignment Problem?


Brian Christian

"Theres this adage that is famous in statistical circles. And it says, all models are wrong, but some are useful. And I think part of the danger ... is that our models are wrong in the way that all models are wrong, but we have given them the power to enforce the limits of their understanding on the world."

Brian Christian is an author who holds degrees in philosophy, computer science, and poetry from Brown University and the University of Washington.

In an interview, Christian discusses ideas from his recent book on artificial intelligence (AI), The Alignment Problem. The term "alignment problem" originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals.

That's a central worry with AI, too: that we will create something to help us that will instead harm us, in part because we didn't understand how it really worked or what we had actually asked it to do.

From The New York Times
