
Communications of the ACM

ACM News

Superintelligent AI May Be Impossible to Control; That's the Good News



A new study finds it may be theoretically impossible for humans to control a superintelligent artificial intelligence.

Credit: Eduard Muzhevskyi/SPL/Getty Images

It may be theoretically impossible for humans to control a superintelligent artificial intelligence (AI), a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it's on the verge of being created. 

The timetable is slightly less grim: by at least one estimate, any such existential computational reckoning is still many decades away.

Alongside news of AI besting humans at games such as chess, Go, and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. "The question about whether superintelligence could be controlled if created is quite old," says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. "It goes back at least to Asimov's First Law of Robotics, in the 1940s."

The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story "Runaround," are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
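The Laws form a strict priority ordering: each rule applies only when the rules above it are not violated. As a toy illustration (not anything from the study itself), that ordering can be sketched as a simple decision function; the `Action` fields below are hypothetical names invented for this sketch.

```python
# Illustrative only: a toy priority check inspired by Asimov's Three Laws.
# All field names here are invented for the sketch, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # acting would injure a human
    inaction_harms_human: bool  # NOT acting would let a human come to harm
    ordered_by_human: bool      # a human ordered this action
    risks_self: bool            # acting endangers the robot itself

def permitted(a: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never injure a human; inaction that lets a human
    # come to harm is equally forbidden, so a harm-preventing action
    # is permitted regardless of the lower-priority rules.
    if a.harms_human:
        return False
    if a.inaction_harms_human:
        return True
    # Second Law: obey human orders (First Law already cleared above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when nothing above does.
    return not a.risks_self
```

Even this toy version shows why the ordering matters: an order that would harm a human is rejected before obedience is ever considered.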

In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us, but also investigated potential control strategies for such a machine—and the reasons they might not work.

 

From IEEE Spectrum
View Full Article

 


 
