Communications of the ACM

ACM Opinion

How to Know if Artificial Intelligence is About to Destroy Civilization


Mechanical canaries in a metaphorical coal mine.

What would alert us that superintelligence is indeed around the corner?

Credit: MS TECH

Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences? Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent superintelligence is an existential risk for humanity.

But one can speculate endlessly. It's better to ask a more concrete, empirical question: What would alert us that superintelligence is indeed around the corner?

We might call such harbingers canaries in the coal mines of AI. If an artificial-intelligence program develops a fundamental new capability, that's the equivalent of a canary collapsing: an early warning of AI breakthroughs on the horizon.

Could the famous Turing test serve as a canary? The test, proposed by Alan Turing in 1950, holds that human-level AI will have been achieved when a person can't distinguish conversing with a computer from conversing with a human. It's an important test, but it's not a canary; it is, rather, a sign that human-level AI has already arrived. Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones.

 

From Technology Review

