Communications of the ACM

ACM TechNews

Smart Machines: What's The Worst That Could Happen?


[Image: robotic and human hands. Credit: Nils Jorgensen / Rex Features]

A panel of 25 artificial intelligence (AI) scientists, roboticists, and ethical and legal scholars has spent the past year discussing the risks of developing machines with human-level intelligence. The panel, organized by the Association for the Advancement of Artificial Intelligence, examined the feasibility and ramifications of possible AI scenarios, such as the Internet becoming self-aware, computers achieving consciousness, or a smartphone virus capable of mimicking the phone's owner.

The panel focused on what will happen when AI goes beyond assisting humans: what breakthroughs are expected, what effects these advances will have on society, and what precautions should be taken. Panel members unanimously agreed that creating human-level artificial intelligence is possible in principle, but their estimates for achieving that goal ranged from 20 to 100 years.

Panel member Tom Dietterich, from Oregon State University, noted that much of today's AI research is not aimed at creating human-level AI, but rather at building systems that excel at a single task. One realistic short-term concern the panel identified is a smartphone virus that mimics the digital behavior of the phone's owner, which could be used to impersonate that individual with little or no external guidance from its creators. Researchers say such a virus is already possible. "If we could do it, they could," says Carnegie Mellon University's Tom Mitchell.

From New Scientist

 

Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA
