Computer specialists are less concerned about the threat of computers becoming intelligent enough to decide to do away with the human race than about the threat of programs rapidly overdoing a single task, with no context, at a global level.
"What you should fear is a computer that is competent in one very narrow area, to a bad degree," warns Massachusetts Institute of Technology professor Max Tegmark.
To prevent this, the Future of Life Institute has disbursed funding from entrepreneur Elon Musk to research aimed at preventing autonomous systems from going rogue.
Allen Institute for Artificial Intelligence CEO Oren Etzioni says most perceived doomsday scenarios assume computers will achieve human-like consciousness, when in fact they are far more literal-minded. He says this confusion is rooted in the persistent popularization of artificial intelligence (AI) dating back to the 1950s, when people thought thinking machines "were around the corner."
The work of DeepMind, a Google subsidiary, focuses on deep learning, a form of machine learning that entails identifying patterns, suggesting actions, and making predictions. However, it is still automation and not human-like thinking.
"People in AI know that a chess-playing computer still doesn't yearn to capture a queen," notes University of California, Berkeley professor Stuart Russell.
From The New York Times
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA