
Communications of the ACM

ACM News

Do Computers Really Think?



Do smart assistants demonstrate, or just mimic, intelligence?

Credit: CNN Money

In 1950, British mathematician and computer scientist Alan Turing conceived of a test to answer the question, "Can machines think?" According to Jim Hendler, director of Rensselaer Polytechnic Institute's Institute for Data Exploration and Applications, a careful reading of Turing's paper "Computing Machinery and Intelligence" shows Turing believed language differentiated humans from animals; if a computer could use language convincingly, then it could be considered intelligent.

Today, there are plenty of voice recognition programs, such as Nuance's Dragon and Google's Voice Search, as well as voice-recognizing smart home assistants like Amazon's Alexa and Apple's Siri, but none of them are even trying to fulfill Turing's goal of thinking computers. Rather, they provide quick, transparent access to the vast storehouse of online data. "So far, all these attempts are just computerized idiot savants," says Hendler. "We are still no closer to understanding what intelligence is."

Turing's test for genuine machine intelligence boils down to what he called the "imitation game": a computer and a human each converse with third-party judges by teletype (using text only); after observing the textual conversations, the judges decide which party is the more convincing human. (Interestingly, Turing allowed both the humans and the computers to lie, so the computer would not unwittingly give itself away by, say, making a lightning-fast numerical calculation.)
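The protocol Turing described can be sketched in a few lines of code. In this hypothetical Python sketch, `machine` and `human` are caller-supplied functions mapping a question to a text reply, and each `judge` maps the two transcripts to the label it believes belongs to the human; the function names and labels are illustrative assumptions, not anything from Turing's paper:

```python
import random

def run_session(judge, machine, human, questions):
    """Play one imitation-game session.

    The judge questions two unnamed parties over a text-only channel,
    then guesses which label ("X" or "Y") belongs to the human.
    Returns True if the judge was fooled (i.e., picked the machine).
    """
    # Randomly assign the machine and the human to labels X and Y,
    # so the judge cannot rely on ordering.
    if random.random() < 0.5:
        labels = {"X": machine, "Y": human}
    else:
        labels = {"X": human, "Y": machine}

    transcripts = {"X": [], "Y": []}
    for question in questions:
        for label, respondent in labels.items():
            transcripts[label].append((question, respondent(question)))

    guess = judge(transcripts)        # the label the judge believes is human
    return labels[guess] is machine   # fooled if the guess names the machine

def fooled_fraction(judges, machine, human, questions):
    """Loebner-style scoring: the fraction of judges the machine fools."""
    fooled = sum(run_session(j, machine, human, questions) for j in judges)
    return fooled / len(judges)
```

Under this scoring, the 2014 result described below would correspond to `fooled_fraction` coming out at about 0.3 over the panel of judges.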

Continued failures at passing the test prompted American inventor Hugh Loebner to establish a $100,000 prize in 1990 for anyone producing a computer that could pass the Turing Test. No artificial intelligence (AI) has passed the test yet, so the annual Loebner Prize has been downgraded to an award for the "best of the contestants" (the closest attempt came in 2014, when one computer fooled 30% of the judges).

Garrett Kenyon, a computer scientist in the Information Sciences group at Los Alamos National Laboratory specializing in neurally inspired computing, believes that even if all the judges are someday fooled, the computer will remain an idiot savant—specialized at fooling human judges.

At Los Alamos, Kenyon studies the brain by modeling its neural networks on the facility's high-performance computers (including real-time simulations using D-Wave's quantum computer and, more recently, IBM's Quantum Experience quantum computers in the cloud). He believes that achieving machine intelligence requires unraveling the secrets of more than just the brain; we also need to mimic the human senses, such as stereoscopic vision and binaural hearing, so the signals processed by AI are most like those processed by human brains.

"To solve intelligence, we need to bring more neuroscience to bear, but not so much as to make it untenable to emulate in real time. For instance, all our algorithms are designed to process dynamic data such as motion, since human sensors are always looking for changes, not just recording events. Traditional AI is based on static representations, whereas everything in the brain works on dynamic data; nothing in the brain works on static representations," said Kenyon.

Jennifer Colegrove, CEO and principal analyst at market research firm Touch Display Research, says devices like Alexa and Siri are not aimed at mimicking human intelligence, but rather at outperforming the preferred man-machine interface of today, the touch display. Voice recognition is not even aimed at passing the Turing Test, according to Colegrove, who adds that the ecosystem of voice recognition intellectual property (IP) and application-specific integrated circuits (ASICs) will be used to build more general-purpose AIs that will inevitably pass the Turing Test someday.

Not all analysts agree. Paul Erickson, senior analyst at IHS Markit, maintains that while most of human intelligence can be imitated for specific applications, such as customer support, the understanding of human intelligence that Turing sought will never be achieved.

"The Turing Test is remarkably difficult, and even more so when natural speech recognition is mixed in," says Erickson. "Alexa is better at taking orders, but not as good as Google Voice Search for unstructured queries. And the first layer for voice-based systems—recognizing accents—is done much better by Nuance. None of these interfaces are particularly good at conversations, nor are they meant to be. In the AI area, even if you still believe that passing the Turing Test proves something about machine intelligence, its achievement is about 50 slots down on the list of intelligent-assistant tasks now being developed. It will be a long time in coming."

As summed up by Jonathan Collins, research director at ABI Research, "the long-term prospects for AI are enormous. As billions of devices are connected and available to the network, automation of evaluating that data, and our machine's reactions to it, is mandatory. All those connections will likely lead to a level of AI that can determine those reactions in real-time."

However, the Turing Test actually runs counter to that goal, according to Collins. The kind of AI needed for such a future is not one that fools people at the imitation game, but one that is transparent to the user.

"The Turing Test disappears for the vast majority of these interactions and responses, as AI decisions and management become just another part of the network fabric," Collins explains. "It's not about human-machine interfaces anymore, as much as it is about human- to-'invisible' computer interfaces that preform their tasks on enormous streams of data in real time, without human intervention."

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.

