Computers interacting with, not imitating, humans is the way forward.
The following letter was published in the Letters to the Editor in the April 2013 CACM (http://cacm.acm.org/magazines/2013/4/162502).
Robert M. French's main argument in his article "Moving Beyond the Turing Test" (Dec. 2012) is that the Turing test is "unfair" because we cannot expect a machine to store countless facts "idiosyncratic" to humans. However, the example behavior he cited does not hold up, as I outline here. He was careful in selecting it, as it came from one of his own articles, so we might be justified in inferring that other "quirky" facts about human behavior that might "trip up" a computer are likewise no reason to discard the Turing test.
The example involved the "idiosyncrasy" that humans cannot separate their ring fingers when their palms are clasped together with fingers upright and the middle fingers bent to touch the opposite knuckles. He then asked, "How could a computer ever know this fact?" How indeed? We did not know it either but discovered it only by following French's invitation to try to separate our own ring fingers. So, too, a computer can discover facts by simulating behavior and compiling results. The simulation would use the computer's model of the anatomy and physiology of human hands and fingers, together with the laws of related sciences (such as physics and biology), to compute the "open and close" behavior of each pair of fingers from some initial configuration.
If the model encapsulates our understanding well enough, the open-and-close motion would be zero only for the pair of ring fingers. Moreover, by combining visualization and logic, an explanatory model might reason about why separating the two ring fingers is not possible and under what conditions it might be. One could ask whether French ever asked a competent specialist why the motion is not possible; I myself have not asked but assume there is some explanation.
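The simulate-and-compile-results idea above can be sketched in miniature. Everything here is a hypothetical illustration: the `separation_range` function is a stub standing in for a real biomechanical simulation, and its hard-coded values are assumptions chosen only to show the shape of the experiment, not real anatomical data.

```python
FINGERS = ["thumb", "index", "middle", "ring", "little"]

def separation_range(finger: str) -> float:
    """Stub biomechanical model: how far (in cm) a pair of opposing
    fingers can be separated in the clasped-hands posture with the
    middle fingers bent.  A real model would compute this from hand
    anatomy (tendon linkages, joint limits) and physics; here we
    hard-code plausible values purely for illustration."""
    mobility = {
        "thumb": 6.0,
        "index": 4.0,
        "middle": 0.0,  # bent by the posture itself, so excluded below
        "ring": 0.0,    # constrained (in this stub) by the bent neighbors
        "little": 3.0,
    }
    return mobility[finger]

# "Compile results": run the simulated experiment over every finger
# pair and record which pairs cannot be separated at all.
stuck_pairs = [f for f in FINGERS
               if f != "middle"  # middle fingers are bent by construction
               and separation_range(f) == 0.0]
print(stuck_pairs)  # the discovered "idiosyncratic" fact: ['ring']
```

The point of the sketch is the methodology, not the numbers: a machine that runs the experiment over its model discovers the fact the same way we did, by trying it.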
Idiosyncratic facts about human behavior are not "unfair." That any behavior can be understood (described computationally) is the fundamental assumption of science.
Most of French's argument, that the way forward in AI evolves from brute force (unprecedented volumes of data, speed of processing, and new algorithms), should be weighed with a caveat: trying to sidestep "Why?" belongs in the category of "type mismatch."
Turing thought computers could eventually simulate human behavior. He never proposed the Turing test as the way forward in AI, suggesting instead abstract activities (such as playing chess) and teaching computers to understand and speak English, as a parent would normally teach a child. He said, "We can only see a short distance ahead, but we can see plenty there that needs to be done." I say, let's not be in such a hurry to bid farewell to the Turing test.
New London, NH
The following letter was published in the Letters to the Editor in the March 2013 CACM (http://cacm.acm.org/magazines/2013/3/161185).
Exploring non-human intelligence, real and artificial, is fascinating. Consider novels like Arthur C. Clarke's 2001: A Space Odyssey and stories like Isaac Asimov's I, Robot, as well as cinematic adaptations like Blade Runner, based on Philip K. Dick's novel Do Androids Dream of Electric Sheep? The plot invariably revolves around machines with an intelligence level comparable to that of humans that communicate with humans, a setting not far from a Turing test. Fascinating, because deep down, we, as humans, believe we are unique in our level of cognition and ability to emote.
A credible intelligent agent must be able to relate to human perception, reasoning, communication, and life experience, including emotion. In "Moving Beyond the Turing Test" (Dec. 2012), Robert M. French argued this is impossible, outlining a scenario only a human could truly understand, backed by an example involving a series of instructions for manipulating one's fingers. He implied that answering a question about a particular step in the sequence is, and always will be, out of bounds for machines. His assertion (about answering out-of-bounds questions) was: "Don't try; accept that machines will not be able to answer them and move on."
I must disagree. My company, North Side Inc. (http://www.northsideinc.com/), pursues research and development toward endowing machines with verbal ability anchored in real-world knowledge. Work in this direction requires that we account for (and simulate) human perception, motor function, cognition, and emotion. Though still far from being able to pass the Turing Test, we are making good progress; for descriptions of our recent work on embodied intelligent agents with conversational ability, see our video at http://www.botcolony.com and my paper at http://lang.cs.tut.ac.jp/japtal2012/special_sessions/GAMNLP-12/papers/gamnlp12_submission_3.pdf. Credible, high-fidelity agents with human-like behavior promise great technological and economic benefit in such fields as entertainment, mobile computing, e-commerce, and training. We have also found that an agent attempting to emulate human behavior (and often failing) has a quirky, humorous side that makes it endearing. Why go for a humorless computer in a world where marketers dream of intelligent assistants connecting (emotionally) with their human owners? In 1996, Byron Reeves and Clifford Nass offered ample evidence for the theory that people tend to treat computers and other media as if they were real people in The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places (http://csli-publications.stanford.edu/site/1575860538.shtml).
We must keep trying to make intelligent agents as credible and human-like as we know how. The premise of French's article, however, was that it is time for the Turing Test to take a bow and leave the stage. Embodied artificial cognition is an extremely difficult (but fascinating) endeavor, and the benefits of success are enormous. It is far too early to even contemplate giving up.
Joseph claims his robots are "making good progress" toward passing a full-blown Turing Test. This claim is delusional, cynical (perhaps meant to attract financing), or a sign that he does not fully understand how incredibly difficult it would be for a machine to actually pass a carefully constructed Turing Test. My point in the article was that intelligent robots, capable of meaningful interaction with humans, do not have to be Turing-Test-indistinguishable from humans. Just ask Jimmy [North Side's robot] whether Ayame [North Side's nominal adult human] can put her little finger all the way up her nose.
Robert M. French