In 1950, Alan Turing wrote a paper entitled "Computing Machinery and Intelligence."a He proposed a test in which a human attempts to distinguish between a human and a computer by exchanging text messages with each of them. If the human is unable to distinguish between the two, the computer is said to have passed the "Turing Test." There were variations, including one in which a human interrogator, interacting with a man and a woman, was to try to tell which was the man and which was the woman. Turing called this the "Imitation Game." The first version is now sometimes called the "Standard Turing Test."
In this modern era, in which the Internet and the World Wide Web play such visible roles, a different problem arises. In this version, which I will call "Turing Test 2," a computer program undertakes textual interactions with a human and another computer. The task of the program is to distinguish between the human and the computer. If the program successfully identifies which correspondent is a human and which is a computer, it has passed Turing Test 2; if it cannot, it fails the test. One particular form of this test is called a CAPTCHAb (Completely Automated Public Turing test to tell Computers and Humans Apart). These tests take many forms, but a popular variation is to display a distorted image of a word or a random string of numbers and characters. In theory, a human interacting with the CAPTCHA will respond with the correct alphanumeric string, while a computer program, interacting with the same image, will not succeed. There are other variations, for example, one in which an image of an equation is displayed and the solution to the equation must be entered in response. Assuming the image is just a set of pixels, the challenge for the computer program trying to appear human is to correctly identify the equation and solve it.
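The challenge-and-response protocol described above can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical arithmetic CAPTCHA in which the server generates an equation and checks the submitted answer; a real deployment would render the challenge as a distorted image rather than plain text, precisely so that a program cannot trivially parse it.

```python
import random

def make_captcha():
    """Generate a simple arithmetic challenge and its expected answer.

    In a real CAPTCHA the challenge would be rendered as a distorted
    image of the equation; it stays as text here purely to illustrate
    the challenge/response protocol.
    """
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a} + {b} = ?", a + b

def verify(expected, response):
    """Return True if the responder solved the challenge correctly."""
    try:
        return int(response.strip()) == expected
    except ValueError:
        return False

# The server issues a challenge; the responder (human or bot) replies.
challenge, answer = make_captcha()
print(challenge)
print(verify(answer, str(answer)))   # a correct solver passes
print(verify(answer, "gibberish"))   # a failed parse does not
```

The security of the scheme rests entirely on the image-recognition step that this sketch omits: once a program can read the pixels as reliably as a human, the verification logic itself offers no protection.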
Much has been written about the increasingly sophisticated ability of computer programs to pass CAPTCHA tests, or about a variation in which the program sends the image to a human on the Internet who is given some benefit or payment for solving the problem; the solution is then relayed by the imitating program to the computer program running the CAPTCHA test. This is not merely an amusing game. As computer programs have grown capable of more sophisticated behavior, they are being used to emulate humans in order to fool less-sophisticated programs into treating computer-generated actions as if they originated from a human. This is an important practical problem because failure to make this distinction may mean malicious programs can register millions of fake identities on an email system for purposes of sending phishingc email messages or making comments on social media Web pages. One reason this is now a serious matter is that such programs (called "bots") are being used to distort news and social media to trick humans into accepting false information ("fake news") as true, or simply to reinforce incorrect or biased beliefs through confirmation bias and "echo chamber" effects. Of course, bots can also be used to launch denial-of-service attacks or to pollute crowdsourcing systems. The technical challenge is that a computer program may be hard-pressed to distinguish between input from a human and input from a computer, because the same paths and media are used to carry both.
On the other hand, increasingly difficult CAPTCHA practices can drive humans crazy. "Which pictures do NOT contain traffic signs?" "Confirm this statement: 'there are no images or partial images of automobiles in this set of pictures.'"
Humans may justifiably want to throw their computers through the nearest window when poorly executed CAPTCHAs prevent them from legitimately accessing online services.
a. Turing, A. Computing machinery and intelligence. Mind 59, 236 (Oct. 1950), 433–460; doi: 10.1093/mind/LIX.236.433
c. Messages intended to fool a human user into clicking on a hyperlink that leads to the ingestion of malware into the user's computer or smartphone, or into taking an action such as sending money to the account of a person committing a fraud.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.