Opinion
Letters to the Editor

On the Significance of Turing’s Test


Contrary to what the first sentence of Alan Turing’s 1950 paper "Computing Machinery and Intelligence" might suggest, the paper was not about the question "Can machines think?" Turing quickly rejected that question because its meaning is undefined, replacing it with a vaguely related alternative that is relatively unambiguous. He said he considered "Can machines think?" too meaningless to deserve discussion and did not claim his replacement question was, in any sense, equivalent to the original question.

Nobody interested in the so-called "Turing Test" should neglect the work of the late MIT professor Joseph Weizenbaum in the mid-1960s. Using a computer that was extremely limited in computing power by today’s standards, he created a simple program called Eliza that could carry on an apparently interesting conversation with a user. Nobody who examined Eliza’s code would consider the program "intelligent." It clearly had no information about the topic being discussed. Over the years I have met several people who viewed Eliza as a serious attempt to pass the so-called Turing test; some actually worked to improve it. Weizenbaum found this surprising and insisted they were wrong. He had written the program and the related paper to show that passing the Turing test was trivial and that the test should not be used as a measure of intelligence.

Moshe Y. Vardi was correct in his Editor’s Letter "Would Turing Have Passed the Turing Test?" (Sept. 2014) when he suggested that Turing’s "imitation game" should be regarded as nothing more than a game. I would go further. Computer scientists have wasted far too much time and resources trying to answer "big" but vague philosophical questions (such as "Can a machine be intelligent?"). Their effort would be better spent on answering "little" questions about specific issues (such as "Can a computing machine be trusted to park a car?"). Discussion of such practical matters would be far more useful than endless debates about the Turing test and who or what might pass it.

David Lorge Parnas, Ottawa, Canada


ACM SIG Health and Its Implications

The interesting summary of ACM’s strategic planning retreat in November 2013 by John White in "ACM’s Challenges and Opportunities," his "From ACM’s Chief Executive Officer" column (Oct. 2014), prompts a few questions. The SIG structure was described as "relatively healthy." But by what measures? In 1990, ACM had more than 100,000 SIG members. The number of computing professionals grew by one or two orders of magnitude over the ensuing 25 years, yet collective SIG membership declined steadily to fewer than 40,000, setting a new record low in 2014. This raises questions about two of the three crosscutting topics identified at the retreat: community, obviously, and practitioners, who accounted for much of the decline, since the 100,000 SIG members of 1990 could not all have been academics. The third major topic, quality, could be implicated in the damage to practitioner participation and sense of community. Relentless conference demands for technical quality drove out urgent but unpolished practitioner observations and informal research thoughts, first in papers, then in workshops. A quality focus favors incremental work, with its unimpeachable literature reviews, methods, and analyses. To find realistic paths forward, ACM must understand more deeply the forces that brought us here. How much time do we have?

Jonathan Grudin, Redmond, WA

Author’s Response: Membership is not the only (or even the right) metric for assessing the health of the technical communities represented by ACM SIGs. Looking at the number, value, and reach of SIG technical activities is far more important. ACM SIGs run almost 200 conferences, workshops, and symposia each year. These events are attended by tens of thousands of academics, researchers, and practitioners, and the results are heavily downloaded from the ACM Digital Library. Such reach and impact are how we assess the health of our technical communities, and most are healthy. However, there is still much work to do in rethinking conferences, serving practitioners, and finding the right balance in publications; see also "Dealing with the Deep, Long-Term Challenges Facing ACM (Part I)" by Alexander L. Wolf, ACM President (Nov. 2014).

John White, ACM CEO


Secure Software Costs No More

Poul-Henning Kamp failed to provide data for the claim in his article "Quality Software Costs Money—Heartbleed Was Free" (Aug. 2014) that quality software costs more, or for the implication that for-profit software (such as Windows) is more secure than free and open source software (FOSS). The time between serious FOSS bugs affecting many millions of machines is usually measured in years, compared with the month or two between Windows vulnerabilities that affect most Windows versions and allow remote execution of arbitrary code. Microsoft’s policy of "Patch [every] Tuesday" is an admission of this imperative.

I encourage for-profit companies and others to donate more toward FOSS development, much of which is already done for free by individuals and by companies donating their employees’ time; for example, I donated the concept and first implementation of keyboard locking, LOCK(I), to Berkeley Software Distribution (BSD) Unix.

Wikipedia reports the Heartbleed security bug affected few major sites directly; major banks, Amazon, Apple, eBay, the IRS, PayPal, and Target were not affected, though Gmail, Google, HealthCare.gov, Netflix, Yahoo, and YouTube were. Good security practice requires companies and users alike to use a different password for each high-security purpose, such as an individual online-banking or shopping site, so that a compromised password cannot be used against one’s other accounts.

Bob Toxen, Duluth, GA

Author’s Response: I am surprised Toxen read an endorsement of closed-source quality into my article about funding open source, as no such thing was my intent. I wrote about the horrible quality issues in closed source in a previous Communications article, "The Software Industry is the Problem" (Nov. 2011) and suspect he will find we are in violent agreement if he reads it.

Poul-Henning Kamp, Slagelse, Denmark


Trust Personal Data to Social Scientists?

Jon P. Daries et al.’s article "Privacy, Anonymity, and Big Data in the Social Sciences" (Sept. 2014) included a nice explanation of several considerations for masking data that had not occurred to me before. However, suggesting we trust social scientists’ ethics to preserve personal privacy is naïve. The continuing Edward Snowden disclosures make clear that even highly motivated, ethical people can betray their principles despite the "best" of intentions. I suspect civilization will need millennia more before it begins to let data security and privacy "depend on the kindness of strangers" or even on social scientists.

Steven L. Newton, Milwaukee, WI


For Better Security, Try a Random-Point Password Sequence

In "Neuroscience Meets Cryptography: Crypto Primitives Secure Against Rubber Hose Attacks" (May 2014), Hristo Bojinov et al. described how authorized users can be trained on a 30-character password sequence, with those requesting access to a secure system required to select it from among three 30-character sequences. Authorized users would identify themselves by clearly performing better on the training sequence than on the other two sequences.

The article also described one type of attack on such a system in terms of an unauthorized user deliberately underperforming on two of the sequences, giving the user a 1/3 chance of doing better on the training sequence. The article suggested an alternative would be to train the user on multiple sequences, then intermix these trained sequences with other sequences.

However, a simpler and more effective way to provide system security also suggests itself. Three 30-character sequences could be viewed as equivalent to a single 90-character sequence. Suppose a system administrator responsible for training a system’s authorized users selects a random point within the two random sequences and begins the 30-character training sequence at that point. A potential attacker would then have to identify that point. Because the two random sequences together contain 60 characters, there are 61 possible starting points, so this added complexity would reduce the attacker’s 1/3 chance of selecting the right sequence to 1/61, thereby strengthening access security.
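A short sketch makes the count concrete; the alphabet and the randomly generated sequences below are placeholders, since any 30-character sequences would do:

```python
import random
import string

alphabet = string.ascii_lowercase
random_part = "".join(random.choices(alphabet, k=60))  # the two 30-character random sequences, concatenated
trained = "".join(random.choices(alphabet, k=30))      # the 30-character training sequence

# The training sequence can begin at any of the 61 points within the
# 60 random characters: before position 0 up through after position 60.
starting_points = len(random_part) + 1
print(starting_points)        # 61
print(1 / starting_points)    # an attacker's chance of guessing the start point
```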

Intermixing character sequences promises even more possibilities for hiding the right sequence. In fact, it might not be necessary for the system administrator to have authorized users train on multiple sequences; instead, the 30-character training sequence could be broken into three 10-character subsequences. Fairly standard combinatorial procedures suggest there would be 238,266 ways to place the 10-character subsequences in the 60-character random sequence.
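The 238,266 figure can be reproduced with standard counting functions under one reading of the intended model (an assumption on my part, not stated in the letter): the three 10-character blocks stay intact, are distinguishable, and may appear in any order among the 60 random characters, whose own relative order is fixed.

```python
from math import comb, factorial, perm

# Interleaving three intact, distinguishable 10-character blocks with 60
# ordered random characters yields a 90-character string built from 63
# items; choose which 3 of the 63 item positions hold the blocks, then
# order the blocks among those positions.
ways = comb(63, 3) * factorial(3)
print(ways)         # 238266
print(perm(63, 3))  # the same count, written as 63 * 62 * 61
```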

Hiding the correct sequence in other sequences also suggests new secure-access research directions. In particular, sequence hiding assumes training on a 30-character sequence translates to improved performance when that sequence is embedded in a longer one. But does such an approach to hiding passwords hold for all possible sequences and access privileges? Moreover, would training on the 30-character sequence translate to improved authorized-user performance on a 10-character subsequence?

Jeffrey Suzuki, Brooklyn, NY

Authors’ Response: Suzuki points out several possible improvements and extensions that may make user-authentication systems that leverage implicit sequence learning more practical. The authentication method is not sensitive to the starting point of the sequence, so, sadly, we cannot take advantage of that suggestion. However, embedding parts of the trained sequence among noise sequences during the authentication challenge seems to work quite well. Under National Science Foundation sponsorship, we are continuing research on this topic and hope to be able to share more of our findings in the near future.

Hristo Bojinov, Daniel Sanchez, Paul Reber, Dan Boneh, and Patrick Lincoln
