Communications of the ACM

Who Begat Computing?


Communications Editor-in-Chief Moshe Y. Vardi

The Turing Centenary with its furious pace is now behind us and we can afford some reflection on what has transpired. What started as an idea that the centenary of one of the founding figures of computing should be celebrated has turned into a global social phenomenon. A quick perusal of the Turing Centenary Web page (http://www.turingcentenary.eu/) reveals an amazing explosion of meetings, lectures, exhibitions, and volumes.

There is a risk, however, that in our focus on highlighting Turing's seminal contributions we may have gone from celebration to hagiography. Listening to so many speakers extol Turing's accomplishments, one could end up believing that Turing single-handedly begat computing, being the father of computability, universal machines, stored-program computers, crypto-analysis, and artificial intelligence. This picture is simplistic and does not do justice to the richness of the story of how computing emerged between 1930 and 1950. We do not have one founding figure, we have several, and we should recognize and celebrate all of them.

The study of computability was launched at Princeton University, where Alonzo Church, together with his students Stephen Kleene and Barkley Rosser, formalized computability in the early 1930s, first in terms of the lambda-calculus and then in terms of recursive functions (proposed by Jacques Herbrand and Kurt Gödel). They also proved the equivalence of the two formalisms, which led to Church's identification of computability with recursiveness. Yet this characterization of computability was not compelling enough, and Gödel described it as "thoroughly unsatisfactory." It was then Turing's influential analysis of computability in terms of finite machines, and its equivalence to the lambda-calculus and to recursive functions, that led to our currently accepted understanding of computability, referred to as the Church-Turing Thesis. (Emil Post independently formulated another notion of machines, which turned out to be equivalent to Turing machines.)
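To make the equivalence concrete, here is a small illustrative sketch (my own, not part of the editorial): the same trivial function, the successor on natural numbers, expressed both in lambda-calculus style via Church numerals and as an ordinary recursive definition.

```python
# Lambda-calculus style: a Church numeral encodes n as "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))

def to_int(church):
    # Decode a Church numeral by counting function applications.
    return church(lambda k: k + 1)(0)

# Recursive-function style: successor as a direct recursive definition.
def succ_rec(n):
    return n + 1

three = succ(succ(succ(zero)))
assert to_int(three) == 3
assert succ_rec(2) == 3  # both formalisms compute the same function
```

The point of the Princeton group's equivalence proofs was precisely this: superficially different formalisms pick out the same class of computable functions.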

Turing was a leading scientist in deciphering the German Enigma code at Bletchley Park in the early 1940s. Yet, unlike his computability work, which was done independently of the Princeton effort, breaking the Enigma was a collective effort. To start with, Turing was building on previous work by Polish and British code-breakers. I.J. Good played a key role in the Bayesian statistical analysis of Enigma messages and Gordon Welchman made key contributions to the design of the Bombe, the machine that used brute-force search to identify correct Enigma rotor positions. Overall, one must remember that the British code-breaking project was a huge effort; 12,000 people toiled at Bletchley Park during the war.
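The Bombe's core idea, brute-force search over machine settings constrained by a "crib" (a guessed plaintext fragment), can be suggested by a vastly simplified toy. This sketch is hypothetical and uses a single Caesar-style rotor as a stand-in for Enigma's rotor stack; it is not the actual Bombe logic.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_encipher(plaintext, offset):
    # One Caesar-style "rotor" -- a toy stand-in for Enigma's rotors.
    return "".join(ALPHABET[(ALPHABET.index(c) + offset) % 26] for c in plaintext)

def candidate_offsets(ciphertext, crib):
    # Try every rotor position; keep only those consistent with the crib,
    # as the Bombe rejected Enigma settings contradicted by a crib.
    return [k for k in range(26) if toy_encipher(crib, k) == ciphertext[:len(crib)]]

ciphertext = toy_encipher("WETTERBERICHT", 7)  # "weather report", a classic crib
assert candidate_offsets(ciphertext, "WETTER") == [7]
```

The real machine searched a space of rotor orders, positions, and plugboard constraints many orders of magnitude larger, which is why electromechanical automation, and the collective engineering behind it, mattered.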

The claims that Turing invented the stored-program computer (a term that typically refers to the uniform handling of programs and data) are simply ahistorical. One can trace the kernel of the idea of handling programs and data uniformly back to Gödel's arithmetization of provability in 1931. The idea then showed up again in the lambda-calculus, recursive functions, and Turing machines. Turing invented a universal machine, a machine that can simulate all other machines, but he was preceded by the Princeton group, who constructed a universal lambda-term and a universal recursive function. While these ideas undoubtedly influenced the efforts of John von Neumann and his collaborators at the University of Pennsylvania in the 1940s, we should not confuse a mathematical idea with an engineering design. It was the EDVAC Report of 1945 that offered the first explicit exposition of the stored-program computer. Turing's ACE Report, which elaborated on this idea and cited the EDVAC Report, was submitted in early 1946. The first embodiments of the stored-program computer were the Manchester Baby and the Cambridge EDSAC, put into operation in 1949 and preceding the Pilot ACE, which was based on Turing's design and first run in 1950.

Turing was not the first to think about artificial intelligence (AI). The philosopher Charles S. Peirce wrote in 1887: "Precisely how much the business of thinking a machine could possibly be made to perform, and what part of it must be left to the living mind is a question not without conceivable practical importance." Nevertheless, Turing's 1950 paper "Computing Machinery and Intelligence" is indeed the first deep philosophical investigation of the possibility of artificial intelligence. While the Turing Test, referred to in the paper as the "Imitation Game," has been rather under-influential in the history of AI, Turing does deserve the credit for putting the question of general machine intelligence so squarely on the table.

Computing emerged during the 1930–1950 period because the time was right. Many people played key roles in this development; assigning precise credit is quite impossible. Turing was a great computing pioneer, and his place in the computing pantheon is secure, but he is not alone there.

Moshe Y. Vardi, EDITOR-IN-CHIEF


©2013 ACM  0001-0782/13/01

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2013 ACM, Inc.


Comments


Marcelo Carvalho

Pierre Levy tells us that the history of computing (as indeed perhaps any history) is more like a distribution of indefinite creative moments and places, a sort of rutted, unfinished, irregular meta-network in which each node, each actor, defines its own network topology according to its own ends and interprets in its own way everything that comes from its neighbors. [...] In this view of things, the notions of precursor or founder, taken in an absolute sense, have little relevance. On the other hand, one can discern certain maneuvers by actors who want to impose themselves as founders, or who designate prestigious ancestors in the near or distant past and appropriate them by proclaiming themselves their descendants. There is no unique "cause" or "factor," only circumstances, occasions, people, or groups to which individuals give different meanings. There are no calm bloodlines or peaceful successions, but sword blows from all sides, attempted captures of inheritance, and endless disputes over heritage.

