Communications of the ACM

ACM Opinion

The Turing Trap

The risks of the Turing Trap are increased not just by one group in our society, but by the misaligned incentives of technologists, businesspeople, and policymakers.

In 1950, Alan Turing proposed an "imitation game" as the ultimate test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from those of a human? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs.

But not all types of artificial intelligence (AI) are human-like. In fact, many of the most powerful systems are very different from humans, and an excessive focus on developing and deploying human-like AI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created.

From Stanford Digital Economy Lab