BLOG@CACM
Artificial Intelligence and Machine Learning

Nobel Prizes and AI: The Promise, the Peril, and the Path Forward

Artificial intelligence was a key part of the story behind the Nobel Prizes recently awarded to computer scientists.

[Image: entrance to the Nobel Prize Museum, Stockholm]

There is no Nobel Prize for computer science, but this year the Nobel Committee made three awards with deep ties to computing and innovation. Viewed together, the awards may also be a statement from the Nobel Committee about the current state of Artificial Intelligence (AI) and the challenges ahead.

Demis Hassabis, a former chess prodigy and a co-founder of Google DeepMind, received the Nobel Prize in Chemistry, along with John Jumper, for their work on protein folding. Hassabis and Jumper constructed an AI model that predicts the structure of virtually all 200 million known proteins. As the Nobel Committee explained, “Researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.”[1]

Developing AI programs for chess loomed large in Hassabis’s work. At DeepMind, Hassabis revolutionized the world of game programming with the introduction of reinforcement learning algorithms built on neural networks. Traditionally, chess programs relied on the expertise of Grandmasters, whose knowledge was encoded in hand-tuned evaluation functions (their parameters now called “weights”) used to select the best move among several options. With self-learning techniques, AlphaZero, developed at DeepMind under Hassabis, ignored the wisdom of the Grandmasters, started from only the rules of the game, and became the strongest chess program in the world after four hours of self-play. A simplified sketch of the contrast between the two approaches appears below.
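For readers curious about that contrast, here is a minimal, purely illustrative Python sketch. It is not AlphaZero (which combines deep neural networks with Monte Carlo tree search); the piece values, learning rate, and update rule are assumptions chosen only to show the difference between expert-tuned weights and weights adjusted from self-play outcomes.

```python
# Purely illustrative toy, NOT AlphaZero: the piece values, learning rate, and
# update rule are assumptions used only to contrast the two approaches.
import random

# Traditional approach: experts hand-tune fixed piece values used by the
# evaluation function to score positions.
HAND_TUNED_WEIGHTS = {"pawn": 1.0, "knight": 3.0, "bishop": 3.2, "rook": 5.0, "queen": 9.0}

def hand_tuned_eval(material):
    """Score a position (dict of piece counts) with expert-chosen weights."""
    return sum(HAND_TUNED_WEIGHTS[piece] * count for piece, count in material.items())

# Self-play approach: start from arbitrary weights and nudge them toward the
# outcomes of games the program plays against itself, with no expert input.
learned_weights = {piece: random.uniform(0.5, 1.5) for piece in HAND_TUNED_WEIGHTS}

def learned_eval(material):
    """Score a position with the weights learned so far."""
    return sum(learned_weights[piece] * count for piece, count in material.items())

def self_play_update(positions, outcome, lr=0.001):
    """Adjust learned weights so scores of visited positions track the game result (+1 win, -1 loss)."""
    for material in positions:
        error = outcome - learned_eval(material)
        for piece, count in material.items():
            learned_weights[piece] += lr * error * count

# One fabricated self-play "game": two positions reached, ending in a win (+1).
game = [{"pawn": 8, "knight": 2, "bishop": 2, "rook": 2, "queen": 1},
        {"pawn": 6, "knight": 1, "bishop": 2, "rook": 2, "queen": 1}]
self_play_update(game, outcome=+1.0)
print(learned_weights)
```

The point of the sketch is only the source of the knowledge: in the first function the numbers come from human experts, while in the second they are adjusted automatically from the results of the program’s own games.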

AlphaFold2 followed AlphaZero, built to understand the “building blocks of life”: the proteins found in every cell of the human body.

Geoffrey Hinton, along with John Hopfield, received the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”[2] Hinton had earlier received the Turing Award, along with Yoshua Bengio and Yann LeCun, for “conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.”[3]

Hinton is now leading efforts to establish limits on AI systems. Hinton resigned from Google in 2023 so he could speak more freely about his concerns with the rapidly developing technology. In numerous interviews since, he has warned about the risks of unregulated AI. Hinton told CNN’s Jake Tapper, “I want to blow the whistle and say we should worry seriously about how we stop these things getting control over us.”[4] Hinton endorsed SB 1047, California legislation that would establish accountability for large AI systems and require the creation of a mechanism to stop AI systems no longer under human control.

Hinton follows a line of distinguished scientists who stand on the front lines of innovation and also on the front lines of the call for regulation. In the 1980s, it was computer scientists in Silicon Valley who warned of the risk of AI warfare and established Computer Professionals for Social Responsibility (CPSR). Today it is Geoffrey Hinton, Yoshua Bengio, Stuart Russell, and others who are making the breakthroughs and simultaneously calling for accountability. The need to maintain human control is foundational for safe, secure, and trustworthy AI.

Daron Acemoglu, an MIT professor, along with Simon Johnson and James A. Robinson, received the Nobel Memorial Prize in Economic Sciences “for studies of how institutions are formed and affect prosperity.”[5] The Nobel Committee explained that Acemoglu and his colleagues found that inclusive institutions create long-term benefits for everyone, while extractive institutions provide short-term gains for only the people in power.

Acemoglu’s work spans economic systems across regions and centuries. His most recent book, Power and Progress, argues that societies receive the benefits of technological innovation only when political institutions help ensure broad social benefits. His work reaches back to the Dutch windmills and British steam engines and forward to the present-day tech industry of Silicon Valley.

Acemoglu’s work also identifies strategies and social systems that promote broad prosperity. According to his studies, without such measures, increasing concentrations of wealth and power are the natural outcome of innovation. And so it is notable that Professor Acemoglu recently told an audience at UNESCO headquarters in Paris, “the future of AI depends on the choices we make as individuals, regulators, and society.”

Taken together, the three awards send a remarkable message from the Nobel Committee about the current moment in Artificial Intelligence. Hassabis’s practical application of deep learning techniques points to a future of medical breakthroughs and scientific innovation. Hinton’s award and his subsequent advocacy are a reminder of the perils of AI. Acemoglu provides a path forward, based on an examination of how innovation impacts societies and how societies create institutions to enable prosperity.

The timing of these three awards is also significant. 2024 has been by far the most consequential year in the development of norms for AI governance. Earlier this year, the European Union finalized the first comprehensive regulation of Artificial Intelligence.[6] Later in the year, the Council of Europe opened for signature the first internationally binding AI treaty, seeking to promote fundamental rights, democracy, and the rule of law.[7] And the United Nations recently adopted a Global Digital Compact that establishes an Independent International Scientific Panel to promote scientific understanding of AI and its risks and opportunities.[8]

Marc Rotenberg

Marc Rotenberg is founder and executive director of the Center for AI and Digital Policy.


[1] https://www.nobelprize.org/prizes/chemistry/2024/press-release/

[2] https://www.nobelprize.org/prizes/physics/2024/press-release/

[3] https://awards.acm.org/about/2018-turing

[4] https://www.cnn.com/2023/05/02/tech/hinton-tapper-wozniak-ai-fears/index.html

[5] https://www.nobelprize.org/prizes/economic-sciences/2024/press-release/

[6] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[7] https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature

[8] https://www.un.org/techenvoy/global-digital-compact
