It is always worth paying attention when someone wins a Nobel Prize for their life’s work, but a couple of awards this year are drawing more attention than usual—primarily because computer science played a role in making both of them possible.
The 2024 Nobel Prize in Chemistry was jointly awarded. Half of it went to David Baker, a professor at the University of Washington, "for computational protein design," based on Baker’s lifelong work using computation to design novel proteins from scratch. The other half went to Demis Hassabis and John Jumper at Google DeepMind "for protein structure prediction," according to the Nobel Committee. Both were instrumental in creating AlphaFold2, an AI model launched in 2020 that has already predicted the structures of nearly all known proteins: some 200 million in all.
Similarly, the 2024 Nobel Prize in Physics was awarded jointly. Half went to physicist John Hopfield, who created "an associative memory that can store and reconstruct images and other types of patterns in data," according to the Nobel Committee. That invention is now known as the "Hopfield network." The other half went to Geoffrey Hinton, an ACM A.M. Turing laureate and one of the "godfathers" of modern artificial intelligence, whose work built on Hopfield’s to invent the "Boltzmann machine," an early type of neural network that helped form the foundation of today’s AI revolution.
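To make Hopfield’s contribution concrete: an associative memory of this kind can be written in a few lines of code. The toy Python sketch below is our illustration, not Hopfield’s original formulation; it stores a binary pattern with a Hebbian weight rule and then recovers that pattern from a corrupted cue.

```python
import numpy as np

# Toy Hopfield network: store patterns with a Hebbian rule,
# then recover a stored pattern from a noisy cue.
def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:           # p is a vector of +1/-1 values
        W += np.outer(p, p)      # Hebbian update: strengthen co-active pairs
    np.fill_diagonal(W, 0)       # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    state = state.copy()
    for _ in range(steps):       # asynchronous updates lower the network's energy
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

memory = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(memory)
cue = memory[0].copy()
cue[:2] *= -1                    # corrupt two entries of the cue
print(recall(W, cue))            # converges back to the stored pattern
```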
Both awards literally would not have been possible without some serious computer science. But what, from a computational perspective, did these recipients do to merit their prizes? We talked to computer scientists and domain experts to better understand the technical details of the work that led to both prizes—including hearing directly from one of the Nobel winners.
The 2024 Nobel Prize in Chemistry
The work that earned David Baker his Nobel is ongoing. He is being recognized for "computational protein design," a problem he has pursued for decades. His big break came in 2003 with the creation of Top7, the first entirely new protein unlike any found in nature.
Ever since, he’s been working on "de novo" computational protein design, he said. "My lab at the University of Washington has been working on the problem of de novo protein design, where one seeks to go beyond studying existing proteins to create new ones with novel functions, from scratch, using a combination of computational methods and wet lab techniques."
There were plenty of milestones in his career after the initial 2003 breakthrough, including the creation of Rosetta, one of the seminal early computer programs for predicting protein structures. But things got even more interesting once the work of fellow prize winners Hassabis and Jumper started to show results, more than a decade and a half after Baker’s initial breakthrough.
“While great progress was made, the field was significantly accelerated when AlphaFold proved just how powerful deep learning methods could be for protein science,” said Baker.
AlphaFold (the latest iteration of which is AlphaFold3) uses state-of-the-art AI, chiefly deep learning built on a transformer architecture, to achieve "unprecedented accuracy" in predicting the structure of proteins, said Baker. AlphaFold2, the iteration released in 2020, is the one that won Hassabis and Jumper this year’s prize. Since its release, it has predicted the structures of nearly all known proteins.
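AlphaFold2’s architecture is far more elaborate (its Evoformer module attends over sequence alignments and residue pairs simultaneously), but the transformer operation at its heart is scaled dot-product attention. The numpy sketch below is a generic illustration of that operation, not DeepMind’s code; the residue embeddings are random stand-ins.

```python
import numpy as np

# Generic scaled dot-product attention, the core operation of a
# transformer layer (AlphaFold2's Evoformer builds on this idea).
def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # mix values by attention weight

rng = np.random.default_rng(0)
residues, d = 16, 8                                  # e.g., 16 residue embeddings
x = rng.normal(size=(residues, d))
print(attention(x, x, x).shape)                      # (16, 8): one updated vector per residue
```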
Baker very quickly saw the potential of DeepMind’s approach. “My lab quickly developed several new deep-learning-based methods for protein design,” he said, identifying a few as particularly notable.
One is RFdiffusion, which uses a denoising diffusion generative model to create novel protein structures (a toy sketch of the idea follows below). Another, ProteinMPNN, is a structure-encoder, sequence-decoder model built on a graph neural network architecture; it excels at the "inverse design problem" of generating a protein sequence for a given protein structure. And there’s RF All-Atom, which goes beyond protein structure prediction to predict how proteins interact with small molecules, metal ions, and nucleic acid complexes.
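As a rough intuition for how RFdiffusion generates structures, consider the denoising loop below. It is a deliberately toy Python sketch, not the Baker lab’s model: the `denoise_step` function is a hypothetical stand-in for the trained neural network, and the "structure" is just a set of random 3-D points.

```python
import numpy as np

# Toy denoising-diffusion loop: begin with random 3-D "backbone"
# coordinates and iteratively denoise them. In RFdiffusion a trained
# network plays the role of `denoise_step`; here a placeholder pulls
# points toward a target just to show the shape of the generative loop.
def denoise_step(coords, t, target):
    # Hypothetical stand-in for the learned denoiser.
    return coords + (target - coords) / (t + 1)

rng = np.random.default_rng(1)
target = rng.normal(size=(10, 3))     # pretend "clean" structure
coords = rng.normal(size=(10, 3))     # pure noise at the start
for t in reversed(range(50)):         # walk from high noise to low
    coords = denoise_step(coords, t, target)
print(np.abs(coords - target).max())  # ~0: the loop has "generated" the target
```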
“These tools have been demonstrated to be extremely effective at a broad range of protein design tasks, and the novel proteins generated with these methods have the potential to be transformative in a range of fields, including biology, medicine, materials science, industrial chemistry, green tech, and more,” said Baker.
The 2024 Nobel Prize in Physics
In something of a twist of fate, the work that won this year’s physics prize helped kick off the deep learning revolution that made DeepMind’s work possible.
The work on the Boltzmann machine for which Hinton was awarded took place in the 1980s, but it paved the way for the AI advancements we’re seeing today. That’s because it took a huge step forward by introducing a way for computers to mimic human brain processes, said Mohammad Alothman, the founder of AI Tech Solutions and an expert in AI development.
“The Boltzmann machine was one of the first models capable of learning internal representations and pattern recognition through a process of stochastic learning,” said Alothman. “What’s important from a computational view is that this allows machines to deal with incomplete or noisy data, much like how the human brain works.”
Before Hinton’s work, learning algorithms generally required clean, structured data to produce accurate results. The Boltzmann machine allowed machines to learn in a more flexible fashion: its stochastic learning applies principles from statistical physics, layered on top of Hopfield’s advances, which is why Hinton shares the physics prize with Hopfield. That same flexibility let the machine learn patterns from training examples, making it an early precursor of the neural networks that make today’s more powerful AI possible.
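To see what "stochastic learning" means in practice, the minimal Python sketch below performs one Gibbs-sampling sweep of a Boltzmann-machine-style network. It is our illustration of the sampling rule, not Hinton’s code; each binary unit switches on with a probability given by the logistic function of its input, scaled by a temperature T borrowed from statistical physics.

```python
import numpy as np

# Minimal Boltzmann-machine-style stochastic update (one Gibbs sweep).
# Each binary unit turns on with probability sigmoid(input / T): at high
# temperature T the updates are nearly random, at low T nearly
# deterministic, mirroring the Boltzmann distribution of statistical physics.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(state, W, b, T, rng):
    for i in range(len(state)):
        activation = W[i] @ state + b[i]   # summed input from the other units
        p_on = sigmoid(activation / T)     # stochastic rule, not a hard threshold
        state[i] = 1.0 if rng.random() < p_on else 0.0
    return state

rng = np.random.default_rng(0)
n = 6
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                          # symmetric weights, as the theory requires
np.fill_diagonal(W, 0)                     # no self-connections
b = np.zeros(n)
state = rng.integers(0, 2, size=n).astype(float)
print(gibbs_sweep(state, W, b, T=1.0, rng=rng))
```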
“His work created a computational foundation that supports the vast AI systems we’re building now,” said Alothman.
Like Baker, Hinton didn’t stop after his first big breakthrough. In 2012, he supervised the development of AlexNet, a convolutional neural network that achieved unprecedented image recognition performance, kicking off a fresh wave of deep learning innovation that continued throughout the next decade.
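A convolutional network’s defining operation is sliding small learned filters across an image. The bare-bones numpy sketch below illustrates a single 2-D convolution with a hand-coded edge-detecting filter; it is purely illustrative and unrelated to AlexNet’s actual implementation, which stacked many learned filters, pooling, and dense layers at GPU scale.

```python
import numpy as np

# Bare-bones 2-D convolution: slide a filter over an image and record
# the dot product at each position. CNNs such as AlexNet learn the
# filter values; here we hard-code an edge detector for illustration.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # left half dark, right half bright
edge = np.array([[-1.0, 1.0]])           # responds to horizontal brightness jumps
print(conv2d(image, edge))               # strongest response at the boundary
```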
Hinton also spent much of that decade working for the same company as Hassabis and Jumper: Google.
Logan Kugler is a freelance technology writer based in Tampa, Florida. He is a regular contributor to CACM and has written for nearly 100 major publications.