In late 2018, Google AI researchers Anna Goldie and Azalia Mirhoseini got the go-ahead to test an elegant idea. Google had invented powerful computer chips called tensor processing units, or TPUs, to run machine learning algorithms inside its data centers—but, the pair wondered, what if AI software could help improve that same AI hardware?
The project, later codenamed Morpheus, won support from Google's AI boss Jeff Dean and attracted interest from the company's chipmaking team. It focused on a step in chip design when engineers must decide how to physically arrange blocks of circuits on a chunk of silicon, a complex, months-long puzzle that helps determine a chip's performance. In June 2021, Goldie and Mirhoseini were lead authors on a paper in the journal Nature that claimed a technique called reinforcement learning could perform that step better than Google's own engineers, and do it in just a few hours.
The results drew media coverage and attention in the semiconductor world. In a commentary on the Nature paper, Andrew Kahng, a professor at UC San Diego, predicted the technique would be quickly adopted by chipmakers. "To long-time practitioners," he wrote, "Mirhoseini and colleagues' results can indeed seem magical." Google's data centers now contain TPU chips created with help from Morpheus. Samsung and Nvidia have independently said they also use reinforcement learning to optimize chip designs.