Updates Spark Uproar

Google’s AlphaChip “updates” reignite criticism that the company is overstating the benefits of its AI-based chip design method.

When Google Chief Scientist Jeff Dean recently publicized what he called “some exciting updates” on the company’s AI-based chip design method, he was probably not surprised that critics re-emerged to take Google to task over its claims, first made in 2021 in a Nature paper.

“Our AI method has accelerated and optimized chip design, and its superhuman chip layouts are used in hardware around the world,” Dean proclaimed in a post on X on Sept. 26. In the post, he revealed Google has now given the AI method a name — AlphaChip — and that the company is now making available “a pre-trained AlphaChip checkpoint for the open source release that makes it easier for external users to get started using AlphaChip for their own chip designs.”

What Dean did not provide was something many experts have been demanding for more than three years: proof that Google’s AlphaChip is even equal to, let alone better than, the chip design methods available today from commercial vendors such as Cadence, Synopsys, and Siemens, and from the academic world.

“I think they have a new name, but I don’t know that they’re selling anything different,” said Patrick Madden, associate professor of computer science at Binghamton University in Binghamton, NY.

“The surprising thing to me is they did not say anything new,” echoed electronic design automation expert Igor Markov, a former professor at the University of Michigan. “It’s a nothing burger.” Markov said the major concerns about Google’s Nature paper have not been addressed.

“The community has not been able to verify the claims by Google,” said Moshe Vardi, professor of computational engineering at Rice University in Houston. “People cannot say the claims are wrong, but Google could have chosen to publish the evidence of public benchmarks.”

Vardi described Google’s assertions of AlphaChip’s performance as “hype of research.”

AlphaChip was developed by Google’s London-based DeepMind group and by a predecessor group, Google Brain. It uses an AI technique called reinforcement learning to improve the chip design process.
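
For readers unfamiliar with the technique, the toy sketch below shows the general shape of reinforcement learning applied to placement: an agent places blocks one at a time and is rewarded when the outcome needs less wiring. It is a deliberately tiny tabular Q-learning example; it is not AlphaChip’s actual algorithm, and every name and number in it is illustrative.

    # Toy tabular Q-learning sketch in the general spirit of RL-based
    # placement; far simpler than AlphaChip, with made-up parameters.
    import random
    from collections import defaultdict

    SLOTS = range(6)            # candidate locations on a 1-D "die"
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
    Q = defaultdict(float)      # Q[(state, action)] -> learned value

    def run_episode():
        state = ()                                 # nothing placed yet
        for _ in range(2):                         # place two connected blocks
            free = [s for s in SLOTS if s not in state]
            if random.random() < EPSILON:
                action = random.choice(free)       # explore
            else:
                action = max(free, key=lambda a: Q[(state, a)])  # exploit
            next_state = state + (action,)
            done = len(next_state) == 2
            # Reward arrives at the end: shorter wire between the blocks.
            reward = -abs(next_state[0] - next_state[1]) if done else 0.0
            future = 0.0 if done else max(
                Q[(next_state, a)] for a in SLOTS if a not in next_state)
            Q[(state, action)] += ALPHA * (
                reward + GAMMA * future - Q[(state, action)])
            state = next_state

    for _ in range(2000):
        run_episode()
    # Greedy play after training tends to put the two blocks in
    # adjacent slots, i.e., the minimum-wirelength placement.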

Communications asked Google and DeepMind to share benchmarks showing AlphaChip’s performance. They did not oblige.

Observers like Madden and Markov, however, produced data showing that AlphaChip is inferior to many other tools.

One of the critical objectives when “floorplanning” a chip is to minimize the amount of wiring that connects the many components placed on the chip. “The better placement you have, the less wiring,” noted Binghamton’s Madden.
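
Placement tools commonly estimate that wiring with the half-perimeter wirelength (HPWL) of each net’s bounding box. The sketch below is a minimal illustration of the metric; the component names, coordinates, and nets are made up for the example.

    # Minimal sketch of the half-perimeter wirelength (HPWL) estimate
    # that placers typically minimize; all data here is illustrative.

    def hpwl(net, positions):
        """Wiring estimate for one net: half the perimeter of the
        bounding box around all components the net connects."""
        xs = [positions[c][0] for c in net]
        ys = [positions[c][1] for c in net]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    # A placement assigns each component an (x, y) spot on the die.
    placement = {"A": (0, 0), "B": (3, 4), "C": (1, 2)}
    nets = [("A", "B"), ("A", "C"), ("B", "C")]

    total = sum(hpwl(net, placement) for net in nets)
    print(total)   # a better placement yields a smaller total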

For more than 20 years, Madden has charted the wirelength achieved by a number of different academic design tools. Those tools include one that Madden considers “state of the art” in academia, called RePlAce, from the University of California, San Diego (UCSD). They also include NTUplace from National Taiwan University, a classic optimization method known as simulated annealing, and several others, going back to one called Capo+Parquet from 2002.
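
Simulated annealing, in particular, is simple enough to sketch: it proposes random changes and occasionally accepts a worse solution so the search can escape local minima. The toy below applies it to a one-dimensional objective, not to a real placement problem; the objective, starting point, and cooling schedule are all illustrative.

    # Minimal simulated annealing sketch on a toy 1-D objective; the
    # same accept/reject idea is what classic placers apply to layouts.
    import math, random

    def anneal(f, x, temp=10.0, cooling=0.995, iters=5000):
        """Accept a worse move with probability exp(-delta/temp) so the
        search can escape local minima; the temperature decays over time."""
        best = x
        for _ in range(iters):
            candidate = x + random.uniform(-1, 1)
            delta = f(candidate) - f(x)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
            if f(x) < f(best):
                best = x
            temp *= cooling
        return best

    # Toy objective with many local minima; annealing usually finds the
    # global minimum near x = -0.3 despite starting far away.
    f = lambda x: x * x + 3 * math.sin(5 * x)
    print(anneal(f, 8.0))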

Madden averaged the wirelength each tool produced across 18 different chip designs and found that many of them reduced wiring more than the Google approach does. “RePlAce is 30 to 35% better,” he said.

With RePlAce setting the standard at a score of 1, the Google approach averaged 1.35, which equates to 35% more wiring, Madden explained. In fact, many of the other tools on Madden’s chart outperformed Google’s. Two versions of NTUplace scored 1.01 and 1.04 (in other words, 1% and 4% more wiring than RePlAce); a method called feng shui scored 1.13 (13% more wiring than RePlAce), as did one called Uplace; and simulated annealing registered 1.17. Those are just some of the techniques that bested Google’s 1.35.
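
To make the arithmetic of those scores concrete, the snippet below normalizes each tool’s average wirelength to RePlAce’s, using only the figures quoted above; it illustrates the scoring convention, not Madden’s raw benchmark data, and the NTUplace version labels are ours.

    # Normalized wirelength scores from the article: 1.00 is RePlAce,
    # and each extra 0.01 means 1% more wiring than RePlAce needs.
    scores = {
        "RePlAce": 1.00,
        "NTUplace (version 1)": 1.01,
        "NTUplace (version 2)": 1.04,
        "feng shui": 1.13,
        "Uplace": 1.13,
        "simulated annealing": 1.17,
        "Google (Circuit Training)": 1.35,
    }

    for tool, score in sorted(scores.items(), key=lambda kv: kv[1]):
        extra = round((score - 1.0) * 100)   # percent more wiring
        print(f"{tool:26s} {score:.2f}  (+{extra}% wiring vs. RePlAce)")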

So how did Madden come up with a score for Google, if Google is not releasing benchmark results? He plugged in results provided by Andrew Kahng and Chung-Kuan Cheng at UCSD, who worked from Google’s partial public release of the code, known as Circuit Training.

“Kahng actually implemented what the Google paper said it did, run on an army of GPUs,” Madden said. “Kahng and Cheng and grad students worked on it for about a year, in contact with Google, trying to get it right as much as they could. It was a sincere best effort.”

Madden further pointed out that the “30 to 35%” advantage of RePlAce is consistent with findings in a leaked paper by Google whistleblower Satrajit Chatterjee, an engineer whom Google fired in 2022 after he tried to publish the paper, which discredited the “superhuman” claims Google was then making for its AI approach to chip design.

With AlphaChip, Google has done nothing to prove any real advance over the earlier version, Madden repeated.

“The thing I’m perplexed by: If AlphaChip does in fact work, Google could run the benchmarks themselves and we would all be astonished at how good it is, and we’d [have] a ticker tape parade,” he said.

Like Madden, former University of Michigan academic Markov also provided some hard numbers showing that AlphaChip does not deliver the astonishing advances that Google claimed.

Rather than measuring wiring, Markov looked at the time required to complete a chip design step known as macro placement. Based on work by Kahng and Cheng, he found that Google’s Circuit Training takes 32.31 hours, simulated annealing takes 12.5 hours, and a commercial tool from Cadence takes 0.05 hours.

Markov does not question whether Google has added features to its reinforcement learning method in the new AlphaChip version. For example, AlphaChip now incorporates an optimization method known as coordinate descent, and uses a version of DREAMPlace, the design tool developed at the University of Texas.
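
Coordinate descent, for reference, is a generic optimization loop that improves one variable at a time while holding the rest fixed. The sketch below runs it on a toy two-variable objective; it illustrates the general method only, not how AlphaChip employs it.

    # Minimal coordinate descent on a toy objective; this shows the
    # generic technique, not AlphaChip's actual implementation.

    def coordinate_descent(f, x, step=0.1, sweeps=200):
        """Sweep the coordinates repeatedly, keeping any single-axis
        move of +/- step that lowers the objective."""
        x = list(x)
        for _ in range(sweeps):
            for i in range(len(x)):
                for delta in (step, -step):
                    candidate = list(x)
                    candidate[i] += delta
                    if f(candidate) < f(x):
                        x = candidate
        return x

    # Toy objective: squared distance from the point (3, -2).
    f = lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2
    print(coordinate_descent(f, [0.0, 0.0]))   # approaches [3.0, -2.0]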

But those additions do not seem to have enhanced performance.

“Is it a breakthrough?” Markov questioned. “It doesn’t produce better solutions than what was known before. It’s not better than annealing. And these newer methods like Cadence are much better. It’s [AlphaChip] very slow. It’s slower than annealing. And it’s much slower than Cadence and other companies.”

“Some of the deficiencies of the work in the Nature paper still apply, even with these new additions,” Markov said.

AlphaChip detractors also took issue with Google applying the “Alpha” moniker to the chip design method, calling it a questionable effort to trade on the solid reputations of Google’s AlphaGo game-playing AI software and of AlphaFold, an AI method for protein research for which Google DeepMind researchers Demis Hassabis and John Jumper won the 2024 Nobel Prize in Chemistry, shared with David Baker of the University of Washington in Seattle.

“It’s nonsensical to compare AlphaChip to AlphaFold, whose results have been validated by many academic research groups as part of an open competition,” Markov said.

Mark Halper is a freelance journalist based near Bristol, England. He covers everything from media moguls to subatomic particles.

Join the Discussion

  1. The ongoing discussion surrounding Google’s Nature paper underscores the critical need for transparency in research. The published partial source code and the recently published pretrained AlphaChip checkpoint are positive steps in this direction. Given the strong claims, it would have been advantageous for the Google team to present results on public benchmarks.

     The ablation study in the recent Nature addendum, which seeks to justify the undisclosed use of a commercial tool for an initial placement, raises further concerns. It is questionable to draw a conclusion from a single new test case not used in the original paper, featuring a wire length that is 4-12 times smaller than previously reported. Furthermore, this instance is sparsely placed (23.83% density), indicating that finding a poor floorplan might be unlikely. In contrast, Cheng et al. ’23 reported a significant dependency on the initial placement in their assessment paper. Instead of alleviating concerns, the ablation study raises suspicions again.
