Communications of the ACM

ACM News

A Stanford Proposal Over AI's 'Foundations' Ignites Debate


A controversy over "foundation models" of artificial intelligence.

"These models are really castles in the air; they have no foundation whatsoever," said Jitendra Malik, a professor at the University of California, Berkeley, who studies artificial intelligence.

Credit: Sam Whitney/Getty Images

Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build—and study—these "foundation models" of AI.

Critics of the idea surfaced quickly—including at the workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter.

"I think the term 'foundation' is horribly wrong," Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees during a video discussion.

Malik acknowledged that one type of model identified by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence, such as interaction with the physical world.

From Wired
View Full Article
