Communications of the ACM

ACM Opinion

Yoshua Bengio, Revered Architect of AI, Has Some Ideas About What to Build Next


Yoshua Bengio teaching.

To ACM A.M. Turing Award recipient Yoshua Bengio, "What matters to me as a scientist is what needs to be explored in order to solve the problems. Not who's right, who's wrong, or who's praying at which chapel."

Credit: Maryse Boyce

Yoshua Bengio is known as one of the "three musketeers" of deep learning, the type of artificial intelligence (AI) that dominates the field today.

Bengio, a professor at the University of Montreal, is credited with making key breakthroughs in the use of neural networks—and just as importantly, with persevering with the work through the long cold AI winter of the late 1980s and the 1990s, when most people thought that neural networks were a dead end.

He was rewarded for his perseverance in 2018, when he and his fellow musketeers (Geoffrey Hinton and Yann LeCun) won the Turing Award, which is often called the Nobel Prize of computing.

Today, there's increasing discussion about the shortcomings of deep learning. In that context, IEEE Spectrum spoke to Bengio about where the field should go from here. He'll speak on a similar subject tomorrow at NeurIPS, the biggest and buzziest AI conference in the world; his talk is titled "From System 1 Deep Learning to System 2 Deep Learning."

IEEE Spectrum: What do you think about all the discussion of deep learning's limitations?

Yoshua Bengio: Too many public-facing venues don't understand a central thing about the way we do research, in AI and other disciplines: We try to understand the limitations of the theories and methods we currently have, in order to extend the reach of our intellectual tools. So deep learning researchers are looking to find the places where it's not working as well as we'd like, so we can figure out what needs to be added and what needs to be explored.

This is picked up by people like Gary Marcus, who put out the message: "Look, deep learning doesn't work." But really, what researchers like me are doing is expanding its reach. When I talk about things like the need for AI systems to understand causality, I'm not saying that this will replace deep learning. I'm trying to add something to the toolbox.

What matters to me as a scientist is what needs to be explored in order to solve the problems. Not who's right, who's wrong, or who's praying at which chapel.

From IEEE Spectrum