
Communications of the ACM

Last Byte

Reaching New Heights with Artificial Neural Networks


2018 Turing Award recipients Yoshua Bengio, Geoffrey Hinton, and Yann LeCun

Credit: Alexander Berg

Once treated by the field with skepticism (if not outright derision), the artificial neural networks that 2018 ACM A.M. Turing Award recipients Geoffrey Hinton, Yann LeCun, and Yoshua Bengio spent their careers developing are today an integral component of everything from search to content filtering. So what of the now-red-hot field of deep learning and artificial intelligence (AI)? Here, the three researchers share what they find exciting, and which challenges remain.

There's so much more noise now about artificial intelligence than there was when you began your careers—some of it well-informed, some not. What do you wish people would stop asking you?

GEOFFREY HINTON: "Is this just a bubble?" In the old days, people in AI made grand claims, and they sometimes turned out to be just a bubble. But neural nets go way beyond promises. The technology actually works. Furthermore, it scales. It automatically gets better when you give it more data and a faster computer, without anybody having to write more lines of code.

Figure. Geoffrey Hinton

YANN LECUN: That's true. The basic idea of deep learning is not going away, but it's still frustrating when people ask if all we need to do to make machines more intelligent is simply scale our current methods. We need new paradigms.

YOSHUA BENGIO: The current techniques have many years of industrial and scientific application ahead of them. That said, the three of us are researchers, and we are always impatient for more, because we are still far from human-level AI, and from the dream of understanding the principles of intelligence, natural or artificial.

Figure. Yoshua Bengio

What isn't discussed enough?

HINTON: What does this tell us about how the brain works? People ask that, but not enough people are asking that.

Figure. Yann LeCun

BENGIO: It's true. Unfortunately, although deep learning takes inspiration from the brain and from cognition, many engineers involved with it these days don't care about those topics. It makes sense, because if you're applying things in industry, it doesn't matter. But in terms of research, I think it's a big loss if we don't keep the connection alive with people who are trying to understand how the brain works.

HINTON: That said, neuroscientists are now taking it seriously. For many years, neuroscientists said, "artificial neural networks are so unlike the real brain, and they're not going to tell us anything about how the brain works." Now, neuroscientists are taking seriously the possibility that something like backpropagation is going on in the brain, and that's a very exciting area.
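The backpropagation Hinton refers to can be illustrated in a few lines: a one-hidden-layer network trained on XOR by propagating error gradients backward through each layer. This is a minimal toy sketch (assuming NumPy; the task, sizes, and learning rate are chosen here for illustration and have nothing to do with the neuroscience work he describes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, which a network with no hidden layer cannot learn.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer with random initial weights.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(p):
    # Cross-entropy between predictions p and targets y.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

lr = 1.0
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule applied layer by layer.
    dlogits = (p - y) / len(X)          # gradient at the pre-sigmoid output
    dW2, db2 = h.T @ dlogits, dlogits.sum(0)
    dh = dlogits @ W2.T
    dz = dh * (1 - h ** 2)              # back through the tanh nonlinearity
    dW1, db1 = X.T @ dz, dz.sum(0)
    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The backward pass is the part whose biological plausibility is debated: it reuses the same weights, transposed, to route error signals back through the network.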

LECUN: Almost all the studies now of human and animal vision use convolutional nets as the standard conceptual model. That wasn't the case until relatively recently.

HINTON: I think it's also going to have a huge impact, slowly, on the social sciences, because it's going to change our view of what people are. We used to think of people as rational beings, and what was special about people was that they used reasoning to derive conclusions. Now we understand much better that people are basically massive analogy-making machines. They develop these representations quite slowly, and then the representations they develop determine the kinds of analogies they can make. Of course, we can do reasoning, and we wouldn't have mathematics without it, but it's not the fundamental way we think.

For pioneering researchers, you seem unusually unwilling to rest on your laurels.

HINTON: I think there's something special about people who invented techniques that are now standard. There was nothing God-given about them, and there could well be other techniques that are better. Whereas people who come to a field when there's already a standard way of doing things don't understand quite how arbitrary that standard way is.

BENGIO: Students sometimes talk about neural nets as if they were describing the Bible.

LECUN: It creates a generation of dogmatism. Nevertheless, it's very likely that some of the most innovative ideas will come from people much younger than us.

The progress in the field has been amazing. What would you have been surprised to learn was possible 20 or 30 years ago?

LECUN: There's so much I've been surprised by. I was surprised by how late the deep learning revolution was, but also by how fast it developed once it started. I would have expected things to happen more progressively, but people abandoned the whole idea of neural nets between the mid-1990s and mid-2000s. We had evidence that they were working before, but then, once the demonstrations became incontrovertible, the revolution happened really fast, first in speech recognition, then in image recognition, and now in natural language understanding.

HINTON: I would have been amazed, 20 years ago, if someone had said that you could take a sentence in one language, carve it up into little word fragments, feed it into a neural net that starts with random connections, and train the neural net to produce a translation of the sentence into another language with no knowledge at all of syntax or semantics—just no linguistic knowledge whatsoever—and it would translate better than anything else. It's not perfect, it's not as good as a bilingual speaker, but it's getting close.
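The "little word fragments" Hinton mentions are typically produced by a subword algorithm such as byte-pair encoding, which repeatedly merges the most frequent adjacent pair of symbols. A minimal sketch in plain Python (illustrative only: the function names and toy corpus are invented here, and production tokenizers use optimized variants):

```python
from collections import Counter

def bpe_train(corpus, num_merges):
    """Learn a list of merges from a list of words."""
    vocab = Counter(tuple(w) for w in corpus)  # word -> frequency, as symbol tuples
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pair_counts = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pair_counts[pair] += freq
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        # Rewrite the vocabulary with the chosen pair merged into one symbol.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

def bpe_segment(word, merges):
    """Carve a word into fragments by replaying the learned merges."""
    pieces = list(word)
    for a, b in merges:
        i = 0
        while i < len(pieces) - 1:
            if pieces[i] == a and pieces[i + 1] == b:
                pieces[i:i + 2] = [a + b]
            else:
                i += 1
    return pieces
```

Rare words then decompose into fragments seen in training, so the net never faces a wholly unknown input; the fragments themselves carry no syntax or semantics, which is exactly Hinton's point.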

LECUN: It's also amazing how quickly these techniques became so useful for so many industries. If you take deep learning out of Google or Facebook today, both companies crumble; they are completely built around it. One thing that surprised me when I joined Facebook is that there was a small group using convolutional nets for face recognition. My first instinct about convolutional nets was to think they would be useful for, maybe, category-level recognition: car, dog, cat, airplane, table, not fine-grained things like faces. But it turned out to work very well, and it's completely standard now. Another surprise came out of Yoshua's lab: generative adversarial networks, the idea that you can basically use neural nets as generative models to produce images and sound.

BENGIO: When I was doing my Ph.D., I was struggling to expand the idea that neural nets could do more than just pattern recognition—taking a fixed-size vector as input and producing categories. But it's only recently with our translation work that we escaped this template. As Yann said, the ability to generate new things has really been revolutionary. So has the ability to manipulate any kind of data structure, not just pixels and vectors. Traditionally, neural nets were limited to tasks that humans can do very quickly and unconsciously, like recognizing objects and images. Modern neural nets are different in nature from what we were thinking about in the 1980s, and they can do things that are much closer to what we do when we reason, what we do when we program computers.

In spite of all the progress, Yoshua, you've talked about the urgency of making this technology more accessible to the developing world.

BENGIO: I think it's very important. I used to not think much about politics, but machine learning and AI have come out of universities, and I think we have a responsibility to think about that and to participate in social and political discussions about how they should be used. One issue, among many, is where the know-how, wealth, and technology are going to be concentrated. Are they going to be concentrated in the hands of a few countries, a few companies, and a small class of people, or can we find ways to make them more accessible, especially in countries where they could make a bigger difference for more people?

HINTON: Google has open-sourced its main software for developing neural nets, which is called TensorFlow, and you can also use the special Google hardware for neural nets on the cloud. So Google is trying to make this technology accessible to as wide a set of people as possible.

LECUN: I think that's a very important point. The deep learning community has been very good at promoting the idea of open research, not just within academia, where conferences distribute papers, reviews, and commentaries in the open, but also in the corporate world, where companies like Google and Facebook are open-sourcing the vast majority of the software that they write and providing the tools for other people to build on top of it. So anyone can reproduce anyone else's research, sometimes within days. No top research group is ahead of any other by more than a couple of months on any particular topic. The important question is how fast the field as a whole is progressing. Because for the things we really want to build—virtual assistants that can answer any question we ask them and can help us in our daily lives—we don't just lack the technology, we lack the basic scientific principles. The faster we can get the entire research community working on this, the better it is for all of us.

Figure. Watch the recipients discuss this work in the exclusive Communications video. https://cacm.acm.org/videos/2018-acm-turing-award


Author

Leah Hoffmann is a technology writer based in Piermont, NY, USA.


©2019 ACM  0001-0782/19/06

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from [email protected] or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.


Comments


Reinout Korbee

Google and Facebook basically are ad-companies. They sell advertisement to get people to consume more. It is a shame these gentlemen think that that is in any way useful to society and that developing countries need better targeted advertisement campaigns. The basic scientific principles to get a better Alexa or Google Assistant? It is laudable that computer scientists get more involved in the social sciences, because their understanding of the social world seems to be limited to increasing debt-driven consumption. It would be great if AI could be used to increase access to vaccination, increase food production without producing more waste, reducing pollution, increasing access to clean water, I don't know, anything related to feeding humanity without destroying the planet? Siri, my baby is starving and has the measles, what should I do? "Go to the nearest doctor." And then show a couple of ads for baby milk that you can't afford.


Displaying 1 comment