
Communications of the ACM

Panels in Print

Artificial Intelligence


Raj Reddy, Jeff Dean, David Blei, and Pedro Felzenszwalb

Since its inauguration in 1966, the ACM A.M. Turing Award has recognized major contributions of lasting importance to computing. Through the years, it has become the most prestigious award in computing. To help celebrate 50 years of the ACM Turing Award and the visionaries who have received it, ACM has launched a campaign called "Panels in Print," which takes the form of a collection of responses from Turing laureates, ACM award recipients and other ACM experts on a given topic or trend.

ACM's celebration of 50 years of the ACM Turing Award will culminate with a conference June 23–24, 2017 at the Westin St. Francis in San Francisco to highlight the significant impact of the contributions of ACM Turing laureates on computing and society, to look ahead to the future of technology and innovation, and to help inspire the next generation of computer scientists to invent and dream.

For the first Panel in Print, we invited 1994 ACM Turing laureate RAJ REDDY, 2012 ACM Prize in Computing recipient JEFF DEAN, 2013 ACM Prize in Computing recipient DAVID BLEI, and 2013 ACM Grace Murray Hopper Award recipient PEDRO FELZENSZWALB to respond to several questions about Artificial Intelligence.

What have been the biggest breakthroughs in AI in recent years, and what impact are they having in the real world?

RAJ REDDY: Ten years ago, I would have said it wouldn't be possible, in my lifetime, to recognize unrehearsed spontaneous speech from an open population, but that's exactly what Siri, Cortana, and Alexa do. The same is happening with vision and robotics. We are by no means at the end of the activity in these areas, but we have enough working examples that society can benefit from these breakthroughs.

JEFF DEAN: The biggest breakthrough in the last five or so years has been the use of deep learning, a particular kind of machine learning that uses neural networks. Stacking the network into many layers that learn increasingly abstract patterns as you go up the layers seems to be a fundamentally powerful idea, and it's been very successful in a surprisingly wide variety of applications—from speech recognition, to image recognition, to language understanding. What's interesting is we don't seem to be near the limit of what deep learning can do; we'll likely see many more powerful uses of it in the coming years.
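The "stacked layers" idea Dean describes can be sketched in a few lines. This is a minimal illustration, not a trained model: the weights are random placeholders, and the layer sizes are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity between layers; without it, stacked linear maps
    # would collapse into a single linear map.
    return np.maximum(0.0, x)

def forward(x, layer_sizes):
    """Pass an input vector through a stack of randomly initialized layers.

    Each layer applies a learned (here: random) linear map plus a
    nonlinearity, so later layers can represent increasingly abstract
    combinations of earlier features.
    """
    activations = [x]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))
        b = np.zeros(n_out)
        x = relu(W @ x + b)
        activations.append(x)
    return activations

# A 784-dimensional input (e.g., a flattened 28x28 image) passed
# through hidden layers of shrinking width down to 10 outputs.
acts = forward(rng.normal(size=784), [784, 256, 64, 10])
print([a.shape for a in acts])
```

Each entry of `acts` is the same input re-represented at a higher level of abstraction; in a real system, the weights would be learned from data rather than drawn at random.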

PEDRO FELZENSZWALB: Among the biggest technical advances I would include the development of scalable machine learning algorithms and the computational infrastructure to process and interact with huge datasets. The latest example of these advances is deep learning. In computer vision, deep learning has led to breakthroughs in object recognition. The accuracy of object recognition on popular benchmarks has increased far beyond what most of us expected to see in the last few years. The impact of this progress remains to be seen, but I expect it will play an important role in building intelligent systems that can interact directly with our physical world.

What specific AI applications will most improve our quality of life in the next five to 10 years?

JEFF DEAN: Three areas stand out for me: healthcare, self-driving cars, and general-purpose robotics. Machine learning systems will be able to offer suggestions and advice to doctors in ways that are very complementary to the strengths of human medical professionals, resulting in better care for patients and more efficient healthcare systems. Self-driving cars will be incredibly transformative as well: our urban environments are built around the idea that people own cars and need to park them, and we'll start to see dramatic changes even in things like how cities and neighborhoods are designed as self-driving cars become more widespread. General-purpose robots that can operate in messy, uncontrolled environments like households or offices will also start to have a big impact in this time frame.

DAVID BLEI: I believe that we are now making major progress in two areas that will significantly improve our quality of life. The first is in natural language processing, both in language understanding and language generation. The second is in personalization, in developing software and methods that adapt to user behavior.

These two threads of innovation will result in a more seamless interface between people and AI software, enabling AI to help our lives and society in more ways. For example, we will be able to carry on intelligent and useful conversations with an algorithm, especially around question answering of existing facts. The seamless interface—powered by natural language understanding and personalization—will change how we interact with knowledge bases such as libraries and the internet and thus change how we are able to access, find, and use information.

PEDRO FELZENSZWALB: I believe medicine and public health are areas where the potential for AI is enormous, and we may see significant impact in the next 10 years. Consider the problem of medical diagnosis. Conceptually, this is a simple problem: figuring out what condition someone has based on their symptoms. But in practice the problem is very hard.

We rely on specialists, as no one doctor can master all the complexities of the human body. An AI doctor will have access to a database with all of our medical knowledge and the necessary computational capabilities to reason about this data. This AI doctor could be much more easily accessible than the best doctors in the world. The bottom line is that medical diagnosis requires doing statistical inference with lots of data, something that computers can probably do better than humans.
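Diagnosis-as-inference, in the sense Felzenszwalb describes, can be sketched with Bayes' rule. This is a toy illustration only: the conditions, symptoms, and all probabilities below are invented for the example (they are not real medical data), and the conditional-independence assumption is the "naive Bayes" simplification.

```python
# Hypothetical prior probability of each condition in the population.
PRIOR = {"flu": 0.10, "cold": 0.25, "healthy": 0.65}

# Hypothetical P(symptom present | condition), assuming symptoms are
# conditionally independent given the condition (naive Bayes).
LIKELIHOOD = {
    "flu":     {"fever": 0.85, "cough": 0.70},
    "cold":    {"fever": 0.10, "cough": 0.80},
    "healthy": {"fever": 0.02, "cough": 0.05},
}

def posterior(observed):
    """Return P(condition | observed symptoms) via Bayes' rule."""
    scores = {}
    for cond, prior in PRIOR.items():
        p = prior
        for symptom in observed:
            p *= LIKELIHOOD[cond][symptom]
        scores[cond] = p
    total = sum(scores.values())
    return {cond: p / total for cond, p in scores.items()}

probs = posterior({"fever", "cough"})
print(max(probs, key=probs.get))  # flu is the most probable explanation
```

A real system would estimate these tables from large datasets and handle far richer models, but the core computation, weighing evidence against prior knowledge, is exactly the kind of statistical inference the passage refers to.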

What are some of the major hurdles that AI still needs to overcome in the next 10 years?

DAVID BLEI: Right now, AI is revolutionizing technology through prediction, e.g., "What will I buy next?" or "What face is this in the picture?" I believe that AI will next revolutionize science and scholarship, i.e., how we understand our world through observation. In the context of many fields—astronomy, genetics, sociology, history, and many others—AI can help us analyze massive collections of data to form an understanding of what happened and how things work.


"In my opinion, we are still quite far from realizing the potential of AI. One meta-hurdle is to define what we mean by intelligence."


But there is a significant hurdle to this vision. Finding causal connections, e.g., for science and history, is a deep statistical problem. We must develop the field of causal inference in the context of modern AI to realize its potential in this way.

I will add that using AI to find causal connections will also have an impact technologically. Problems around medical personalization—such as how a particular patient will respond to a medicine—might seem "predictive" at first, but are ultimately causal questions. Indeed, using AI for causal inference will only bolster our predictive capabilities.

PEDRO FELZENSZWALB: In my opinion, we are still quite far from realizing the potential of AI. One meta-hurdle is to define what we mean by intelligence. In the history of AI we have had some specific goals, such as building a computer that can play chess as well as any human, or getting a computer to recognize objects in pictures. However, the AI community has often looked down upon practical solutions to such problems, arguing, among other things, that large engineering efforts and special-purpose solutions bear little resemblance to intelligence and will not generalize to other problems. It appears that as soon as we figure out how to solve a classical problem in AI, we no longer consider the problem to be part of AI. Perhaps the solution simply demystifies the problem too much. It is not clear whether we will ever attribute intelligence to a system that we fully understand.

Much has been made of the potential for AI in pop culture. What are some of the biggest myths you've seen? Can you think of examples where science fiction is getting close to reality?

JEFF DEAN: Probably the biggest myth is that AI is one singular thing that you can just "flip on" like a switch, and suddenly you've got human-style intelligence. In fact, AI is a huge field involving many techniques, only very loosely inspired by human intelligence. The good news is these techniques are already quite practical for some kinds of real-world applications today—this is why you can talk to Google on your phone, and it understands what you mean and can give you good answers. It's not magic, but it already works well enough that it's really impressive compared to what we could do just a few years ago.

RAJ REDDY: The best example is Ray Kurzweil and Vernor Vinge's description of the singularity, which I believe will happen. Where we disagree is on "when" it will happen. I think it won't happen for at least another 100 years, if not longer.

Two of my favorite examples of science fiction in the movies are Minority Report and Her, not because they are completely realistic, but because they provide a plausible scenario of things that could happen. In my Turing talk, I speak about teleportation, time travel, and immortality, but then I go on to redefine what I mean by those terms. For example, if we can observe things happening in 3D virtual reality without physically being there, that, in my mind, is teleportation, though of course that's not the same definition you get from something like Star Trek. The same thing happens in mathematics: if mathematicians don't like a particular outcome, they will define a new world, such as the complex numbers, in which the facts they want are true. The point is, if you don't like the world you are in, make a world where what you are imagining is true. There are lots of possibilities; some are reasonable and others may not be, but that depends on the date and time when you ask the question.


Figures

Figure UF1. 1994 ACM Turing laureate Raj Reddy

Figure UF2. 2012 ACM Prize in Computing recipient Jeff Dean

Figure UF3. 2013 ACM Prize in Computing recipient David Blei

Figure UF4. 2013 ACM Grace Murray Hopper Award recipient Pedro Felzenszwalb



©2017 ACM  0001-0782/17/02

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.


 
