Since the 1950s, artificial intelligence has repeatedly overpromised and underdelivered. While recent years have seen remarkable leaps thanks to deep learning, AI today is still narrow: it is brittle in the face of adversarial attacks, can't generalize to changing environments, and is riddled with bias. All these challenges make the technology difficult to trust and limit its potential to benefit society.
On March 26 at MIT Technology Review's annual EmTech Digital event, two prominent figures in AI took to the virtual stage to debate how the field might overcome these issues.
Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, is a well-known critic of deep learning. In his book Rebooting AI, published last year, he argued that these shortcomings are inherent to the technique. Researchers must therefore look beyond deep learning, he contends, and combine it with classical, or symbolic, AI—systems that encode knowledge and are capable of reasoning.
Danny Lange, the vice president of AI and machine learning at Unity, sits squarely in the deep-learning camp. He built his career on the technique's promise and potential, having served as the head of machine learning at Uber, the general manager of Amazon Machine Learning, and a product lead at Microsoft focused on large-scale machine learning. At Unity, he now helps labs like DeepMind and OpenAI construct virtual training environments that teach their algorithms a sense of the world.
During the event, each speaker gave a short presentation and then sat down for a panel discussion. The disagreements they expressed mirror many of the clashes within the field, highlighting how powerfully the technology has been shaped by a persistent battle of ideas and how little certainty there is about where it's headed next.
From Technology Review