Communications of the ACM

How Researchers Are Teaching AI to Learn Like a Child


[Illustration: girl in profile with binary code. Credit: Getty Images]

Researchers in machine learning argue that computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules. These experts "have a blind spot, in my opinion," says Gary Marcus, a developmental cognitive scientist at New York University. He says computer scientists are ignoring decades of work in the cognitive sciences and developmental psychology showing that humans have innate abilities—programmed instincts that appear at birth or in early childhood—that help us think abstractly and flexibly. He believes AI researchers ought to include such instincts in their programs.

Yet many computer scientists, riding high on the successes of machine learning, are eagerly exploring the limits of what a naïve AI can do. "Most machine learning people, I think, have a methodological bias against putting in large amounts of background knowledge because in some sense we view that as a failure," says Thomas Dietterich, a computer scientist at Oregon State University in Corvallis. He adds that computer scientists also appreciate simplicity and have an aversion to debugging complex code.

In the longer term, computer scientists expect AIs to take on tougher tasks that require flexibility and common sense. Some computer scientists are already trying.

From Science