Communications of the ACM

ACM TechNews

Framework Improves 'Continual Learning' for AI


[Image: learning brain, illustration. Credit: SilverBlu3]

Researchers at North Carolina State University have developed a new framework, called Learn to Grow, for deep neural networks that allows artificial intelligence (AI) systems to better learn new tasks while "forgetting" less of what they learned during previous tasks. The framework can also help AI systems become better at performing previous tasks, a phenomenon known as backward transfer.

The framework starts by conducting an explicit neural architecture search: for each layer in the network, the system can decide to do one of four things: skip the layer; use the layer in the same way that previous tasks used it; attach a lightweight adapter to the layer, which changes it slightly; or create an entirely new layer.
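The per-layer search described above can be sketched in Python. This is a minimal illustrative sketch, not the authors' implementation: the function names, the `CHOICES` labels, and the toy scoring function are all assumptions standing in for the validation-based signal a real architecture search would use.

```python
# Hedged sketch of a layer-wise architecture search with four options
# per layer, loosely following the Learn to Grow description. All names
# here are illustrative, not taken from the paper or its code.
CHOICES = ["skip", "reuse", "adapter", "new"]

def search_architecture(num_layers, score_fn):
    """Pick one of the four options for each layer by maximizing a score.

    score_fn(layer_index, choice) -> float is a stand-in for the
    validation-accuracy signal an actual search would optimize.
    """
    plan = []
    for i in range(num_layers):
        best = max(CHOICES, key=lambda c: score_fn(i, c))
        plan.append(best)
    return plan

# Toy score: prefer reusing early layers and growing new later layers,
# mimicking the finding that similar tasks share more existing layers.
def toy_score(i, choice):
    prefs = {"reuse": 1.0 - 0.2 * i, "adapter": 0.5,
             "new": 0.2 * i, "skip": 0.1}
    return prefs[choice]

print(search_architecture(4, toy_score))  # → ['reuse', 'reuse', 'reuse', 'new']
```

With the toy score, the search reuses the first three layers and grows a new fourth layer, which is the qualitative pattern Li describes: overlap with previous tasks concentrates in the layers that are kept.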

"We've run experiments using several datasets," says NC State researcher Xilai Li, "and what we've found is that the more similar a new task is to previous tasks, the more overlap there is in terms of the existing layers that are kept to perform the new task."

From North Carolina State University
View Full Article

 

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA
