Researchers from the University of Maryland and National ICT Australia (NICTA) have built a self-learning robot that was able to improve its cooking skills by watching YouTube videos.
Drawing on recent advances in deep neural networks, the group used convolutional neural networks for two recognition tasks: identifying how a hand grasps an item and recognizing specific objects. From these outputs, the system predicts the action involving the object and the hand.
"The lower level of the system consists of two convolutional neural network-based recognition modules, one for classifying the hand grasp type and the other for object recognition," the researchers note. "The higher level is a probabilistic manipulation action grammar-based parsing module that aims at generating visual sentences for robot manipulation."
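The two-level design described above can be illustrated with a minimal sketch. The function names and the rule table are hypothetical stand-ins, not the authors' code: the stub classifiers take the place of the two trained CNN modules, and a toy lookup rule stands in for the probabilistic manipulation-action grammar that assembles a "visual sentence" into an executable command.

```python
def classify_grasp(frame):
    # Stand-in for the grasp-type CNN: a real module would run a
    # trained ConvNet on the hand region of the video frame.
    return "power-small"

def recognize_object(frame):
    # Stand-in for the object-recognition CNN.
    return "knife"

# Toy "grammar": maps (grasp type, object) pairs to a predicted action.
# The paper's parser is probabilistic; this lookup only shows the idea.
ACTION_RULES = {
    ("power-small", "knife"): "cut",
    ("precision", "bowl"): "pour",
}

def parse_to_command(frame):
    grasp = classify_grasp(frame)
    obj = recognize_object(frame)
    action = ACTION_RULES.get((grasp, obj), "grasp")
    # Emit a command the robot could execute, e.g. "cut(knife)".
    return f"{action}({obj})"

print(parse_to_command(frame=None))  # -> cut(knife)
```

The point of the layering is that the low-level recognizers can be retrained independently, while the grammar layer turns their noisy per-frame outputs into a discrete, executable plan.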
To train their model, the researchers selected YouTube videos of people cooking and then generated commands that a robot could execute. The group's experiments showed the system learned manipulation actions with high accuracy.
The researchers will present their work at the 29th annual conference of the Association for the Advancement of Artificial Intelligence, which takes place Jan. 25-30 in Austin, TX.
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA