Communications of the ACM

ACM TechNews

How Google Wants to Solve Robotic Grasping By Letting Robots Learn For Themselves


Robot arms learning to grasp objects.

Google Research is letting robots figure out for themselves how to grasp objects.

Credit: Google Research

Google Research has taken a unique approach to robotic grasping: instead of trying to teach robots how to pick things up, researchers are letting them learn for themselves.

With assistance from colleagues at X, the Google team tasked a 7-DoF robot arm with picking up objects in clutter using monocular visual servoing, and used a deep convolutional neural network (CNN) to predict the outcome of each grasp.

The CNN continuously retrained itself, and the team dedicated 14 robots to the task in parallel to speed the process along. This collected more data faster, but also introduced unintentional variation into the experiment: cameras were positioned slightly differently, lighting differed from machine to machine, and each of the underactuated two-finger grippers exhibited its own pattern of wear, affecting performance. However, the robots ended up tolerant of such minor hardware and camera-calibration differences, making grasping more robust.
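The control loop described above can be sketched in simplified form: a learned model scores candidate gripper motions, and the controller repeatedly executes the motion with the highest predicted grasp success, re-planning from each new observation. This is a hypothetical illustration, not Google's implementation; the real system scores motions with a deep CNN over monocular camera images, while `predict_success` below is a toy stand-in so the closed-loop logic is runnable.

```python
import numpy as np


def predict_success(observed_offset, motion):
    """Toy stand-in for the grasp-prediction CNN.

    The actual model takes the current camera image and a candidate
    end-effector motion; here we simply prefer motions that point
    toward the (simulated) observed object offset.
    """
    return -np.linalg.norm(motion - observed_offset)


def choose_motion(observed_offset, n_samples=256, seed=0):
    """Sample candidate motions and return the highest-scoring one,
    a one-shot simplification of sampling-based motion selection."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, 3))
    scores = [predict_success(observed_offset, m) for m in candidates]
    return candidates[int(np.argmax(scores))]


def servo(object_offset, steps=20):
    """Closed-loop visual servoing: observe, pick the motion predicted
    to succeed best, take a damped step, and repeat — the continuous
    hand-eye feedback the researchers describe."""
    position = np.zeros(3)
    for _ in range(steps):
        observed = object_offset - position   # what the camera "sees"
        motion = choose_motion(observed)
        position = position + 0.5 * motion    # damped step toward the goal
    return position
```

Running `servo(np.array([0.6, -0.4, 0.3]))` moves the simulated gripper close to the target offset; the residual error is limited only by how densely the candidate motions are sampled.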

"In essence, the robot is constantly predicting, by observing the motion of its own hand, which kind of subsequent motion will maximize its chances of success," the researchers note. "The result is continuous feedback: what we might call hand-eye coordination."

From IEEE Spectrum

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
