
Hand Jive: A Robot Hand Learns to Spin

Robotics scientists at the University of Washington developed this five-fingered robotic hand, which has taught itself the complex task of spinning a tube.

Robots do not yet have the chops to jam out on guitar like Dave Matthews, but researchers are getting closer.

Specifically, robotics scientists at the University of Washington (UW) have developed a five-fingered robotic hand that has taught itself the complex task of spinning a tube, a feat of dexterity that can take a human baby months to master.

"We programmed in the goal, to spin a tube," says Vikash Kumar, a doctoral student in computer science and engineering at UW, "but it was the robot itself that learned, with the help of an artificial intelligence algorithm, how to spin it."

The UW team’s approach—creating a robot that grows adept at a complex task by teaching itself how to do it, rather than mindlessly executing pre-programmed commands—represents a departure from traditional robotics. For decades, Kumar says, commercial robots were created to perform highly specific tasks by software engineers who pre-imagined every movement needed to perform a job, and then transformed that sequence of movements into a computer program.

These days, with advances in artificial intelligence regularly grabbing headlines, robotics engineers are interested in creating a robot that is more of a generalist, capable of learning multiple tasks via trial and error. "It’s like the robot is a baby, learning as it goes," Kumar says.

UW professor Emo Todorov, a key member of the robotics team, agrees. "Usually people look at a motion and try to determine what exactly needs to happen," Todorov says. "What we are using is a universal approach that enables the robot to learn from its own movements and requires no tweaking from us."

Relying on artificial intelligence, UW’s robotic hand ‘learns’ how to spin a tube through simple trial and error. Guided by the goal of a perfect spin that researchers embed in its software, the hand attempts a spin and then ‘observes’ how close it came to that goal, using data fed to it by more than 140 joint, tendon, and pressure sensors, as well as numerous cameras.

The AI software compares each trial spin against that goal, recalibrates so the next attempt comes closer, and repeats the cycle again and again until the perfect spin is achieved.
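The article does not spell out the algorithm itself, but the loop it describes (try a spin, measure how far it falls short of the goal, adjust, and try again) can be illustrated with a minimal sketch. The Python below is a hypothetical stand-in: the simulate_spin and score_spin functions and the random-perturbation update are illustrative assumptions, not the UW team's actual software.

```python
import numpy as np

# Hypothetical sketch of the trial-and-error loop described above.
# simulate_spin() and score_spin() stand in for the hand's sensors
# and cameras; they are assumptions for illustration only.

def simulate_spin(params):
    """Run one trial spin with the given control parameters and
    return a stand-in numeric outcome (placeholder dynamics)."""
    return np.tanh(params).sum()

def score_spin(outcome, goal):
    """Distance between the observed spin and the goal spin (lower is better)."""
    return abs(outcome - goal)

def learn_to_spin(goal, n_params=10, trials=5000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    params = rng.normal(size=n_params)      # initial, clumsy attempt
    best_error = score_spin(simulate_spin(params), goal)

    for _ in range(trials):
        # Recalibrate: perturb the current behavior slightly...
        candidate = params + step * rng.normal(size=n_params)
        error = score_spin(simulate_spin(candidate), goal)
        # ...and keep the change only if the spin got closer to the goal.
        if error < best_error:
            params, best_error = candidate, error
        if best_error < 1e-3:                # close enough to the "perfect spin"
            break
    return params, best_error

if __name__ == "__main__":
    learned_params, final_error = learn_to_spin(goal=2.0)
    print(f"final distance from goal spin: {final_error:.4f}")
```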

"Over time, a robot like this could become better at a particular task than a human," Kumar says, adding, "we want to train a deep neural network that interpolates between multiple learned movements, and hopefully generalizes to the space of all possible task conditions."

UW’s approach "enabled some remarkable in-hand manipulation capabilities," says Pieter Abbeel, an associate professor in the Berkeley Artificial Intelligence Research laboratory at the University of California, Berkeley (UC Berkeley). "This is a strong indication of the promise of this line of work, and could inspire others to also further develop this direction."

The promise of the work is that Kumar, and researchers like him, will be able to create robots that teach themselves multiple tasks; that is, generalist machines able to perform a wide variety of jobs by using AI software to learn how to perform them optimally.

Ken Goldberg, a professor in UC Berkeley’s College of Engineering, agrees the UW team is "working at the frontiers of robot learning. They are an extremely sharp team."

Other researchers are pursuing self-teaching robots as well; Abbeel, for instance, is collaborating with a number of colleagues on robots that can teach themselves to move.

UW’s team hopes to significantly reduce the cost of its five-fingered wunderkind. Currently the hand, with all its sensors and cameras, costs a commercially unpalatable $300,000; the team plans to refine the robot’s AI software over time until the hand needs only a few sensors and cameras to achieve the same learning capability, Kumar says.

As for robots evolving to the point where they can be attached to mechanical bodies and start playing musical instruments, Kumar suggests that will not happen any time soon. True creativity, improvisational skills, and artistic sense are qualities that still elude today’s robots, he says.

"There is something hard to define about human movement that needs to be there in order to appeal to other humans — and we cannot yet capture it in a formal way," he adds.

Indeed, some robotics researchers believe we will never see the day when a robot lays down a scorching guitar lick, pens a sublime sonnet, or stars in a Parisian ballet. "I don’t think robots will ever be truly creative," Goldberg says.

Joe Dysart is an Internet speaker and business consultant based in Manhattan.
