Communications of the ACM

ACM News

A Novel Way to Optimize Robots


The researchers found that the most successful unimals learned tasks in half the time that their oldest ancestors had taken, and that those which evolved in the toughest arenas were the most successful of all.

Credit: Agrim Gupta

It might sound obvious that if you want to improve a robot's software, you should improve its software. Agrim Gupta of Stanford University, however, begs to differ. He thinks you can also improve a robot's software by improving its hardware—that is, by letting the hardware adapt itself to the software's capabilities.

As they describe in Nature Communications, he and his colleagues have devised a way of testing this idea. In doing so, they have brought to robotics the principles of evolution by natural selection. They have also cast the spotlight on an evolutionary idea that dates from the 1890s, but which has hitherto proved hard to demonstrate.

There is a wrinkle. The team's robots, which they dub "unimals", are not things of metal and plastic. Rather, they are software entities that interact with a virtual environment in the way that metal-and-plastic devices might interact with a real one. Unimals are pretty simple, having spheres for heads and cylinders for limbs (see picture). The environments through which they roamed were also simple, and came in three varieties: flat arenas, arenas filled with hills, steps and rubble, and ones that had the complexities of the second sort, but with added props like cubes that needed to be moved around.

From The Economist


