Artificial Intelligence and Machine Learning News

Animals Teach Robots to Find Their Way

Navigation research demonstrates bio-machine symbiosis.

A demonstration video that veteran University College London neuroscientist John O’Keefe often presents in lectures shows a rat moving around the inside of a box. Every time the rat heads for the top-left corner, loud pops play through a speaker; those sounds are produced by the firing of a specific neuron attached to an electrode. The neuron fires only when the rat moves into that same small area of the box. This connection between particular neurons and locations led O’Keefe and his student Jonathan Dostrovsky to name those neurons “place cells” when they encountered the phenomenon in the early 1970s.

Today, researchers such as Huajin Tang, director of the Neuromorphic Computing Research Center at Sichuan University, China, are using maps of computer memory to demonstrate how simulated neurons fire in much the same way inside one of their wheeled robots. As it moves around a simple cruciform maze, the machine associates places with pictures of milk cartons, cheese, and apples that it encounters. When asked to find those objects, the same neurons fire. Although the robot looks in the direction of each object when it moves to the center of the maze as part of its hunt, Tang says analysis of the simulated neuron shows “the movement is driven by this stored information, rather than visual recognition of the shape.”
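To make the idea concrete, here is a minimal, hypothetical sketch (not Tang’s actual system) of a place-cell-style association: the maze is discretized into coarse cells, each visited cell is bound to the object seen there, and a later query for an object returns the remembered place rather than relying on fresh visual recognition. All names and parameters are illustrative.

```python
# Hypothetical sketch of a place-cell-style association (not Tang's system):
# discretized locations are bound to the objects observed there, so a query
# for an object returns a remembered place rather than a visual match.

def place_key(x, y, cell_size=0.25):
    """Quantize a continuous position into a coarse 'place cell' index."""
    return (round(x / cell_size), round(y / cell_size))

place_memory = {}   # place key -> object seen at that place
object_index = {}   # object label -> place key where it was seen

def observe(x, y, obj):
    key = place_key(x, y)
    place_memory[key] = obj
    object_index[obj] = key

def recall(obj):
    """'Where is the cheese?' answered from stored associations."""
    return object_index.get(obj)

observe(0.0, 1.0, "milk carton")
observe(1.0, 0.0, "cheese")
print(recall("cheese"))   # -> (4, 0), the stored place for the cheese
```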

Figure. Example cells and a graphic representation of their anatomical distribution in the rat brain. At top left, the firing rate heat map of a place cell recorded as a rat explored a circular arena. Top center, a head direction cell firing rate plot. Top right, firing rate map of a grid cell.

Researchers see synthetic models inside robots as crucial for guiding biological research, as well as for the design of more capable machines. Physical experiments can only measure the activity of a few neurons at a time, which makes it difficult to build a broad overview of how an animal thinks about a problem. Computer models make it possible to test hypotheses about the brain’s behavior by seeing how similar a robot’s reaction to a problem is to that of the animal. Neuron-level tests in the creature can then confirm or contradict the computer model.

Barbara Webb, a professor of bio-robotics at the University of Edinburgh who has been investigating the navigational abilities of insects, favors building computer models even where biological data is limited. More than a decade ago, her team developed a computer model for path integration, a technique used by ants and bees among others to memorize a route. The idea had little anatomical basis at the time, but seemed to be a viable behavioral model. Recent experiments have confirmed similar activity taking place in collections of insect neurons.
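Path integration itself is simple to state: accumulate each step’s displacement from heading and speed, and the negated total points back to the start. The sketch below illustrates that textbook arithmetic only; it is not Webb’s published model.

```python
import math

# Textbook path integration (not Webb's model): sum per-step displacements
# from heading and speed; the negated total is a "home vector".

def integrate_path(steps):
    """steps: iterable of (heading_radians, speed, duration) tuples."""
    x = y = 0.0
    for heading, speed, dt in steps:
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    home_distance = math.hypot(x, y)
    home_bearing = math.atan2(-y, -x)   # direction from here back to the start
    return home_distance, home_bearing

# An outward run: 5 units east, then 3 units north.
route = [(0.0, 1.0, 5.0), (math.pi / 2, 1.0, 3.0)]
dist, bearing = integrate_path(route)
print(f"home lies {dist:.2f} units away at bearing {math.degrees(bearing):.0f} degrees")
```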

Although insects have simple navigational structures, mammalian research has underpinned the key models used in robot development. Analogs of neural networks found in the rat’s brain underpin what is today the most widespread model for biologically inspired navigation.

Michael Milford and colleagues at the Queensland University of Technology in Australia developed the RatSLAM architecture almost 15 years ago. Released in open source form, RatSLAM has spread widely thanks to the relative accessibility of the techniques it uses. Numerous experiments by Milford and other groups, such as the one based at Sichuan University, have demonstrated the ability of the system to work in many scenarios, up to the level of city streets. However, in such large-scale environments, it has to compete with more conventional GPS-enabled navigation systems.


Says Milford, “Where our work remains competitive is in areas where we don’t have a lot of computing power, or in situations such as an underground mining site; places where you don’t have access to satellites for GPS or access to the cloud. We have also regularly had conversations with manufacturers of products such as robot vacuum cleaners, or people who deploy autonomous robots in sites where you have limited sensing.”

What the rat’s brain brings to this research is the ability to navigate without external aids, and in dark places where the animal loses the ability to rely on visual cues. The rat seems to use information from its own movement, coupled with memories of past journeys, to work out how to get from one place to another.

The question for researchers is how closely robots need to mimic an actual rat brain to navigate as effectively. Robots have the advantage of being able to sense their own motion far more accurately than an animal can, drawing on a wide range of precise motion sensors, whereas a rat may make less-reliable estimates of how far, and in which direction, its legs have moved it.

The models that researchers build run the gamut from relatively simple structures to highly detailed simulations. Milford’s group opted for simplicity. “To model a single neuron to the detail that we know takes incredible amounts of computational power. We didn’t want to do that, as we wanted to create something useful in the short term. As we became very familiar with the navigation problem and mapping problem, we couldn’t find a compelling reason to go to a higher level of fidelity,” Milford says.

Milford and colleagues developed what they call “pose cells,” which shared some characteristics with the place cells found by O’Keefe decades earlier, but which added information on the direction in which the robot faced, and the distance of travel recorded by internal sensors. Such pose cells can represent multiple physical locations; the robot determines the difference by adding information from cells that record the visual scene at each location.
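A rough, illustrative sketch of that idea (not the RatSLAM code) might maintain activity over an (x, y, heading) grid, shift it with odometry, and boost it wherever the current visual scene has been linked to a pose before, reading out the strongest cell as the pose estimate. The grid sizes, function names, and parameters below are assumptions for illustration only.

```python
import numpy as np

# Rough sketch in the spirit of pose cells (not the actual RatSLAM code):
# activity over an (x, y, heading) grid is shifted by odometry and boosted
# where the current visual scene was previously linked to a pose.

NX, NY, NTH = 20, 20, 36                 # assumed grid resolution
activity = np.zeros((NX, NY, NTH))
activity[10, 10, 0] = 1.0                # single packet at the starting pose
view_links = {}                          # view id -> pose cell it was bound to

def current_pose():
    return tuple(int(i) for i in np.unravel_index(np.argmax(activity), activity.shape))

def path_integrate(dx, dy, dth):
    """Shift the whole activity packet according to self-motion (wrapping at edges)."""
    global activity
    activity = np.roll(activity, shift=(dx, dy, dth), axis=(0, 1, 2))

def inject_view(view_id, strength=0.3):
    """Reinforce the pose previously linked to this scene, or link it now."""
    if view_id in view_links:
        activity[view_links[view_id]] += strength
    else:
        view_links[view_id] = current_pose()

path_integrate(1, 0, 0)                  # move one cell in x
inject_view("corridor-end")              # bind the current scene to this pose
print(current_pose())                    # -> (11, 10, 0)
```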

The pose cells turned out to share characteristics with a class of neurons called “grid cells” discovered several years later by neuroscientists Edvard and May-Britt Moser, then working at the Norwegian University of Science and Technology. The Mosers shared the 2014 Nobel Prize in Physiology or Medicine with O’Keefe for their study of the multiple types of navigational cells of mammals.

“Grid cells display strikingly regular firing responses to the animal’s locations in 2D (two-dimensional) space. Existing studies suggest place-cell responses may be generated from a subset of grid-cell inputs,” says Tang, pointing to projects conducted by his team in which simulations of place and grid cells helped improve robot navigation. Grid cells appear to become more important as the area covered by the machine increases.

A key facet of grid cell behavior for large-scale navigation is the cells’ ability to store information about multiple locations. “The assumption is that this is a very clever way to map data into a very compact storage representation. The data so far suggest you can do immense amounts of data compression,” Milford says, pointing to work his group is doing for the U.S. Air Force in this subject area.
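One common way to read that compression claim, shown in the hypothetical sketch below (not Milford’s system), is as a residue-style code: several grid modules with different spatial periods each report position only modulo their own period, yet jointly they pin down a location over a range far larger than any single period. The periods chosen here are arbitrary.

```python
from math import lcm

# An illustrative reading of the compression idea (not Milford's system):
# grid modules with different spatial periods each encode position modulo
# their period; jointly, a few small phases identify a much larger range.

PERIODS = [7, 11, 13]                 # hypothetical module periods (grid units)

def encode(position):
    """Each module reports only its active phase."""
    return tuple(position % p for p in PERIODS)

def decode(phases, search_range=None):
    """Find the position consistent with every module's phase."""
    search_range = search_range or lcm(*PERIODS)
    for pos in range(search_range):
        if encode(pos) == tuple(phases):
            return pos
    return None

pos = 500
print(encode(pos), decode(encode(pos)))   # three small numbers recover 500
# 7 * 11 * 13 = 1001 distinct positions from phases that never exceed 12.
```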

As well as the functions of individual types of neurons, a common link between robot design and biology lies in the way they are structured. RatSLAM is one of a number of systems that use the competition between groups of simulated neurons to move activity to the most appropriate location. In these attractor networks, neurons excite those close to them and inhibit those further away. However, sometimes new sensor information causes activity to rise elsewhere until that group of neurons takes over and, in turn, inhibits its competitors.
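A toy version of such an attractor network, with made-up sizes and weights, takes only a few lines: neurons on a ring excite their near neighbors, weakly inhibit everyone else, and settle into a single packet of activity that a strong, persistent cue elsewhere can eventually capture.

```python
import numpy as np

# Toy ring attractor (sizes and weights are made up): local excitation plus
# weak global inhibition holds one packet of activity; a strong, persistent
# cue elsewhere gradually captures it.

N = 60
idx = np.arange(N)
gap = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(gap, N - gap)                        # circular distance
weights = np.exp(-dist ** 2 / (2 * 3.0 ** 2)) - 0.08   # excite near, inhibit far

rate = np.zeros(N)
rate[15] = 1.0                                         # seed a packet at neuron 15

def step(cue=None):
    global rate
    drive = weights @ rate
    if cue is not None:
        drive = drive + cue
    rate = np.maximum(0.0, drive)                      # rectified firing rates
    rate /= rate.sum()                                 # keep total activity bounded

for _ in range(30):
    step()
print("packet centred on neuron", int(np.argmax(rate)))

cue = np.zeros(N)
cue[45] = 1.5                                          # persistent input elsewhere
for _ in range(60):
    step(cue)
print("packet now centred on neuron", int(np.argmax(rate)))
```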


Clusters of neurons that seem to operate as attractor networks have now been found in the navigation centers of insects that help with path integration and steering. Insects lack the rich collections of cells that mammals use for navigation, but Lund University biology researcher Stanley Heinze is impressed by the way insects can recall complex routes that are sometimes miles long, making it possible to find their way home easily. Working with Webb’s team from the University of Edinburgh and colleagues at Lund in Sweden, Heinze developed a robot to test ideas of how honeybees navigate.

Webb says ants, bees, and other insects appear to use a combination of path integration and visual memory to store routes. She points out that if you move an ant away from one of the routes it has memorized and drop it in a new location, it will adopt a search pattern; as soon as it encounters a point on one of its known paths, it will orient itself and find its way home.

In the cluttered environments through which they fly, bees appear to rely more on direction and speed than on the local landmarks that guide ants. The species chosen for study by Heinze and Webb has receptors in its eyes that respond to polarized light, and tends to forage at times when this polarization is most apparent. Tests with grid patterns demonstrated how bees can use these cells to sense speed accurately even when a strong wind forces them to one side.

Heinze and colleagues built versions of the path-integration and speed-sensor cells into a ring-shaped attractor network to reduce noisy inputs from multiple sources into a single packet of activity that could shift around the ring. Sent out on random routes, the network helped the machine find its way back to the starting point, demonstrating the viability of the concept.
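As a rough illustration of that noise-reduction step (not the published bee model), several noisy heading estimates can be projected onto a ring of cells as overlapping bumps; their sum forms a single packet whose peak is a cleaned-up heading. The cell count, bump width, and noise level below are arbitrary assumptions.

```python
import numpy as np

# Illustration of the noise-reduction idea (not the Stone et al. model):
# several noisy heading estimates become overlapping bumps on a ring of
# cells; their sum is one packet whose peak is a cleaned-up heading.

N_CELLS = 16
cell_dirs = np.linspace(0.0, 2.0 * np.pi, N_CELLS, endpoint=False)

def bump(heading, width=0.6):
    """Project a single heading estimate onto the ring as a smooth bump."""
    d = np.angle(np.exp(1j * (cell_dirs - heading)))    # wrapped angle difference
    return np.exp(-d ** 2 / (2 * width ** 2))

rng = np.random.default_rng(1)
true_heading = np.pi / 3                                # 60 degrees
noisy = true_heading + rng.normal(0.0, 0.4, size=5)     # e.g. sky compass, odometry

packet = sum(bump(h) for h in noisy)
decoded = cell_dirs[int(np.argmax(packet))]
print(f"true {np.degrees(true_heading):.0f} deg, decoded {np.degrees(decoded):.1f} deg")
```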

Through such simple models, researchers hope to continue the long journey towards understanding how intelligence works and how it can be emulated in computers and robots. Milford says, “I always regard spatial intelligence as a gateway to understanding higher-level intelligence. It’s the mechanism by which we can build on our understanding of how the brain works.”

Further Reading

Milford, M., Jacobsen, A., Chen, Z., and Wyeth, G.
RatSLAM: Using Models of the Rodent Hippocampus for Robot Navigation and Beyond. Robotics Research: The 16th International Symposium (2013).

Galluppi, F., Davies, S., Furber, S., Stewart, T., and Eliasmith, C.
Real Time On-Chip Implementation of Dynamical Systems with Spiking Neurons. IEEE World Congress on Computational Intelligence (WCCI) 2012, Brisbane, Australia.

Hu, J., Tang, H., Tan, K.C., and Li, H.
How the Brain Formulates Memory: A Spatial-Temporal Model. IEEE Computational Intelligence Magazine (2016), Volume 11, Issue 2.

Stone, T., Webb, B., Adden, A., Weddig, N.B., Honkanen, A., Templin, R., Wcislo, W., Scimeca, L., Warrant, E., and Heinze, S.
An Anatomically Constrained Model for Path Integration in the Bee Brain. Current Biology (2017), Volume 27, Issue 20.
