
Communications of the ACM

ACM TechNews

Eye Robot



As robot technology continues to evolve, wider deployment is being held back by robots' limited ability to see. New York University's Yann LeCun has pioneered an approach to computer vision called convolutional neural networks (ConvNets), which mimics the hierarchical wiring of the visual cortex. A ConvNet starts by swiping a number of software filters, each several pixels across, over the image, producing a set of feature maps that show which patches of the original image contain the sought-after element. Next, the maps themselves are swiped, producing a new set of maps at lower resolution.
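The swipe-and-subsample pipeline described above can be sketched in a few lines of Python. This is a minimal NumPy illustration, not LeCun's implementation; the toy image and the edge-detecting filter are invented for the example:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter across the image, producing a feature map
    whose entries are high where the image patch matches the filter."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Downsample the feature map by keeping the maximum of each
    size-by-size block -- the lower-resolution step in the text."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 image containing a vertical dark-to-light edge,
# and a filter that responds to that transition.
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)
edge_filter = np.array([[-1.0, 1.0]] * 3)

fmap = convolve2d(image, edge_filter)  # 4x5 feature map; peaks at the edge
pooled = max_pool(fmap)                # 2x2 lower-resolution map
```

In a real ConvNet many such filters run in parallel, and the filter weights are learned rather than hand-designed; stacking several convolve-then-pool stages gives the hierarchy the article describes.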

LeCun's artificial visual cortex settles on the appropriate filters automatically as it learns to distinguish different types of objects. When a ConvNet with unsupervised pre-training is shown images from a database, it can learn to recognize the categories correctly more than 70 percent of the time.

LeCun also tested his system on a small roving robot from the U.S. Defense Advanced Research Projects Agency (DARPA), which learned to navigate a course with large obstacles.

The University of Toronto's Geoffrey Hinton says the ConvNet approach could be applied to other hierarchical systems, such as language processing.

From The Economist

 

Abstracts Copyright © 2010 Information Inc., Bethesda, Maryland, USA

