Researchers at the Massachusetts Institute of Technology's McGovern Institute for Brain Research have devised a computational model describing how the primate brain recognizes objects visually. The model maps out visually interesting features in a given image and predicts which image elements will draw a viewer's attention.
The model was deployed in software, and its predictions were tested against experimental data from human subjects. The subjects were asked first to consider a street scene displayed on a computer screen, then to count the cars in the scene, and then to count the pedestrians, while their eye movements were recorded by an eye-tracking system. The software predicted with a high degree of accuracy which regions of the image the participants would focus on during each task. It can also adjust its object and location models on the fly.
If asked to search an image for a specific kind of object, the system will downgrade the interestingness of features not found in that object and proportionally upgrade the interestingness of features found in the object. This enables the system to anticipate the eye movements of people viewing a digital image, which could be helpful in the design of computer object-recognition systems.
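The reweighting scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual model: it assumes a simple feature-based saliency setup in which each image location carries per-feature response maps, and searching for a target object biases the combined saliency map toward features present in that target. All names (`task_biased_saliency`, `target_profile`, `predicted_fixation`) are invented for this sketch.

```python
import numpy as np

def task_biased_saliency(feature_maps, target_profile):
    """Combine per-feature saliency maps, weighting each feature by how
    strongly it appears in the target object's feature profile.

    feature_maps   : dict of feature name -> 2D array of per-pixel responses
    target_profile : dict of feature name -> weight; features absent from
                     the target are downgraded (weight 0), features present
                     are upgraded proportionally to their weight.
    """
    h, w = next(iter(feature_maps.values())).shape
    saliency = np.zeros((h, w))
    total = sum(target_profile.values()) or 1.0  # avoid division by zero
    for name, fmap in feature_maps.items():
        weight = target_profile.get(name, 0.0) / total
        saliency += weight * fmap
    return saliency

def predicted_fixation(saliency):
    """Predict the next fixation as the most salient location (row, col)."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)
```

Under this toy scheme, switching the task from "count cars" to "count pedestrians" amounts to swapping in a different `target_profile`, which shifts the saliency peaks and hence the predicted eye movements.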
From MIT News
Abstracts Copyright © 2010 Information Inc., Bethesda, Maryland, USA