Brown University researchers have demonstrated a flaw in modern computer vision algorithms that makes them consistently poor at judging whether two objects in an image are the same or different.
Brown's Thomas Serre and colleagues tasked cutting-edge computer vision algorithms with analyzing monochrome images containing two or more randomly generated shapes and identifying whether the shapes were the same or different.
The researchers found that the algorithms' recognition of this relationship did not improve even after many training examples. However, when the burden of object individuation was removed, that is, when the algorithms did not have to view the two objects in the same image, they learned the relationship easily.
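The study's stimuli can be sketched in miniature. The following is a hypothetical illustration, not the researchers' actual code: each sample pairs two small binary patches, either an exact copy ("same") or an independently generated patch ("different"), mirroring the same-or-different labeling described above.

```python
import random

def make_shape(rng, size=6):
    # A random monochrome "shape": a size x size grid of on/off pixels.
    return [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]

def make_sample(rng, same, size=6):
    """Return (left_patch, right_patch, label); label is 1 iff the patches
    are identical, 0 otherwise."""
    left = make_shape(rng, size)
    # "Same" duplicates the first patch; "different" draws a fresh one.
    right = [row[:] for row in left] if same else make_shape(rng, size)
    return left, right, int(same)

rng = random.Random(0)
left, right, label = make_sample(rng, same=True)
```

A classifier trained on such pairs must implicitly individuate the two shapes before comparing them, which is the step the study identifies as hard for feed-forward networks.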
Serre says the object-individuation flaw is embedded in the architecture of the convolutional neural networks driving the algorithms, which feed information exclusively in one direction, unlike the recurrent workings of the human brain's visual system.
He suggests making computer vision smarter will require neural networks that can better approximate the recurrent nature of human visual processing.
From Brown University
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA