A core challenge in computer vision is identifying the boundaries of objects. Massachusetts Institute of Technology researchers Jason Chang and John Fisher have developed an algorithm that determines object boundaries in digital images at least 50,000 times more efficiently than its predecessors.
"We want an algorithm that's able to segment images like humans do," Chang says. "But because humans segment images differently, we shouldn't come up with one segmentation. We should come up with a lot of different segmentations that kind of represent what humans would also segment."
The algorithm produces its set of candidate segmentations by striking different balances between two measures of segmentation quality. One is the contrast between the parts of the image on opposite sides of each boundary; the other is the simplicity of the segmentation itself. The algorithm assigns each segmentation a score based on these two measures.
The program is designed to identify candidates with very high total scores to ensure that none of the candidates will be inordinately poor. Georgia Institute of Technology professor Anthony Yezzi thinks the same method could be applied to object-tracking and pattern-matching challenges.
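The trade-off described above can be illustrated with a minimal sketch. The scoring form below (cross-boundary contrast minus a simplicity penalty weighted by `lam`), the 1-D toy signal, and the helper functions `segment_score` and `best_segmentation` are all assumptions for illustration, not the authors' actual formulation; the real algorithm operates on 2-D images and explores segmentations far more efficiently than this exhaustive comparison.

```python
# Illustrative sketch only -- not the MIT algorithm's actual scoring.
# A candidate segmentation of a 1-D signal is a list of boundary indices.
# Score = total contrast across boundaries - lam * (number of segments),
# where lam controls the balance between the two quality measures.

def segment_score(signal, boundaries, lam):
    """Higher contrast across boundaries raises the score;
    more segments (less simplicity) lowers it."""
    contrast = sum(abs(signal[b] - signal[b - 1]) for b in boundaries)
    simplicity_penalty = lam * (len(boundaries) + 1)  # segments = boundaries + 1
    return contrast - simplicity_penalty

def best_segmentation(signal, candidates, lam):
    """Pick the candidate boundary set with the highest score at this lam."""
    return max(candidates, key=lambda b: segment_score(signal, b, lam))

signal = [0, 0, 0, 5, 5, 5, 9, 9]
candidates = [[], [3], [3, 6], [1, 3, 6]]

# Sweeping lam yields a different winner at each trade-off, mirroring the
# idea of producing many plausible segmentations rather than just one.
for lam in (0.5, 4.5, 10.0):
    print(lam, best_segmentation(signal, candidates, lam))
```

At a low `lam`, detailed segmentations win; at a high `lam`, the simplest one does, so sweeping the parameter yields a family of candidates rather than a single answer.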
From MIT News