The researchers' new algorithm takes depth information (red) about a visual scene and determines the orientation of the objects depicted (red, blue, and green). That makes the problem of plane segmentation (deciding which elements of the scene lie in which planes, at what depth) much simpler (multiple colors).
Credit: The researchers
Suppose you're trying to navigate an unfamiliar section of a big city, and you're using a particular cluster of skyscrapers as a reference point.
From MIT News Office