Carnegie Mellon University (CMU) researchers combined iPhone videos shot "in the wild" by separate cameras to produce four-dimensional (4D) visualizations that allow viewers to watch action from various vantage points, or even delete people or objects that temporarily occlude sight lines.
CMU's Aayush Bansal and colleagues used as many as 15 iPhones to capture each scene, then applied convolutional neural networks, trained separately for each scene, to compose its different parts.
The system can restrict playback angles so that incompletely reconstructed areas remain out of view, maintaining the illusion of complete three-dimensional imagery.
The method also could be used to record actors in one setting, then insert them into another.
Bansal said, "The point of using iPhones was to show that anyone can use this system. The world is our studio."
From Carnegie Mellon University School of Computer Science
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA