
Communications of the ACM

ACM TechNews

Watch Real Football Matches in Miniature Played on Your Desk


Tiny footballers are coming to a tabletop near you

Researchers at the University of Washington have used a neural network to render two-dimensional video clips in three dimensions.

Credit: Konstantin Rematas

University of Washington researchers led by Konstantinos Rematas have trained a neural network to render two-dimensional (2D) video clips posted on YouTube as three-dimensional (3D) images.

The researchers collected footage from the FIFA football video game as a training dataset; because the game tracks each player's three-dimensional position, it supplies ground-truth data about where players actually are as well as how they appear in 2D.

Once training was complete, the researchers could use the algorithm to transform YouTube clip imagery into three dimensions.
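The core idea of the training setup — supervising a depth predictor with synthetic ground truth from a game engine — can be sketched very loosely as supervised regression. Everything below is a toy illustration, not the researchers' actual model: the two "features" and the depth rule standing in for game footage are made up.

```python
import numpy as np

# Toy sketch of the training idea: the game engine provides true depth for
# each player, so we can fit a regressor from 2D image cues to depth.
rng = np.random.default_rng(0)

# Made-up 2D cues per player (e.g. screen y-coordinate, apparent height),
# with a fabricated ground-truth depth rule playing the role of the engine.
X = rng.uniform(0.0, 1.0, size=(256, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5  # "engine-supplied" depth (toy rule)

# A linear regressor trained by gradient descent on mean squared error,
# standing in for the paper's neural network.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    err = X @ w + b - y          # prediction error on the training set
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

mse = float(np.mean((X @ w + b - y) ** 2))
print(f"final MSE: {mse:.6f}")
```

The point of the sketch is only the data-flow: 2D observations in, engine-supplied 3D ground truth as the target, so no human labeling is needed.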

Viewers wearing an augmented reality headset can watch the enhanced clips as though the match were playing out on a flat surface in front of them.

Adrian Leu of U.K. digital agency Inition believes that, once certain technical issues are addressed, the process could increase the accessibility of virtual and augmented reality applications.

From New Scientist
View Full Article - May Require Paid Subscription

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
