
Communications of the ACM

ACM TechNews

Researchers Demonstrate 'Mind-Reading' Brain-Decoding Tech


[Image: Different parts of the brain become active when viewing videos. Purdue University researchers used artificial intelligence to interpret functional magnetic resonance imaging scans from people viewing videos, to decode what the brain is seeing. Credit: healthyplace.com]

Researchers at Purdue University have demonstrated a method for decoding what the brain is seeing, using artificial intelligence to interpret functional magnetic resonance imaging (fMRI) scans from people viewing videos.

The researchers say the technique is the first to use a convolutional neural network to study how the brain processes videos of natural scenes.

The team obtained 11.5 hours of fMRI data from each of three subjects watching 972 video clips, including clips showing people or animals in action and nature scenes. The data were first used to train the network model to predict activity in the visual cortex while the subjects watched the videos.

The researchers then used the model to decode fMRI data from the subjects and reconstruct the videos, including clips the model had never seen before.

The model accurately decoded the fMRI data into specific image categories, and the team mapped which brain regions were associated with specific visual inputs.
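The pipeline described above — fit an encoding model from stimulus features to voxel responses, then invert it to decode new brain activity — can be illustrated with a toy linear stand-in. Everything here is hypothetical and simplified (synthetic data, ridge regression in place of the study's convolutional neural network, random vectors in place of real fMRI recordings); it is a sketch of the idea, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: feature vectors a CNN might extract from video
# clips (n_clips x n_features) and simulated fMRI voxel responses.
n_clips, n_features, n_voxels = 200, 30, 50
features = rng.standard_normal((n_clips, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxels = features @ true_weights + 0.1 * rng.standard_normal((n_clips, n_voxels))

# 1) Encoding model: ridge regression predicting voxel activity from
#    stimulus features (analogous to training the model on fMRI data).
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ voxels)

# 2) Decoding: given voxel responses to a new clip, invert the encoding
#    model (least squares) to recover the clip's feature vector, which
#    could then be matched to known image categories.
new_clip = rng.standard_normal(n_features)
new_voxels = new_clip @ W
decoded = np.linalg.lstsq(W.T, new_voxels, rcond=None)[0]

# With more voxels than features, the inversion is well-posed and the
# recovered features correlate strongly with the true ones.
correlation = np.corrcoef(decoded, new_clip)[0, 1]
print(f"feature recovery correlation: {correlation:.2f}")
```

In the real study the encoding direction is learned from hours of fMRI data, and decoding reconstructs video content rather than a single feature vector, but the train-forward, invert-backward structure is the same.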

From Purdue University News
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
