Computing Applications News

Reading Brains

The first steps have been taken toward enabling a computer to perceive one's thoughts.
A patient wears a cap studded with electrodes during a demonstration of a noninvasive brain-machine interface by the Swiss Federal Institute of Technology of Lausanne in January 2013.

Mind reading has traditionally been the domain of mystics and science fiction writers. Increasingly, however, it is becoming the province of serious science.

A new study from the laboratory of Marcel van Gerven of Radboud University Nijmegen in the Netherlands demonstrates it is possible to figure out what people are looking at by scanning their brains. When volunteers looked at handwritten letters, a computer model was able to produce fuzzy images of the letters they were seeing, based only on the volunteers’ brain activity.

The new work—which builds on an earlier mathematical model by Bertrand Thirion of the Institute for Research in Computer Science and Control in Gif-sur-Yvette, France—establishes a simple, elegant brain-decoding algorithm, says Jack Gallant, a neuroscientist at the University of California, Berkeley. Such decoding algorithms eventually could be used to create more sophisticated brain-machine interfaces, he says, to allow neurologically impaired people to manipulate computers and machinery with their thoughts.

As technology improves, Gallant predicts, it eventually will be possible to use this type of algorithm to decode thoughts and visualizations, and perhaps even dreams. “I believe that eventually, there will be something like a radar gun that you can just point at someone’s head to decode their mental state,” he says. “We have the mathematical framework we need, for the most part, so the only major limitation is how well we can measure brain activity.”


A Simple Model

In the new study, slated to appear in an upcoming issue of the journal Neuroimage, volunteers looked at handwritten copies of the letters B, R, A, I, N, and S, while a functional magnetic resonance imaging (fMRI) machine measured the responses of their primary visual cortex (V1), the brain region that does the initial, low-level processing of visual information. The research team then used this fMRI data to train a Bayesian computer model to read the volunteers’ minds when they were presented with new instances of the six letters.

“It’s a very elegant study,” says Thomas Naselaris, a neuroscientist at The Medical University of South Carolina in Charleston.

According to Bayes’ Law, to reconstruct the handwritten image most likely to have produced a particular pattern of brain activity, it is necessary to know two things about each candidate image: the “forward model,” the probability that the candidate image would produce that particular brain pattern; and the “prior,” the probability of that particular image cropping up in a collection of handwritten letters. Whichever candidate image maximizes the product of these two probabilities is the most likely image for the person to have seen.
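The decoding rule described above can be sketched in a few lines of Python. The probabilities below are invented purely for illustration; the actual study works with Gaussian densities over images and voxel patterns, not a six-entry lookup table:

```python
# Minimal sketch of the Bayesian decoding rule: pick the candidate image
# that maximizes (forward-model likelihood) x (prior probability).
# All numbers are made up for illustration.

candidates = ["B", "R", "A", "I", "N", "S"]

# Forward model: P(observed brain pattern | candidate image)
likelihood = {"B": 0.50, "R": 0.10, "A": 0.05,
              "I": 0.10, "N": 0.15, "S": 0.10}

# Prior: P(candidate image) among handwritten letters
prior = {"B": 0.10, "R": 0.25, "A": 0.20,
         "I": 0.15, "N": 0.15, "S": 0.15}

# The decoded letter is the one maximizing the product of the two
decoded = max(candidates, key=lambda c: likelihood[c] * prior[c])
print(decoded)  # prints "B" (0.50 x 0.10 = 0.05 is the largest product)
```

Note that the prior matters: the letter with the highest likelihood is not automatically the winner if it is a rare image, which is exactly why the model needs both ingredients.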

To create the forward model, the research team showed volunteers hundreds of different handwritten images of the six letters while measuring their brain activity, then used machine-learning techniques to model the most likely brain patterns that any new image would produce. To construct the prior, the team again set machine learning algorithms to work on 700 additional copies of each letter, to produce a model of the most likely arrangements of pixels when people write a letter by hand. Both models used simple linear Gaussian probability distributions, making brain decoding into a straightforward calculation, van Gerven says.
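The linear Gaussian setup is what makes the decoding a "straightforward calculation": with a linear forward model (brain response = weight matrix times image pixels, plus Gaussian noise) and a Gaussian prior over pixels, the most probable image given a brain pattern has a closed-form solution. The sketch below uses synthetic data and toy dimensions; none of the sizes, matrices, or noise levels come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_voxels = 16, 40          # toy sizes, far smaller than real data

# Forward model (learned from training scans in the study): y = W x + noise
W = rng.normal(size=(n_voxels, n_pixels))
noise_var = 0.5                       # isotropic Gaussian noise on voxels
prior_cov = np.eye(n_pixels)          # Gaussian prior over pixel values
                                      # (fit to handwritten letters in the study)

# Simulate a "viewed image" and the brain response it evokes
x_true = rng.normal(size=n_pixels)
y = W @ x_true + rng.normal(scale=noise_var ** 0.5, size=n_voxels)

# With linear Gaussian models the most probable image has a closed form:
#   x* = (W'W / noise_var + inv(prior_cov))^-1  W'y / noise_var
A = W.T @ W / noise_var + np.linalg.inv(prior_cov)
x_map = np.linalg.solve(A, W.T @ y / noise_var)

# The reconstruction should correlate strongly with the true image
r = np.corrcoef(x_true, x_map)[0, 1]
print(f"correlation between true and reconstructed image: {r:.2f}")
```

The closed form is why no iterative search over candidate images is needed: once the weight matrix and the prior covariance are estimated from training data, decoding a new brain pattern is a single linear solve.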

“We’ve shown that simple mathematical models can get good reconstructions,” he says.

The research team also experimented with limiting the model’s prior knowledge of the world of handwritten letters. If the model’s prior information consisted only of images of the letters R, A, I, N, and S, for example, it could still produce decent reconstructions of the letter B, though not as good as when the prior included images of all six letters. The results, van Gerven says, demonstrate the decoding algorithm’s ability to generalize—to reconstruct types of letters it has never “seen” before.

The human brain is, of course, the master of this kind of generalization, and this ability goes much further than simple reconstruction of unfamiliar images. “The visual system can do something no robot can do,” Naselaris says. “It can walk into a room filled with things it has never seen before and identify each thing and understand the meaning of it all.”

While van Gerven’s paper deals only with reconstructing the image a person has seen, other researchers have taken first steps toward deciphering the meanings a brain attaches to visual stimuli. For example, Gallant’s group (including Naselaris, formerly a postdoc at Berkeley) has combined data from V1 and higher-order visual processing regions to reconstruct both the image a person has seen and the brain’s interpretation of the objects in the image. More recently, in work that is still partly unpublished, the team has done the same thing for movies, instead of still images.

“We are starting to build a repertoire of models that can predict what is going on in higher levels of the vision hierarchy, where object recognition is taking place,” Naselaris says.

Other researchers are working on reading a brain’s thoughts as it responds to verbal stimuli. For example, in 2010, the laboratory of Tom Mitchell at Carnegie Mellon University in Pittsburgh developed a model that could reconstruct which noun a person was reading. Van Gerven’s lab is currently working on decoding the concepts volunteers consider as they listen to a children’s story while inside an fMRI scanner.


Probing Thoughts

Most mind-reading research to date has focused on reconstructing the external stimuli creating a particular pattern of brain activity. A natural question is whether brain-decoding algorithms can make the leap to reconstructing a person’s private thoughts and visualizations, in the absence of any specific stimulus.

The answer depends on the extent to which, for example, the brain processes mental images and real images in the same way. “The hypothesis is that perception and imagery activate the same brain regions in similar ways,” van Gerven says. “There have been hints that this is largely the case, but we are not there yet.”

If, Naselaris says, “highly visual processes get evoked when you are just reasoning through something—planning your day, say—then it should be possible to develop sensitive probes of internal thoughts and do something very much like mind-reading just from knowing how V1 works. But that is a big ‘if.’”

Even if mind-reading turns out not to be as simple as decoding V1, Naselaris predicts that as neuroscientists develop forward models of the brain’s higher-level processing regions, the decoding models will almost certainly provide a portal into people’s thoughts. “I don’t think there is anything that futuristic about the idea that in five to 20 years, we will be able to make pictures of what people are thinking about, or transcribe words that people are saying to themselves,” he says.

What may prove more difficult, Gallant says, is digging up a person’s distant memories or unconscious associations. “If I ask you the name of your first-grade teacher, you can probably remember it, but we do not understand how that is being stored or represented,” he says. “For the immediate future, we will only be able to decode the active stuff you’re thinking about right now.”

Dream decoding is likely to prove another major challenge, Naselaris says. “There is so much we don’t understand about sleep,” he says. “Decoding dreams is way out in the future; that’s my guess.”

Part of the problem is that with dreams, “you never have ground truth,” Gallant says. When it comes to building a model of waking thoughts or visions, it is always possible to ask the person what he or she is thinking, or to directly control the stimuli the person’s brain is receiving, but dreams have no reality check.

The main option available to researchers, therefore, is to build models for reconstructing movies, and then treat a dream as if it were a movie playing in the person’s mind. “That is not a valid model, but we use it anyway,” Gallant says. “It is not going to be very accurate, but since we have no accuracy now, having lousy accuracy is better than nothing.”

In May 2013, a team led by Yukiyasu Kamitani of ATR Computational Neuroscience Laboratories in Kyoto, Japan, published a study in which they used fMRI data to reconstruct the categories of visual objects people experienced during hypnagogic dreams, the ones that occur as a person drifts into sleep. “They are not real dreams, but it is a proof of concept that it should be possible to decode dreams,” Gallant says.


Protecting Privacy

The dystopian future Gallant pictures, in which we could read each other’s private thoughts using something like a radar gun, is not going to happen any time soon. For now, the best tool researchers have at their disposal, fMRI, is at best a blunt instrument; instead of measuring neuronal responses directly, it can only detect blood flow in the brain—which Gallant calls “the echoes of neural activity.” The resulting reconstructions are vague shadows of the original stimuli.

What is more, fMRI-based mind reading is expensive, low-resolution, and the opposite of portable. It is also easily thwarted. “If I did not want my mind read, I could prevent it,” Naselaris says. “It is easy to generate noisy signals in an MRI; you can just move your head, blink, think about other things, or go to sleep.”

These limitations also make fMRI an ineffective tool for most kinds of brain-machine interfaces. It is conceivable fMRI could eventually be used to allow doctors to read the thoughts of patients who are not able to speak, Gallant says, but most applications of brain-machine interfaces require a much more portable technology than fMRI.


However, given the extraordinary pace at which technology moves, some more effective tool will replace fMRI before too long, Gallant predicts. When that happens, the brain decoding algorithms developed by Thirion, van Gerven, and others should plug right into the new technology, Gallant says. “The math is pretty much the same framework, no matter how we measure brain activity,” he says.

Despite the potential benefit to patients who need brain-machine interfaces, Gallant is concerned by the thought of a portable mind-reading technology. “It is pretty scary, but it is going to happen,” he says. “We need to come up with privacy guidelines now, before it comes online.”


Further Reading

Horikawa, T., Tamaki, M., Miyawaki, Y., Kamitani, Y.
Neural Decoding of Visual Imagery During Sleep, Science Vol. 340 No. 6132, 639–642, 3 May 2013.

Kay, K. N., Naselaris, T., Prenger, R. J., Gallant, J. L.
Identifying Natural Images from Human Brain Activity, Nature Vol. 452, 352–355, 20 March 2008.

Mitchell, T., Shinkareva, S., Carlson, A., Chang, K-M., Malave, V., Mason, R., Just, M.
Predicting Human Brain Activity Associated with the Meanings of Nouns, Science Vol. 320 No. 5880, 1191–1195, 30 May 2008.

Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., Gallant, J. L.
Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Current Biology Vol. 21 Issue 19, 1641–1646, 11 October 2011.

Schoenmakers, S., Barth, M., Heskes, T., van Gerven, M.
Linear Reconstruction of Perceived Images from Human Brain Activity, Neuroimage Vol. 83, 951–961, December 2013.

Thirion, B., Duchesnay, E., Hubbard, E., Dubois, J., Poline, J. B., Le Bihan, D., Dehaene, S.
Inverse Retinotopy: Inferring the Visual Content of Images from Brain Activation Patterns, Neuroimage Vol. 33, 1104–1116, December 2006.
