
Communications of the ACM

ACM News

Direct Communication

Receiver Theodros Haile is outfitted with transcranial magnetic stimulation (TMS) and electroencephalography (EEG) equipment.

One day, a brain-to-brain interface could make it possible for your history and science professors to download all their knowledge about the Battle of Britain or The Origin of Species into your head.

Credit: Mark Stone/University of Washington

Wouldn't it be great if you could siphon storehouses of information from the brain of your favorite expert? Maybe an amazing Roman history scholar, so you'd never have to look up the Siege of Carthage again; an astounding jazz musician with his vast permutations of chords and scales; a photographer with her intricate knowledge of f-stops; or the quiz geek who knows all the obscure details of 1960s sitcoms.

Perhaps you'd like to wire your brain together with your colleagues' to boost collaborative powers by directly sharing thoughts, ideas, and instructions; no speaking, typing, or screen-tapping required.

If that's the case, a brain-to-brain interface could be the thing for you.

As the name suggests, a brain-to-brain interface (BBI) transmits information from one person's brain to the brain of another. It is similar to, and in fact builds on, the better-known technology of the brain-computer interface (BCI), which holds promise for many applications, such as giving accident victims fine motor control of prosthetic limbs.

With BBI, the senders and receivers are human brains.

First, full disclosure: computers are still involved, at least in the context of this article, which is not about mental telepathy or things of that ilk. In BBI, processors, BCIs, and computer networks (wired or wireless) all intervene between brains to do things that computers and networks typically do, such as encode, transmit, receive, and decode information.

Yet make no mistake about it: the brains are the terminals. One day, a BBI could make it possible for your history and science professors to download all their knowledge about the Battle of Britain or The Origin of Species into your head.

For now, though, you'll have to settle for something far simpler, like deciding whether or not to rotate an onscreen block in a game of Tetris.

That's what Rajesh Rao has done. Rao, the CJ and Elizabeth Hwang Professor in the Paul G. Allen School of Computer Science and Engineering and the Department of Electrical and Computer Engineering at the University of Washington in Seattle, and co-director of the university's Center for Neurotechnology, is one of several BBI experts around the world who have proven the concept.

Mind game

In a 2018 project, Rao non-invasively connected the brains of individuals playing a rudimentary Tetris-like computer game, in which some players had to pass along instructions to other players in order to complete a simple task. None of the gamers used physical controls, only their minds.

A total of 15 players engaged in the study, with five teams of three people each executing 16 maneuvers. Each squad had two senders and one receiver, all in different rooms. Some of the participants delivered "yes" and "no" commands instructing others whether or not to flip a block upside down in order to succeed in completing a line of tiles in the game. The senders of these commands could see the entire screen of the game; those receiving the instructions could see only the falling block, but not the line toward which it was falling. Rather than relying on chance, the message recipients accepted the sender's instructions via the brain-to-brain network, which Rao and his team called BrainNet.

The result? As Rao and his colleagues reported in a 2019 paper in Scientific Reports, message receivers implemented the correct move 81.25% of the time, well above the 50% expected by chance. By networking their heads together, the players conquered Tetris far more often than not.
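A back-of-the-envelope check shows how unlikely that hit rate is under pure guessing. Assuming 16 trials per team, as described above, 81.25% accuracy corresponds to 13 correct answers out of 16; the sketch below computes the probability of doing at least that well by flipping a coin:

```python
from math import comb

# Probability of 13 or more correct answers out of 16 binary trials
# if the receiver were purely guessing (p = 0.5 per trial).
n, k = 16, 13
p_chance = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k}/{n} correct by chance) = {p_chance:.4f}")
```

The answer is roughly 1%, so a guessing receiver would almost never match the observed accuracy.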

It started with a flash.

How exactly did the BBI process work?

It essentially started with the natural neurological pathway between the sender's eyes and brain, and the reaction of those organs to light emitted by one of two white LEDs, each modulating at a different frequency, mounted on either side of the sender's computer screen.

The sender looked at the game's screen and decided whether to send a "yes" or "no" signal instructing the receiver to rotate a block or not. If "yes," the player looked at the LED on the left side of the screen, which flashed at 17 Hz. If "no," he or she stared at the 15-Hz LED on the right-hand side of the screen. Viewing one of the LEDs sent neurological signals to the brain, triggering natural responses known as steady-state visually evoked potentials (SSVEPs), which resonate at the same frequency (17 Hz or 15 Hz, depending on which light the sender gazed at).
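The frequency coding can be illustrated with a short sketch. It decodes a simulated one-second EEG epoch by comparing the signal's power at 17 Hz versus 15 Hz, which is essentially how SSVEP-based BCIs read out a binary choice. The 256-Hz sampling rate is a typical EEG figure assumed for illustration, not a parameter stated in the article:

```python
import math

def band_power(signal, fs, freq):
    # Power of `signal` at `freq` Hz, via projection onto
    # sine/cosine reference waveforms at that frequency.
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (s * s + c * c) / len(signal)

def classify_ssvep(signal, fs=256.0, yes_hz=17.0, no_hz=15.0):
    # "yes" if the 17-Hz flicker dominates the epoch, "no" if 15 Hz does.
    yes_p = band_power(signal, fs, yes_hz)
    no_p = band_power(signal, fs, no_hz)
    return "yes" if yes_p > no_p else "no"

# Simulated epoch: an SSVEP entrained to the 17-Hz ("yes") LED.
fs = 256.0
epoch = [math.sin(2 * math.pi * 17.0 * i / fs) for i in range(int(fs))]
```

Real EEG is far noisier, so practical decoders average over longer windows and multiple electrodes, but the principle is the same.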

Non-invasive electroencephalography (EEG) electrodes attached to the sender's head picked up that signal and routed it over a TCP/IP network through a BCI gateway, and then to another gateway called a computer-to-brain interface (CBI). The CBI translated the signal and sent that translation to the receiving player, whose head was wired non-invasively for transcranial magnetic stimulation (TMS). The original 17-Hz ("yes") signals were translated into pulses strong enough to stimulate the appearance of light for the receiving individual, while the original 15-Hz ("no") signals were translated into pulses too weak to elicit a similar reaction.
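The CBI's job at the receiving end reduces to a simple mapping: one decoded frequency becomes a suprathreshold TMS pulse (the receiver sees a phosphene, a flash of light), the other a subthreshold pulse (no flash). The intensity values below are placeholders for illustration; the study's actual stimulator settings are not given in this article:

```python
# Assumed phosphene threshold, as a fraction of maximum stimulator
# output; in practice this is calibrated per receiver.
PHOSPHENE_THRESHOLD = 0.65

def cbi_translate(detected_hz):
    # Map the sender's decoded SSVEP frequency to a TMS pulse intensity:
    # 17 Hz ("yes") -> suprathreshold; 15 Hz ("no") -> subthreshold.
    return 0.80 if detected_hz == 17 else 0.50

def receiver_percept(intensity):
    # What the receiver consciously experiences after the pulse.
    return "flash" if intensity >= PHOSPHENE_THRESHOLD else "no flash"
```

So a "yes" arrives as a flash of light, and a "no" arrives as its absence, which is all the receiver needs to choose a move.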

In response to that signal, the receiver (who saw only the falling block, not the line at the bottom) executed a rotate or do-not-rotate instruction via the same method the sender had used. To rotate the block, he or she looked at the 17-Hz LED on the left-hand side of the screen; to leave it alone, he or she looked at the 15-Hz LED on the right side. This decision triggered another SSVEP response, which was picked up by the EEG electrodes (the receiver wore both TMS and EEG connections) and sent to a BCI, which executed the rotate (or not) instruction through a software connection.

All in all, the resulting 81.25% accuracy rate demonstrated the potential of computer-assisted brain-to-brain connections.

"If a person has both a BCI and CBI, the person has the capacity to do completely brain-only interfacing," said Rao. "They don't need to use any motor signals. They don't need to use their body to convey any information."

During the experiment, Rao made one of the two senders in a team more reliable than the other, and demonstrated that the receiver was able to learn from experience which of the two was more reliable. That, he said, has positive implications for brain-to-brain collaboration and communication in a world increasingly permeated by social media sources of varying reliability.
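One explicit way to model that learned trust is a weighted-majority rule, in which each sender's vote counts in proportion to the log-odds of its observed reliability. BrainNet's receivers learned reliability implicitly through experience rather than via any formula; the sketch below is just one illustrative model of the idea:

```python
import math

def weighted_vote(votes, reliabilities):
    # Combine binary votes from multiple senders, weighting each by the
    # log-odds of that sender's observed reliability. A sender near 0.5
    # reliability contributes almost nothing; a near-perfect sender dominates.
    score = 0.0
    for sender, vote in votes.items():
        r = reliabilities[sender]
        weight = math.log(r / (1.0 - r))
        score += weight if vote == "rotate" else -weight
    return "rotate" if score > 0 else "do not rotate"

# When the two senders disagree, the more reliable one wins:
decision = weighted_vote(
    {"sender_A": "rotate", "sender_B": "do not rotate"},
    {"sender_A": 0.9, "sender_B": 0.6},
)
```

With sender A right 90% of the time and sender B only 60%, the receiver sides with A despite the disagreement.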

Of rats, men, and cockroaches

The Tetris experiment was not the first proof of concept for brain-to-brain interfaces.

Rao and a University of Washington team conducted one of the earliest studies back in 2013 with a one-way, yes-or-no trial in which an individual would mentally send a signal instructing another individual whether or not to manually fire a cannon to save a city in a video game of pirates and rockets. The communication was strictly unidirectional, and the instruction was sent to the receiving person's motor cortex (rather than to the visual cortex as in Rao's most recent experiment), causing an involuntary muscle response in the receiver's finger on a touchpad, which prompted a cannon in the game to fire.

The 2013 project was ground-breaking, but had three notable "shortcomings." Its one-way flow meant the receiver and sender could not go back and forth in their decision-making. The use of a motor action meant the game was completed with manual actions, rather than the all-mental methods of the Tetris game five years later. And the involuntary motor stimulation carried ethical implications of a sort not present in the Tetris game, in which the receiver consciously made the ultimate decision of whether or not to flip a block.

A few years later, Rao and a University of Washington team advanced the BBI concept with a "20 Questions" experiment in which sender and receiver were wired and communicating in both directions, narrowing down possible answers in the classic 20 Questions fashion. However, the trial also used mouse and keyboard entries along the way, so it was not completely free of manual controls. It was also limited to two people, rather than the more collaborative three involved in the Tetris study.

In 2015, Duke University neuroscience professor Miguel Nicolelis and a couple of teams published two different reports, one on rats, the other on monkeys. In both studies, the researchers built what they called a "Brainet" (not to be confused with Rao's BrainNet) in which they wired brains together as "organic computers" to facilitate the cooperative exchange of information in real time, in one case between rats and in the other between monkeys. The rats successfully cooperated to receive water, and the monkeys collaborated to move an arm on a monkey avatar.

Both instances involved invasive implants (wired into the subjects' brains), rather than the non-invasive EEG and TMS in Rao's Tetris game.

Also using an invasive approach, Guangye Li and Dingguo Zhang of Shanghai Jiao Tong University surgically attached microstimulators to cockroaches, turning them into cyborgs as the scientists described in a March 2016 paper. In that study, humans sent light-triggered SSVEP brain signals wirelessly via a BCI to the insects, helping to steer them along a path.

A 2013 Harvard University study by Seung-Schik Yoo and others used non-invasive SSVEP and EEGs from a human sender to stimulate involuntary motion in the tail of a rat, which was implanted with receivers.

Rao's Tetris experiment would appear to mark the greatest advance, in terms of combining non-invasiveness, voluntary actions by the receiver, the replacement of manual controls, bidirectional information flow, and collaboration by multiple people.

Language is so old-fashioned

These are still very early days for BBI; Rao describes the field as being at "Kitty Hawk," where the Wright brothers first demonstrated powered flight. Advances are required in many aspects of BBI, Rao readily acknowledges, along with significant ethical considerations.

Rao frames consideration of possible applications for BBI in a broad evolutionary perspective.

"Humans have developed tools for communicating,  starting from language and speech, as ways to communicate thoughts to other human beings, all the way to technology-assisted communication with things like writing, telephones, which increase the  distance over which we can communicate," he notes. "You can think of brain-to-brain interfaces as another tool along this progression of communication."

Rao observes that brain-to-brain communication "could really transform education." For example, he says, BBI could eliminate the need for exams, because brain-to-brain interfaces would allow an instructor to recognize whether a student's brain possesses all the information that a test would seek to confirm.

Rao even envisions using BBIs to help communicate motor skills, say by an athlete, a surgeon, or a musician, in a way that language cannot address.

There is certainly more work to do. Among the challenges: every brain has its own idiosyncrasies.

"Brain-based knowledge would potentially be facilitated if you were able to map information from one person's brain, in terms of decoding their brain signal and encoding in an appropriate way for the second person," says Rao. He adds, however, "It requires a lot more advances in neuroscience, a lot more advances in precise stimulation and precise recording from one brain and precise stimulation in another brain."

Once we're there, though, you might never have to study for another exam.

Mark Halper is a freelance journalist based near Bristol, England. He covers everything from media moguls to subatomic particles.

