
Adaptive Brain Interfaces

Users interact with physical devices through nothing more than the voluntary control of their own mental activity.
  1. Introduction
  2. EEG-based Interfaces
  3. Core Neural Network Classifier
  4. Brain-Actuated Applications
  5. Prospects
  6. References
  7. Author
  8. Footnotes
  9. Figures

Severely disabled people are largely excluded from the benefits information and communication technologies have brought to our industries, economies, appliances, and general quality of life. But what if technology allowed them to communicate their wishes or control electronic devices through their thoughts alone? This is the goal and promise of the Adaptive Brain Interfaces (ABI) project, which aims to augment natural human capabilities by enabling people, after a brief training period, to interact with computers through the voluntary control of their own mental activity.

Researchers and designers of human-computer interfaces are motivated by growing interest in the use of physiological signals for communication and for the operation of devices by physically handicapped people, as well as by their able-bodied counterparts. Combining neuroscience and computer science, recent experimentation has demonstrated the possibility of analyzing brainwaves online to derive information about a subject’s mental state, which can then be mapped onto some external action (such as selecting a letter from a virtual keyboard or moving a robotic device). A brain-computer interface (BCI) is an alternative communication and control channel that does not depend on the brain’s normal output pathway of peripheral nerves and muscles [11].

Although BCI prototypes are recent developments [2, 4–8, 12], the basic ideas were laid out in the 1970s; early successful experiments were based on analyzing brain electrical activity generated in response to the direction of a subject’s gaze [10]. A decade later came the first experiments involving the offline analysis of brain electrical signals independent of muscle control and external stimulation [3].

BCIs can monitor a variety of brainwave phenomena, usually through electroencephalogram (EEG) signals; the brain’s electrical activity is monitored through electrodes placed on the scalp. The main source of an EEG is the synchronous activity of thousands of cortical neurons. Some scientists exploit evoked potentials, or the automatic responses of the brain to external stimuli [11]; evoked potentials are, in principle, easily picked up but require subjects to synchronize themselves to the external machinery. A more natural and practical alternative is to rely on components associated with spontaneous mental activity. Thus, in one such experiment, the researchers calculated a component of the EEG known as slow cortical potential [2], measuring it over the top of the scalp to indicate the overall preparatory excitation level of a cortical network. Other experiments have looked at local variations of EEG rhythms; their most popular uses involve imagining physical movement, as recorded from the central region of the scalp overlying the sensorimotor cortex [6, 12]. Other cognitive mental tasks (besides motor-related rhythms) have also been explored [1, 3, 5, 7]; for example, a number of neurocognitive studies have found that different mental tasks (such as imagining movement, arithmetic operations, and language) activate local cortical areas to various degrees. Rather than looking for predefined EEG phenomena, as when using slow cortical potentials or movement rhythms, these researchers seek mental-specific EEG patterns embedded in the continuous EEG signals.


Measuring the EEG is a simple noninvasive way to monitor brain activity. However, it does not provide detailed information on the activity of individual neurons (or of small clusters of neurons) that could be recorded through microelectrodes surgically implanted in the cortex. Such direct measurement of brain activity might, in principle, enable quicker recognition of mental states, as well as more complex interaction, as demonstrated by neuroscientists recording neurons in the motor cortex [4, 8]. Between the simple EEG and the extremely invasive direct recording of neurons, a researcher might reasonably consider using another established brain-imaging technique (such as magnetoencephalography, functional magnetic resonance imaging, and positron emission tomography). Nevertheless, all such techniques require sophisticated devices that can be operated only in specially designed medical facilities.

The most common BCI systems, ABI among them, are based on the analysis of spontaneous EEG signals. The ABI approach supports a range of brain-actuated applications, from communication to control; direct interfaces via implanted microelectrodes remain a key issue for the field as a whole.


EEG-based Interfaces

Unlike ABI, most BCIs are based on synchronous experimental protocols whereby the subject follows a fixed repetitive scheme, switching from one mental task to another [2, 6, 12]. A trial consists of two parts: a cue telling the subject to get ready and, after a fixed period of several seconds, a second cue telling the subject to perform the desired mental task for some predefined length of time. The EEG phenomena to be recognized are time-locked to the second cue; the BCI responds with the average decision during the second period of time. In such synchronous BCI systems, a trial lasts from 4 to 10 or more seconds. This relatively long period is necessary because the EEG phenomena of interest need time to recover. Other BCIs employ more flexible asynchronous protocols whereby the subject makes self-paced decisions as to when to stop doing a mental task and when to begin the next one [5, 7]. In the case of asynchronous protocols, the BCI can respond quickly; for example, the ABI system responds every half second.

Some researchers have demonstrated that subjects can learn to control their brain activity through intensive training in order to generate fixed EEG patterns the BCI then transforms into external actions. Others employ machine-learning approaches to train the classifier embedded in the BCI [1, 3]; most involve a mutual learning process whereby the user and the BCI are coupled together and adapt to each other [5–7]. This mutual learning process should accelerate training time. The ABI approach allows subjects to achieve good performance in just a few hours of training in the presence of feedback [5]; analysis of learned EEG patterns confirms that personal BCIs must fit users’ individual features.

BCIs usually make binary decisions as they seek to recognize two different mental states (such as positive vs. negative slow cortical potentials, or imagination of left- vs. right-hand movements) [2, 3, 6, 7, 12], producing accuracies of about 90%. Other scientists have tried to simultaneously recognize three or more tasks, though most report recognition errors above 15% [1, 6]. The ABI approach achieves error rates under 5% and correct recognition of 70% for three mental tasks [5]. These classification rates, together with the number of recognizable tasks and the duration of the trials, yield transmission rates of approximately 0.15b/s to 2.0b/s, depending on the approach.
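
As a rough illustration of how accuracy, task count, and decision rate combine into such figures, the BCI literature often estimates the information transfer rate with the formula popularized by Wolpaw and colleagues (see [11]); the worked numbers below are illustrative only, and the formula ignores rejected samples:

$$
B = s\left[\log_2 N + P\log_2 P + (1-P)\log_2\frac{1-P}{N-1}\right]
$$

where N is the number of mental tasks, P the probability of correct classification, and s the number of decisions per second. Taking N = 3, P = 0.7, and s = 2 (one decision every half second) gives about 0.4 bits per decision, or roughly 0.8b/s, within the range quoted above.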


Core Neural Network Classifier

At the core of ABI is a neural network classifier that recognizes which mental task the subject—wearing a portable EEG system—is concentrating on by analyzing continuous variations of EEG rhythms over several cortical areas of the brain. The subject might concentrate on a range of mental states, from motor-related (such as imagining a limb movement) to cognitive (such as arithmetic).

In the ABI’s mutual learning process, the neural network learns subject-specific EEG patterns describing desired mental tasks, while subjects learn to think in ways that enable the ABI interface to better understand them. Individual subjects choose the mental tasks they find easiest, as well as the preferred strategies they need to accomplish them. Building individual interfaces greatly increases the likelihood of success, enabling people to quickly master their own personalized brain interface, as demonstrated for all ABI subjects (more than 10), despite limited training times.

The analyzed mental states (or tasks) are relatively abstract and engage different local cortical areas at different amplitudes and frequencies. Subjects have been asked to select three of the following tasks: relax, imagine left- or right-hand (or arm) movement, cube rotation, subtraction, and word association. For these tasks, respectively, subjects relax their minds, imagine repetitive self-paced movements of a particular limb, visualize a spinning cube, perform successive elementary subtractions by a fixed number (such as 64−3=61, 61−3=58, and 58−3=55), and concatenate related words. Mental relaxation is done with eyes shut; the other tasks are performed with eyes open.

Each unit of ABI’s built-in neural classifier represents a prototype of one of the mental states to be recognized. Once trained, the network’s response to an arriving EEG sample is the class with the greatest posterior probability, provided that probability exceeds a given confidence threshold, normally between 0.8 and 0.9. Otherwise, the response is “unknown,” avoiding risky decisions on uncertain samples. Incorporating rejection criteria to avoid such decisions is an important concern in any BCI. For practical reasons, low classification error is a critical BCI performance criterion; otherwise, users would be frustrated and reject the interface. Some researchers also apply Bayesian techniques for rejection purposes, helping to recognize and avoid uncertain responses [7].
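
To make the rejection rule concrete, here is a minimal sketch in Python of a prototype-based classifier with a confidence threshold; the softmax over distances, the prototype layout, and all names are illustrative assumptions, not the actual ABI network (described in [5]):

```python
import numpy as np

def classify(sample, prototypes, threshold=0.85):
    """Return the winning mental task, or "unknown" on low confidence.

    sample: 1-D feature vector (e.g., concatenated channel spectra).
    prototypes: mapping from task label to a list of prototype vectors.
    threshold: confidence level; the article reports 0.8-0.9.
    """
    labels = list(prototypes)
    # Squared distance from the sample to each class's nearest prototype.
    d = np.array([min(np.sum((sample - np.asarray(p)) ** 2) for p in protos)
                  for protos in prototypes.values()])
    # Pseudo-posteriors via a (stabilized) softmax over negative distances.
    scores = np.exp(-(d - d.min()))
    posteriors = scores / scores.sum()
    best = int(np.argmax(posteriors))
    # Respond only when confident enough; otherwise reject the sample.
    return labels[best] if posteriors[best] >= threshold else "unknown"
```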

Users of ABI-based systems are usually given a half hour of training per day; feedback (see Figure 1) is provided directly through the interface. The computer screen includes three buttons, each identified by a different color and associated with one of the mental tasks to be recognized. A button lights when an arriving EEG sample is classified as belonging to the corresponding mental task. EEG potentials are recorded at the eight fronto-central-parietal locations on the scalp—F3, F4, C3, Cz, C4, P3, Pz, and P4—as in Figure 1. The sampling rate is 128Hz. An EEG sample corresponds to the power spectrum of each channel (location) over the preceding second.
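
In code, this feature extraction might look like the following sketch; the eight channels, 128Hz rate, and one-second window follow the article, while the plain FFT periodogram and the 8–48Hz band kept are assumptions for illustration:

```python
import numpy as np

FS = 128                                              # sampling rate (Hz)
CHANNELS = ["F3", "F4", "C3", "Cz", "C4", "P3", "Pz", "P4"]

def eeg_sample(window):
    """Turn the last second of EEG into one feature vector.

    window: array of shape (8, 128), one row per channel.
    """
    assert window.shape == (len(CHANNELS), FS)
    power = np.abs(np.fft.rfft(window, axis=1)) ** 2  # power per 1-Hz bin
    freqs = np.fft.rfftfreq(FS, d=1.0 / FS)           # 0..64 Hz
    band = (freqs >= 8) & (freqs <= 48)               # assumed band of interest
    return power[:, band].reshape(-1)                 # 8 x 41 = 328 features
```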

Experimental results show that, at the end of the training period (normally five days), the recognition rate (the percentage of times the system correctly classified the subject’s mental task) is 70% or higher—more than twice the 33.3% expected from random classification over three tasks. This modest recognition rate is largely compensated by two properties: errors are less than 5% (in many cases even less than 2%), and decisions are made every half second, so modest accuracy does not preclude practical operation. Some subjects have pursued consecutive training sessions (up to seven) in a single day and, even without prior BCI experience, achieved the same performance within just two hours. Worth noting is that one such subject (a man in his 40s living in London) suffers from spinal muscular atrophy, a disease of the cells of the spinal cord affecting the muscles controlling voluntary limb, head, and neck movement.

ABI is thus characterized by the following key performance factors:

  • Reliability. It rarely produces incorrect classifications (less than 5%), correctly classifies 70% (or more) of EEG samples, and does not respond to the rest;
  • Fast response. It attempts to recognize mental tasks every half second;
  • Rapid training. As a consequence of the mutual learning approach and the specific neural network, users achieve satisfactory control in a few hours;
  • Scalability. The number of recognizable mental tasks (currently three) depends only on the tasks engaging cortical areas differently, since ABI does not look for specific EEG phenomena in particular areas; and
  • Natural interaction. The subject makes spontaneous and self-paced decisions (when to switch between mental tasks and how to perform them) without having to wait for or respond to external cues.


Brain-Actuated Applications

ABI researchers have developed several interfaces illustrating the range of possible brain-actuated applications, including a virtual keyboard, new forms of education and entertainment, and the robotic operation of physical devices (such as a wheelchair).

ABI can enable people to select letters from a virtual keyboard on a computer screen and write messages (see Figure 2). Initially, as they decide what they want to write, the keyboard is divided into three parts, each associated with one of the mental tasks ABI has been trained to classify. Then, as ABI’s neural network recognizes which task the subject is concentrating on, the keyboard splits successively into smaller segments until a single letter is selected; this letter goes into the message, and the process begins again. As an additional measure of reliability, a segment of the keyboard is selected only when the corresponding mental task is recognized by ABI three times in a row. Users can undo a wrong selection by immediately concentrating on another desired mental task. Thus the system waits a short time after each selection (3.5 seconds) before moving on to the next decision. The mental task used to undo selections is the one for which the user exhibits the most dependable performance. Trained subjects have taken 22.0 seconds on average to select a letter, including recovery from errors; specially designed aids (such as automatic word suggestions) will eventually accelerate the writing process.
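
A simplified sketch of this selection logic in Python: the three-in-a-row confirmation and the half-second decision stream follow the article, while the task labels, the ternary split, and the stream interface are assumptions (the real interface also pauses 3.5 seconds after each selection so it can be undone):

```python
CONFIRMATIONS = 3                      # three identical decisions select
TASKS = ["task0", "task1", "task2"]    # placeholder mental-task labels

def split3(items):
    """Divide the current letters into up to three segments."""
    k = -(-len(items) // 3)            # ceiling division
    return [items[i:i + k] for i in range(0, len(items), k)]

def select_letter(decisions, letters):
    """Narrow `letters` down to a single character.

    decisions yields one classifier output per half second: a task
    label or "unknown" (a rejected, low-confidence sample).
    """
    segment, last, streak = list(letters), None, 0
    while len(segment) > 1:
        d = next(decisions)
        streak = streak + 1 if d == last else 1
        last = d
        if d in TASKS and streak == CONFIRMATIONS:
            parts = split3(segment)
            if TASKS.index(d) < len(parts):
                segment = parts[TASKS.index(d)]
                # Here the real interface shadows the chosen segment
                # for 3.5 s so a wrong selection can be undone.
            last, streak = None, 0
    return segment[0]
```

For instance, with nine letters, feeding the stream "task0" three times and then "task1" three times selects the middle letter of the first third.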

Other research groups [2, 6] have also developed brain-actuated keyboards, allowing subjects to write a letter every two minutes and every minute, respectively. Elsewhere, a patient implanted with a special electrode (described in [4]) achieved a spelling rate of about three letters per minute using a combination of neural and EMG signals.


A BCI can also be used to control external devices (such as to open and close a hand orthosis) [6, 11]. One group of researchers [8] implanted microelectrodes in a monkey’s brain to record activity from its motor cortex neurons, decoding it into a signal the monkey uses to drive a cursor to desired screen targets. ABI continuously guides a mobile robot, closely mimicking the operation of a motorized wheelchair with built-in sensory capabilities. Users’ mental states are associated with high-level commands the robot executes autonomously. Moreover, users can issue high-level commands at any moment, including move forward (and, if in front of a doorway, cross it), stop, turn right, and turn left. Such options are possible because ABI’s operation is asynchronous and does not need to wait for external cues, unlike synchronous approaches.

The robot relies on reactive controllers to implement the high-level commands and to move safely (avoiding collisions) and smoothly. Onboard sensors are read constantly to determine which action should come next. The mapping from the user’s mental states to the robot’s high-level commands is not simply one-to-one; to achieve flexible control, the mental states represent just one of the inputs to a finite-state automaton over six states, or high-level commands. Transitions between commands are determined by the three mental states, five perceptual states of the environment (as described by the robot’s sensory readings), and memory variables.
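
One illustrative fragment of such an automaton, in Python; the command names, perceptual labels, and transition rules below are invented for the sketch and are not the published controller:

```python
# Six high-level commands (names assumed); the article's automaton also
# uses memory variables, omitted here for brevity.
COMMANDS = ["FORWARD", "STOP", "TURN_LEFT", "TURN_RIGHT",
            "FOLLOW_WALL", "CROSS_DOORWAY"]

def next_command(current, mental, percept):
    """One transition: mental state plus perceptual state -> command.

    mental: "task0" | "task1" | "task2" | "unknown" (rejected sample)
    percept: simplified perceptual state, e.g. "open", "doorway",
             "wall_left", "wall_right" (the article uses five states).
    """
    if mental == "task0":                 # e.g., 'relax' mapped to halt
        return "STOP"
    if mental == "task1":                 # context-dependent 'go'
        return "CROSS_DOORWAY" if percept == "doorway" else "FORWARD"
    if mental == "task2":                 # turn, away from nearby walls
        return "TURN_LEFT" if percept == "wall_right" else "TURN_RIGHT"
    return current                        # no confident decision: continue
```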

Subjects have mentally driven the robot along nontrivial trajectories in an office environment, visiting up to four rooms in a desired order. Once control of a complex device such as a robot has been demonstrated, operating smart-house appliances (such as lights, TVs, and doors) is comparatively trivial.

Finally, computer games can also be controlled through thought alone (see Figure 3). The example here is Pacman, but other, perhaps more educational, software could be used. Two mental tasks are enough to direct the Pacman character to turn left or right, changing direction whenever one of the tasks is recognized twice in a row. In the absence of commands, the character moves forward until it reaches a wall, where it stops and waits for further instructions.
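
The twice-in-a-row rule is essentially a two-command debounce; a toy Python sketch, with invented labels:

```python
class PacmanControl:
    """Issue a turn only when the same task is recognized twice in a row."""

    def __init__(self):
        self.last, self.streak = None, 0

    def step(self, decision):
        """decision: "left", "right", or "unknown" (rejected sample)."""
        self.streak = self.streak + 1 if decision == self.last else 1
        self.last = decision
        if decision in ("left", "right") and self.streak >= 2:
            self.streak = 0
            return "turn_" + decision
        return "forward"      # keep going until a wall; then the game waits
```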

The three brain-operated ABI applications—virtual keyboard, robot control, and game interface—have been demonstrated publicly at a number of workshops and IT exhibitions, as well as on European TV. During one live demonstration of the virtual keyboard, a subject wrote words and sentences suggested by the public; several members of the public have tried ABI themselves and achieved good performance after only two hours of training. These experiences confirm ABI’s adaptivity and demonstrate good performance in uncontrolled conditions, including electromagnetic fields, ambient noise, people moving and talking nearby, and the user’s own considerable stress.

The London subject mentioned earlier volunteered to validate ABI at his home in 2000. After two hours of training, he was able to write with the virtual keyboard. “This is the first technology I have tried, including voice recognition,” he told the BBC, “that has made me feel independent.” (As a researcher, I must qualify this statement: extensive validation studies are necessary before making any definitive claim.)


Prospects

BCIs enable people to communicate and control appliances using their own brain activity; subjects must therefore be conscious of their thoughts and concentrate on the mental expression of the commands required to carry out desired tasks. The immediate application today is helping physically handicapped people increase their independence and participate more fully in the information society. The aforementioned volunteer said, “I have been waiting for this for years. I could think of 40 to 50 people off the top of my head who would benefit from it straight away.” The technology might also make possible new kinds of interaction paradigms for able-bodied people, at least in certain domains. Beyond such real-world applications as controlling machines when manual operation is problematic, detecting mental fatigue to prompt the system to increase its level of automatic control, and detecting mental states to augment the richness of virtual interaction, these interfaces might help the human brain develop new skills while making computer systems complement their users, instead of requiring passive conformance to the technology.

BCI technology is in its infancy; its bit rate is still far lower than that of other interaction modalities, whether speech or body movement (such as eye tracking and hand gestures). But recent experiments involving monkeys with electrodes implanted in their brains support the feasibility of real-time control of complex devices (such as computer cursors and prosthetic limbs) directly through brain activity [8]; the monkeys quickly learned to use their own neural activity to control the cursor. However, given the invasive nature of the approach, the number of human users might be limited to only the most severely disabled. Thus, to reach a wider population, the research challenge is to achieve similar results with noninvasive technologies.

Portable high-resolution EEG systems (possibly in combination with optical devices) might help produce detailed information on the activity of specific cortical areas. It would then be crucial for this noninvasive approach to add real-time algorithms to transform scalp potentials into brain activity maps and select relevant areas of interest for recognition tasks. The neural classifier embedded in the BCI would work on these brain maps instead of on EEG features.

Another key concern for the deployment of BCIs is how to adapt their embedded classifiers on the fly while a user operates a brain-actuated application. As they gain experience, subjects develop new abilities and alter their brain activity patterns. Moreover, spontaneous brain signals change naturally over time. Such continuous adaptation should be possible at any time, even when the subject’s intention is not immediately known. To address this issue, we might employ reinforcement learning techniques [9], especially when the user controls a robotic device, a task for which this machine-learning technique is particularly effective.

Independent of whether brain activity is used exclusively or only as part of a multimodal interface along with other body signals (such as speech, hand gestures, and heart rate), determining the user’s intentions from brain signals will hopefully lead to more direct, natural, and personalized human-computer interaction.


Figures

UF1 Figure. Brain vessels acquired in-vivo using Magnetic Resonance Angiography and visualized using a modified Maximum Intensity Projection methodology; note blockage lower left. (Georgios Sakas, Fraunhofer Institute for Computer Graphics, Darmstadt, Germany)

F1 Figure 1. Portable EEG system. Wearing a cap with integrated electrodes (white dots) placed according to the standard International 10-20 system, the user receives feedback via the three buttons on the computer screen, each associated with a desired mental task. All signals collected through the eight fronto-central-parietal electrodes are recorded with respect to a linked-ear reference (average potentials measured in both ear lobes).

F2 Figure 2. Virtual keyboard during the writing of a message. Beginning in the top-left panel, the keyboard is divided into three segments, each associated with a different mental task and using the same colors as during the training sequence. The neural classifier’s recognition of the same mental task three times in a row selects the corresponding segment of the keyboard (top center); the green area is shadowed for 3.5 seconds to allow the user to undo the selection. This segment is divided again (top right). A selected block is split into three again to offer a choice among three letters (bottom left). After the user selects the letter in red (h), writing it into the message, the whole process starts over (bottom center). The final decision is the last letter in the message (bottom right).

F3 Figure 3. A user interacting with a computer game (Pacman) using only two commands: turn character left and turn character right. Otherwise, the character moves forward until it reaches a wall, where it stops.

References

    1. Anderson, C. Effects of variations in neural network topology and output averaging on the discrimination of mental tasks from spontaneous EEG. J. Intell. Syst. 7 (1997), 165–190.

    2. Birbaumer, N., Ghanayim, N., Hinterberger, T., Iversen, I., Kotchoubey, B., Kübler, A., Perelmouter, J., Taub, E., and Flor, H. A spelling device for the paralysed. Nature 398 (1999), 297–298.

    3. Keirn, Z. and Aunon, J. A new mode of communication between man and his surroundings. IEEE Trans. Biomed. Engineer. 37 (1990), 1209–1214.

    4. Kennedy, P., Bakay, R., Moore, M., Adams, K., and Goldwaithe, J. Direct control of a computer from the human central nervous system. IEEE Trans. Rehabil. Engineer. 8 (2000), 198–202.

    5. Millán, J. del R., Mouriño, J., Franzé, M., Cincotti, F., Varsta, M., Heikkonen, J., and Babiloni, F. A local neural classifier for the recognition of EEG patterns associated to mental tasks. IEEE Trans. Neural Nets. 13 (2002), 678–686.

    6. Pfurtscheller, G. and Neuper, C. Motor imagery and direct brain-computer communication. Proceed. IEEE 89 (2001), 1123–1134.

    7. Roberts, S. and Penny, W. Real-time brain-computer interfacing: A preliminary study using Bayesian learning. Med. Biolog. Engineer. Comput. 38 (2000), 56–61.

    8. Serruya, M., Hatsopoulos, N., Paninski, L., Fellows, M., and Donoghue, J. Instant neural control of a movement signal. Nature 416 (2002), 141–142.

    9. Sutton, R. and Barto, A. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.

    10. Vidal, J. Real-time detection of brain events in EEG. Proceed. IEEE 65 (1977), 633–664.

    11. Wolpaw, J., et al. Brain-computer interface technology: A review of the first international meeting. IEEE Trans. Rehabil. Engineer. 8 (2000), 164–173.

    12. Wolpaw, J. and McFarland, D. Multichannel EEG-based brain-computer communication. Electroencephalog. Clinic. Neurophys. 90 (1994), 444–449.

Footnotes

    This work has been supported in part by the ESPRIT Programme of the European Commission (LTR project number 28193-ABI). The following people have contributed to the ABI project: J. Mouriño, M. Franzé, and S. Chiappa of the Joint Research Centre (Ispra, Italy); F. Babiloni, F. Cincotti, and M. Marciani of the Fondazione Santa Lucia (Rome, Italy); T. Nykopp, M. Varsta, J. Heikkonen, and K. Kaski of the Helsinki University of Technology; F. Renkens and A. Hauser of the Swiss Federal Institute of Technology (Lausanne); and F. Topani of Fase Sistemi (Rome, Italy).
