News: Architecture and Hardware

Neuromorphic Chips Take Shape

Chips designed specifically to model the neurons and synapses in the human brain are poised to change computing in profound ways.
Figure. Brain as CPU (illustration).

The ability of the human brain to process massive amounts of information while consuming minimal energy has long fascinated scientists. When there is a need, the brain dials up computation, but then it rapidly reverts to a baseline state. Within the realm of silicon-based computing, such efficiencies have never been possible. Processing large volumes of data requires massive amounts of electrical energy. Moreover, when artificial intelligence (AI) and its cousins deep learning and machine learning enter the picture, the problem grows exponentially worse.

Emerging neuromorphic chip designs may change all of this. The concept of a brain-like computing architecture, conceived in the late 1980s by California Institute of Technology professor Carver Mead, is suddenly taking shape. Neuromorphic frameworks incorporate radically different chip designs and algorithms to mimic the way the human brain works—while consuming only a fraction of the energy of today’s microprocessors. The computing model takes direct aim at the inefficiencies of existing computing frameworks—namely the von Neumann bottleneck—which forces a processor to remain idle while it waits for data to move to and from memory and other components. This causes slow-downs and limits more advanced uses.

Figure. Intel combines 64 of its Loihi “brain-on-a-chip” neuromorphic chips to form a “Pohoiki Beach” neuromorphic system featuring eight million artificial neurons.

“Neuromorphic chips introduce a level of parallelism that doesn’t exist in today’s hardware, including GPUs and most AI accelerators,” says Chris Eliasmith, a professor in the departments of Systems Design Engineering and Philosophy at the University of Waterloo in Ontario, Canada. Although today’s deep learning systems rely on software to run basic neuromorphic models on conventional field-programmable gate arrays (FPGAs), central processing units (CPUs), and graphics processing units (GPUs), chips specifically designed to accomplish these tasks could revolutionize computing. Neuromorphic chips are packed with artificial neurons and artificial synapses that mimic the activity spikes that occur within the human brain—and they handle all this processing on the chip. This results in smarter, far more energy-efficient computing systems.

The impact of commercial neuromorphic computing could be enormous. The technology has repercussions across a wide swath of fields, including image and speech recognition, robotics and autonomous vehicles, sensors running in the Internet of Things (IoT), medical devices, and even artificial body parts.

As Adam Stieg, associate director of the California NanoSystems Institute at the University of California at Los Angeles (UCLA) puts it: “The ability to perform computation and learning on the device itself, combined with ultra-low energy consumption, could dramatically change the landscape of modern computing technology.”


Modeling the Brain

The human brain is a remarkable product of evolution. It has a baseline energy footprint of about 20 watts, while processing complex tasks in milliseconds. While today’s CPUs and GPUs can dramatically outperform the human brain for serial processing tasks, the process of moving data from memory to a processor and back not only creates latency, it expends enormous amounts of energy. A typical desktop computer burns through approximately 200 watts, while some supercomputers pull as much as 20 megawatts.

The value of neuromorphic systems is that they perform on-chip processing asynchronously. Just as the human brain uses the specific neurons and synapses it needs to perform any given task at maximum efficiency, these chips use event-driven processing models to address complex computing problems. The resulting spiking neural network—so called because it encodes data in a temporal domain known as a “spike train”—differs from the deep learning networks that run on GPUs. Existing deep learning methods rely on a more basic model of the brain for handling tasks, and they must be trained differently than neuromorphic chips.
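
The “spike train” idea can be made concrete with a few lines of code. Below is a minimal sketch of a leaky integrate-and-fire neuron in Python, the standard textbook model behind most spiking networks; all constants are arbitrary assumptions rather than parameters of any particular chip. The neuron integrates its input, emits a spike only when a threshold is crossed, then resets, so its output is a sparse sequence of spike times rather than a dense activation value.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, purely illustrative.
# All constants are arbitrary assumptions, not values from any real chip.
DT = 1e-3          # simulation time step (s)
TAU = 20e-3        # membrane time constant (s)
V_THRESH = 1.0     # firing threshold
V_RESET = 0.0      # reset potential after a spike

def lif_spike_train(input_current, dt=DT, tau=TAU):
    """Return the spike times (the 'spike train') produced by an input current trace."""
    v = V_RESET
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest and is driven by input.
        v += (dt / tau) * (-v + i_in)
        if v >= V_THRESH:              # an event occurs only when the threshold is crossed
            spikes.append(step * dt)
            v = V_RESET
    return spikes

# A constant drive yields a regular spike train; zero drive yields no events at all,
# which is where the energy savings of event-driven hardware come from.
t = np.arange(0.0, 0.5, DT)
drive = np.where(t < 0.25, 3.0, 0.0)   # input present only during the first half
print(lif_spike_train(drive))
```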

“If we look at biology, we see incredible energy efficiency. This is something we’re hoping to emulate in artificial systems,” says Garrick Orchard, a researcher in Intel’s Neuromorphic Computing Lab. The artificial neurons and synapses in neuromorphic chips can be stacked into layers and inserted in multiple cores. “The idea is that by taking inspiration from biology and by trying to better understand what principles are crucial for low-power computation, we can mimic these characteristics in silicon and push the boundaries of what’s possible.”

However, it isn’t just slashing energy consumption that’s appealing. Today’s CPUs and GPUs—especially when they are used in autonomous vehicles and other independent systems—typically rely on external systems, primarily clouds, to handle some of the processing. The resulting latency is a problem for on-board systems that must make split-second decisions. “You can’t collect a frame, pass it through to a deep neural net, and wait for the response when you’re traveling down a freeway at 70 miles an hour,” explains Abu Sebastian, Principal Research Staff Member at IBM Zurich. “Everything has to happen instantaneously, and that requires fast on-board processing.”

So, while the need for clouds and edge networks won’t disappear with neuromorphic chips, autonomous systems will be able to handle additional critical computing tasks on board. In areas such as image processing, this could produce dramatic improvements. The latency gain of a spike-based neural network is a fundamental benefit, one that goes beyond what today’s GPU systems offer. “Due to the asynchronous data-driven mode of computing, the salient information propagates in a fast manner through multiple layers of the network. The spikes begin to propagate immediately to higher layers once the lower layer provides sufficient activity. This is very different from conventional deep learning, where all layers have to be fully evaluated before the final output is obtained,” Sebastian says.
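
Sebastian’s point about spikes reaching higher layers as soon as lower layers are sufficiently active can be sketched in plain Python. The toy network below uses two integrate-and-fire layers with made-up sizes, weights, and thresholds (all assumptions, for illustration only): each input spike is handled as an event the moment it arrives, and the output layer typically fires well before the 100-event “frame” has finished, whereas a frame-based pipeline would evaluate every layer only after the whole frame had been collected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tiny integrate-and-fire layers with random positive weights.
# Sizes, weights, and thresholds are arbitrary assumptions for illustration only.
W1 = rng.uniform(0.0, 0.3, size=(16, 8))   # input spikes  -> hidden layer
W2 = rng.uniform(0.0, 0.3, size=(8, 4))    # hidden spikes -> output layer
v_hidden = np.zeros(8)
v_out = np.zeros(4)
THRESH = 1.0

def deliver(weights, potentials, source):
    """Deliver one spike event: add the source's outgoing weights, return units that fire."""
    potentials += weights[source]
    fired = np.flatnonzero(potentials >= THRESH)
    potentials[fired] = 0.0                 # reset the units that fired
    return fired

# A stream of (event_index, input_neuron) spike events. In event-driven processing,
# each event is handled the moment it arrives instead of waiting for a full frame.
events = [(i, int(rng.integers(16))) for i in range(100)]

def run(events):
    for i, src in events:
        for h in deliver(W1, v_hidden, src):    # hidden spikes propagate immediately...
            for o in deliver(W2, v_out, h):     # ...and can already trigger output spikes
                return i + 1, o                 # the 'answer' arrives before the frame ends
    return None, None

n_events, neuron = run(events)
print(f"first output spike from neuron {neuron} after {n_events} of {len(events)} input events")
```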


Neuromorphic chips also have the ability to learn continuously. “Because of their synaptic plasticity and the way they learn, they can continue to adapt and evolve,” says Sebastian. In practical terms, for example, a robotic arm could learn to recognize different objects and pick them up and move them in a nuanced way. If a heavier grip is needed, the system would adjust accordingly, and if a lighter touch is required, it would also adapt. New items wouldn’t throw a neuromorphic system off-kilter; it would simply “evolve” and “at a much faster rate than a CPU could,” Orchard says.
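
The synaptic plasticity Sebastian describes is commonly realized as a local learning rule such as spike-timing-dependent plasticity (STDP). The sketch below is a generic, textbook-style STDP update in Python, not the learning rule of any specific chip: a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic one and weakened when the order is reversed, so the weight keeps adapting as new spike timings arrive.

```python
import numpy as np

# Textbook-style STDP weight update, for illustration only; real neuromorphic
# chips implement plasticity in hardware with their own rules and parameters.
A_PLUS = 0.01      # strengthening amplitude (pre fires before post)
A_MINUS = 0.012    # weakening amplitude (post fires before pre)
TAU = 20e-3        # time constant of the learning window (s)

def stdp_delta(t_pre, t_post):
    """Weight change for one pre/post spike pair, based only on their relative timing."""
    dt = t_post - t_pre
    if dt > 0:       # pre before post: causal pairing, potentiate the synapse
        return A_PLUS * np.exp(-dt / TAU)
    else:            # post before pre: anti-causal pairing, depress the synapse
        return -A_MINUS * np.exp(dt / TAU)

# Continual, on-line learning: the weight is nudged after every spike pair,
# so the synapse keeps adapting as new data (new spike timings) arrives.
w = 0.5
pairs = [(0.010, 0.015), (0.050, 0.052), (0.120, 0.110)]  # (t_pre, t_post) in seconds
for t_pre, t_post in pairs:
    w = float(np.clip(w + stdp_delta(t_pre, t_post), 0.0, 1.0))
    print(f"pre={t_pre:.3f}s post={t_post:.3f}s -> w={w:.4f}")
```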

By combining improved energy efficiency, reduced latency, and improved on-board learning, neuromorphic chips could push image recognition and speech processing to new levels of speed, efficiency, and accuracy. The technology could seed speech processing on virtually every type of device and produce new types of video cameras that operate at lower power and detect patterns and events more efficiently, Eliasmith says. Still another possible gain could take place in datacenters, which consume vast amounts of power and produce enormous carbon footprints.

The sum of these gains could produce revolutionary breakthroughs. Researchers have begun to explore the possibility of developing prosthetics that would give amputees the sensation of touch, brain-implanted chips that could aid stroke or Alzheimer’s victims, self-healing electronic skin, and even vision sensors—essentially retinal implants—that could restore vision to the blind. Scientists also are exploring probabilistic neuromorphic systems that could predict the odds of an earthquake or recession with a high level of accuracy.

Says Eliasmith, “Neuromorphic designs allow scaling that hasn’t been possible in the past. We’re able to go far beyond what today’s systems can do.”


Getting Smarter

Neuromorphic chips won’t replace today’s CPUs and GPUs; they are more likely to be embedded next to them as separate cores. This would expand the way we use existing digital technology—particularly on the edge of the network—and provide an accelerator for niche tasks. “Today’s computers are very good at what they do. They will continue to outperform neuromorphic computing systems for conventional processing tasks. The technologies are complementary and so they will coexist,” says G. Dan Hutcheson, CEO of VLSI Research, an independent market analysis and consulting firm that tracks the semiconductor industry.

Research and development efforts are beginning to produce tangible results. For instance, Intel Labs has developed Loihi, a research chip that uses a spiking neural network architecture. The processor contains 128 neuromorphic cores, three Lakemont (Intel Quark) CPU cores, and an off-chip communications network. The chip is designed with a high level of configurability, along with cores that can be optimized for specific tasks. This makes it appealing for specialized devices. More than 80 members of Intel’s Neuromorphic Research Community (including universities, government labs, neuromorphic startup companies, and Fortune 500 firms) are now experimenting with Loihi.

IBM has developed a neuromorphic chip named TrueNorth. It has 4,096 cores, each with 256 neurons that, in turn, contain 256 synapses each. The microprocessor has 1/10,000th of the power density of a conventional von Neumann processor. It achieves this efficiency with a spiking neural network. Activity in the synthetic neurons occurs only when and where it is needed. This makes the chip particularly suited to high-speed and low-energy image processing and classification tasks. Although TrueNorth is an experimental chip, IBM is continuing to actively research neuromorphic technology, including approaches that focus on learning in the chip, Sebastian says.
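
Multiplying out the per-core figures in this paragraph gives the chip-level totals, a quick arithmetic check using only the numbers quoted above:

```python
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # 1,048,576: roughly one million neurons
synapses = neurons * synapses_per_neuron  # 268,435,456, i.e. 256 * 2**20 synapses
print(f"{neurons:,} neurons, {synapses:,} synapses")
```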

More than 50 other AI startups around the world are actively developing neuromorphic chips and technology for a wide array of purposes, Hutcheson says. While all of this is taking place, others are developing software and systems to optimize chip performance. For instance, Eliasmith, who also heads a startup company called Applied Brain Research, develops algorithms and software used to program neuromorphic chips. This includes algorithms used for deep spiking networks, spiking and non-spiking adaptive controls, recurrent neural networks, on-chip learning, and spiking and non-spiking hierarchical reinforcement learning.


Meanwhile, in research labs, scientists are experimenting further with the technology. For example, at UCLA, Stieg and chemistry professor James Gimzewski have developed neuromorphic systems that can recognize rewards—similar to a rat in a maze—and artificial synapses that can “forget” by using varying input waves. Borrowing methods from human psychology, “We’re building circuits that can adapt more efficiently by forgetting what isn’t important,” Gimzewski explains. The pair also have developed nano-wire technology that mimics millions of connections in the brain. “This introduces a level of impermanence that allows the devices to be far more flexible,” says Stieg.


A New Model

For now, neuromorphic technology remains in its infancy. There are no commercial products and no killer applications. Yet the field is advancing rapidly and radically. Commercially available chips should begin appearing within the next year or two, and the technology will likely take off in earnest within the next three to five years. Neuromorphic chips are likely to have a significant impact on edge devices and IoT systems that “must integrate dynamic and changing information that doesn’t necessarily run on a single algorithm—all while conserving energy,” Stieg says.

“The world is not linear. It’s not deterministic. It doesn’t give definitive answers,” he concludes. “Conventional von Neumann-based computing systems deal mostly with high-speed, predictable, deterministic processes. They perform these tasks well, but struggle when things become more complex. Neuromorphic computing aims to open up an entirely new and unexplored area of computing. It could allow us to do things with computers that we couldn’t have imagined in the past.”

Further Reading

Demis, E.C., Aguilera, R., Sillin, H.O., Scharnhorst, K., Sandouk, E.J., Aono, M., Stieg, A.Z., and Gimzewski, J.K.
Atomic Switch Networks — Nanoarchitectonic Design of a Complex System for Natural Computing. Nanotechnology, Volume 26, Number 20, April 27, 2015. https://iopscience.iop.org/article/10.1088/0957-4484/26/20/204003/meta

Davies, M., Srinivasa, N., Lin, T., Chinya, G., Cao, Y., Choday, S., Dimou, G., Joshi, P., Imam, N., Jain, S., Liao, Y., Lin, C., Lines, A., Liu, R., Mathaikutty, D., McCoy, S., Paul, A., Tse, J., Venkataramanan, G., Weng, Y., Wild, A., Yang, Y., and Wang, H.
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro, Volume 38, Issue 1, January/February 2018, pp. 82–99. https://ieeexplore.ieee.org/abstract/document/8259423

DeBole, M.V., Taba, B., Amir, A., Akopyan, F., Andreopoulos, A., Risk, W., Kusnitz, J., Otero, C.O., Nayak, T.K., Appuswamy, R., Carlson, P.J., Cassidy, A.S., Datta, P., Esser, S.K., Garreau, G.J., Holland, K.L., Lekuch, S., Mastro, M., McKinstry, J., di Nolfo, C., Paulovicks, B., Sawada, J., Schleupen, K., Shaw, B.G., Klamo, J.L., Flickner, M.D., Arthur, J.V., and Modha, D.S.
TrueNorth: Accelerating From Zero to 64 Million Neurons in 10 Years. Computer, Volume 52, Issue 5, May 2019, pp. 20–29. https://ieeexplore.ieee.org/abstract/document/8713821

Blouw, P., Choo, X., Hunsberger, E., and Eliasmith, C.
Benchmarking Keyword Spotting Efficiency on Neuromorphic Hardware. NICE ’19: Proceedings of the 7th Annual Neuro-inspired Computational Elements Workshop, March 2019, Article No. 1, pp. 1–8. https://doi.org/10.1145/3320288.3320304
