News
Architecture and Hardware

Brain-Emulating Chips Get Smarter, Smaller, More Efficient

An artificial synapse developed by the U.S. National Institute of Standards and Technology (NIST): a superconducting niobium electrode and a manganese-silicon matrix mimic the operation of a switch between two brain cells.

Neuromorphic microprocessors, sometimes referred to as neurosynaptic processors or electronic brain (e-brain) chips, are gaining momentum, as technology organizations ranging from IBM to the U.S. National Science Foundation (NSF) and National Institute of Standards and Technology (NIST) to Intel to the Massachusetts Institute of Technology (MIT) are pushing developments in the category.

Neuromorphic e-brain chips offer a better way to manage "many cores" than conventional computers do, namely by using artificial synapses to connect artificial neurons, mirroring the way the human brain operates. E-brain chips combine processing and memory operations into a single step; running the latest artificial intelligence (AI) algorithms, such as deep learning, on these chips has emboldened some pundits to predict they will outperform the fastest megawatt, gigahertz-class digital supercomputers while consuming about the same power as a human brain (20 watts) and running at remarkably slow speeds (as slow as 10 Hz to 50 Hz).
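The gap those pundits point to can be put in rough numbers (a back-of-the-envelope sketch using only the round figures above; the "megawatt" and "gigahertz" values are the article's approximations, not measurements):

```python
# Rough power and clock-rate comparison implied by the figures above.
brain_watts = 20           # human brain power cited above
supercomputer_watts = 1e6  # a "megawatt"-class supercomputer
brain_hz = 50              # upper end of the 10 Hz-50 Hz range cited above
supercomputer_hz = 1e9     # a "gigahertz"-class clock

print(f"Power ratio: ~{supercomputer_watts / brain_watts:,.0f}x")  # ~50,000x
print(f"Clock ratio: ~{supercomputer_hz / brain_hz:,.0f}x")        # ~20,000,000x
# The point: the brain does its work with a tiny fraction of the power
# and an extraordinarily slow clock.
```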

The resulting boost to computing machines could springboard the technology into a whole new era of learning-enabled devices, according to analyst Rian Whitton of market research firm ABI Research, who sees the resulting applications including smarter home appliances, smarter drones, smarter industrial robots, and better supercomputers.

Neuromorphic chips will grow quickly, Whitton says. "We expect to see a CAGR [compound annual growth rate] of 135% from 2016 to 2022. Mobile devices, PCs, wearables, and smart home appliances will account for most of this, but you will also find on-device AI being developed for UAVs [unmanned aerial vehicles], robotics, and industrial automation systems. Companies like Neurala [for its algorithms] and Nvidia [for its many-core hardware] have already partnered with robotics and unmanned systems companies to apply these neural networks to their platforms."
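To put that forecast in perspective (a back-of-the-envelope sketch; only the growth rate and date range come from Whitton's figures, and the 2016 base size is left symbolic):

```python
# Implied growth from a 135% compound annual growth rate (CAGR), 2016-2022.
cagr = 1.35          # 135% annual growth, per the forecast above
years = 2022 - 2016  # six compounding periods

multiplier = (1 + cagr) ** years
print(f"Implied growth over {years} years: ~{multiplier:.0f}x the 2016 base")
# ~168x -- whatever the 2016 base was, a market compounding at that rate
# would be roughly 170 times larger by 2022.
```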

The working mechanism of neuromorphic e-brains is an artificial neural network modeled on the human brain, consisting of hardware neurons and hardware synapses in ratios of about 1,000 or more synapses per neuron. By putting them "on device," as Whitton puts it, the technology should be able to grow from small industrial applications such as self-recognizing analytic security cameras, to smarter smartphones, to larger systems for financial, medical, and other analyses.
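The scale implied by that ratio is easy to sketch (a toy illustration only; the neuron count and random wiring below are assumptions for demonstration, not a description of any particular chip):

```python
import random

# Toy wiring for a hardware-style network: each neuron fans out to
# roughly 1,000 synapses, the ratio cited above. The neuron count is
# an arbitrary illustrative choice.
NUM_NEURONS = 1000
SYNAPSES_PER_NEURON = 1000

# synapses[i] lists the target neurons that neuron i connects to.
synapses = [
    [random.randrange(NUM_NEURONS) for _ in range(SYNAPSES_PER_NEURON)]
    for _ in range(NUM_NEURONS)
]

total = sum(len(targets) for targets in synapses)
print(f"{NUM_NEURONS:,} neurons wired through {total:,} synapses "
      f"({total // NUM_NEURONS} per neuron)")
```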

"The use-cases are identified and demonstrable. I do think there is a little too much overhyping of the AI phenomenon in general; one too many clichés about revolutionary potential when the gains may be somewhat more incremental," said Whitton. "In the future, this will be somewhat mitigated by the releasing of development tools that allow businesses to tailor their own solutions."

Today, most brain-like neural networks run as simulations on conventional computers, but IBM, Intel, Google, HRL Laboratories, the Moscow Institute of Physics and Technology, and numerous academic institutions and labs worldwide are working on the hardware components to reap the low-power/slow-speed benefits of neuromorphic hardware.

IBM has delivered a working neuromorphic system to Lawrence Livermore National Laboratory, where it helps shepherd the U.S.'s aging nuclear arsenal (by performing explosion emulations that are beyond the reach of supercomputer simulations). Based on its TrueNorth chip set, IBM's innovation was to use conventional digital circuitry to emulate, rather than simulate in software, the way a human brain's hardware learns. The primary mechanism sends low-frequency voltage spikes from neuron to neuron (digital brain cells) via a hardware interconnection scheme of digital synapses. The synapses accumulate pulses until a threshold is passed, at which point another voltage spike is sent down the line to the next neuron in the network and the threshold is lowered. Each cycle in which a synapse does not receive a spike raises its threshold, allowing e-brains both to learn and to forget.
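That accumulate-fire-and-adapt cycle can be modeled in a few lines of software (a toy sketch of the mechanism as described above; the class and constants below are illustrative, not IBM's TrueNorth parameters or code):

```python
class SpikingUnit:
    """Toy model of the accumulate/fire/adapt behavior described above."""

    def __init__(self, threshold=5.0, learn_step=0.5, forget_step=0.1):
        self.accumulated = 0.0       # pulses accumulated so far
        self.threshold = threshold   # firing threshold (illustrative value)
        self.learn_step = learn_step
        self.forget_step = forget_step

    def cycle(self, spike_in: bool) -> bool:
        """Advance one time step; return True if a spike is sent onward."""
        if spike_in:
            self.accumulated += 1.0
        else:
            # Idle cycle: raise the threshold, i.e., slowly forget.
            self.threshold += self.forget_step

        if self.accumulated >= self.threshold:
            self.accumulated = 0.0
            # Firing lowers the threshold, i.e., reinforce (learn) this pathway.
            self.threshold = max(1.0, self.threshold - self.learn_step)
            return True
        return False


unit = SpikingUnit()
pattern = [True, True, True, True, True, False, False, True, True, True]
for t, spike in enumerate(pattern):
    fired = unit.cycle(spike)
    print(f"t={t} in={int(spike)} fired={int(fired)} threshold={unit.threshold:.1f}")
```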

"Building on the foundation provided by TrueNorth, IBM is continuing to pursue breakthrough avenues—involving both architecture and technology innovation—to approach the capabilities of the brain while minimizing time, space, and energy," said Dharmendra Modha, IBM Fellow, chief scientist and brain-inspired computing leader at IBM Research–Almaden, and recipient of the ACM Gordon Bell Prize for 2009.

The U.S. Air Force is also using IBM's TrueNorth chip set to learn the tactics of enemy pilots in real time during combat. Samsung uses TrueNorth chips for its smart 2,000 frame-per-second industrial Dynamic Vision Sensors.

Other organizations also are pursuing strategic hardware that emulates how the human brain works:

  • Chip giant Intel last year announced an ultra-low-power self-learning chip set called Loihi, which the company said would be shared with leading university and research institutions with a focus on advancing AI during the first half of 2018.
  • Microsoft last year said the second generation of its HoloLens mixed-reality headset will include a custom AI coprocessor for implementing deep neural networks.
  • Amazon purchased chip-maker Annapurna Labs in 2015, reportedly to develop better e-brain chips that would make Alexa smarter; no results have been reported yet.
  • Google claims to be working on a second generation of its Tensor Processing Units (TPUs) that do not seek to emulate the brain's architecture, but only its functions (Google engineers believe emulating the brain's architecture is a waste of resources).

The next generation of e-brain chips will likely incorporate new types of synapse materials that can directly emulate synapses in biological brains. IBM, for instance, is working on phase-change materials to realize brain-like analog synapses, based on more than a decade of research on using such materials for memory applications.

In addition, NIST recently unveiled a completely new material and architecture for e-brain synapses that closely emulate those in real brains. The new material requires cooling to 2 kelvins (-456.07 degrees Fahrenheit). The architecture uses superconducting Josephson junctions made with niobium electrodes in a manganese-silicon matrix that can cycle 20 times faster than any computer today, in the hundreds of gigahertz, compared to the roughly 5-GHz limit of today's complementary metal-oxide-semiconductor (CMOS) chips. NIST's e-brains will consume much more power than human brains, but are expected to be able to perform human tasks at superhuman speeds.
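The speed claim is straightforward to restate in cycle times (simple arithmetic on the figures above; the 100-GHz value is the low end of the range quoted):

```python
# Clock periods implied by the figures above.
cmos_hz = 5e9          # ~5-GHz CMOS ceiling cited above
josephson_hz = 100e9   # low end of "hundreds of gigahertz"

print(f"CMOS cycle time:      {1e12 / cmos_hz:.0f} ps")       # 200 ps
print(f"Josephson cycle time: {1e12 / josephson_hz:.0f} ps")  # 10 ps
print(f"Speedup: ~{josephson_hz / cmos_hz:.0f}x")             # ~20x
```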

NIST's superconducting synapses mimic learning and forgetting, making them key elements for superhuman e-brains. They remember by magnetically aligning super-cooled nanoclusters of manganese in the silicon matrix between the two niobium electrodes, lowering the threshold at which the synapse will fire, as in real brains, and thereby enabling learning. Likewise, they forget when the magnetic alignment of the nanoclusters is disrupted, raising the threshold and decreasing the likelihood of firing.

"At the device level, we order the nanoclusters with short electrical pulses in a small applied magnetic field," said NIST physicist Mike Schneider. "The field alone does not arrange the molecules in a more orderly fashion, but only does so in combination with the electrical pulse.  This makes the operation addressable to specific synapses in a larger circuit.  If we apply short electrical pulses with no additional magnetic field, we decrease the order of the magnetic clusters [enabling learning and forgetting, by] adjusting the synapse in either direction."

NIST admits that the use of superconducting technology requires extra hardware—namely, a super-low-temperature cooler—but argues that for large-scale installations, its speeds will more than make up for the expense.

Schneider acknowledges his group has a long way to go in scaling its 10-micron, 14-device prototype chip (of which only one synapse is active) to the types of systems that will outperform IBM's and Intel's, but insists it will be worth the wait. "We assume that our applications will be in large-scale systems where handling massive amounts of data is the main goal. Here, the speed will be worth the cost of supercooling," said Schneider. "If, however, there is a particular application that works better when processed slowly, we could do that too. Since we have a naturally spiking system, and the main energy dissipation only occurs during the spike, slowing down the input would increase its energy efficiency even more."

MIT also recently announced a fast one-dimensional material for emulating synapses. Instead of aligning atoms in a bulk material, MIT's method uses well-defined one-dimensional pathways through single-crystal silicon and germanium atoms. It works by growing germanium—whose lattice size is slightly larger than silicon—on top of silicon, creating byways down which silicon atoms can pass. The National Science Foundation (NSF) is supporting the work as a possible stepping-stone to smaller on-device e-brains.

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.
