Communications of the ACM

ACM TechNews

Chip Ramps Up Artificial Intelligence Systems' Performance

[Image: The new chip, fabricated by Princeton University researchers in collaboration with Analog Devices Inc. Credit: Frank Wojciechowski]

Princeton University researchers working with multinational semiconductor company Analog Devices have produced an accelerator chip that significantly enhances the performance and efficiency of neural networks.

The researchers say the technology could help further development of image recognition and other neural network applications, such as artificial intelligence (AI) systems in self-driving vehicles and robots.

The chip uses in-memory computing, which cuts the energy and time spent retrieving data by performing computations where the data is stored, rather than shuttling it elsewhere.
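The data-movement saving described above can be sketched with a toy model. This is an illustration of the general in-memory computing idea, not the Princeton/Analog Devices design: in a conventional accelerator, each weight is fetched from memory to a compute unit for every multiply-accumulate, whereas in-memory computing performs the multiply-accumulate where the weights reside, so only the result crosses the memory boundary.

```python
import numpy as np

def conventional_mac(weights, inputs):
    """Fetch each weight individually, modeling per-element data movement."""
    transfers = 0
    acc = 0.0
    for i in range(len(weights)):
        w = weights[i]          # one memory-to-compute transfer per weight
        transfers += 1
        acc += w * inputs[i]
    return acc, transfers

def in_memory_mac(weights, inputs):
    """Compute at the storage array; only the accumulated sum moves."""
    acc = float(np.dot(weights, inputs))  # done 'inside' the memory array
    transfers = 1
    return acc, transfers

w = np.array([0.5, -1.0, 2.0, 0.25])
x = np.array([1.0, 2.0, -1.0, 4.0])
a1, t1 = conventional_mac(w, x)
a2, t2 = in_memory_mac(w, x)
assert np.isclose(a1, a2)   # same arithmetic result: -2.5
print(t1, t2)               # 4 transfers vs. 1
```

Both paths compute the same dot product; the difference the chip exploits is in how many values must cross the memory boundary, which for large neural-network layers dominates energy cost.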

To address the signal-to-noise ratio that limits in-memory computation, the team used capacitor-based, rather than transistor-based, computing. The chip's capacitors are positioned atop its memory cells to save space and further reduce data-movement costs.
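A simple charge-sharing model suggests why capacitors help. This is an assumed textbook mechanism for capacitive analog compute, not the published circuit: if each capacitor C_i is charged to an input voltage V_i and all are then connected together, the shared voltage is the capacitance-weighted average of the inputs, so a weighted sum is computed by device geometry (capacitor ratios), which can be matched far more precisely than transistor characteristics.

```python
def charge_share(caps, volts):
    """Voltage after connecting capacitors charged to different voltages.

    Total charge is conserved: Q = sum(C_i * V_i), and the shared
    voltage is Q / sum(C_i), a capacitance-weighted average.
    """
    q = sum(c * v for c, v in zip(caps, volts))  # total stored charge
    return q / sum(caps)

# Weights encoded as capacitor ratios, inputs as voltages:
v = charge_share([1.0, 2.0, 1.0], [0.3, 0.6, 0.9])
print(v)  # 0.6
```

Because the output depends only on capacitance ratios and charge conservation, this style of computation sidesteps much of the transistor variability that degrades signal-to-noise in other analog in-memory schemes.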

The researchers aim to make the chip's architecture programmable and compatible with other hardware components, and then build out the software infrastructure so AI designers can create new apps that exploit the chip's performance.

From Princeton University


Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA
