IBM researchers have proposed reducing the numerical precision used to train deep learning models from the current industry standard of 16 bits to just four.
They said this could make training more than seven times faster while cutting its energy costs.
It also would allow smartphones and other small devices to run artificial intelligence models.
In four-bit training, the activations and weights in the neural network would be rescaled in every round of training to minimize the loss of precision.
To address the challenge of representing the wide range of intermediate values that arise during training, the researchers placed these numbers on a logarithmic scale.
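The two ideas can be sketched in a few lines of NumPy. This is an illustrative toy, not IBM's actual scheme: the function names, the per-tensor rescaling rule, and the base-2 logarithmic grid are all assumptions chosen to show how a 4-bit representation can cover both ordinary tensors and wide-ranging intermediate values.

```python
import numpy as np

def quantize_linear(x, bits=4):
    # Rescale the tensor so its largest magnitude maps onto the 4-bit
    # integer range, round, then map back. Re-deriving the scale each
    # training round keeps values from overflowing the narrow range.
    levels = 2 ** (bits - 1) - 1            # 7 positive levels for 4 bits
    scale = np.max(np.abs(x)) / levels
    if scale == 0:
        return x.copy()
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale

def quantize_log(x, bits=4):
    # Logarithmic quantization: spend the 16 available codes on orders
    # of magnitude rather than evenly spaced steps, which suits
    # intermediate values that span a very wide dynamic range.
    sign, mag = np.sign(x), np.abs(x)
    out = np.zeros_like(x)
    nonzero = mag > 0
    if nonzero.any():
        log_mag = np.log2(mag[nonzero])
        lo, hi = log_mag.min(), log_mag.max()
        levels = 2 ** bits - 1              # 15 steps -> 16 codes
        span = max(hi - lo, 1e-12)
        q = np.round((log_mag - lo) / span * levels)
        out[nonzero] = sign[nonzero] * 2.0 ** (lo + q / levels * span)
    return out
```

On the same input, the linear version keeps absolute error small near the largest values, while the logarithmic version keeps *relative* error roughly uniform across several orders of magnitude, which is the property that matters for tiny gradient-like values.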
They ran several simulations of four-bit training for deep learning models in computer vision, speech, and natural language processing, and observed only a limited loss of accuracy compared with 16-bit training.
Stanford University's Boris Murmann said, "This advancement opens the door for training in resource-constrained environments."
From: MIT Technology Review
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA