Neural-inspired computing models have captured our imagination since the very beginning of computer science; however, the successes of this approach were modest until 2012, when AlexNet, a "deep" neural net of eight layers, achieved a dramatic improvement on the image classification problem. One key to AlexNet's success was its use of the increased computational power offered by graphics processing units (GPUs), and it's natural to ask: Just how far can we push the efficient computation of neural nets?
Computing capability has advanced with Moore's Law over the last three decades, but integrated circuit design costs have grown nearly as fast. Thus, any discussion of novel circuit architectures must be tempered by a sober accounting of design costs. That said, a neural net accelerator has two big things going for it. First, it is a special-purpose accelerator. Since the end of single-thread performance scaling due to power density issues, integrated circuit architects have searched for clever ways to exploit the increasing transistor counts afforded by Moore's Law without increasing power dissipation. This has led to a resurgence of special-purpose accelerators, which can provide 10–100x better energy efficiency than general-purpose processors when accelerating their special functions, and which consume practically no power when not in use.