Moore's Law and Dennard scaling are waning. Yet the demand for computer systems with ever-increasing computational capabilities and power/energy-efficiency continues unabated, fueled by advances in big data and machine learning. The future of fields as disparate as data analytics, robotics, vision, natural language processing, and more, rests on the continued scaling of system performance per watt, even as traditional CMOS scaling ends.
The following paper proposes a surprising, novel, and creative approach to post-Moore's Law computing by rethinking the digital/analog boundary. Its central idea is to revisit data representation and show that it is a critical design choice that cuts across the hardware and software layers.
In particular, the authors develop the concept of race logic, whose key idea is to encode values as delays from some reference. Unlike pure analog approaches, race logic continues to encode data in binary form. However, unlike traditional digital logic, the time at which a signal transitions from zero to one encodes its value. In other words, the relative propagation times of signals, usually considered a design artifact that modern digital technologies must work around, become a design feature and are leveraged to perform computation. Because it computes with data races, race logic incurs few "bit flips" and needs fewer wires than conventional digital logic; the payoff is significantly better energy efficiency than conventional digital design.
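To make the temporal encoding concrete, here is a minimal Python sketch, under idealized assumptions (exact real-valued edge-arrival times, helper names of my own choosing, not the authors' implementation), of primitives commonly associated with race logic: a first-arrival (OR) gate computes MIN, a last-arrival (AND) gate computes MAX, a delay element adds a constant, and an inhibit gate suppresses a data edge that loses a race.

```python
# A minimal, idealized sketch of race logic's temporal encoding.
# Values are modeled as rising-edge arrival times (hypothetical helpers,
# not the authors' implementation).

INF = float("inf")               # an edge that never arrives (signal stays 0)

def first_arrival(*edges):
    # An OR gate's output rises with its earliest input: computes MIN.
    return min(edges)

def last_arrival(*edges):
    # An AND gate's output rises only after its latest input: computes MAX.
    return max(edges)

def delay(edge, d):
    # A delay element shifts an edge later in time: adds a constant.
    return edge + d

def inhibit(i, s):
    # If the inhibiting edge i wins the race, the data edge s never passes.
    return s if s < i else INF

# Encode the values 3 and 5 as rising-edge times after a shared reference.
a, b = 3, 5
assert first_arrival(a, b) == 3  # MIN(a, b)
assert last_arrival(a, b) == 5   # MAX(a, b)
assert delay(a, 2) == 5          # a + 2
assert inhibit(a, b) == INF      # a arrives first, so b is suppressed
```

Note that each wire makes at most one zero-to-one transition per computation, which is the source of the low "bit flip" count.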
A key question is the suitability of race logic for different classes of computation. Naturally, not all computations are amenable to these encodings, but those that are stand to benefit significantly. The paper shows that machine learning classification may be one such target. In particular, the authors show how race logic can be used to "reverse" and "flatten" decision trees, which are widely used and a promising candidate for explainable AI, and they architect a programmable race tree hardware accelerator for ensemble tree learning. In a tour de force of engineering, the authors validate their research hypotheses with energy, throughput, and area-utilization studies of an ASIC design of their accelerator, functional RTL implementations on an FPGA, SPICE model synthesis of the underlying race logic primitives, and a fully automated toolchain for scikit-learn. The upshot is a full-stack and unusually detailed study from software structures down to device configurations.
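The flattening idea can be previewed in ordinary Python. The sketch below is illustrative only, not the authors' toolchain: it unrolls a trained scikit-learn decision tree into independent root-to-leaf rules (the helpers flatten and classify are hypothetical), the flat form a race tree can then evaluate in parallel.

```python
# Illustrative sketch (not the authors' toolchain): flatten a trained
# scikit-learn decision tree into independent root-to-leaf rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = clf.tree_

def flatten(node=0, path=()):
    # Walk every root-to-leaf path, collecting (feature, threshold, go_left).
    if t.children_left[node] == -1:                   # leaf: emit one rule
        return [(path, int(t.value[node].argmax()))]
    f, thr = t.feature[node], t.threshold[node]
    return (flatten(t.children_left[node],  path + ((f, thr, True),)) +
            flatten(t.children_right[node], path + ((f, thr, False),)))

def classify(x, rules):
    # Each rule is a conjunction of threshold tests; exactly one matches.
    for tests, label in rules:
        if all((x[f] <= thr) == go_left for f, thr, go_left in tests):
            return label

rules = flatten()
assert classify(X[0], rules) == clf.predict(X[:1])[0]
```

Because the flattened rules are mutually exclusive, they can, roughly speaking, all race concurrently in hardware, with each threshold test resolved by which of two rising edges arrives first and the winning rule determining the class in a single temporal pass.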
This paper will be of wide interest to the computing community, as it hints at many tantalizing research questions worthy of scientific inquiry. Perhaps the most natural one is race logic's promise for machine learning. The need for ultra-energy-efficient machine learning in edge and IoT devices is already urgent. Dynamic vision sensors, time-based image sensors, time-to-first-spike and time-of-flight cameras, and address-event-representation-based sound sensors are just a few systems expected to drive sophisticated learning algorithms, and race logic is particularly well suited to reducing their energy needs. To fully realize these benefits, further research will be needed on automated design tools and flows that enable race logic at scale, as well as software development environments, domain-specific languages, compilers, and more.
Perhaps even closer to my heart are the more abstract principles on which race logic rests. At its core, race logic is inspired by several aspects of how neuroscientists believe the brain computes. These include, for example, the notion that time encodes computation; the concept of radial basis functions, where larger signals trigger neurons more rapidly; and the inclusion of race logic primitives inspired by inhibitory post-synaptic potentials in the neocortex. Computer scientists have long been fascinated by the idea of drawing lessons from biology and nature to build better abstractions and methods for computing, spurring research on neuromorphic systems, natural algorithms, the emergence of intelligence, and more. These endeavors often face the following question: To what degree is it useful for concepts from biology and nature to be replicated in systems and algorithms? Does, for example, the fact that computer systems rely on silicon and digital technologies, which differ from the elements and proteins that realize life, mean that more abstract principles from natural computing should be considered instead? And if so, which abstractions from nature are appropriate for mimicry in computer systems?
Race logic offers perspective on this debate by lifting underlying principles of computation in the brain and abstracting them so they may be deployed in silicon technologies. I believe this is what enables race logic to achieve efficiency across all three layers: the sensor, the learning algorithm, and the architecture. As the authors point out, achieving all three is a rarity and, I believe, a testament to the educational value of this paper.
I hope you enjoy reading about race logic as much as I have.