Classical computing using digital symbols—equivalent to a Turing Machine—is reaching its limits. It is undeniable that computing’s historic exponential performance increases have improved the human condition. Yet such increases are now largely a thing of the past, due in large part to the constraints of physics and to how today’s systems are constructed. Hardware device designers struggle to suppress the effects of nanometer-scale thermodynamic fluctuations, and the soaring cost of fabrication plants has eliminated all but a few companies as a source of future chips. Software developers’ ability to imagine and program effective computational abstractions and implementations is clearly challenged in complex domains such as economic systems, ecological systems, medicine, social systems, warfare, and autonomous vehicles. Machine learning techniques, such as deep neural networks, can help, but their capabilities are limited and they are implemented on top of digital hardware that faces the aforementioned challenges.
Yet, even as we encounter these limits, we recognize that living systems evolve energy-efficient, universal, self-healing, and complex computational capabilities that dramatically transcend our current technologies. Animals, plants, bacteria, and proteins solve problems by spontaneously finding energy-efficient configurations that enable them to thrive in complex, resource-constrained environments. For example, proteins fold naturally into a low-energy state in response to their environment.a In fact, all matter evolves toward low-energy configurations in accord with the Laws of Thermodynamics. For near-equilibrium systems (see the sidebar) these ideas are well known and have been used extensively in the analysis of computational efficiency and in machine learning techniques.
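For readers who want the near-equilibrium statement behind this intuition, the Boltzmann distribution (a textbook result, restated here for convenience rather than drawn from this Viewpoint) gives the probability of a configuration x with energy E(x) at temperature T, and it is the same distribution that energy-based machine learning methods sample from:

```latex
p(x) = \frac{e^{-E(x)/kT}}{Z}, \qquad Z = \sum_{x'} e^{-E(x')/kT}
```

Low-energy configurations are exponentially favored, which is why a protein interacting with its thermal environment overwhelmingly occupies folded, low-energy states.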
Since the time of Carnot and Babbage in the early 19th century, the fields of computation and thermodynamics have co-evolved, with seminal contributions from Maxwell, Boltzmann, Gibbs, von Neumann, Turing, Shannon, and many others. Lately, we see this trend continuing as our understanding of thermodynamics expands to include non-equilibrium systems. For example, a new generation of “fluctuation theorems” (for example, Jarzynski,7 Crooks3) suggests a path toward a computing technology grounded in an understanding of open, non-equilibrium systems. To continue technological progress—and acquire its corresponding social and economic benefits—we advocate a new, physically grounded computational paradigm centered on thermodynamics and an emerging understanding of how to use thermodynamics to solve problems, which we call “Thermodynamic Computing” or TC. Like quantum computers, TCs are distinguished by their ability to employ the underlying physics of the computing substrate to accomplish a task.
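For readers unfamiliar with these theorems, both can be stated compactly in their standard forms, where W is the work performed on the driven system, ΔF is the equilibrium free-energy difference between the initial and final states, β = 1/kT, and P_F and P_R are the work distributions under a driving protocol and its time reverse:

```latex
\underbrace{\big\langle e^{-\beta W} \big\rangle = e^{-\beta \Delta F}}_{\text{Jarzynski equality}}
\qquad\qquad
\underbrace{\frac{P_F(+W)}{P_R(-W)} = e^{\beta (W - \Delta F)}}_{\text{Crooks fluctuation theorem}}
```

Both results hold arbitrarily far from equilibrium, which is precisely what makes them candidates for grounding computation in strongly driven, fluctuating hardware.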
As illustrated in Figure 1, we envision a thermodynamic computing system (TCS) as a combination of a conventional computing system and novel TC hardware. The conventional computer is a “host” through which users can access the TC and define a problem for the TC to solve. The TC, on the other hand, is an open thermodynamic system directly connected to real-world input potentials (for example, voltages), which drive the adaptation of its internal organization via the transport of charge through it to relieve those potentials. Qualitatively, better or poorer organization results in smaller or larger internal dissipation and fluctuation, which drives the network to stabilize existing configurations or sample new ones. The TC itself may comprise, for example, a networkb of “cores” and connections that spontaneously and collectively adapt to achieve low energy network states/high charge transport efficiency in response to changing external potentials.
Figure 1. Conceptual schematic for a Thermodynamic Computing system.
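To make the qualitative picture in Figure 1 slightly more concrete, the following sketch simulates an abstract stand-in for such a network; the Ising-style energy model, parameter values, and dynamics are our own illustrative assumptions, not a design from this Viewpoint.

```python
# A deliberately minimal cartoon of the network in Figure 1 (illustrative only).
# "Cores" carry binary states, external potentials enter as bias fields on the
# two boundary cores, and couplings play the role of connections. Metropolis
# dynamics at fluctuation scale T stabilizes low-energy (well-organized)
# configurations while occasionally sampling new ones.
import math
import random

random.seed(0)

N = 16                                           # number of cores
J = {(i, i + 1): 1.0 for i in range(N - 1)}      # chain of core-to-core connections
h = {0: +2.0, N - 1: -2.0}                       # external "potentials" on the boundary
T = 0.4                                          # fluctuation scale (dimensionless kT)

s = [random.choice([-1, +1]) for _ in range(N)]  # random initial organization

def energy(states):
    e = -sum(w * states[i] * states[j] for (i, j), w in J.items())
    e -= sum(b * states[i] for i, b in h.items())
    return e

for step in range(20000):
    i = random.randrange(N)
    e_old = energy(s)
    s[i] = -s[i]                                 # propose reorganizing one core
    dE = energy(s) - e_old
    # Keep moves that lower the energy; keep uphill moves with Boltzmann
    # probability so the network can escape poorly organized configurations.
    if dE > 0 and random.random() >= math.exp(-dE / T):
        s[i] = -s[i]                             # reject: revert the change

# The chain typically settles near its ground state: a single "domain wall"
# separating the regions pinned by the two opposing external potentials.
print("final energy:", energy(s), "states:", s)
```

The point of the toy is only the qualitative behavior described above: low-energy organizations persist, while fluctuations let the network abandon poor ones and sample alternatives.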
What problems can these systems solve? We see TC as particularly well suited to searching complex energy landscapes, leveraging both rapid device fluctuations and the ability to explore a large space in parallel, and to addressing NP-complete combinatorial optimization problems or sampling many-variable probability distributions. For example, Borders et al.1 have demonstrated the ability to solve optimization problems such as integer factorization using a network of stochastic binary neurons constructed from thermally fluctuating nanoscale magnetic tunnel junctions configured as stochastic bits. Thermodynamic computing systems make such ideas both more general and more accessible, giving a TCS user the ability to easily define optimization problems that receive feedback not only from their programmed constraints, but also from direct interaction with an external environment.
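In one common formulation of such stochastic binary neurons (“p-bits”), each neuron m_i ∈ {−1, +1} is repeatedly updated from its weighted input I_i and a fresh uniform random draw r_i; we state it here to illustrate the primitive, paraphrasing the general p-bit literature rather than quoting the cited work exactly:

```latex
m_i \leftarrow \operatorname{sgn}\!\big(\tanh(\beta I_i) - r_i\big),
\qquad
I_i = \sum_j J_{ij}\, m_j + h_i,
\qquad
r_i \sim \mathrm{Uniform}(-1, 1)
```

The couplings J and biases h encode the problem, and β (an inverse fluctuation scale) tunes the network between free exploration and near-deterministic descent; in hardware, the random draw r_i is supplied by the device physics itself.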
To put this vision in a larger context, Figure 2 divides computing into domains according to their relationship to fluctuation scales. Spatial and temporal fluctuation scales are estimated in terms of thermal energy (kT, that is, Boltzmann constant times temperature) and corresponding electronic quantum coherence times and lengths. We divide the computing paradigm into three qualitatively different domains that we label as “Classical,” “Thermodynamic,” and “Quantum.” In the Classical domain, fluctuations are small compared to the smallest devices in a computing system (for example, transistors, gates, memory elements), thereby separating the scales of “computation” and “fluctuation” and enabling abstractions such as device state and the mechanization of state transformation that underpin the current computing paradigm. In the Quantum domain, fluctuations in space and time are large compared to the computing system. While the Classical domain avoids fluctuations by “averaging them away,” the Quantum domain avoids them by “freezing them out.” In the Thermodynamic domain, fluctuations in space and time are comparable to the scale of the computing system and its devices. This is the domain of non-equilibrium, mesoscale thermodynamics, cellular operations, neuronal plasticity, genetic evolution, and so forth—that is, it is the domain of self-organization and the evolution of life. This is the domain that we need to understand in order to build technologies that operate near the thermodynamic limits of efficiency and spontaneously self-organize, but paradoxically it is also the domain that we assiduously avoid in our current classical and quantum computing efforts.
Figure 2. The three major domains of computing.
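A quick back-of-the-envelope calculation helps locate today’s technology on this map; the kT values follow directly from the Boltzmann constant, while the CMOS switching energy is a rough, commonly cited order of magnitude rather than a figure from this Viewpoint.

```python
# Back-of-the-envelope numbers behind Figure 2.
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K (exact by SI definition)
T = 300.0                    # room temperature, K

kT = k_B * T                 # characteristic thermal fluctuation energy
landauer = kT * math.log(2)  # Landauer limit: minimum energy to erase one bit

print(f"kT at 300 K        ≈ {kT:.2e} J (≈ {kT / 1.602e-19 * 1e3:.0f} meV)")
print(f"kT·ln 2 (Landauer) ≈ {landauer:.2e} J")

# A present-day CMOS switching event dissipates very roughly 1e-17 to 1e-15 J,
# that is, on the order of 10^3 to 10^5 kT -- squarely in the "Classical"
# domain, where fluctuations are small relative to device energies.
```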
We note that the idea of using the physics of self-organizing electronic or ionic devices to solve computational problems has shown dramatic progress in recent years. For example, networks of oscillators built from devices exhibiting metal-insulator transitions have been shown to solve computational problems in the NP-hard class.9 Memristive devices have internal state dynamics driven by complex electronic, ionic, and thermodynamic considerations,4 which, when integrated into networks, result in large-scale complex dynamics that can be employed in applications such as reservoir computing.10 Other systems of memristive devices have been shown to implement computational models such as Hopfield networks8 and to build neural networks capable of unsupervised learning. Today we see an opportunity to couple these recent experimental results11 with new theories of non-equilibrium systems through both existing (for example, Boltzmann Machines5) and newer (for example, Thermodynamic Neural Network6) model systems. Given these experimental, theoretical, and modeling components, we envision a roadmap for developing TCs with three complementary foci:
- Using classical computing to model and simulate potential TC advances and, conversely, focusing the lens of TC back on classical systems in order to improve them.
- Developing nearer-term hybrid computer systems with both classical and thermodynamically augmented components—for example, thermodynamic “bits,” “neurons,” “synapses,” “gates,” and “noise generators”—and evolving these systems toward greater TC exploitation (a host-side sketch of this hybrid pattern follows this list).
- Creating systems using complex thermodynamic networks wherein a classical computing system provides an interface to and scaffolding for mesoscale assemblies of interacting, self-organizing components exhibiting complex dynamics and multiscale, continuously evolving structure, either at room temperature or—if quantum effects are key—at very low temperature (millikelvin).
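As a sketch of the hybrid pattern in the second focus, the code below hides the stochastic primitive behind a small interface: today it is backed by a software pseudo-random generator, but the same host-side loop could instead draw its fluctuations from a physical thermodynamic “noise generator” or “bit.” The interface and class names are hypothetical illustrations, not an existing API.

```python
# Hypothetical host-side abstraction for a hybrid classical/thermodynamic system.
import math
import random
from abc import ABC, abstractmethod

class FluctuationSource(ABC):
    """Anything that can supply random draws for the stochastic primitive."""
    @abstractmethod
    def uniform(self) -> float:
        """Return a sample in [0, 1)."""

class SoftwareNoise(FluctuationSource):
    """Today's stand-in: a pseudo-random number generator."""
    def uniform(self) -> float:
        return random.random()

# A future implementation might read a physical device instead, for example:
# class DeviceNoise(FluctuationSource):
#     def uniform(self) -> float:
#         return read_thermodynamic_bit()   # hypothetical driver call

def anneal(cost, neighbor, x0, noise: FluctuationSource, steps=5000, t0=1.0):
    """Simulated annealing whose accept/reject fluctuations come from `noise`."""
    x, cx = x0, cost(x0)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6          # simple linear cooling schedule
        y = neighbor(x)
        cy = cost(y)
        # Accept improvements; accept uphill moves with Boltzmann probability.
        if cy <= cx or noise.uniform() < math.exp(-(cy - cx) / t):
            x, cx = y, cy
    return x, cx

# Usage: minimize a small quadratic as a placeholder problem.
best, val = anneal(cost=lambda v: (v - 3.0) ** 2,
                   neighbor=lambda v: v + random.uniform(-0.5, 0.5),
                   x0=0.0, noise=SoftwareNoise())
print(best, val)
```

The design point is that the classical host keeps the familiar control loop while the fluctuation source, and eventually the entire inner search, migrates into thermodynamic hardware.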
At least initially, we expect that TC will enable new computing opportunities rather than replace Classical Computing at what Classical Computing does well (enough), following the disruption path articulated by Christensen.2 These new opportunities will likely enable orders of magnitude more energy efficiency and the ability to self-organize across scales as an intrinsic part of their operation. These may include self-organizing neuromorphic systems and the simulation of complex physical or biological domains, but the history of technology shows that compelling new applications often emerge after the technology is available.
Because TCs will co-exist with classical computing, future research will need to develop abstractions for the interaction between the two. For example, the classical host might configure the TC by viewing it as a fixed, edge-weighted directed graph of thermodynamic elements. In the longer term, as more complex thermodynamic devices and networks become available, the TC might be presented as a continuously evolving neural network processor whose networks and external inputs are defined by the architect.
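As one possible shape for such a host-side abstraction (the class and method names are hypothetical, not an established API), the fixed, edge-weighted directed-graph view might look like this:

```python
# Hypothetical configuration abstraction: the TC is presented to the classical
# host as a fixed, edge-weighted directed graph of thermodynamic elements.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TCGraph:
    nodes: List[str] = field(default_factory=list)                     # thermodynamic elements
    edges: Dict[Tuple[str, str], float] = field(default_factory=dict)  # directed, weighted connections
    inputs: Dict[str, float] = field(default_factory=dict)             # external potentials (e.g., volts)

    def add_node(self, name: str) -> None:
        self.nodes.append(name)

    def connect(self, src: str, dst: str, weight: float) -> None:
        self.edges[(src, dst)] = weight

    def drive(self, node: str, potential: float) -> None:
        self.inputs[node] = potential

# The host builds the problem description...
g = TCGraph()
for n in ("in", "a", "b", "out"):
    g.add_node(n)
g.connect("in", "a", 0.8)
g.connect("in", "b", 0.2)
g.connect("a", "out", 1.0)
g.connect("b", "out", 1.0)
g.drive("in", 1.0)
g.drive("out", 0.0)

# ...and would then hand it to the TC hardware and read back the relaxed,
# self-organized state, for example: tc.configure(g); result = tc.relax().
```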
Our viewpoint is that developing TC can extend computing’s transformational effect on society well into the 21st century.
- TC’s potential societal impacts include sustaining U.S. computing leadership, improving outcomes in many areas of human enterprise (for example, medicine, business, agriculture, defense, security, and leisure), and creating new workforce and business opportunities.
- TC’s potential technical impacts include enabling ultra-low-energy systems (for example, with perceptual capabilities that rival those of animal sensory systems), fundamental increases in battery life, and a potential reduction in national computing energy consumption.
- TC’s potential scientific impacts include probing the fundamental efficiency limits of computing, exploring self-organization to reduce human programming effort, repurposing the extraordinary capabilities of living systems for human-engineered systems, and creating more intellectual synergy among diverse fields in engineering and physical sciences.
This Viewpoint has been developed from the output of the Computing Community Consortium’sc Thermodynamic Computing workshop, which brought together computer scientists, mathematicians, physicists, and computational biologists. To learn more about the Thermodynamic Computing workshop, visit the workshop websited or read the workshop report.e The authors thank all workshop participants and CCC/CRA staff.