Communications of the ACM, June 2021 (Vol. 64, No. 6)

Viewpoint
# A Vision to Compute like Nature: Thermodynamically

Classical computing using digital symbols—equivalent to a Turing Machine—is reaching its limits. It is undeniable that computing's historic exponential performance increases have improved the human condition. Yet such increases are a thing of the past, due in large part to the constraints of physics and how today's systems are constructed. Hardware device designers struggle to eliminate the effects of nanometer-scale thermodynamic fluctuations, and the soaring cost of fabrication plants has eliminated all but a few companies as a source of future chips. Software developers' ability to imagine and program effective computational abstractions and implementations is clearly challenged in complex domains like economic systems, ecological systems, medicine, social systems, warfare, and autonomous vehicles. Machine learning techniques, such as deep neural networks, can help, but their capabilities are limited and they are implemented on top of digital hardware facing the aforementioned challenges.

Yet, even as we encounter these limits, we recognize that living systems evolve energy-efficient, universal, self-healing, and complex computational capabilities that dramatically transcend our current technologies. Animals, plants, bacteria, and proteins solve problems by spontaneously finding energy-efficient configurations that enable them to thrive in complex, resource-constrained environments. For example, proteins fold naturally into a low-energy state in response to their environment.^{a} In fact, all matter evolves toward low-energy configurations in accord with the Laws of Thermodynamics. For near-equilibrium systems (see the sidebar) these ideas are well known and have been used extensively in the analysis of computational efficiency and in machine learning techniques.

Since the time of Carnot and Babbage in the early 19^{th} century, the fields of computation and thermodynamics have co-evolved, with seminal contributions from Maxwell, Boltzmann, Gibbs, von Neumann, Turing, Shannon, and many others. Lately, we see this trend continuing as our understanding of thermodynamics expands to include non-equilibrium systems. For example, a new generation of "fluctuation theorems" (for example, Jarzynski,^{7} Crooks^{3}) suggests a path toward a computing technology grounded in an understanding of open, non-equilibrium systems. To continue technological progress—and acquire its corresponding social and economic benefits—we advocate a new, physically grounded, computational paradigm centered on thermodynamics and an emerging understanding of using thermodynamics to solve problems that we call "Thermodynamic Computing" or TC. Like quantum computers, TCs are distinguished by their ability to employ the underlying physics of the computing substrate to accomplish a task.

As illustrated in Figure 1, we envision a thermodynamic computing system (TCS) as a combination of a conventional computing system and novel TC hardware. The conventional computer is a "host" through which users can access the TC and define a problem for the TC to solve. The TC, on the other hand, is an open thermodynamic system directly connected to real-world input potentials (for example, voltages), which drive the adaptation of its internal organization via the transport of charge through it to relieve those potentials. Qualitatively, better or poorer organization results in smaller or larger internal dissipation and fluctuation, which drives the network to stabilize existing configurations or sample new ones. The TC itself may comprise, for example, a network^{b} of "cores" and connections that spontaneously and collectively adapt to achieve low energy network states/high charge transport efficiency in response to changing external potentials.

**Figure 1. Conceptual schematic for a Thermodynamic Computing system.**

What problems can these systems solve? We see TC as particularly well-suited to searching complex energy landscapes, leveraging both rapid device fluctuations and the ability to explore a large space in parallel, and to addressing NP-complete combinatorial optimization problems or sampling many-variable probability distributions. For example, Borders et al.^{1} have demonstrated the solution of optimization problems such as integer factorization using a network of stochastic binary neurons constructed from thermally fluctuating nanoscale magnetic tunnel junctions configured as stochastic bits. Thermodynamic computing systems make such ideas both more general and more accessible, giving a TCS user the ability to easily define optimization problems that receive feedback not only from their programmed constraints, but also from direct interaction with an external environment.
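The stochastic-binary-neuron idea is easy to simulate on a classical machine. The sketch below is illustrative, not from the Viewpoint: each "p-bit" flips with a sigmoidal probability set by its local field, so the network samples the Boltzmann distribution of an Ising energy landscape, and the lowest-energy configuration it visits solves a small optimization problem (here, an assumed antiferromagnetic 4-ring, equivalent to max-cut on a 4-cycle). A hardware TC would realize the fluctuations physically rather than with a pseudorandom generator.

```python
import math
import random

def energy(s, J, h):
    """Ising energy E(s) = -sum_{i<j} J[i][j] s_i s_j - sum_i h[i] s_i."""
    n = len(s)
    e = -sum(h[i] * s[i] for i in range(n))
    e -= sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    return e

def pbit_search(J, h, beta=2.0, steps=5000, seed=0):
    """Sigmoidal 'p-bit' updates; return the lowest-energy state visited
    (a software stand-in for thermally fluctuating devices)."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    best, best_e = s[:], energy(s, J, h)
    for _ in range(steps):
        i = rng.randrange(n)
        field = h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i)
        # p-bit update: P(s_i = +1) is a sigmoid of the local field
        s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * beta * field)) else -1
        e = energy(s, J, h)
        if e < best_e:
            best, best_e = s[:], e
    return best

# Illustrative problem: an antiferromagnetic 4-ring, whose two ground
# states are the alternating spin patterns with energy -4.
n = 4
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    J[i][(i + 1) % n] = J[(i + 1) % n][i] = -1.0
h = [0.0] * n
ground = pbit_search(J, h)
```

Raising `beta` (lowering the effective temperature) trades exploration for exploitation, the same knob a physical substrate would expose through its fluctuation scale.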

To put this vision in a larger context, Figure 2 divides computing into domains according to their relationship to fluctuation scales. Spatial and temporal fluctuation scales are estimated in terms of thermal energy (kT, that is, Boltzmann constant times temperature) and corresponding electronic quantum coherence times and lengths. We divide the computing paradigm into three qualitatively different domains that we label as "Classical," "Thermodynamic," and "Quantum." In the Classical domain, fluctuations are small compared to the smallest devices in a computing system (for example, transistors, gates, memory elements), thereby separating the scales of "computation" and "fluctuation" and enabling abstractions such as device state and the mechanization of state transformation that underpin the current computing paradigm. In the Quantum domain, fluctuations in space and time are large compared to the computing system. While the Classical domain avoids fluctuations by "averaging them away," the Quantum domain avoids them by "freezing them out." In the Thermodynamic domain, fluctuations in space and time are comparable to the scale of the computing system and its devices. This is the domain of non-equilibrium, mesoscale thermodynamics, cellular operations, neuronal plasticity, genetic evolution, and so forth—that is, it is the domain of self-organization and the evolution of life. This is the domain that we need to understand in order to build technologies that operate near the thermodynamic limits of efficiency and spontaneously self-organize, but paradoxically it is also the domain that we assiduously avoid in our current classical and quantum computing efforts.

**Figure 2. The three major domains of computing.**
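The kT scale that anchors these domains is easy to put numbers on. A minimal sketch, assuming an order-of-magnitude CMOS switching energy of 10^{-16} J (an illustrative figure, not a value from the Viewpoint):

```python
# Back-of-the-envelope numbers behind the fluctuation scales in Figure 2.
k_B = 1.380649e-23      # Boltzmann constant in J/K (exact in the 2019 SI)
q_e = 1.602176634e-19   # elementary charge in C (exact in the 2019 SI)
T = 300.0               # room temperature in K

kT_joules = k_B * T
kT_eV = kT_joules / q_e  # thermal energy, roughly 26 meV at room temperature

# Assumed order-of-magnitude switching energy for a modern CMOS gate: at
# tens of thousands of kT per event, fluctuations average away, which is
# what keeps the Classical domain classical.
switching_energy = 1e-16  # J
ratio = switching_energy / kT_joules

print(f"kT at 300 K: {kT_joules:.3e} J = {kT_eV * 1000:.1f} meV")
print(f"CMOS switching energy / kT: about {ratio:,.0f}")
```

A Thermodynamic-domain device, by contrast, would operate at energies within an order of magnitude or so of kT, where fluctuations are a resource rather than noise to be suppressed.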

We note that the idea of using the physics of self-organizing electronic or ionic devices to solve computational problems has shown dramatic progress in recent years. For example, networks of oscillators built from devices exhibiting metal-insulator transitions have been shown to solve computational problems in the NP-hard class.^{9} Memristive devices have internal state dynamics driven by complex electronic, ionic, and thermodynamic considerations,^{4} which, when integrated into networks, result in large-scale complex dynamics that can be employed in applications such as reservoir computing.^{10} Other systems of memristive devices have been shown to implement computational models such as Hopfield networks^{8} and to build neural networks capable of unsupervised learning.^{11} Today we see an opportunity to couple these recent experimental results with the new theories of non-equilibrium systems through both existing (for example, Boltzmann Machines^{5}) and newer (for example, Thermodynamic Neural Network^{6}) model systems. Given these experimental, theoretical, and modeling components, we envision a roadmap for developing TCs with three complementary foci:

- Using classical computing to model and simulate potential TC advances and, conversely, focusing the lens of TC back on classical systems in order to improve them.
- Developing nearer-term hybrid computer systems with both classical and thermodynamically augmented components—for example, thermodynamic "bits," "neurons," "synapses," "gates," and "noise generators"—and evolving these systems toward greater TC exploitation.
- Creating systems using complex thermodynamics networks wherein a classical computing system provides an interface to and scaffolding for mesoscale assemblies of interacting, self-organizing components exhibiting complex dynamics and multiscale, continuously evolving structure, either at room temperature or—if quantum effects are key—at very low temperature (milli-Kelvin).
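The first of these foci (classical modeling of TC ideas) can start from the model systems named above. A minimal sketch of a Hopfield associative memory, with an illustrative 8-unit pattern of our choosing: Hebbian weights carve a low-energy basin around each stored pattern, and asynchronous updates descend the energy landscape, much as a TC would relax physically. A Boltzmann Machine adds stochastic units to the same energy function.

```python
import random

def train(patterns):
    """Hebbian outer-product weights with a zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, sweeps=10, seed=0):
    """Asynchronous sign updates; the Hopfield energy never increases,
    so the state descends into the nearest stored basin."""
    rng = random.Random(seed)
    s = list(state)
    order = list(range(len(s)))
    for _ in range(sweeps):
        rng.shuffle(order)
        for i in order:
            field = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if field >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
W = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]         # corrupt one unit
restored = recall(W, noisy)  # relaxes back to the stored pattern
```

The point of the roadmap is that in a TC this relaxation would be performed by the physics of the substrate itself rather than by a loop of simulated updates.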

At least initially, we expect that TC will enable new computing opportunities rather than replace Classical Computing at what Classical Computing does well (enough), following the disruption path articulated by Christensen.^{2} These new opportunities will likely enable orders of magnitude more energy efficiency and the ability to self-organize across scales as an intrinsic part of their operation. These may include self-organizing neuromorphic systems and the simulation of complex physical or biological domains, but the history of technology shows that compelling new applications often emerge after the technology is available.

As TCs will co-exist with classical computing, future research will develop interaction abstractions. For example, the classical host might configure TC elements by viewing them as a fixed edge-weighted directed graph of thermodynamic elements. In the longer term, as more complex thermodynamic devices and networks become available, the TC might be presented as a continuously evolving neural network processor with networks and external inputs defined by the architect.

Our viewpoint is that developing TC can extend computing's transformational effect on society well into the 21^{st} century.

- TC's potential societal impacts include sustaining U.S. computing leadership, improving outcomes in many areas of human enterprise (for example, medicine, business, agriculture, defense, security, and leisure), and new workforce and business opportunities.
- TC's potential technical impacts include enabling ultra-low energy systems (for example, with perceptual capabilities that rival those of animal sensory systems), fundamental increases in battery life, and a potential reduction in the national computer energy consumption.
- TC's potential scientific impacts include probing the fundamental efficiency limits of computing, exploring self-organization to reduce human programming effort, repurposing the extraordinary capabilities of living systems for human-engineered systems, and creating more intellectual synergy among diverse fields in engineering and physical sciences.

This Viewpoint has been developed from the output of the Computing Community Consortium's^{c} Thermodynamic Computing workshop, which brought together computer scientists, mathematicians, physicists, and computational biologists. To learn more, visit the workshop website^{d} or read the workshop report.^{e} The authors thank all workshop participants and CCC/CRA staff.

1. Borders, W.A. et al. Integer factorization using stochastic magnetic tunnel junctions. *Nature 573*, 7774 (2019), 390–393.

2. Christensen, C.M. *The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail.* Harvard Business Review Press, 2013.

3. Crooks, G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. *Physical Review, E, Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics 60*, 3 (1999), 2721–2726.

4. Dittmann, R. and Strachan, J.P. Redox-based memristive devices for new computing paradigm. *APL Materials 7*, 11 (2019).

5. Hinton, G.E. et al. Learning and relearning in Boltzmann Machines. In *Parallel Distributed Processing: Explorations in the Microstructure of Cognition 1* (1986), 282–317.

6. Hylton, T. Thermodynamic neural network. *Entropy 22*, 3 (2020), 256–279.

7. Jarzynski, C. Nonequilibrium equality for free energy differences. *Physical Review Letters 78*, 14 (1997), 2690–2693.

8. Kumar, S., Strachan, J.P. and Williams, R.S. Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing. *Nature 548*, 7667 (2017), 318–321.

9. Raychowdhury, A. et al. Computing with networks of oscillatory dynamical systems. *Proceedings of the IEEE 107*, 1 (2019), 73–89.

10. Sillin, H.O. et al. A theoretical and experimental study of neuromorphic atomic switch networks for reservoir computing. *Nanotechnology 24*, 38 (2013).

11. Wang, Z. et al. Fully memristive neural networks for pattern classification with unsupervised learning. *Nature Electronics 1*, 2 (2018), 137–145.

a. Even this relatively simple system is still too compute intensive to model effectively on our most powerful supercomputers—what costs nature a few electronvolts currently costs a few terajoules on a supercomputer.

b. Among existing computing systems, TC is perhaps most similar to neuromorphic computing, except that it replaces rule-driven adaptation and neuro-biological emulation with thermo-physical evolution.

c. The Computing Community Consortium (CCC) is a programmatic committee of the Computing Research Association (CRA). Its mission is to enable the pursuit of innovative, high-impact computing research that aligns with pressing national and global challenges. Learn more about the CCC at https://cra.org/ccc/about/

d. See https://bit.ly/32tmEre

e. See https://bit.ly/3tAq0V7

Mark D. Hill performed this work prior to joining Microsoft; the views expressed in this Viewpoint represent only those of the coauthors.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.
