This short article considers the history of ACM’s involvement with analog computing. Now an almost forgotten technology,1 analog (as opposed to digital) computing was an alternative class of computing machinery that encouraged an experiment-oriented relationship between human and computer. Analog computing failed to maintain a central position in the development of computer science as the field became dominated by the digital viewpoint, and so the wider story of this technology belongs to the history of electrical and electronic engineering. However, the analog computer was not completely isolated from the CS culture, and this article looks at how the technology was perceived within ACM.
As the professional communities surrounding computing became established, ACM’s focus became “computer science” (leaving the IEEE to consider the engineering aspects of computing). Because ACM was a strong influence in defining the boundaries of computational paradigms, it is interesting to consider how analog computing’s status within the Association developed. As the CS field became enclosed, ACM’s conferences and publications directed development toward new concepts such as analog software simulation.
During ACM’s formative years, interest was scattered over a number of topics, including hardware and software principles. In the early days, the hot topic was automatic computing. Computing had originally been an activity associated with human labor, but through 20th-century technology it became mechanized. Despite the importance of automatic computing as a framing theme, a clear-cut definition of what counted as computing technology was not always available. For example, Samuel Williams, ACM’s fourth president, clearly interpreted computing as something broader than just digital. In an introductory article to the first issue of the Journal of the ACM, he recognized the long-term benefits that digital would offer, but referred to the 1945 MIT conference where the Bush-Caldwell differential analyzer was first publicized as the “first meeting of those interested in the field.” Essentially, Williams used analog technology to frame the scope of automatic computing.
ACM’s involvement with analog computing separates into three periods. The first period is pre-1950, when automatic computing was the focus and analog/digital issues were perhaps secondary. Second came the 1950s, with research emerging from the joint computer conferences.2 A third period, beginning in the late 1950s, saw the increasing popularity and utility of digital computing lead to a winding down of analog research. Despite this, analog computing became further entwined with ACM during this period. A number of research projects, often software-based, aimed to combine the benefits of both analog and digital into “hybrid computers.” In a sense these projects kept interest in analog computing alive, providing an outlet for those with expertise in the old technology. These three periods correspond roughly to three different paradigms: analog as a predigital technology, analog as an alternative to digital technology, and analog as simulated by digital technology.
Before the 1940s, analog computers dominated the field of computing, and the principal technology was the differential analyzer. Invented by Vannevar Bush and his colleagues at MIT during the 1930s, the differential analyzer consisted of a number of components, each an analog of a different mathematical operator (typically integrators, summers, and multipliers). By interconnecting the inputs and outputs of these components, a system of differential equations could be modeled mechanically. Over time, this technology developed into what became known as the General Purpose Analog Computer (or GPAC), using electronic components to model each operator. Alongside the differential analyzer and its electronic derivatives, other analog computers existed that were more direct, their “programming” based on physical analogies rather than equations. Examples included the network analyzer and the electrolytic tank, both of which modeled an entire system rather than the individual differential equations [3, 9].
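To make that programming model concrete, the following minimal Python sketch (an illustration only; neither the code nor the particular equation is taken from the article or from any historical analyzer setup) “wires” two integrator components and a sign-inverting summer together, much as an operator would interconnect units on a differential analyzer, so that the resulting loop models x'' = -x, simple harmonic motion.

```python
# Illustrative sketch only (not from the article): a differential-analyzer-style
# "patch" simulated digitally. Each line in the loop stands in for an operator
# unit; wiring their inputs and outputs together encodes the equation x'' = -x.

def run_patch(steps=10_000, dt=0.001):
    x, v = 1.0, 0.0              # initial conditions: displacement and velocity
    history = []
    for _ in range(steps):
        a = -x                   # inverting "summer": acceleration fed back as -x
        v += a * dt              # integrator 1: accumulates acceleration into velocity
        x += v * dt              # integrator 2: accumulates velocity into displacement
        history.append(x)
    return history

if __name__ == "__main__":
    x_final = run_patch()[-1]    # after t = 10, x should track cos(10), roughly -0.84
    print(f"x(10) is approximately {x_final:.3f}")
```

The point of the sketch is architectural: the “program” is the pattern of connections between operator components rather than a sequence of instructions, which is the experiment-oriented style of use described above.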
If the dichotomy analog/digital is understood to reflect the internal organization of the machine, that is, the relationship between continuous and discontinuous representation of variables, then the history of computing technology during the 20th century follows a general pattern of digitalization. Early machines like the differential analyzer employed an analog representation throughout, whereas later machines made a conversion to discrete quantities earlier in the calculation process. Today, the modern digital computer works solely with digital quantities at the conceptual level, and it is only when the internal engineering of a machine’s hardware is considered that an engineer becomes aware of analog signals and their conversion to digital.
The differential analyzer created a computing culture that had a strong following. At the 1950 ACM Conference, a new machine—the MADDIDA—was presented. Developed by aeronautical engineers, this computer was an early Digital Differential Analyzer (DDA), and the founding machine of the Computer Research Corporation (CRC) [4]. Don Eckdahl, reflecting on the event some 17 years later, described his experience this way: “We took this machine to the ACM Conference in Rutgers and gave two papers on it and demonstrated it there. We also had the contact here with John von Neumann in that our corporation [in] attempting to see the significance of his work had us take this to him for study.”3
Constructed with digital components but based on the design of the earlier analog machines, MADDIDA allowed the interconnection of 42 digital integrators, and is an early example of research that combined analog qualities with the then-progressive technology of digital. The MADDIDA project demonstrated that a digital computing machine could be special purpose and organized like the analog. Following its exhibition, the DDA initiated a research interest in analog/digital that persisted within the ACM community for many years. By the mid-1950s, articles on topics such as the reliability of analog computers, analog hardware, and electrolytic tanks were being replaced with more conceptual investigations, such as using analogs to solve discrete problems or formalizing the underlying theory [12]. During this period, DDA developments continued to move forward, and 1956 saw the next generation of this work proposed—the simulation of an analog computer, not by digital hardware, but by digital software [2, 5, 10].
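What set the DDA apart from its continuous ancestor was that its integrators worked incrementally, exchanging single-pulse increments rather than continuous voltages or shaft rotations. The following Python sketch is a hedged illustration of that principle (the register width, class name, and wiring are hypothetical choices for exposition, not details of MADDIDA’s actual design): two incremental integrators, each emitting a +1/0/-1 output whenever its internal remainder register overflows, are cross-coupled to generate a sine/cosine pair.

```python
# Hypothetical sketch of the incremental principle behind a DDA integrator
# (register size and wiring chosen for illustration, not MADDIDA's actual design).
# Each integrator adds its integrand y into a remainder register R every dx step
# and emits a +1/-1 output increment whenever R overflows.

SCALE = 1 << 16                      # register capacity; one overflow = one output pulse

class DDAIntegrator:
    def __init__(self, y0):
        self.y = y0                  # integrand register (scaled integer)
        self.r = 0                   # remainder/accumulator register

    def step(self, dy):
        """Apply input increment dy to y, advance one dx step, return dz in {-1, 0, +1}."""
        self.y += dy
        self.r += self.y
        if self.r >= SCALE:
            self.r -= SCALE
            return 1
        if self.r <= -SCALE:
            self.r += SCALE
            return -1
        return 0

# Cross-coupling two integrators (one input negated) models y'' = -y,
# so their y registers trace out cosine and sine.
cos_reg = DDAIntegrator(SCALE - 1)   # starts near cos(0) = 1
sin_reg = DDAIntegrator(0)           # starts at   sin(0) = 0

d_sin_out = 0
for _ in range(200_000):
    d_cos_out = cos_reg.step(-d_sin_out)   # output pulses of the "cos" integrator
    d_sin_out = sin_reg.step(d_cos_out)    # feed them in as the "sin" integrand's dy

# The pair (cos_reg.y, sin_reg.y) should lie roughly on a circle of radius SCALE.
print(cos_reg.y / SCALE, sin_reg.y / SCALE)
```

Swapping such register-level integrators for floating-point routines is, in essence, the step taken from DDA hardware toward the software simulation of analog computers proposed in 1956.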
The decision in those early days to publish analog research was instrumental in the development of analog software simulations. As the scope of ACM shifted toward methods and algorithms, focusing on computer science rather than electrical engineering, so the applications of analog computing shifted from hardware to software. This illustrates a progression of digital absorption, the analog computer being replaced by the simulation of analog methods with digital programs. It is because of this involvement that we see the inclusion of the module “Analog and Hybrid Computing” in Curriculum 68, a document published by the ACM outlining a curriculum for CS courses. Suggested as a way to introduce analog simulation languages such as “MIDAS, PACTOLUS and DSL/90,” the module was “…concerned with analog, hybrid, and related digital techniques for solving systems of ordinary and partial differential equations, both linear and nonlinear. A portion of the course should be devoted to digital languages for the simulation of continuous or hybrid systems” [1].
Future ACM curricula would remove this analog element; indeed, analog failed to maintain even a simulative role in CS education. This was the result not just of ever-improving digital technologies,4 but also of the introduction of the Discrete Fourier Transform (DFT). Allen Newell saw the DFT as “penetrating the major bastion of analog computation” [7]. After this, analog technology became a small research field within the discipline of electronic engineering, which effectively spelled an end to its involvement with the ACM community.5
There are many factors that make and shape technology, including inventors, venture capitalists, funding committees, and academic disciplines. However, what is highlighted here is the important role of professional societies. During the mid-20th century the technology of computing was in flux between analog and digital. The publications and conferences of ACM provided a space where rapid advancements could be distilled and discussed, an environment in which analog computing could make the transition from hardware to software and so continue to be relevant. Here, we’ve considered MADDIDA—the startup computer of CRC. In fact, scholars have traced no fewer than 14 computer companies back to the MADDIDA group. It was at the 1950 ACM Conference that its developers were able to show off their invention and create their niche.6 ACM encouraged the development of this machine and of analog simulation languages such as MIDAS.
To generalize slightly, the history of the analog computer in the 1950s and 1960s shows us that there were a variety of computational technologies in use. Questions such as “What makes a computer a computer?” and “What is computable?” did not always have the clear-cut answers they do today [8]. Reviewing this history encourages us to reconsider some of these questions, mindful that future technologies of automatic computing might be very different from what we find familiar today.
This article is available at doi.acm.org/10.1145/1230819.1230837.