It is a stock element in narratives of 1950s computing to distinguish between programmers and coders, the latter considered a “lowly technician” doing the routine job of converting flowcharts or pseudo-instructions into coded machine instructions (for example, see Campbell-Kelly et al.4 and Ensmenger7). This division of labor between coders and programmers is often overlaid with social distinctions of education and gender.7 Researching the early uses of the words “programming” and “coding,”5 however, we were confronted with the near absence of evidence for this coder/programmer distinction in practice. While the activities of coding and programming can be more or less clearly distinguished, they did not map onto different jobs but were performed by the same person(s). The distinction between the coder and the programmer thus pertains not to the reality but to the mythology of early computing. It emerged in the wake of automatic programming in the 1950s, creating a discourse that remains influential today.
The Origin of the Myth of the Coder
The distinction between coder and programmer originated in the famous report Planning and Coding of Problems for an Electronic Computing Instrument by Herman H. Goldstine and John von Neumann. They give four hierarchical stages for planning and coding. At the top is the “mathematical stage” for the derivation of the algorithmic form of a mathematical problem. This stage was exclusively for the mathematician and “has nothing to do with computing or with machines.”8 After this, “the coding proper can begin,” subdivided into three stages:
A macroscopic or dynamical stage capturing the dynamic aspects or flow of a computation, including loops, conditionals, and address modifications, formalized by a flow diagram;
a microscopic or static stage, which concerns the actual coding of every single operation box of the flow diagram; and
a last stage, which consists (mostly) of assigning memory locations and converting the code to binary.
Drawing up a flow diagram (stage 1) should be “a routine matter” for “every moderately mathematically trained person.”8 As for the static coding (stage 2), “a moderate amount of experience” will make it “a perfectly routine operation.” After some preparations, coding becomes “filling-in operations that can be effected by a single linear passage over the entire coded sequence.”8 The distinction between stages 1 and 2 would provide the blueprint for the distinction between programming (often equated to planning or flowcharting) and coding (that is, translating a flowchart into coded instructions for the machine).
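To make the three stages concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the single-address instruction set (LOAD, STORE, JGEZ, and so on), the symbolic names, and the assemble and run helpers are hypothetical, not Goldstine and von Neumann's notation. Stage 1 is the flow diagram, stage 2 is the symbolic coding of each box, and stage 3 is the “single linear passage” that fills in memory locations.

```python
# Toy problem: sum the numbers x[0..N-1].
# All instruction names and conventions below are invented for illustration.

# Stage 1 (macroscopic/dynamic): the flow diagram fixes the loop, the test,
# and the address modification:
#
#   [s := 0; i := 0] --> (i < N ?) --no--> [stop]
#            ^               | yes
#            |               v
#            +---- [s := s + x[i]; i := i + 1]

# Stage 2 (microscopic/static): each operation box is coded symbolically,
# one single-address instruction at a time, with names instead of addresses.
SYMBOLIC = [
    ("LOAD", "ZERO"), ("STORE", "S"),                # s := 0
    ("LOAD", "ZERO"), ("STORE", "I"),                # i := 0
    ("LABEL", "TEST"),
    ("LOAD", "I"), ("SUB", "N"),                     # acc := i - N
    ("JGEZ", "DONE"),                                # leave loop when i >= N
    ("LOADX", "I"),                                  # acc := x[mem[I]], i.e., x[i]
    ("ADD", "S"), ("STORE", "S"),                    # s := s + x[i]
    ("LOAD", "I"), ("ADD", "ONE"), ("STORE", "I"),   # i := i + 1
    ("JUMP", "TEST"),
    ("LABEL", "DONE"),
    ("HALT", None),
]

# Stage 3: assign memory locations and resolve jump targets -- the
# "filling-in operations ... effected by a single linear passage".
def assemble(symbolic, data):
    labels, code = {}, []
    for op, arg in symbolic:                         # one linear pass
        if op == "LABEL":
            labels[arg] = len(code)
        else:
            code.append((op, arg))
    addr = {name: i for i, name in enumerate(data)}  # data symbol -> location
    return [(op, labels.get(arg, addr.get(arg))) for op, arg in code]

def run(code, mem, x):
    acc, pc = 0, 0
    while code[pc][0] != "HALT":
        op, a = code[pc]
        pc += 1
        if op == "LOAD":    acc = mem[a]
        elif op == "STORE": mem[a] = acc
        elif op == "ADD":   acc += mem[a]
        elif op == "SUB":   acc -= mem[a]
        elif op == "LOADX": acc = x[mem[a]]
        elif op == "JUMP":  pc = a
        elif op == "JGEZ":  pc = a if acc >= 0 else pc
    return mem

x = [3, 1, 4, 1, 5]
data = ["ZERO", "ONE", "N", "S", "I"]   # memory layout, locations 0..4
mem = [0, 1, len(x), 0, 0]
final = run(assemble(SYMBOLIC, data), mem, x)
print("sum =", final[data.index("S")])  # sum = 14
```

Only stages 2 and 3 correspond to the “coder” of the later mythology; as the rest of this column shows, in practice one and the same person typically did all of it.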
Why did von Neumann and Goldstine want planning and coding to be separated? The answer lies in their experience with the organization of labor for manual computation.9 Goldstine knew the successful scheme used for the manual computation of firing tables at Aberdeen Proving Ground. Redesigned shortly after World War I, it relied on a clear-cut division of labor.3 First there was the mathematical theory and its algorithms; then there were “computation sheets” (Form 5041) that laid out step by step what the human computers (soldiers or locals with reasonably good grades in arithmetic) had to do. Following the control orders of the computation sheets, the human computers looked up table values and logarithms and performed additions and subtractions; the intermediate results were written on a “data sheet,” and the end results were copied onto a “trajectory sheet” (Form 5041). For the people at the Ballistic Research Laboratory, this organization of manual calculation translated with ease to the preparation of instructions for electromechanical and electronic machines.11 On the ENIAC rewired to simulate a stored-program computer (1948), this division between programming and coding was tried with some success, as Klara von Neumann’s coding of the Monte Carlo flowcharts shows.10
The Coder Absconded
Thus the activities of programming and coding could be distinguished easily. The former was planning on paper using flowchartinga or some pseudocode, whereas the latter was the translation into coded instructions on some medium to control the machine. But though the two could be dissociated easily, in practice they were often not separated. This was the experience at UNIVAC, the company Eckert and Mauchly started after building the ENIAC for the military.16
Before delivering their first machine, they had indeed stipulated a distinction between the jobs of the programmer and the coder (1949).7 A year later, “standard flow chart and coding symbols and practices” were even introduced to facilitate review by “a person other than the person who originally did the charting and the coding.”14 UNIVAC had a detailed flowcharting manual following von Neumann and Goldstine’s formalism, and flow diagrams were used as “a potent means of communications.” The flow diagram’s boxes could be filled with anything, from “mathematical symbols to sentences.”2
But once the UNIVAC I computer was ready (1951), the distinction between coder and programmer was swept off the table. When Ohlinger from Northrop Aircraft Company asked J.L. McPherson from UNIVAC, “How many programmers and coders were employed in order to keep UNIVAC busy full time?” McPherson replied: “We do not distinguish between programmers and coders. We have operators and programmers.”6
This is consistent with what we found at other places in the 1950s, both at universities and in industry, where “programmers” did both the planning and the coding part of programming. When the question was raised during MIT’s 1954 Summer Conference which groups distinguished between coder and programmer, only 8 out of more than 50 participants said yes,2 and only one company, Douglas Aircraft Company, adhered to this strict division of labor.
This observation is confirmed by reports commissioned by the U.S. government from 1957 onward to assess the impact of automation and to map the needs of the government in matters of “automatic data processing” (ADP). To establish grade-level distinctions and corresponding pay rates, the Department of Labor conducted a broad survey in 1959 and presented a list of 13 “occupations in electronic data-processing systems.” While a division of labor is implied by the functional organization chart of these occupations (see the accompanying figure), the job of the coder is notable by its very absence. Instead, a distinction is made between a systems analyst, a programmer, and a coding clerk. The first defines the problem and its requirements. The programmer then “designs detailed programs, flow charts, and diagrams indicating mathematical computations and sequence of machine operations necessary to copy and process data and print solution,”17 though a chief programmer may also be there “to assign, outline and coordinate” the work of the programmers.17 The coding clerk “convert[s] items of information from reports to codes for processing by automatic machines using a predetermined coding system”17; this is not the “coder” who converts flowcharts into machine instructions. A report by Weber and Associates on salaries in ADP from 196018 similarly distinguishes between Lead Programmer, Programmer Senior, and Programmers A, B, and C in the programming department, all involved in both flowcharting and translating charts into coded instructions. Again, the separate job of “coder” that would convert flowcharts into coded machine instructions is absent.b
How Did the Coder Enter Computing Mythology?
How then did the idea of the coder vs. the programmer take root, and why? If one looks at trade magazines such as Computers and Automation or Datamation in the 1950s, a (human) coder is rarely mentioned—on average only 3–4 times a year. But in 1955 there were 28 occurrences of “coder” in just three articles, which accounts for 60% of all the occurrences between 1954 and 1960. All three of those articles were authored by Grace Murray Hopper and her team. In charge of UNIVAC’s automatic programming department since 1954, Hopper was a relentless advocate for the automation of programming, and she became very influential through her many public speeches in the 1950s.1
In May 1954, Hopper introduced the automation of the coder as an essential development. She described how “ten years ago, a programmer was, of necessity many things,”12 but with the “increase in the number and speed of computers” came “specialization.” The specialists, notably “analyst, programmer, coder, operator and maintenance man,” were separated and communicated only via tools such as flow diagrams.
Hopper, however, admitted that the “distinction between a programmer and a coder has never been clearly made. Coder was probably first used as an intermediate point of the salary scale between trainee and programmer. A programmer prepares a plan for the solution of a problem: […] One of his final results, to be passed on to a coder, will be a flow chart. […] It is then the task of the coder to reduce this flow chart to coding, to a list in computer code.”12
As even a supervisor from Remington-Rand had to acknowledge, it was “more often the case than not […] that the programmer and clerical coder [were] the same person” and that “the detailed flow chart” was simply omitted.15 Why then did Hopper distinguish a separate coder? Because that function could now be automated: “It is this function, that of the coder, time-consuming and fraught with mistakes, that is the first human operation to be replaced by the computer itself.”12 Thus it was the introduction of “automatic coding” (sometimes unfortunately called “automatic programming”) that accounted for the retrospective, artificial distinction between the jobs. As she later wrote, “Automatic coding comes into its own” when “it can release the coder from most of the routine and drudgery of producing the instruction code. It may, someday, replace the coder or release him to become a programmer.”13 Again, the great advantage of automatic coding is given as “the replacement of the coder by the computer.”
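What the “automatic coder” automated can be made concrete with a small sketch. The autocode function below, its formula syntax, and its instruction names are all hypothetical, invented for illustration; it captures the spirit of early systems such as A-0 or MATH-MATIC, not their actual design. The programmer writes the formula, and the machine produces the coded instruction list that a human coder would otherwise have written out by hand.

```python
# A toy "automatic coder": translate a formula into single-address
# instructions. Syntax and instruction names are invented for illustration.
def autocode(formula):
    """Translate 'target = term + term - term ...' into coded instructions."""
    target, expr = (s.strip() for s in formula.split("="))
    tokens = expr.split()
    code = [("LOAD", tokens[0])]  # first operand into the accumulator
    for op, name in zip(tokens[1::2], tokens[2::2]):
        code.append(({"+": "ADD", "-": "SUB"}[op], name))
    code.append(("STORE", target))
    return code

# The programmer writes the plan; the machine does the coder's work:
for instruction in autocode("s = a + b - c"):
    print(instruction)
# -> ('LOAD', 'a'), ('ADD', 'b'), ('SUB', 'c'), ('STORE', 's')
```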
Many in the 1950s were far from convinced that automatic coding would be efficient enough to compete with manual programming. By introducing the distinction between a programmer and a coder—even though it did not reflect the reality on the shop floor—Grace Hopper’s promotional talks made the idea of automatic coding more appealing.
Conclusion
The influence of powerful imagery and rhetoric in promotional material for computing is neither new nor surprising. There is a longstanding tradition of overselling the latest technology, claiming it to be the next (industrial) revolution or promising that it will outperform human beings. With the passage of time it may become difficult to recognize these invented ideas and images, which have acquired a life of their own and have become integrated into the historical narrative. As modern, digital electronic computing nears its 100th anniversary, such recognition does not become easier, though we may need it more than ever before.
This particular case, where the praise of automatic programming implied the obsolescence of the coder, can be instructive for us today. There is a line that runs from Grace Hopper’s selling of “automatic coding” to today’s promises that large AI models such as ChatGPT will revolutionize computing by automating programming or even making human programmers obsolete.19,20 Then as now, the automation of some parts of programming is certainly progressing, and it will upset or even redefine the division of labor. However, this is not a simple, straightforward process in which the computer itself replaces the human element in one or more specific phases of programming. Rather, practice adopts new techniques to assist with existing tasks and jobs. Such changes do not generalize easily, and titles like “coder”—or today’s “prompt engineer”—while memorable, do not do justice to the subtle process of changing practice.