# June 1960 - Vol. 3 No. 6

## Features

### The future of automatic digital computers

Before discussing the future I want to talk about the past, which I find a source of disappointment. One of the things which prompted me to give the lecture its title was that a number of people who should know better have recently given tongue and pen to the statement that “new, second generation computers are with us” and that these machines are better than anything which has been thought of before. The first part of this lecture will attempt to disabuse you of this idea for, far from thinking that any second generation computer exists, I think that we are only just seeing the growing up of first generation computers.

To justify this statement I remark that, in 1946-7, I worked with the late John von Neumann on the logical and physical design of computing machines. Von Neumann wrote, at the Institute for Advanced Study at Princeton, two reports [1, 2] on aims and objectives to which we may look to see the type of computer which was envisaged at that time. This computer was in fact a machine having, in retrospect, certain rather interesting characteristics.

The most obvious of these is speed of operation, and this was desired to be such that a 40 bit addition or subtraction would take about 10 micro-seconds. There is no computing machine commercially available in this country which achieves this addition time. The multiplication time of the von Neumann machine varied from 400 micro-seconds for the crudest scheme to 50 micro-seconds for a more sophisticated logical device which still made no use of the steam-roller electronics to be seen in at least one machine of the present day, which achieves a rather worse performance. So much for second generation speed!

### The department of computer mathematics at Moscow State University

To prepare specialists in computer and machine mathematics, and to develop scientific work in this area, the department of computer mathematics was created in 1949 at Moscow State University within the faculty of mechanics and mathematics. It has been headed by Academician S. L. Sobolev since 1952.

### Multiprogram scheduling: Parts 1 and 2. Introduction and theory

To exploit fully a fast computer which possesses simultaneous processing abilities, the machine should to a large extent schedule its own workload. The scheduling routine must be capable of extremely rapid execution if it is not to prove self-defeating.
The construction of a schedule entails determining which programs are to be run concurrently and which sequentially with respect to each other. A concise scheduling algorithm is described which tends to minimize the time for executing the entire pending workload (or any subset of it), subject to external constraints such as precedence, urgency, etc. The algorithm is applicable to a wide class of machines.
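The abstract does not give the algorithm itself, but the idea of deciding which programs run concurrently, subject to precedence and urgency constraints, can be sketched as a simple greedy list scheduler. The function name, the encoding of a job as (duration, prerequisites, urgency), and the greedy most-urgent-first rule are all illustrative assumptions here, not the paper's concise algorithm.

```python
import heapq

def schedule(jobs, num_units):
    """Greedy list scheduler (illustrative sketch, not the paper's algorithm).

    jobs: dict name -> (duration, list of prerequisite names, urgency weight)
    num_units: number of processing units that can run programs concurrently
    Returns a dict mapping each job name to its scheduled start time.
    """
    # a heap of times at which each processing unit next becomes free
    units = [0.0] * num_units
    heapq.heapify(units)
    start, finish = {}, {}
    pending = dict(jobs)
    while pending:
        # jobs whose prerequisites have all been scheduled, most urgent first
        ready = sorted(
            (n for n, (_, pre, _) in pending.items()
             if all(p in finish for p in pre)),
            key=lambda n: -pending[n][2])
        if not ready:
            raise ValueError("cyclic precedence constraints")
        for name in ready:
            duration, pre, _ = pending.pop(name)
            free = heapq.heappop(units)
            # a job cannot start before its unit is free or its prerequisites end
            t = max([free] + [finish[p] for p in pre])
            start[name] = t
            finish[name] = t + duration
            heapq.heappush(units, finish[name])
    return start
```

With two units and jobs a (2 units of time), b (3, more urgent) and c (1, after both), a and b start at once and c starts when b finishes at time 3.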

### A short method for measuring error in a least-squares power series

When fitting a curve to a set of points by means of a least-squares power series, it is frequently desirable to obtain also a measure of the total error of the curve. Such error is most often measured by the size of the residual sum of squares and, while it is not infrequent for computer programs to provide this error sum along with the coefficients of the power series, in every case we have so far found this sum has been obtained by evaluating the series at every data point. This procedure requires that either the data be read through the machine a second time after the determination of the coefficients or that every data point be stored in the memory of the computer.
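One way such a shortcut can work (a sketch under standard assumptions, not necessarily the method of the paper): because the least-squares residual is orthogonal to the fitted series, the residual sum of squares equals the sum of the squared ordinates minus the dot product of the coefficients with the right-hand sides of the normal equations, and every quantity involved is already accumulated during the single pass that forms the normal equations. The function name below is illustrative.

```python
import numpy as np

def fit_with_error(xs, ys, degree):
    """Fit a least-squares power series and obtain the residual sum of
    squares from sums accumulated in a single pass over the data."""
    n = degree + 1
    # moments accumulated in one pass: sums of x^k, sums of x^j * y, sum of y^2
    sx = [sum(x ** k for x in xs) for k in range(2 * n - 1)]
    b = [sum((x ** j) * y for x, y in zip(xs, ys)) for j in range(n)]
    syy = sum(y * y for y in ys)
    # normal equations A c = b, where A[j][k] = sum of x^(j+k)
    A = np.array([[sx[j + k] for k in range(n)] for j in range(n)], float)
    c = np.linalg.solve(A, np.array(b, float))
    # orthogonality of the residual gives: RSS = sum(y^2) - c . b,
    # so no second pass over the data points is needed
    rss = syy - float(c @ np.array(b, float))
    return c, rss
```

For points lying exactly on a line, the fitted coefficients reproduce the line and the residual sum comes out as zero (up to rounding), without the series ever being re-evaluated at the data points.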