Research and Advances

A comparison of numerical techniques in Markov modeling

This paper presents several numerical methods which may be used to obtain the stationary probability vectors of Markovian models. An example of a nearly decomposable system is considered, and the results obtained by the different methods are examined. A post mortem reveals why standard techniques often fail to yield the correct results. Finally, a means of estimating the error inherent in the decomposition of certain models is presented.
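
A minimal sketch (mine, not the paper's) of one standard technique such comparisons cover: the power method, which iterates π ← πP until the stationary vector is reached. The transition matrix below is a small nearly decomposable example; the weak coupling between its two blocks is precisely what makes iterative methods converge slowly on such models.

```python
# Power method for the stationary vector pi, i.e. the fixed point pi = pi P.
# The matrix is nearly decomposable: two tightly coupled 2-state blocks
# joined only by the small 0.001 transitions.
import numpy as np

P = np.array([
    [0.899, 0.100, 0.001, 0.000],
    [0.100, 0.899, 0.000, 0.001],
    [0.001, 0.000, 0.899, 0.100],
    [0.000, 0.001, 0.100, 0.899],
])                                    # each row sums to 1

pi = np.array([1.0, 0.0, 0.0, 0.0])   # start with all mass in one state
for step in range(1, 200_001):        # weak coupling -> slow convergence
    new = pi @ P
    if np.abs(new - pi).max() < 1e-12:
        pi = new
        break
    pi = new

print(step, pi)                       # iterations needed, stationary vector
```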
Research and Advances

Relaxation methods for image reconstruction

The problem of recovering an image (a function of two variables) from experimentally available integrals of its grayness over thin strips is of great importance in a large number of scientific areas. An important version of the problem in medicine is that of obtaining the exact density distribution within the human body from X-ray projections. One approach that has been taken to solve this problem consists of translating the available information into a system of linear inequalities. The size and the sparsity of the resulting system (typically, 25,000 inequalities with fewer than 1 percent of the coefficients nonzero) make methods using successive relaxations computationally attractive, as compared to other ways of solving systems of inequalities. In this paper, it is shown that, for a consistent system of linear inequalities, any sequence of relaxation parameters lying strictly between 0 and 2 generates a sequence of vectors which converges to a solution. Under the same assumptions, for a system of linear equations, the relaxation method converges to the minimum norm solution. Previously proposed techniques are shown to be special cases of our procedure with different choices of relaxation parameters. The practical consequences for image reconstruction of the choice of the relaxation parameters are discussed.
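
A minimal sketch, assuming the classical Kaczmarz-style projection step for the equality case (the paper's general inequality procedure is not reproduced here): each relaxation moves the iterate toward one hyperplane at a time, scaled by a parameter strictly between 0 and 2, and starting from the zero vector makes the limit the minimum norm solution, as the abstract states.

```python
# Row-action relaxation for a consistent system Ax = b.
import numpy as np

def relaxation_solve(A, b, lam=1.0, sweeps=500):
    # lam must lie strictly between 0 and 2 for convergence
    x = np.zeros(A.shape[1])            # zero start -> minimum norm limit
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):      # relax one constraint at a time
            x += lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])         # underdetermined but consistent
b = np.array([3.0, 2.0])
x = relaxation_solve(A, b)
print(x, A @ x - b)                     # residual is essentially zero
```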
Research and Advances

The evolution of the DECsystem 10

The DECsystem 10, also known as the PDP-10, evolved from the PDP-6 (circa 1963) over five generations of implementations to presently include systems covering a price range of five to one. The origin and evolution of the hardware, operating system, and languages are described in terms of technological change, user requirements, and user developments. The PDP-10's contributions to computing technology include: accelerating the transition from batch-oriented to time-sharing computing systems; transferring hardware technology within DEC (and elsewhere) to minicomputer design and manufacturing; supporting minicomputer hardware and software development; and serving as a model for single-user and time-shared interactive minicomputer/microcomputer systems.
Research and Advances

The development of the MU5 computer system

Following a brief outline of the background of the MU5 project, the aims and ideas for MU5 are discussed. A description is then given of the instruction set, which includes a number of features conducive to the production of efficient compiled code from high-level language source programs. The design of the processor is then traced from the initial ideas for an associatively addressed “name store” to the final multistage pipeline structure involving a prediction mechanism for instruction prefetching and a function queue for array element accessing. An overall view of the complete MU5 complex is presented together with a brief indication of its performance.
Research and Advances

The Manchester Mark I and Atlas: a historical perspective

In 30 years of computer design at Manchester University two systems stand out: the Mark I (developed over the period 1946-49) and the Atlas (1956-62). This paper places each computer in its historical context and then describes the architecture and system software in present-day terminology. Several design concepts such as address-generation and store management have evolved in the progression from Mark I to Atlas. The wider impact of Manchester innovations in these and other areas is discussed, and the contemporary performance of the Mark I and Atlas is evaluated.
Research and Advances

The CRAY-1 computer system

This paper describes the CRAY-1, discusses the evolution of its architecture, and gives an account of some of the problems that were overcome during its manufacture. The CRAY-1 is the only computer to have been built to date that satisfies ERDA's Class VI requirement (a computer capable of processing from 20 to 60 million floating point operations per second) [1]. The CRAY-1's Fortran compiler (CFT) is designed to give the scientific user immediate access to the benefits of the CRAY-1's vector processing architecture. An optimizing compiler, CFT, “vectorizes” innermost DO loops. Compatible with the ANSI 1966 Fortran Standard and with many commonly supported Fortran extensions, CFT does not require any source program modifications or the use of additional nonstandard Fortran statements to achieve vectorization. Thus the user's investment of hundreds of man-months of effort to develop Fortran programs for other contemporary computers is protected.
Research and Advances

Architecture of the IBM System/370

This paper discusses the design considerations for the architectural extensions that distinguish System/370 from System/360. It comments on some experiences with the original objectives for System/360 and on the efforts to achieve them, and it describes the reasons and objectives for extending the architecture. It covers virtual storage, program control, data-manipulation instructions, timing facilities, multiprocessing, debugging and monitoring, error handling, and input/output operations. A final section tabulates some of the important parameters of the various IBM machines which implement the architecture.
Research and Advances

The evolution of the Sperry Univac 1100 series: a history, analysis, and projection

The 1100 series systems are Sperry Univac's large-scale mainframe computer systems. Beginning with the 1107 in 1962, the 1100 series has progressed through a succession of eight compatible computer models to the latest system, the 1100/80, introduced in 1977. The 1100 series hardware architecture is based on a 36-bit word, ones' complement structure which obtains one operand from storage and one from a high-speed register, or two operands from high-speed registers. The 1100 Operating System is designed to support a symmetrical multiprocessor configuration simultaneously providing multiprogrammed batch, timesharing, and transaction environments.
Research and Advances

Reduction: a method of proving properties of parallel programs

When proving that a parallel program has a given property it is often convenient to assume that a statement is indivisible, i.e. that the statement cannot be interleaved with the rest of the program. Here sufficient conditions are obtained to show that the assumption that a statement is indivisible can be relaxed and still preserve properties such as halting. Thus correctness proofs of a parallel system can often be greatly simplified.
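
A toy illustration (not from the paper) of why the indivisibility assumption matters: an innocuous-looking statement such as an increment is really a read-modify-write sequence, so interleaving can lose updates, and a proof that treats the statement as indivisible needs justification of the kind the paper supplies.

```python
# Two threads each increment a shared counter 100,000 times. Because
# "count += 1" is not indivisible, interleavings can lose updates and the
# final value may fall short of 200,000.
import threading

count = 0

def worker():
    global count
    for _ in range(100_000):
        count += 1            # load, add, store: three interleavable steps

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                  # may be less than 200000
```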
Research and Advances

Programming languages, natural languages, and mathematics

Some social aspects of programming are illuminated through analogies with similar aspects of mathematics and natural languages. The split between pure and applied mathematics is found similarly in programming. The development of natural languages toward flexionless, word-order based language types speaks for programming language design based on general, abstract constructs. By analogy with incidents of the history of artificial, auxiliary languages it is suggested that Fortran and Cobol will remain dominant for a long time to come. The most promising avenues for further work of wide influence are seen to be high quality program literature (i.e. programs) of general utility and studies of questions related to program style.
Research and Advances

Backtrack programming techniques

The purpose of this paper is twofold. First, a brief exposition of the general backtrack technique and its history is given. Second, it is shown how the use of macros can considerably shorten the computation time in many cases. In particular, this technique has allowed the solution of two previously open combinatorial problems, the computation of new terms in a well-known series, and the substantial reduction in computation time for the solution to another combinatorial problem.
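
For readers unfamiliar with the technique, here is a generic backtrack sketch (the paper's macro-based accelerations are not shown): extend a partial solution one choice at a time and retreat as soon as no extension can succeed, with n-queens as the classic illustration.

```python
# Generic backtracking: place one queen per row, pruning any column choice
# that attacks an earlier queen, and backtrack when no column remains.
def queens(n, cols=()):
    row = len(cols)
    if row == n:
        yield cols                              # a complete solution
        return
    for c in range(n):
        if all(c != c2 and abs(c - c2) != row - r2
               for r2, c2 in enumerate(cols)):  # prune attacked squares
            yield from queens(n, cols + (c,))   # descend; backtrack on return

print(sum(1 for _ in queens(8)))                # 92 solutions
```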
Research and Advances

A genealogy of control structures

The issue of program control structures has had a history of heated controversy. To put this issue on a solid footing, this paper reviews numerous theoretical results on control structures and explores their practical implications. The classic result of Böhm and Jacopini on the theoretical completeness of if-then-else and while-do is discussed. Several recent ideas on control structures are then explored. These include a review of various other control structures, results on time/space limitations, and theorems relating the relative power of control structures under several notions of equivalence. In conclusion, the impact of theoretical results on the practicing programmer and the importance of one-in, one-out control structures as operational abstractions are discussed. It is argued further that there is insufficient evidence to warrant more than if-then-else, while-do, and their variants.
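
A small example (mine, not the paper's) of the Böhm and Jacopini point in practice: a search loop with a mid-loop exit can always be rewritten using while-do and if-then-else alone, at the cost of an auxiliary flag, giving a one-in, one-out structure.

```python
# A linear search written with a single-entry, single-exit loop: the flag
# replaces a mid-loop break, so control uses only while-do and if-then-else.
def find(items, target):
    i, found = 0, False
    while i < len(items) and not found:
        if items[i] == target:
            found = True           # record the exit condition
        else:
            i += 1
    return i if found else -1

print(find([3, 1, 4, 1, 5], 4))    # -> 2
```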
Research and Advances

Merging with parallel processors

Consider two linearly ordered sets A, B, |A| = m, |B| = n, m ≤ n, and p, p ≤ m, parallel processors working synchronously. The paper presents an algorithm for merging A and B with the p parallel processors, which requires at most 2⌈log2(2m + 1)⌉ + ⌊3m/p⌋ + ⌈m/p⌉⌈log2(n/m)⌉ steps. If n = 2^β·m (β an integer), the algorithm requires at most 2⌈log2(m + 1)⌉ + ⌈m/p⌉(2 + β) steps. In the case where m and n are of the same order of magnitude, i.e. n = km with k being a constant, the algorithm requires 2⌈log2(m + 1)⌉ + ⌈m/p⌉(3 + k) steps. These performances compare very favorably with the previous best parallel merging algorithm, Batcher's algorithm, which requires n/p + ((m + n)/2p) log2 m steps in the general case and km/p + ((k + 1)/2)(m/p) log2 m in the special case where n = km.
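
A schematic sketch of the partitioning idea behind such algorithms (this is not the paper's algorithm, and the "processors" here run one after another): cut A into p blocks, locate each block boundary in B by binary search, and give each processor one independent pair of blocks to merge.

```python
# Partitioned merging: after the binary searches, the p (A-block, B-block)
# pairs cover disjoint value ranges, so they can be merged independently.
import bisect
from heapq import merge                    # sequential merge of one pair

def partitioned_merge(A, B, p):
    cuts_a = [len(A) * i // p for i in range(p + 1)]
    cuts_b = [bisect.bisect_left(B, A[c]) if c < len(A) else len(B)
              for c in cuts_a]
    cuts_b[0] = 0
    out = []
    for i in range(p):                     # each piece is independent work
        out += merge(A[cuts_a[i]:cuts_a[i + 1]],
                     B[cuts_b[i]:cuts_b[i + 1]])
    return out

print(partitioned_merge([1, 4, 9, 16], [2, 3, 5, 8, 10, 20], 2))
```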
Research and Advances

Multidimensional binary search trees used for associative searching

This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n)]. These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.
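
A minimal sketch of the structure (the coding details here are mine, in the paper's spirit): each level of the tree discriminates on one of the k keys in turn, so insertion and exact search walk down the tree comparing a single coordinate per level.

```python
# A 2-d tree: level 0 compares the first key, level 1 the second, and so on
# cyclically, as in the k-d scheme.
class Node:
    def __init__(self, point):
        self.point, self.left, self.right = point, None, None

def insert(root, point, depth=0, k=2):
    if root is None:
        return Node(point)
    d = depth % k                          # the discriminator for this level
    side = 'left' if point[d] < root.point[d] else 'right'
    setattr(root, side, insert(getattr(root, side), point, depth + 1, k))
    return root

def exact_search(root, point, depth=0, k=2):
    if root is None or root.point == point:
        return root
    d = depth % k
    child = root.left if point[d] < root.point[d] else root.right
    return exact_search(child, point, depth + 1, k)

root = None
for pt in [(5, 4), (2, 6), (13, 3), (8, 7), (3, 1)]:
    root = insert(root, pt)
print(exact_search(root, (8, 7)).point)    # -> (8, 7)
```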
Research and Advances

Optimal balancing of I/O requests to disks

The determination of a policy for efficient allocation and utilization of a set of disk drives with differing operational characteristics is examined using analytical techniques. Using standard queueing theory, each disk drive is characterized by a queueing model, with the service time of a disk drive represented by the probability density function of the sum of two uniform distributions. The total response time of the set of disk models is then minimized under varying load conditions. The results indicate that faster devices should have higher utilization factors and that the number of different device types utilized tends to decrease with decreasing load. Specific examples using 2314 and 3330 combinations are examined.
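
A hedged sketch of the modeling style described (the numbers and parameter choices below are illustrative, not the paper's): with Poisson arrivals, a service time that is the sum of two uniform variables gives a mean response time via the Pollaczek-Khinchine formula for an M/G/1 queue, and candidate allocations can then be compared directly.

```python
# M/G/1 mean response time for a disk whose service time is the sum of two
# independent uniforms (say seek plus rotational latency), in milliseconds.
def mg1_response(lam, seek, rot):
    mean = sum(seek) / 2 + sum(rot) / 2        # E[S]
    var = ((seek[1] - seek[0]) ** 2 + (rot[1] - rot[0]) ** 2) / 12
    es2 = var + mean ** 2                      # E[S^2]
    rho = lam * mean                           # utilization, must be < 1
    return mean + lam * es2 / (2 * (1 - rho))  # Pollaczek-Khinchine

# A faster device can absorb a larger share of the request stream.
print(mg1_response(lam=0.010, seek=(0, 60), rot=(0, 25)))  # slower disk
print(mg1_response(lam=0.020, seek=(0, 30), rot=(0, 17)))  # faster disk
```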
Research and Advances

The digital simulation of river plankton population dynamics

This paper deals with the development of a mathematical model for, and the digital simulation in Fortran IV of, phytoplankton and zooplankton population densities in a river, using previously developed rate expressions. In order to study the relationships between the ecological mechanisms involved, the simulation parameters were varied, illustrating the response of the ecosystem to different conditions, including those corresponding to certain types of chemical and thermal pollution. As an investigation of the accuracy of the simulation methods, a simulation of the actual population dynamics of Asterionella in the Columbia River was made, based on approximations of conditions in that river. Although not totally accurate, the simulation was found to predict the general annual pattern of plankton growth fairly well and, specifically, revealed the importance of the annual velocity cycle in determining such patterns. In addition, the study demonstrates the usefulness of digital simulations in the examination of certain aquatic ecosystems, as well as in environmental planning involving such examinations.
Research and Advances

Interactive consulting via natural language

Interactive programming systems often contain help commands to give the programmer on-line instruction regarding the use of the various systems commands. It is argued that it would be relatively easy to make these help commands significantly more helpful by having them accept requests in natural language. As a demonstration, Weizenbaum's ELIZA program has been provided with a script that turns it into a natural language system consultant.
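
A toy keyword-matching consultant in the spirit of that demonstration (the actual system used an ELIZA script; this miniature lookalike and its command names are hypothetical):

```python
# Match a natural language request against keyword patterns and answer with
# canned advice about the corresponding system command.
import re

RULES = [
    (r'\b(delete|remove)\b.*\bfile\b', 'Use the delete command: DELETE <filename>.'),
    (r'\b(list|show)\b.*\bfiles?\b',   'Use the directory command: DIR.'),
    (r'\bhelp\b',                      'Ask about a task, e.g. "how do I delete a file?"'),
]

def consult(request):
    for pattern, advice in RULES:
        if re.search(pattern, request.lower()):
            return advice
    return 'Sorry, could you rephrase that?'

print(consult('How do I delete a file?'))
```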
Research and Advances

The restriction language for computer grammars of natural language

Over the past few years, a number of systems for the computer analysis of natural language sentences have been based on augmented context-free grammars: a context-free grammar which defines a set of parse trees for a sentence, plus a group of restrictions to which a tree must conform in order to be a valid sentence analysis. As the coverage of the grammar is increased, an efficient representation becomes essential for further development. This paper presents a programming language designed specifically for the compact and perspicuous statement of restrictions of a natural language grammar. It is based on ten years' experience parsing text sentences with the comprehensive English grammar of the N.Y.U. Linguistic String Project, and embodies in its syntax and routines the relations which were found to be useful and adequate for computerized natural language analysis. The language is used in the current implementation of the Linguistic String Parser.
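
A toy rendering of the augmented context-free idea (mine, not the Restriction Language itself): the context-free component proposes a parse tree, and a separate restriction then accepts or rejects it, here enforcing subject-verb number agreement.

```python
# A parse tree is accepted only if its restriction holds; the context-free
# grammar alone would admit both trees below.
def agreement_restriction(tree):
    # tree = ('S', ('NP', word, number), ('VP', word, number))
    _, (_, _, np_num), (_, _, vp_num) = tree
    return np_num == vp_num

good = ('S', ('NP', 'dogs', 'plural'),   ('VP', 'bark', 'plural'))
bad  = ('S', ('NP', 'dog',  'singular'), ('VP', 'bark', 'plural'))
print(agreement_restriction(good), agreement_restriction(bad))  # True False
```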
Research and Advances

A large semaphore based operating system

The paper describes the internal structure of a large operating system as a set of cooperating sequential processes. The processes synchronize by means of semaphores and extended semaphores (queue semaphores). The number of parallel processes is carefully justified, and the various semaphore constructions are explained. The system is proved to be free of “deadly embrace” (deadlock). The design principle is an alternative to Dijkstra's hierarchical structuring of operating systems. Project management and performance are also discussed. The operating system is the first large one using the RC 4000 multiprogramming system.
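
A small sketch of the structuring idea, assuming ordinary counting semaphores rather than the paper's queue semaphores: two cooperating sequential processes communicate through a bounded buffer, with one semaphore counting free slots and another counting filled ones.

```python
# Producer and consumer synchronize purely through semaphores; the lock
# guards the buffer itself as a critical region.
import collections
import threading

buffer = collections.deque()
free = threading.Semaphore(4)      # empty buffer slots
full = threading.Semaphore(0)      # filled buffer slots
lock = threading.Lock()

def producer():
    for i in range(8):
        free.acquire()
        with lock:
            buffer.append(i)
        full.release()

def consumer():
    for _ in range(8):
        full.acquire()
        with lock:
            item = buffer.popleft()
        free.release()
        print('consumed', item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```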
Research and Advances

Illumination for computer generated pictures

The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
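
One classical formulation consistent with such a shading rule (the constants and vector setup below are illustrative, not the paper's): intensity combines an ambient term, a Lambertian diffuse term, and a specular term that peaks when the mirror reflection of the light direction lines up with the viewer.

```python
# Point-wise shading: all vectors are normalized, and the specular exponent
# controls how sharp the highlight is.
import numpy as np

def shade(normal, to_light, to_eye, ka=0.1, kd=0.6, ks=0.3, shininess=20):
    n, l, v = (u / np.linalg.norm(u) for u in (normal, to_light, to_eye))
    diffuse = max(n @ l, 0.0)
    r = 2 * (n @ l) * n - l                 # mirror reflection of the light
    specular = max(r @ v, 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

print(shade(np.array([0.0, 0.0, 1.0]),      # surface normal toward the eye
            np.array([0.0, 1.0, 1.0]),      # light above and behind the eye
            np.array([0.0, 0.0, 1.0])))     # viewer straight on
```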
Research and Advances

Improved event-scanning mechanisms for discrete event simulation

Simulation models of large, complex “real-world” applications have occasionally earned the reputation of eating up hours of computer time. This problem may be attributed in part to difficulties such as slow stochastic convergence. However, an additional problem lies in the fact that a significant amount of bookkeeping time is required to keep future events in their proper sequence. This paper presents a method for significantly reducing the time spent scanning future events lists in discrete event simulations. Three models are presented, all of which improve in effectiveness as the events-list scan problem becomes more burdensome.
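
One standard remedy for the scan problem, shown here as a minimal sketch (the paper's three models are not reproduced): keep the future events list in a binary heap, so posting an event and extracting the most imminent one each cost O(log n) rather than a linear scan of the list.

```python
# The future events list kept as a heap ordered by event time.
import heapq

events = []                                # entries are (time, event name)
heapq.heappush(events, (12.5, 'arrival'))
heapq.heappush(events, (3.1, 'departure'))
heapq.heappush(events, (7.9, 'machine failure'))

while events:
    time, name = heapq.heappop(events)     # always the earliest event
    print(f'{time:6.1f}  {name}')
```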
