August 1978 - Vol. 21 No. 8
Features
Can programming be liberated from the von Neumann style? A functional style and its algebra of programs
Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of programming inherited from their common ancestor—the von Neumann computer, their close coupling of semantics to state transitions, their division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs.
An alternative functional style of programming is founded on the use of combining forms for creating programs. Functional programs deal with structured data, are often nonrepetitive and nonrecursive, are hierarchically constructed, do not name their arguments, and do not require the complex machinery of procedure declarations to become generally applicable. Combining forms can use high level programs to build still higher level ones in a style not possible in conventional languages.
Associated with the functional style of programming is an algebra of programs whose variables range over programs and whose operations are combining forms. This algebra can be used to transform programs and to solve equations whose “unknowns” are programs in much the same way one transforms equations in high school algebra. These transformations are given by algebraic laws and are carried out in the same language in which programs are written. Combining forms are chosen not only for their programming power but also for the power of their associated algebraic laws. General theorems of the algebra give the detailed behavior and termination conditions for large classes of programs.
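As a rough illustration of what building programs from combining forms looks like, here is a minimal sketch in Python rather than the paper's FP notation (the helper names compose, apply_to_all, and insert are assumptions of this sketch): the classic inner-product program is assembled entirely from composition, apply-to-all, and insert, and the resulting function never names its arguments.

    from functools import reduce
    from operator import add, mul

    def compose(*fs):
        # the composition combining form: apply the rightmost function first
        return lambda x: reduce(lambda v, f: f(v), reversed(fs), x)

    apply_to_all = lambda f: lambda xs: [f(x) for x in xs]   # "alpha f" in FP
    insert = lambda f: lambda xs: reduce(f, xs)              # "/f" in FP
    trans = lambda pair: list(zip(*pair))                    # transpose

    # inner product = (insert +) o (apply-to-all *) o transpose
    inner_product = compose(insert(add), apply_to_all(lambda p: mul(*p)), trans)

    print(inner_product([[1, 2, 3], [6, 5, 4]]))   # 1*6 + 2*5 + 3*4 = 28

The definition combines existing programs instead of describing word-at-a-time state changes, which is the contrast the abstract draws with conventional languages.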
A new class of computing systems uses the functional programming style both in its programming language and in its state transition rules. Unlike von Neumann languages, these systems have semantics loosely coupled to states—only one state transition occurs per major computation.
Value conflicts and social choice in electronic funds transfer system developments
During the last few years, computer-based systems which automate the transfer and recording of debits and credits have begun to be implemented on a large scale. These systems promise both financial benefits for the institutions that use them and potential conveniences to their customers. However, they also raise significant social, legal, and technical questions that must be resolved if full scale systems for Electronic Funds Transfer (EFT) are not to cause more problems for the larger public than they solve. This paper examines the incentives for EFT developments and the social problems they raise in the context of conflicts between five different value positions that are often implicit in analyses of proposed EFT arrangements. These conflicts reflect the relative importance of certain problems for specific groups. The value positions implicit in EFT proposals help to organize analyses of market arrangements, system reliability, and privacy of transactions. These topics are analyzed in this article and related to the value positions held by concerned parties. Last, the ways in which the public can learn about the social qualities of different EFT arrangements and the pace of EFT developments are both discussed in the context of social choice.
Fast parallel sorting algorithms
A parallel bucket-sort algorithm is presented that requires time O(log n) and the use of n processors. The algorithm makes use of a technique that requires more space than the product of processors and time. A realistic model is used in which no memory contention is permitted. A procedure is also presented to sort n numbers in time O(k log n) using n^(1+1/k) processors, for k an arbitrary integer. The model of computation for this procedure permits simultaneous fetches from the same memory location.
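For orientation only, a minimal sequential sketch of the bucket-sort idea in Python follows; it is not the paper's parallel algorithm, and the bucket count, key range, and use of a thread pool for the independent per-bucket work are assumptions of this sketch. The paper's contribution is carrying out the distribution and collection steps on n processors so that the whole sort finishes in O(log n) time.

    from concurrent.futures import ThreadPoolExecutor

    def bucket_sort(keys, n_buckets=16, key_range=1.0):
        # distribute keys (assumed to lie in [0, key_range)) into buckets
        buckets = [[] for _ in range(n_buckets)]
        for k in keys:
            idx = min(int(k / key_range * n_buckets), n_buckets - 1)
            buckets[idx].append(k)
        # the buckets are independent, so the per-bucket work can run in parallel
        with ThreadPoolExecutor() as pool:
            sorted_buckets = list(pool.map(sorted, buckets))
        return [k for b in sorted_buckets for k in b]

    print(bucket_sort([0.42, 0.05, 0.99, 0.37, 0.61]))
    # [0.05, 0.37, 0.42, 0.61, 0.99]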
A time- and space-efficient garbage compaction algorithm
Given an area of storage containing scattered, marked nodes of differing sizes, one may wish to rearrange them into a compact mass at one end of the area while revising all pointers to marked nodes to show their new locations. An algorithm is described here which accomplishes this task in linear time relative to the size of the storage area, and in a space of the order of one bit for each pointer. The algorithm operates by reversibly encoding the situation (that a collection of locations point to a single location) by a linear list, emanating from the pointed-to location, passing through the pointing locations, and terminating with the pointed-to location's transplanted contents.
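A minimal sketch of the threading trick described above, in Python; the one-word cells, the dictionary heap, and the ('ptr', ...) / ('thread', ...) tags are assumptions of this sketch, not the paper's representation. Every location that points at a node is spliced into a chain headed at that node; once the node's new address is known, one pass down the chain updates every referrer and restores the node's displaced contents.

    def thread(heap, referrer, target):
        # splice `referrer` into the chain of locations that point at `target`;
        # the target's original contents end up at the tail of the chain
        heap[referrer] = heap[target]
        heap[target] = ('thread', referrer)

    def unthread(heap, target, new_address):
        # walk the chain headed at `target`, rewriting each referrer to the
        # node's new address, then put the displaced contents back
        cell = heap[target]
        while isinstance(cell, tuple) and cell[0] == 'thread':
            referrer = cell[1]
            cell, heap[referrer] = heap[referrer], ('ptr', new_address)
        heap[target] = cell

    heap = {0: ('val', 7), 1: ('ptr', 0), 2: ('ptr', 0)}   # cells 1 and 2 point at cell 0
    thread(heap, 1, 0)
    thread(heap, 2, 0)
    unthread(heap, 0, 5)          # suppose the node at 0 will move to address 5
    print(heap)                   # {0: ('val', 7), 1: ('ptr', 5), 2: ('ptr', 5)}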
Communicating sequential processes
This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises.
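A minimal sketch in Python of two communicating sequential processes; threads and a bounded queue stand in for the paper's unbuffered ! / ? communication, and the producer/consumer names and DONE sentinel are assumptions of this sketch rather than the paper's notation.

    import threading
    import queue

    channel = queue.Queue(maxsize=1)   # small capacity approximates a synchronous handoff
    DONE = object()                    # sentinel standing in for the end of communication

    def producer(text):
        for ch in text:
            channel.put(ch)            # roughly "consumer ! ch" in CSP notation
        channel.put(DONE)

    def consumer():
        out = []
        while True:
            ch = channel.get()         # roughly "producer ? ch"
            if ch is DONE:
                break
            out.append(ch)
        print(''.join(out))

    threading.Thread(target=producer, args=("hello",)).start()
    consumer()                         # prints "hello"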
Feedback coupled resource allocation policies in the multiprogramming-multiprocessor computer system
Model studies of some integrated, feedback-driven scheduling systems for multiprogrammed-multiprocessor computer systems are presented. The basic control variables used are the data-flow rates for the processes executing on the CPU. The model systems feature simulated continuous-flow and preempt-resume scheduling of input-output activity. Attention is given to the amount of memory resource required for effective processing of the I/O activity (buffer space assignment). The model studies used both distribution-driven and trace-driven techniques. Even relatively simple dynamic schedulers are shown to improve system performance (as measured by user CPU time) over that given by optimal or near-optimal static schedulers imbedded in identical system structures and workload environments. The improvement is greatest under a heavy I/O demand workload.
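As a loose sketch of what feedback coupling means here, in Python; the proportional allocation rule and the process names are assumptions of this sketch, not the paper's model. Each interval, the scheduler reapportions CPU quanta according to the data-flow rates the processes actually achieved during the previous interval.

    def feedback_schedule(flow_rates, quanta):
        # flow_rates: observed I/O data-flow rate per process over the last interval
        total = sum(flow_rates.values()) or 1
        # feedback step: hand out CPU quanta in proportion to the observed rates
        return {name: round(quanta * rate / total) for name, rate in flow_rates.items()}

    print(feedback_schedule({'p1': 1200, 'p2': 300, 'p3': 500}, quanta=10))
    # {'p1': 6, 'p2': 2, 'p3': 2}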