February 1976 - Vol. 19 No. 2
Features
Semantic evaluation from left to right
This paper describes attribute grammars and their use for the definition of programming languages and compilers; a formal definition of attribute grammars and a discussion of some of their important aspects are included. The paper concentrates on the evaluation of semantic attributes in a few passes from left to right over the derivation tree of a program. A condition on attribute grammars is given which ensures that the semantics of any program can be evaluated in a single pass over the derivation tree, and an algorithm is discussed which decides how many passes from left to right are in general necessary for a given attribute grammar. These notions are explained in terms of an example grammar which describes the scope rules of Algol 60. Practical questions, such as the relative efficiency of different evaluation schemes and the ease of adapting the attribute grammar of a given programming language to the left-to-right evaluation scheme, are discussed.
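The single left-to-right pass discussed above can be illustrated with a minimal Python sketch of a toy L-attributed derivation tree, in which an inherited environment of declared names flows left to right and a synthesized flag records whether every use is in scope (the grammar, node shapes, and names are illustrative assumptions, not the paper's Algol 60 example):

    # One left-to-right pass over a derivation tree of a toy L-attributed grammar.
    # Inherited attribute: env, the set of names declared so far (flows left to right).
    # Synthesized attribute: the returned flag, whether every used name was declared.

    def evaluate(node, env):
        """Return the synthesized attribute of node, threading env left to right."""
        kind = node[0]
        if kind == "decl":                  # ("decl", name)
            env.add(node[1])                # extend the inherited environment
            return True
        if kind == "use":                   # ("use", name)
            return node[1] in env           # scope check against names seen so far
        if kind == "seq":                   # ("seq", child1, child2, ...)
            ok = True
            for child in node[1:]:          # children visited strictly left to right
                ok = evaluate(child, env) and ok
            return ok
        raise ValueError("unknown node kind: " + kind)

    tree = ("seq", ("decl", "x"), ("use", "x"), ("use", "y"))
    print(evaluate(tree, set()))            # False: 'y' is used but never declared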
On self-organizing sequential search heuristics
This paper examines a class of heuristics for maintaining a sequential list in approximately optimal order with respect to the average time required to search for a specified element, assuming that each element is searched for with a fixed probability independent of previous searches performed. The “move to front” and “transposition” heuristics are shown to be optimal to within a constant factor, and the transposition rule is shown to be the more efficient of the two. Empirical evidence suggests that transposition is in fact optimal for any distribution of search probabilities.
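The two heuristics named above can be sketched in a few lines of Python; the fragment below illustrates the rules themselves and a rough cost comparison under an assumed skewed access distribution, and is not a reproduction of the paper's analysis:

    import random

    def search(lst, key, rule):
        """Sequential search; reorganize the list according to rule on a hit."""
        i = lst.index(key)                       # cost of the search is i + 1 probes
        if rule == "move_to_front" and i > 0:
            lst.insert(0, lst.pop(i))            # move the found element to the front
        elif rule == "transpose" and i > 0:
            lst[i - 1], lst[i] = lst[i], lst[i - 1]  # swap with its predecessor
        return i + 1

    # Compare average search cost under an assumed skewed access distribution.
    keys = list(range(10))
    weights = [2 ** -k for k in range(10)]       # element k requested with prob ~ 2^-k
    for rule in ("move_to_front", "transpose"):
        lst = list(keys)
        requests = random.choices(keys, weights=weights, k=20000)
        cost = sum(search(lst, k, rule) for k in requests)
        print(rule, cost / len(requests))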
Permutation enumeration: four new permutation algorithms
Classical permutation enumeration algorithms encounter special cases requiring additional computation every nth permutation when generating the n! permutations on n marks. The four new algorithms have the attribute that special cases occur only every n(n-1) permutations. Two of the algorithms produce the next permutation with a single exchange of two marks. The other two algorithms infrequently exchange more than two marks, but the rules for generating the next permutation are very simple. Performance tests which counted executions of assignment statements, comparisons, arithmetic operations, and subscripted array references have shown the superiority of the new algorithms over Boothroyd's implementation of Wells's algorithm and Ehrlich's implementation of the Johnson-Trotter algorithm.
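For reference, the Johnson-Trotter scheme mentioned above (one of the comparison algorithms, not one of the paper's four new ones) can be sketched as follows in Python; each permutation after the first is produced by a single exchange of two adjacent marks:

    def johnson_trotter(n):
        """Yield the n! permutations of 1..n, each differing from the last by one adjacent swap."""
        perm = list(range(1, n + 1))
        direction = [-1] * n                    # -1: points left, +1: points right
        yield tuple(perm)
        while True:
            # Find the largest "mobile" mark: one pointing at a smaller neighbour.
            mobile = -1
            for i in range(n):
                j = i + direction[i]
                if 0 <= j < n and perm[i] > perm[j]:
                    if mobile == -1 or perm[i] > perm[mobile]:
                        mobile = i
            if mobile == -1:
                return                          # no mobile mark: enumeration complete
            # Exchange the mobile mark with the neighbour it points at.
            j = mobile + direction[mobile]
            perm[mobile], perm[j] = perm[j], perm[mobile]
            direction[mobile], direction[j] = direction[j], direction[mobile]
            # Reverse the direction of every mark larger than the one just moved.
            moved = perm[j]
            for i in range(n):
                if perm[i] > moved:
                    direction[i] = -direction[i]
            yield tuple(perm)

    print(list(johnson_trotter(3)))             # 6 permutations, adjacent swaps only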
An application of heuristic search methods to edge and contour detection
This paper presents a method for detecting edges and contours in noisy pictures. The properties of an edge are embedded in a figure of merit, and the edge detection problem becomes the problem of minimizing that figure of merit. This minimization can be represented as a shortest-path problem on a graph and solved using well-known graph search algorithms. The relations between this representation of the minimization problem and a dynamic programming approach are discussed, showing that the graph search method can lead to substantial improvements in computing time. Moreover, if heuristic search methods are used, the computing time depends on the amount of noise in the picture. Some experimental results are given; these show how various kinds of information about the shape of an object's contour can be embedded in the figure of merit, thus allowing the extraction of contours from noisy pictures and the separation of touching objects.
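The reduction to a shortest-path search can be illustrated with a small Python sketch that runs Dijkstra's algorithm over a toy cost grid; the cost values stand in for a figure of merit that is low where the edge response is strong (the grid, the allowed moves, and the costs are illustrative assumptions, not the paper's figure of merit):

    import heapq

    def edge_path(cost, start, goal):
        """Dijkstra search over a 2-D cost grid from start to goal, both (row, col)."""
        rows, cols = len(cost), len(cost[0])
        best = {start: 0}
        parent = {}
        heap = [(0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                path = [(r, c)]
                while path[-1] != start:
                    path.append(parent[path[-1]])
                return d, path[::-1]
            if d > best[(r, c)]:
                continue
            for dr, dc in ((-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0)):  # rightward, up, down
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = nd
                        parent[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        return float("inf"), []

    # Toy 4x5 cost grid: low values mark a strong (likely edge) response.
    grid = [[9, 9, 9, 9, 9],
            [1, 1, 9, 9, 9],
            [9, 9, 1, 1, 1],
            [9, 9, 9, 9, 9]]
    print(edge_path(grid, (1, 0), (2, 4)))      # cheapest path follows the low-cost ridge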
A stochastic evaluation model for database organizations in data retrieval systems
Experimental work on the evaluation of large-scale data retrieval systems has been scarce because of its difficulty and prohibitive cost. This paper discusses a simulation model of a data retrieval system which significantly reduces the cost of experimentation and enables research not previously attempted. The model is designed to estimate the retrieval workload of alternative data retrieval systems. These systems can be organized under several database organizations, including inverted list, threaded list, and cellular list organizations and hybrid combinations of these. The effectiveness of the methodology is demonstrated by using the model to study the effect of database organization in data retrieval systems. In particular, the impact of query complexity is analyzed.
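As a small illustration of one of the organizations named above, the following Python sketch answers a conjunctive query against an inverted-list organization and counts the list entries touched as a crude retrieval-workload estimate (the records and the query are assumed toy data, not the paper's model):

    # Toy record file: record id -> attribute/value pairs (assumed data).
    records = {
        1: {"color": "red", "size": "small"},
        2: {"color": "red", "size": "large"},
        3: {"color": "blue", "size": "large"},
    }

    # Inverted list organization: one posting list per (attribute, value) pair.
    inverted = {}
    for rid, attrs in records.items():
        for pair in attrs.items():
            inverted.setdefault(pair, []).append(rid)

    def conjunctive_query(*terms):
        """Intersect the posting lists of all query terms; return (hits, work)."""
        lists = [inverted.get(t, []) for t in terms]
        work = sum(len(l) for l in lists)        # list entries read as a workload proxy
        hits = set(lists[0]).intersection(*lists[1:]) if lists else set()
        return sorted(hits), work

    print(conjunctive_query(("color", "red"), ("size", "large")))   # ([2], 4)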
A counterintuitive example of computer paging
A counterexample is exhibited to a natural conjecture concerning the optimal way to group records into pages in the independent reference model of computer paging (an organization is said to be optimal if the “least recently used” miss ratio is minimized).
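The setting of the conjecture can be illustrated with a short Monte Carlo sketch in Python: records referenced independently with fixed probabilities are grouped into pages, and the LRU miss ratio of a given grouping is estimated by simulation (the probabilities, groupings, and memory size are assumed for illustration and do not reproduce the paper's counterexample):

    import random
    from collections import OrderedDict

    def lru_miss_ratio(grouping, probs, capacity, trials=100000, seed=0):
        """Estimate the LRU miss ratio of a record-to-page grouping under independent references."""
        page_of = {rec: p for p, page in enumerate(grouping) for rec in page}
        rng = random.Random(seed)
        recs = list(probs)
        weights = list(probs.values())
        cache, misses = OrderedDict(), 0
        for _ in range(trials):
            page = page_of[rng.choices(recs, weights)[0]]   # independent reference model
            if page in cache:
                cache.move_to_end(page)          # hit: mark page most recently used
            else:
                misses += 1
                cache[page] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)    # evict the least recently used page
        return misses / trials

    probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}    # assumed reference probabilities
    print(lru_miss_ratio([("a", "b"), ("c", "d")], probs, capacity=1))
    print(lru_miss_ratio([("a", "c"), ("b", "d")], probs, capacity=1))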
A fast division technique for constant divisors
A fast algorithm for division by constant divisors is presented. The method has proved very useful when implemented as microcode on a binary machine, and it can also be adapted directly to hardware. The mathematical foundations of the algorithm are presented, as well as some performance measures.
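The general idea of trading a division by a known constant for a multiplication and a shift can be illustrated with the common multiply-by-scaled-reciprocal scheme below, shown for unsigned 32-bit division by 3; this is an illustration of the idea, not necessarily the algorithm presented in the paper:

    def div3_u32(n):
        """Floor of n / 3 for any 32-bit unsigned n, using one multiply and one shift."""
        assert 0 <= n < 2 ** 32
        M = 0xAAAAAAAB                       # ceil(2**33 / 3), the scaled reciprocal of 3
        return (n * M) >> 33                 # high part of the 64-bit product

    # Spot check against ordinary integer division.
    for n in (0, 1, 2, 3, 7, 100, 2 ** 32 - 1):
        assert div3_u32(n) == n // 3
    print("ok")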