October 1977 - Vol. 20 No. 10
Features
Optimal surface reconstruction from planar contours
In many scientific and technical endeavors, a three-dimensional solid must be reconstructed from serial sections, either to aid in the comprehension of the object's structure or to facilitate its automatic manipulation and analysis. This paper presents a general solution to the problem of constructing a surface over a set of cross-sectional contours. This surface, to be composed of triangular tiles, is constructed by separately determining an optimal surface between each pair of consecutive contours. Determining such a surface is reduced to the problem of finding certain minimum cost cycles in a directed toroidal graph. A new fast algorithm for finding such cycles is utilized. Also developed is a closed-form expression, in terms of the number of contour points, for an upper bound on the number of operations required to execute the algorithm. An illustrated example which involves the construction of a minimum area surface describing a human head is included.
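The flavor of the construction can be suggested with a small sketch (a simplification, not the paper's algorithm): between two closed contours, each triangle advances along one contour or the other, and dynamic programming picks the cheapest sequence. The paper's toroidal-graph formulation also optimizes over the starting correspondence, which is fixed here at P[0]-Q[0]; spanning-edge length stands in for area, and all names are hypothetical.

```python
import math

def min_cost_tiling(P, Q):
    """Sketch: triangulate between closed contours P and Q (lists of 3-D
    points). Each step advances along one contour, and the cost of a
    triangle is approximated by the length of its new spanning edge."""
    m, n = len(P), len(Q)
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(m + 1)]
    cost[0][0] = 0.0
    back = {}
    for i in range(m + 1):
        for j in range(n + 1):
            if cost[i][j] == INF:
                continue
            if i < m:   # advance along P: new spanning edge P[i+1]-Q[j]
                c = cost[i][j] + math.dist(P[(i + 1) % m], Q[j % n])
                if c < cost[i + 1][j]:
                    cost[i + 1][j] = c
                    back[(i + 1, j)] = (i, j)
            if j < n:   # advance along Q: new spanning edge P[i]-Q[j+1]
                c = cost[i][j] + math.dist(P[i % m], Q[(j + 1) % n])
                if c < cost[i][j + 1]:
                    cost[i][j + 1] = c
                    back[(i, j + 1)] = (i, j)
    # walk back[] from (m, n) to (0, 0) to recover the triangle strip
    tris, node = [], (m, n)
    while node != (0, 0):
        prev = back[node]
        third = P[node[0] % m] if node[0] != prev[0] else Q[node[1] % n]
        tris.append((P[prev[0] % m], Q[prev[1] % n], third))
        node = prev
    return cost[m][n], tris[::-1]
```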
An interactive computer graphics approach to surface representation
An interactive computer graphics method has been developed for the rapid generation of arbitrarily shaped three-dimensional surfaces. The method is a synthesis of spline theory and algorithms, an interactive means for man-machine communication, and software for static or dynamic graphics display. The basic technique employed is a modified lofting method in which sectional curves are represented by uniform B-splines and the surface is interpolated between sections by Cardinal splines. Among the features of this method are algorithms which enable interactive modification of the B-spline representation of the sectional curves. At all stages of the process, the spatial information is graphically displayed to the user. Complex surfaces can be created by the combination of a number of shapes that have been separately generated and automatically joined. The system has been successfully interfaced to a variety of analytical routines for structural, medical and graphical applications.
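A minimal numerical sketch of the lofting scheme, assuming NumPy: section curves use the standard uniform cubic B-spline basis, and the blend across sections uses the Cardinal (Hermite) basis with clamped end tangents. Function names and the example data are assumptions, not the paper's system.

```python
import numpy as np

def bspline_point(ctrl, t):
    """Point on a closed uniform cubic B-spline section curve, t in [0, n)."""
    n = len(ctrl)
    i, u = int(t) % n, t - int(t)
    basis = np.array([(1 - u) ** 3,
                      3 * u ** 3 - 6 * u ** 2 + 4,
                      -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                      u ** 3]) / 6.0
    pts = np.array([ctrl[(i - 1) % n], ctrl[i],
                    ctrl[(i + 1) % n], ctrl[(i + 2) % n]])
    return basis @ pts

def loft(S, k, s, tension=0.0):
    """Cardinal-spline blend between section points S[k] and S[k+1], s in [0,1]."""
    mk = (1 - tension) * 0.5 * (S[min(k + 1, len(S) - 1)] - S[max(k - 1, 0)])
    mk1 = (1 - tension) * 0.5 * (S[min(k + 2, len(S) - 1)] - S[k])
    h = np.array([2 * s ** 3 - 3 * s ** 2 + 1, s ** 3 - 2 * s ** 2 + s,
                  -2 * s ** 3 + 3 * s ** 2, s ** 3 - s ** 2])
    return h[0] * S[k] + h[1] * mk + h[2] * S[k + 1] + h[3] * mk1

# a surface point: evaluate the same parameter t on each section curve,
# then blend the resulting points across sections with the Cardinal basis
rings = [np.array([(0, 0, z), (2, 0, z), (2, 2, z), (0, 2, z)], float)
         for z in range(4)]                      # four square control polygons
pts = np.array([bspline_point(c, 1.5) for c in rings])
print(loft(pts, 1, 0.5))                         # point between sections 1 and 2
```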
High-level data flow analysis
In contrast to the predominant use of low-level intermediate text, high-level data flow analysis deals with programs essentially at source level and exploits the control flow information implicit in the parse tree. The need for high-level flow analysis arises from several aspects of recent work on advanced methods of program certification and optimization. This paper proposes a simple general method of high-level data flow analysis that allows free use of escape and jump statements, avoids large graphs when compiling large programs, facilitates updating of data flow information to reflect program changes, and derives new global information helpful in solving many familiar global flow analysis problems. An illustrative application to live variable analysis is presented. Many of the graphs involved are constructed and analyzed before any programs are compiled, thus avoiding certain costs that low-level methods incur repeatedly at compile time.
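The syntax-directed flavor of such analysis can be suggested by a small live-variable computation over an abstract syntax tree (a sketch only, with a hypothetical node encoding; the paper's method also handles escape and jump statements and precomputes its graphs before compile time).

```python
def live(node, live_out):
    """Backward live-variable analysis over a toy statement AST.
    Node forms (hypothetical): ('assign', var, uses), ('seq', a, b),
    ('if', cond_uses, then, else_), ('while', cond_uses, body)."""
    kind = node[0]
    if kind == 'assign':
        _, var, uses = node
        return (live_out - {var}) | set(uses)
    if kind == 'seq':
        _, a, b = node
        return live(a, live(b, live_out))
    if kind == 'if':
        _, cond, then, els = node
        return set(cond) | live(then, live_out) | live(els, live_out)
    if kind == 'while':
        _, cond, body = node
        inside = live_out | set(cond)
        prev = None
        while prev != inside:          # fixed point: liveness flows around the loop
            prev = inside
            inside = live_out | set(cond) | live(body, inside)
        return inside
    raise ValueError(kind)

prog = ('seq', ('assign', 'x', ['a']),
               ('while', ['x'], ('assign', 'x', ['x', 'b'])))
print(live(prog, set()))   # {'a', 'b'}: both feed the loop through x
```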
Two-level control structure for nondeterministic programming
The basic ideas of nondeterministic programming are critically reconsidered to single out a proper attitude and programming style for languages allowing direct control of nondeterministic features. The proposed attitude aims at retaining the purity of the nondeterministic formulation of search processes on one level (the attempt level), deferring the coordination of problem solving efforts to another (the choice level). The feasibility of recognizing these two levels is discussed, stressing that the structure to be managed at the choice level is a tree of contexts. The leaves are computational environments, each holding an alternative under inspection, while the other nodes are associated with choice points. According to the proposed programming style, a generative function is associated with each choice point, which expresses the desired choice strategy. The main advantage of this approach is the localization of the search strategies: Each nonterminal node of the tree keeps track of the state of the computation as it was when the choice point was last interrogated, holding at the same time the strategy to coordinate the available alternatives. Examples are given in terms of ND-Lisp, an extension of Lisp designed and implemented according to these guidelines.
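ND-Lisp itself is not widely available, but the two-level separation can be mimicked with Python generators: the attempt level below is a pure nondeterministic search, while the choice-level strategy is a separate generative function passed in. All names and the 4-queens example are illustrative assumptions.

```python
def search(state, expand, goal, choose):
    """Attempt level: a pure nondeterministic formulation. The 'choose'
    strategy (the choice level) decides the order in which alternatives
    at each choice point are explored."""
    if goal(state):
        yield state
        return
    for alt in choose(state, expand(state)):
        yield from search(alt, expand, goal, choose)

# example: 4-queens, one queen per column, board = tuple of row indices
N = 4
def expand(board):
    col = len(board)
    return [board + (r,) for r in range(N)
            if all(r != br and abs(r - br) != col - bc
                   for bc, br in enumerate(board))]

goal = lambda board: len(board) == N
chronological = lambda state, alts: alts           # plain backtracking order
print(next(search((), expand, goal, chronological)))   # (1, 3, 0, 2)
```

Swapping in a different generative function, say one that orders alternatives by a heuristic, changes the search strategy at that choice point without touching the nondeterministic formulation itself.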
Regular right part grammars and their parsers
This paper introduces an alternative to context-free grammars called regular right part (RRP) grammars, which resemble PASCAL syntax diagrams. Formally, RRP grammars have production right parts which are nondeterministic finite state machines (FSMs) and, as a special case, regular expressions, since these can be converted to FSMs. RRP grammars describe the syntax of programming languages more concisely and more understandably than is possible with CF grammars. Also introduced is a class of parsers, RRP LR(m, k) parsers, which includes the CF LR(k) parsers and provides the same advantages. Informally, an RRP LR(m, k) parser can determine the right end of each handle by considering at most k symbols to the right of the handle and the left end, after the right end has been found, by considering at most m symbols to the left of the handle. A mechanism for determining the left end is required because there is no bound on the length of the handle.
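To make the "production right parts are FSMs" idea concrete, here is a toy sketch (a hypothetical encoding, not the paper's parser construction): a right part such as expr (',' expr)* becomes a small state machine, and a candidate handle is checked by running it through the machine. Because the machine loops, the handle's length is unbounded, which is exactly why the left end must be found separately.

```python
# A production right part as an FSM:  list -> expr ( ',' expr )*
FSM = {
    0: {'expr': 1},
    1: {',': 2},        # state 1 is accepting: a complete right part
    2: {'expr': 1},
}
ACCEPT = {1}

def matches(fsm, accept, symbols):
    """Check whether a sequence of grammar symbols is generated by the
    right-part FSM, starting from state 0."""
    state = 0
    for s in symbols:
        state = fsm.get(state, {}).get(s)
        if state is None:
            return False
    return state in accept

print(matches(FSM, ACCEPT, ['expr', ',', 'expr', ',', 'expr']))  # True
print(matches(FSM, ACCEPT, ['expr', ',']))                        # False
```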
Game interpretation of the deadlock avoidance problem
The deadlock avoidance problem may be defined informally as the determination, from some a priori information about the processes, resources, operating system, etc., of the “safe situations” which may be realized without endangering the smooth running of the system. When each process specifies its future needs by a flowchart of need-defined steps, a global approach to the phenomenon and its interpretation as a game between the operating system and the processes allows formalization of risk and safety concepts. The bipartite graph representation of this game may then be used to construct explicitly the set of safe states and to study their properties.
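The "safe state" notion can be illustrated with a standard banker's-style safety test, a simple special case of the idea rather than the paper's game-graph construction: a state is safe if some completion order lets every process run to the end of its flowchart. Names and the example are hypothetical.

```python
def is_safe(available, holds, needs):
    """A state is safe if some order lets every process finish: repeatedly
    pick a process whose remaining needs fit in what is free, let it run
    to completion, and reclaim the resources it holds."""
    free = list(available)
    done = [False] * len(holds)
    progressed = True
    while progressed:
        progressed = False
        for p in range(len(holds)):
            if not done[p] and all(n <= f for n, f in zip(needs[p], free)):
                free = [f + h for f, h in zip(free, holds[p])]
                done[p] = True
                progressed = True
    return all(done)

# two resource types; process 1 can finish first, releasing enough for 0
print(is_safe([1, 0], holds=[[1, 1], [0, 1]], needs=[[1, 1], [1, 0]]))  # True
```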
The programmer’s workbench—a machine for software development
On almost all software development projects the assumption is made that the program development function will be done on the same machine on which the eventual system will run. It is only when this production machine is unavailable or when its programming environment is totally inadequate that alternatives are considered. In this paper it is suggested that there are many other situations where it would be advantageous to separate the program development and maintenance function onto a specialized computer which is dedicated to that purpose. Such a computer is here called a Programmer's Workbench. The four basic sections of the paper introduce the subject, outline the general concept, discuss areas where such an approach may prove beneficial, and describe an operational system utilizing this concept.
Multiprocessor memory organization and memory interference
The structure of shared memory in a multiprocessor computer system is examined with particular attention to noninterleaved memory. Alternative memory organizations are compared and it is shown that a home memory organization, in which each processor is associated with one or more memories in which its address space is concentrated, is quite effective in reducing memory interference. Home memory organization is shown to be particularly suited to certain specialized computational problems as well as to possess advantages in terms of interference and reliability for general purpose computation. Results for interleaved memory are drawn from previous work and are used for comparison. Trace-driven simulations are used to verify the conclusions of the analysis.
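A toy synthetic-workload simulation (not the paper's trace-driven one) suggests why concentrating each processor's address space in a home module reduces interference; the workload model and all names are assumptions.

```python
import random

def interference(n_proc, n_mem, home_prob, cycles=10000, seed=1):
    """Each cycle every processor addresses one module; a module serves one
    request per cycle and the rest stall. home_prob is the chance a
    processor references its own home module rather than a random one."""
    rng = random.Random(seed)
    stalled = 0
    for _ in range(cycles):
        counts = [0] * n_mem
        for p in range(n_proc):
            m = p % n_mem if rng.random() < home_prob else rng.randrange(n_mem)
            counts[m] += 1
        stalled += sum(c - 1 for c in counts if c > 1)
    return stalled / (cycles * n_proc)     # fraction of requests that stall

print(interference(8, 8, 0.0))   # uniformly shared addressing: heavy interference
print(interference(8, 8, 0.9))   # home memory organization: far fewer stalls
```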
A fast string searching algorithm
An algorithm is presented that searches for the location, “i,” of the first occurrence of a character string, “pat,” in another string, “string.” During the search operation, the characters of pat are matched starting with the last character of pat. The information gained by starting the match at the end of the pattern often allows the algorithm to proceed in large jumps through the text being searched. Thus the algorithm has the unusual property that, in most cases, not all of the first i characters of string are inspected. The number of characters actually inspected (on the average) decreases as a function of the length of pat. For a random English pattern of length 5, the algorithm will typically inspect i/4 characters of string before finding a match at i. Furthermore, the algorithm has been implemented so that (on the average) fewer than i + patlen machine instructions are executed. These conclusions are supported with empirical evidence and a theoretical analysis of the average behavior of the algorithm. The worst case behavior of the algorithm is linear in i + patlen, assuming the availability of array space for tables linear in patlen plus the size of the alphabet.
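A sketch of the core idea using only the first of the paper's two tables (the occurrence heuristic, delta1); the paper additionally uses a match heuristic, delta2, which this simplification omits, so the worst-case bound above does not apply to it.

```python
def bm_search(pat, text):
    """Right-to-left matching with the occurrence (bad character) table only."""
    m, n = len(pat), len(text)
    if m == 0:
        return 0
    last = {c: i for i, c in enumerate(pat)}       # rightmost position of each char
    pos = 0                                        # alignment of pat within text
    while pos <= n - m:
        j = m - 1
        while j >= 0 and pat[j] == text[pos + j]:  # match from the last character
            j -= 1
        if j < 0:
            return pos                             # first occurrence found
        # slide pat so its rightmost copy of the mismatched character lines
        # up under text[pos + j]; always advance by at least one position
        pos += max(1, j - last.get(text[pos + j], -1))
    return -1

print(bm_search("pat", "search in this string for pat"))  # -> 26
```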