Systems design education: a gaming approach
November 1978 - Vol. 21 No. 11
Features
One of the problems facing managers of computer installations is that of configuring the computer system to meet the demands made by the mix of jobs that the computer center must service. This paper presents a management game that allows the player to configure a computer system to meet a hypothetical job mix. The job mix is under the control of a game administrator and can be varied to simulate a variety of real-world situations (I/O-bound jobs, compute-bound jobs, etc.). The player of the game receives a set of detailed reports on the cost of his choices and a simulated run of the center operating under his choices.
A simply extended and modified batch environment graphical system (SEMBEGS)
SEMBEGS is a complete batch environment graphical system containing components for handling graphical data files, for displaying the contents of these files on a variety of graphical hardware, and for performing graphical batch input operations. SEMBEGS is easy to extend and modify to meet the growing needs of a large batch environment, and is even extendable to a fully interactive system. The paper presents the conceptual view of graphics leading to the design of SEMBEGS and outlines the major components of the system. The design of SEMBEGS is founded upon the basic assumption that the true aim of computer graphics is to describe graphical entities, rather than, as commonly held, to provide graphical input and output functional capabilities. SEMBEGS is built around a Basic Graphical Data Management System (BAGDAMS) which provides a common means of communicating the descriptions of graphical entities between the various components of SEMBEGS. BAGDAMS provides facilities for storing, retrieving, and manipulating the descriptions of graphical entities provided by, and received by application programs, graphics packages, and graphical devices.
Performance evaluation of highly concurrent computers by deterministic simulation
Simulation is presented as a practical technique for performance evaluation of alternative configurations of highly concurrent computers. A technique is described for constructing a detailed deterministic simulation model of a system. In the model a control stream replaces the instruction and data streams of the real system. Simulation of the system model yields the timing and resource usage statistics needed for performance evaluation, without the necessity of emulating the system. As a case study, the implementation of a simulator of a model of the CPU-memory subsystem of the IBM 360/91 is described. The results of evaluating some alternative system designs are discussed. The experiments reveal that, for the case study, the major bottlenecks in the system are the memory unit and the fixed point unit. Further, it appears that many of the sophisticated pipelining and buffering techniques implemented in the architecture of the IBM 360/91 are of little value when high-speed (cache) memory is used, as in the IBM 360/195.
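The control-stream idea can be illustrated with a minimal sketch (the record format and unit names are hypothetical, not the paper's simulator): each record names a functional unit, the issue gap since the previous operation, and a service time; the simulator accumulates the timing and busy-time statistics without emulating any instructions.

```python
def simulate(control_stream, units):
    """Deterministic simulation sketch: a control stream of
    (unit, issue_gap, service_time) records replaces the real
    instruction and data streams.  Returns per-unit busy time
    and total elapsed time."""
    clock = 0.0
    free_at = {u: 0.0 for u in units}   # when each unit next becomes free
    busy = {u: 0.0 for u in units}      # accumulated service time per unit
    for unit, issue_gap, service in control_stream:
        clock += issue_gap                    # time until the op is issued
        start = max(clock, free_at[unit])     # queue if the unit is occupied
        free_at[unit] = start + service
        busy[unit] += service
    total = max(free_at.values())
    return busy, total
```

Dividing each unit's busy time by the total elapsed time gives the utilization figures from which bottlenecks (here, a hypothetical "mem" unit) can be read off.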
Using synthetic images to register real images with surface models
A number of image analysis tasks can benefit from registration of the image with a model of the surface being imaged. Automatic navigation using visible light or radar images requires exact alignment of such images with digital terrain models. In addition, automatic classification of terrain, using satellite imagery, requires such alignment to deal correctly with the effects of varying sun angle and surface slope. Even inspection techniques for certain industrial parts may be improved by this means.
We achieve the required alignment by matching the real image with a synthetic image obtained from a surface model and known positions of the light sources. The synthetic image intensity is calculated using the reflectance map, a convenient way of describing surface reflection as a function of surface gradient. We illustrate the technique using LANDSAT images and digital terrain models.
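The shading step can be sketched as follows, assuming a Lambertian surface and a distant light source at gradient-space direction (ps, qs); function names are illustrative. The terrain model supplies surface gradients by finite differences, and the reflectance map converts each gradient to an image intensity.

```python
import math

def lambertian_reflectance(p, q, ps, qs):
    """Reflectance map R(p, q) for a Lambertian surface: the cosine of the
    angle between the surface normal (-p, -q, 1) and the source direction."""
    num = 1.0 + p * ps + q * qs
    den = math.sqrt(1.0 + p * p + q * q) * math.sqrt(1.0 + ps * ps + qs * qs)
    return max(0.0, num / den)  # clamp self-shadowed facets to zero

def synthetic_image(height, ps, qs):
    """Shade a terrain model (2-D list of elevations, unit grid spacing)."""
    rows, cols = len(height), len(height[0])
    image = [[0.0] * (cols - 1) for _ in range(rows - 1)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            p = height[i][j + 1] - height[i][j]   # slope dz/dx
            q = height[i + 1][j] - height[i][j]   # slope dz/dy
            image[i][j] = lambertian_reflectance(p, q, ps, qs)
    return image
```

The resulting synthetic image would then be matched against the real image to recover the registration parameters.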
Computer generation of gamma random variables—II
A rejection method is proposed for generating gamma variates with nonintegral shape parameter α, α > 1. This method is similar to other methods given by Fishman, Wallace, and Tadikamalla and is faster than these methods for α > 2. The core storage requirements and the programming effort for the proposed method are similar to those of Wallace's or Tadikamalla's methods. The computational times for the proposed method remain fairly constant for medium and large values of α and are superior to times obtained by Ahrens and Dieter's method for all values of α. The proposed method is simpler than Ahrens and Dieter's method.
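For flavor, rejection sampling of Gamma(α, 1) variates with α > 1 can be sketched with a later, well-known scheme (the Marsaglia-Tsang squeeze-rejection method); this is an illustration of the rejection approach, not the paper's algorithm.

```python
import math
import random

def gamma_variate(alpha):
    """Gamma(alpha, 1) sample for alpha > 1 by squeeze rejection
    (Marsaglia-Tsang): squash a normal variate through a cubic and
    accept or reject against the gamma density."""
    d = alpha - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = random.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue                       # outside the envelope's support
        u = random.random()
        if u < 1.0 - 0.0331 * x ** 4:      # cheap squeeze test
            return d * v
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v                   # full acceptance test
```

Like the methods the paper compares, the cost per variate is essentially constant in α, since the acceptance rate stays high for all α > 1.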
A simple recovery-only procedure for simple precedence parsers
A simple method is described enabling simple precedence parsers to recover from syntax errors. No attempt to repair errors is made, yet parsing and most semantic processing can continue. The result is a good “first approximation” to syntax error handling with negligible increase in parsing time, space, and complexity of both the parser and its table generator.
A note on conditional expressions
Evaluation of a conditional expression may succeed even when the “deciding predicate” diverges and the alternatives are records (or nodes) whose fields have different content.
Distributed processes: a concurrent programming concept
A language concept for concurrent processes without common variables is introduced. These processes communicate and synchronize by means of procedure calls and guarded regions. This concept is proposed for real-time applications controlled by microcomputer networks with distributed storage. The paper gives several examples of distributed processes and shows that they include procedures, coroutines, classes, monitors, processes, semaphores, buffers, path expressions, and input/output as special cases.
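The guarded-region idea can be loosely approximated in a conventional thread library, a sketch rather than the paper's notation: a region "when B: S" becomes a condition-variable wait until B holds. Here a bounded-buffer process exports send and receive procedures for other processes to call (all names are illustrative).

```python
import threading

class Buffer:
    """A process owning a private variable (items) with no sharing;
    other processes interact only through its procedures."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity
        self.cond = threading.Condition()

    def send(self, item):
        with self.cond:                    # when len(items) < capacity: ...
            while len(self.items) >= self.capacity:
                self.cond.wait()
            self.items.append(item)
            self.cond.notify_all()

    def receive(self):
        with self.cond:                    # when items is nonempty: ...
            while not self.items:
                self.cond.wait()
            item = self.items.pop(0)
            self.cond.notify_all()
            return item
```

The buffer subsumes the roles of monitor, semaphore, and message queue listed in the abstract: callers block until the guard holds, with no shared variables outside the process.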
The new class of Pk trees is presented, where height balance is maintained for the nodes lying on particular paths. The number of nodes of a Pk tree asymptotically grows as a power of the height, in the worst case. A procedure for node insertion is given, and the class of trees considered is restricted to IPk trees, which are buildable by such a procedure. The average behavior of such trees, studied by an extensive set of simulation runs, is close to that of AVL trees. In particular, the family of IP0 trees, whose main advantage is the reduced number of restructurings required after node insertion, is analyzed.
Median split trees: a fast lookup technique for frequently occurring keys
Split trees are a new technique for searching sets of keys with highly skewed frequency distributions. A split tree is a binary search tree each node of which contains two key values: a node value, which is a maximally frequent key in that subtree, and a split value, which partitions the remaining keys (with respect to their lexical ordering) between the left and right subtrees. A median split tree (MST) uses the lexical median of a node's descendants as its split value to force the search tree to be perfectly balanced, achieving both a space-efficient representation of the tree and high search speed. Unlike frequency-ordered binary search trees, the cost of a successful search of an MST is log n bounded and very stable around minimal values. Further, an MST can be built for a given key ordering and set of frequencies in time n log n, as opposed to n² for an optimum binary search tree. A discussion of the application of MSTs to dictionary lookup for English is presented, and the performance obtained is contrasted with that of other techniques.
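The construction and search just described can be sketched as follows, assuming distinct keys given as (key, frequency) pairs; the names are illustrative, not the paper's code.

```python
class MSTNode:
    """One split-tree node: a maximally frequent key plus a split value
    that routes the search left or right."""
    def __init__(self, node_value, split_value, left=None, right=None):
        self.node_value = node_value
        self.split_value = split_value
        self.left = left
        self.right = right

def build_mst(items):
    """Build a median split tree from (key, frequency) pairs."""
    if not items:
        return None
    node_key = max(items, key=lambda kv: kv[1])[0]   # most frequent key here
    rest = sorted(k for k, f in items if k != node_key)
    if not rest:
        return MSTNode(node_key, None)
    split = rest[len(rest) // 2]                     # lexical median
    left = [(k, f) for k, f in items if k != node_key and k < split]
    right = [(k, f) for k, f in items if k != node_key and k >= split]
    return MSTNode(node_key, split, build_mst(left), build_mst(right))

def search(root, key):
    node = root
    while node is not None:
        if key == node.node_value:
            return True
        if node.split_value is None:
            return False
        node = node.left if key < node.split_value else node.right
    return False
```

Because the split value is the median of the remaining keys, every path has logarithmic length, while the hot keys still sit near the root.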
Synthesizing constraint expressions
A constraint network representation is presented for a combinatorial search problem: finding values for a set of variables subject to a set of constraints. A theory of consistency levels in such networks is formulated, which is related to problems of backtrack tree search efficiency. An algorithm is developed that can achieve any level of consistency desired, in order to preprocess the problem for subsequent backtrack search, or to function as an alternative to backtrack search by explicitly determining all solutions.
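The lowest nontrivial consistency level, arc consistency, can be sketched in the style of what later became known as AC-3; this is an illustration of the preprocessing idea, not the paper's algorithm. Each directed arc (x, y) carries a predicate, and a value survives in x's domain only if some value of y supports it.

```python
from collections import deque

def arc_consistency(domains, constraints):
    """Enforce arc consistency on a binary constraint network.

    domains:     dict var -> set of candidate values
    constraints: dict (x, y) -> predicate(vx, vy); both directions listed
    Returns False if some domain is wiped out, else True (domains pruned
    in place)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # drop values of x with no supporting value in y's domain
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False              # inconsistent network
            for (a, b) in constraints:    # revisit arcs pointing at x
                if b == x and a != y:
                    queue.append((a, b))
    return True
```

Running this before backtrack search prunes values that could never appear in any solution, which is exactly the preprocessing role described in the abstract.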
On-the-fly garbage collection: an exercise in cooperation
As an example of cooperation between sequential processes with very little mutual interference despite frequent manipulations of a large shared data space, a technique is developed which allows nearly all of the activity needed for garbage detection and collection to be performed by an additional processor operating concurrently with the processor devoted to the computation proper. Exclusion and synchronization constraints have been kept as weak as could be achieved; the severe complexities engendered by doing so are illustrated.