May 1977 - Vol. 20 No. 5
Features
SP/k: a system for teaching computer programming
SP/k is a compatible subset of the PL/I language that has been designed for teaching programming. The features of the SP/k language were chosen to encourage structured problem solving by computers, to make the language easy to learn and use, to eliminate confusing and redundant constructs, and to make the language easy to compile. The resulting language is suitable for introducing programming concepts used in various applications, including business data processing, scientific calculations and non-numeric computation. SP/k is actually a sequence of language subsets called SP/1, SP/2, … SP/8. Each subset introduces new programming language constructs while retaining all the constructs of preceding subsets. Each subset is precisely defined and can be learned or implemented without the following subsets.
Achieving specific accuracy in simulation output analysis
This paper extends the use of the regenerative property of queueing systems in the analysis of simulation output. In particular, it describes a sequential estimation method which, when used with the regenerative property, allows results to be obtained with specified statistical accuracy. This method includes a test to check the normality assumption on which the sequential procedure relies. The paper illustrates the method using the empty and idle state as the regenerative state. A second example then describes how using the most frequently entered state as the regenerative state reduces the chance of making a costly error in a preliminary simulation run. The paper also describes how a variance reduction method due to Page [9] can be used to obtain a specified accuracy with considerably fewer job completions than are required when no variance reduction technique is applied.
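For illustration, a minimal sketch of the regenerative ratio estimator with a sequential stopping rule is given below, applied to the mean waiting time in an M/M/1 queue, with the empty-and-idle state starting each cycle. All names and parameters are illustrative, and the normality test and Page's variance reduction discussed in the paper are omitted.

```python
# Hypothetical sketch of the regenerative method with a sequential stopping
# rule, estimating the mean waiting time in an M/M/1 queue.  A customer who
# arrives to find the server empty and idle starts a new regeneration cycle.
import math
import random

def simulate_cycle(arrival_rate, service_rate, rng):
    """Run one regeneration cycle via the Lindley recursion.

    Returns (total_wait, num_customers) for the cycle, which starts with a
    customer who waits zero and ends just before the next such customer.
    """
    total_wait, n = 0.0, 0
    wait = 0.0                       # first customer of the cycle waits 0
    while True:
        total_wait += wait
        n += 1
        interarrival = rng.expovariate(arrival_rate)
        service = rng.expovariate(service_rate)
        wait = max(0.0, wait + service - interarrival)
        if wait == 0.0:              # next arrival finds the system empty
            return total_wait, n

def regenerative_estimate(arrival_rate, service_rate,
                          rel_precision=0.05, z=1.96,
                          batch=200, max_cycles=200_000, seed=1):
    """Sequentially add cycles until the relative CI half-width is small."""
    rng = random.Random(seed)
    Y, N = [], []                    # per-cycle total wait, per-cycle size
    while len(Y) < max_cycles:
        for _ in range(batch):
            y, n = simulate_cycle(arrival_rate, service_rate, rng)
            Y.append(y)
            N.append(n)
        m = len(Y)
        r_hat = sum(Y) / sum(N)      # ratio estimator of the mean wait
        n_bar = sum(N) / m
        # classical regenerative (delta-method) variance of the ratio estimator
        s2 = sum((y - r_hat * n) ** 2 for y, n in zip(Y, N)) / (m - 1)
        half_width = z * math.sqrt(s2 / m) / n_bar
        if half_width <= rel_precision * abs(r_hat):
            break
    return r_hat, half_width, m

if __name__ == "__main__":
    est, hw, cycles = regenerative_estimate(arrival_rate=0.8, service_rate=1.0)
    print(f"mean wait ~ {est:.3f} +/- {hw:.3f} ({cycles} cycles)")
```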
Optimal program and data locations in computer networks
An optimization procedure for the allocation of program and data files in a computer network is presented. This algorithm takes into account the dependencies between files and programs such as occur in real heterogeneous computer networks. Insights into whether or not to convert programs from one computer to another can also be gained from the model. A search procedure for the file location problem is described, along with an example and a possible application of the model.
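As a rough, hypothetical illustration of the kind of cost model such an allocation procedure searches over (not the paper's actual formulation), the sketch below enumerates placements of files at network nodes and charges a storage cost plus a transmission cost for each program-file dependency; all data and names are invented.

```python
# Illustrative brute-force search over file placements in a small network.
# The cost model (storage plus transmission for program-file dependencies)
# is a simplified stand-in for the paper's model, and all data are made up.
from itertools import product

NODES = ["A", "B", "C"]
# transmission cost per unit of traffic between nodes (symmetric, illustrative)
LINK_COST = {("A", "B"): 2, ("A", "C"): 5, ("B", "C"): 3}

def link(u, v):
    return 0 if u == v else LINK_COST.get((u, v), LINK_COST.get((v, u)))

# programs are pinned to nodes; each accesses some files with a traffic volume
PROGRAMS = {"payroll": ("A", {"employees": 10, "rates": 4}),
            "billing": ("C", {"employees": 3, "invoices": 8})}
FILES = ["employees", "rates", "invoices"]
STORAGE_COST = {"A": 1.0, "B": 0.5, "C": 2.0}   # per file, per node

def total_cost(placement):
    """Storage cost of every file plus traffic cost of every program-file access."""
    cost = sum(STORAGE_COST[placement[f]] for f in FILES)
    for node, accesses in PROGRAMS.values():
        for f, volume in accesses.items():
            cost += volume * link(node, placement[f])
    return cost

best = min((dict(zip(FILES, locs)) for locs in product(NODES, repeat=len(FILES))),
           key=total_cost)
print(best, total_cost(best))
```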
A comparison of tree-balancing algorithms
Several algorithms for building balanced binary search trees are compared: height-balanced (i.e. AVL and extensions), weight-balanced (i.e. BB and WB), and total restructuring. The criteria for comparison encompass theoretical aspects (e.g. path lengths) as well as implementation-independent and machine/algorithm-dependent measures (e.g. run time). A detailed analysis of code is also presented at a level believed to be language- and compiler-independent. The quality of the resulting trees and the overhead spent on building them are analyzed, and some guidelines are given for efficient use of the methods. If insertion and subsequent queries are the only operations of interest, then “pure” AVL trees present the overall best qualities.
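For reference, a generic height-balanced (AVL) insertion, with single and double rotations restoring the balance factor after each insertion, can be sketched as follows; this is a textbook version, not the code analyzed in the paper.

```python
# Generic AVL (height-balanced) insertion sketch, not the implementations
# measured in the paper.  Each node stores its height; rotations restore the
# balance factor to -1, 0, or +1 after every insertion.
class Node:
    __slots__ = ("key", "left", "right", "height")
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))
    return n

def balance(n):
    return height(n.left) - height(n.right)

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y)
    return update(x)

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x)
    return update(y)

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    elif key > node.key:
        node.right = insert(node.right, key)
    else:
        return node                       # ignore duplicate keys
    update(node)
    b = balance(node)
    if b > 1:                             # left-heavy
        if key > node.left.key:           # left-right case: double rotation
            node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1:                            # right-heavy
        if key < node.right.key:          # right-left case: double rotation
            node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

root = None
for k in [5, 2, 8, 1, 3, 7, 9, 4]:
    root = insert(root, k)
```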
A comparison of hardware and software associative memories in the context of computer graphics
The Associative Processing of Line Drawings (APLD) System utilizes a hardware associative memory and creates, modifies, deletes, stores, and retrieves two-dimensional line drawings consisting of points, lines, rectangles, and triangles. The APLD functions were duplicated on the TX-2 computer at M.I.T.'s Lincoln Laboratory under the LEAP Language and Data Structure. A comparison of the hardware approach with the software simulation illustrates the advantages of the hardware associative memory in three areas: (1) processing speed, (2) storage requirements, and (3) flexibility. The major problem areas of hardware associative memory technology, namely input/output and cost effectiveness, are also addressed.
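As a loose illustration of a software associative store in the spirit of LEAP's attribute-object-value triples (not the APLD or TX-2 implementations themselves), the following toy sketch inserts triples and retrieves them by partial pattern.

```python
# Toy software associative store: attribute-object-value triples queried by
# partial pattern (None acts as a wildcard).  Purely illustrative; not the
# APLD hardware or the TX-2/LEAP data structures described in the paper.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = set()
        # one index per fixed position, so single-wildcard queries stay fast
        self.by_attr = defaultdict(set)
        self.by_obj = defaultdict(set)
        self.by_val = defaultdict(set)

    def insert(self, attr, obj, val):
        t = (attr, obj, val)
        self.triples.add(t)
        self.by_attr[attr].add(t)
        self.by_obj[obj].add(t)
        self.by_val[val].add(t)

    def query(self, attr=None, obj=None, val=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        candidate_sets = [s for s, key in ((self.by_attr.get(attr), attr),
                                           (self.by_obj.get(obj), obj),
                                           (self.by_val.get(val), val))
                          if key is not None]
        if not candidate_sets:
            return set(self.triples)      # fully wildcarded query
        if any(s is None for s in candidate_sets):
            return set()                  # some constrained key never stored
        return set.intersection(*candidate_sets)

# e.g. describe a line drawing as triples and look up one line's endpoints
store = TripleStore()
store.insert("type", "seg1", "line")
store.insert("endpoint", "seg1", (0, 0))
store.insert("endpoint", "seg1", (3, 4))
print(store.query(attr="endpoint", obj="seg1"))
```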
The choice of reference points in best-match file searching
Improvements to the exhaustive search method of best-match file searching have previously been achieved by doing a preprocessing step involving the calculation of distances from a reference point. This paper discusses the proper choice of reference points and extends the previous algorithm to use more than one reference point. It is shown that reference points should be located outside of data clusters. The results of computer simulations are presented which show that large improvements can be achieved by the proper choice and location of multiple reference points.
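The underlying idea can be sketched as follows: precompute each record's distance to the reference points; at query time the triangle inequality gives a lower bound |d(q, p) - d(x, p)| on the true distance, so records that cannot beat the current best match are skipped without a full distance computation. The sketch below is illustrative only, with invented data and a single reference point placed outside the data cluster.

```python
# Minimal sketch of best-match search with precomputed distances to reference
# points: the triangle inequality |d(q, p) - d(x, p)| <= d(q, x) lets many
# records be rejected without computing their distance to the query.
# Illustrative only; the paper's algorithm and experiments differ in detail.
import math
import random

def dist(a, b):
    return math.dist(a, b)

def build_index(records, ref_points):
    """Precompute each record's distance to every reference point."""
    return [[dist(x, p) for p in ref_points] for x in records]

def best_match(query, records, ref_points, index):
    q_to_ref = [dist(query, p) for p in ref_points]
    best, best_d = None, float("inf")
    full_computations = 0
    for x, ref_dists in zip(records, index):
        # lower bound on d(query, x) derived from every reference point
        bound = max(abs(qr - xr) for qr, xr in zip(q_to_ref, ref_dists))
        if bound >= best_d:
            continue                      # cannot beat the current best
        d = dist(query, x)
        full_computations += 1
        if d < best_d:
            best, best_d = x, d
    return best, best_d, full_computations

random.seed(0)
records = [(random.random(), random.random()) for _ in range(10_000)]
# a reference point placed outside the data cluster, as the paper recommends
ref_points = [(5.0, 5.0)]
index = build_index(records, ref_points)
print(best_match((0.3, 0.7), records, ref_points, index))
```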
A fast algorithm for computing longest common subsequences
Previously published algorithms for finding the longest common subsequence of two sequences of length n have had a best-case running time of O(n²). An algorithm for this problem is presented which has a running time of O((r + n) log n), where r is the total number of ordered pairs of positions at which the two sequences match. Thus in the worst case the algorithm has a running time of O(n² log n). However, for those applications where most positions of one sequence match relatively few positions in the other sequence, a running time of O(n log n) can be expected.
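A compact reconstruction in the spirit of the match-pair/threshold approach is sketched below: for each position of the first sequence, the matching positions of the second sequence are listed in decreasing order, and the longest strictly increasing subsequence of those positions, found with binary search, gives the LCS length in roughly O((r + n) log n) time. This formulation is a common reconstruction, not necessarily the paper's exact algorithm.

```python
# Match-pair sketch of LCS length: reduce to a longest strictly increasing
# subsequence over match positions, maintained with binary search.
# Runtime is roughly O((r + n) log n), where r is the number of match pairs.
from bisect import bisect_left
from collections import defaultdict

def lcs_length(a, b):
    # positions of each symbol in b, kept in decreasing order
    positions = defaultdict(list)
    for j in reversed(range(len(b))):
        positions[b[j]].append(j)

    thresholds = []          # thresholds[k] = smallest end position of an
                             # increasing chain of k + 1 matches
    for ch in a:
        for j in positions.get(ch, ()):   # decreasing order keeps chains strict
            k = bisect_left(thresholds, j)
            if k == len(thresholds):
                thresholds.append(j)
            else:
                thresholds[k] = j
    return len(thresholds)

print(lcs_length("ABCBDAB", "BDCAB"))   # -> 4
```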