Research and Advances

A comparison of two network-based file servers

This paper compares two working network-based file servers, the Xerox Distributed File System (XDFS) implemented at the Xerox Palo Alto Research Center, and the Cambridge File Server (CFS) implemented at the Cambridge University Computer Laboratory. Both servers support concurrent random access to files using atomic transactions, both are connected to local area networks, and both have been in service long enough to enable us to draw lessons from them for future file servers. We compare the servers in terms of design goals, implementation issues, performance, and their relative successes and failures, and discuss what we would do differently next time.

A generalized user interface for applications programs

A general method for using disk files (instead of a more conventional parameter-passing mechanism) to transfer control information from a user interface to a set of related applications programs is described. This technique effectively moves much of the user interface, including command decoding and limited parameter checking, from the applications programs to a table-driven executive.

An experimental study of the human/computer interface

An exploratory study was conducted to analyze whether interface and user characteristics affect decision effectiveness and subject behavior in an interactive human/computer problem-solving environment. The dependent variables were performance and the use of the system's options. Two of the independent variables examined, experience and cognitive style, were user characteristics; the other three, dialogue, command, and default types, were interface characteristics. Results indicate that both user and interface characteristics influence the use of the system's options and the request for information in the problem-solving task.

A history and evaluation of System R

System R, an experimental database system, was constructed to demonstrate that the usability advantages of the relational data model can be realized in a system with the complete function and high performance required for everyday production use. This paper describes the three principal phases of the System R project and discusses some of the lessons learned from System R about the design of relational systems and database systems in general.

A user-friendly algorithm

The interface between a person and a computer can be looked at from either side. Programmers tend to view it from the inside; they consider it their job to defend the machine against errors made by its users. From the outside, the user sees his/her problems as paramount. He/she is often at odds with this complex, inflexible, albeit powerful tool. The needs of both people and machines can be reconciled; users will respond more efficiently and intelligently if they receive meaningful feedback. A “user-friendly” algorithm that covers a wide range of interactive environments and is typical of most operating systems and many application programs is presented.

The selection of optimal tab settings

A new generation of computer terminals allows tab settings to be selected and set by the computer. This feature can be used to reduce the number of characters needed to represent a document for transmission and printing. In this note, an algorithm is given for selecting the set of tab stops that minimizes the number of characters transmitted. An implementation of the algorithm has reduced the number of characters transmitted by 7 to 30 percent, but requires a prepass through the document to compute a matrix used in determining the optimal set of tab stops. The use of fixed tab stops, as a heuristic alternative, can achieve about 80 percent of optimal with no prepass.
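
The savings come from replacing runs of blanks with tab characters. As a minimal illustration (not the note's algorithm, which additionally searches over candidate stop sets using the precomputed matrix), here is a Python sketch of the transmission-cost computation for one line under a given, hypothetical set of tab stops:

```python
def chars_for_line(line, tabs):
    """Characters needed to transmit `line` when a tab character jumps
    the carriage to the next stop in `tabs` (sorted column numbers).
    Tabs are taken greedily to cross runs of blanks."""
    count = 0
    col = 0
    while col < len(line):
        if line[col] == ' ':
            # Find the end of this run of blanks.
            end = col
            while end < len(line) and line[end] == ' ':
                end += 1
            # Take every tab stop that lands inside the run.
            for stop in tabs:
                if col < stop <= end:
                    count += 1        # one tab character
                    col = stop
            count += end - col        # leftover blanks sent as spaces
            col = end
        else:
            count += 1                # ordinary character
            col += 1
    return count

# Eight leading blanks plus "x": one tab (stop at 8) plus "x" = 2 chars.
print(chars_for_line("        x", [8]))   # → 2
```

Since a tab always replaces at least one blank, the greedy choice never increases the count; the optimization problem in the note is choosing the stop positions themselves across the whole document.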

Detection of logical errors in decision table programs

In this paper an algorithm to detect logical errors in a limited-entry decision table and in loop-free programs with embedded decision tables is developed. All the conditions in the decision tables are assumed to be inequalities or equalities relating linear expressions. It is also assumed that actions in a decision table are linear in variables which occur in the condition stub of the decision table (or tables) to which control is transferred from the table. The algorithm is based on determining whether a set of linear inequalities has or does not have a solution. The algorithm described in the paper is implemented in Fortran IV.

Optimizing decision trees through heuristically guided search

Optimal decision table conversion has been tackled in the literature using two approaches, dynamic programming and branch-and-bound. The former technique is quite effective, but its time and space requirements are independent of how “easy” the given table is. Furthermore, it cannot be used to produce good, quasioptimal solutions. The branch-and-bound technique uses a good heuristic to direct the search, but is cluttered up by an enormous search space, since the number of solutions increases with the number of test variables according to a double exponential. In this paper we suggest a heuristically guided top-down search algorithm which, like dynamic programming, recognizes identical subproblems but which can be used to find both optimal and quasioptimal solutions. The heuristic search method introduced in this paper combines the positive aspects of the above two techniques. Compressed tables with a large number of variables can be handled without deriving expanded tables first.

A simply extended and modified batch environment graphical system (SEMBEGS)

SEMBEGS is a complete batch environment graphical system containing components for handling graphical data files, for displaying the contents of these files on a variety of graphical hardware, and for performing graphical batch input operations. SEMBEGS is easy to extend and modify to meet the growing needs of a large batch environment, and is even extendable to a fully interactive system. The paper presents the conceptual view of graphics leading to the design of SEMBEGS and outlines the major components of the system. The design of SEMBEGS is founded upon the basic assumption that the true aim of computer graphics is to describe graphical entities, rather than, as commonly held, to provide graphical input and output functional capabilities. SEMBEGS is built around a Basic Graphical Data Management System (BAGDAMS) which provides a common means of communicating the descriptions of graphical entities between the various components of SEMBEGS. BAGDAMS provides facilities for storing, retrieving, and manipulating the descriptions of graphical entities provided by, and received by, application programs, graphics packages, and graphical devices.

Jump searching: a fast sequential search technique

When sequential file structures must be used and binary searching is not feasible, jump searching becomes an appealing alternative. This paper explores variants of the classic jump searching scheme where the optimum jump size is the square root of the number of records. Multiple level and variable size jump strategies are explored, appropriate applications are discussed and performance is evaluated.
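
As an illustration of the classic scheme, here is a minimal single-level jump search in Python using the optimum jump size of about √n; the function name and structure are illustrative, not taken from the paper:

```python
import math

def jump_search(arr, target):
    """Single-level jump search over a sorted sequence.

    Jumps ahead in blocks of about sqrt(n), then scans the block
    that may contain `target` sequentially. Returns the index of
    target, or -1 if it is absent."""
    n = len(arr)
    if n == 0:
        return -1
    step = max(1, math.isqrt(n))
    prev = 0
    # Jump forward while the last element of the block is too small.
    while prev < n and arr[min(prev + step, n) - 1] < target:
        prev += step
    # Sequential scan within the candidate block.
    for i in range(prev, min(prev + step, n)):
        if arr[i] == target:
            return i
    return -1

# Even numbers 0..98: 42 sits at index 21.
print(jump_search(list(range(0, 100, 2)), 42))   # → 21
```

With block size √n the worst case is about 2√n comparisons, which is what makes jump searching attractive when binary searching over the file structure is not feasible.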

An algorithm using symbolic techniques for the Bel-Petrov classification of gravitational fields

In this note, an algorithm is presented for the symbolic calculation of certain algebraic invariants of the Weyl tensor which permits the determination of the Bel-Petrov types of a gravitational field. This algorithm, although more specialized than that of D'Inverno and Russell-Clark, requires neither the use of a special coordinate system nor the spin coefficient formalism. The algorithm has been implemented in FORMAC and is designed to complete the classification scheme proposed by Petrov in his book. An appendix contains examples illustrating the use of the algorithm.

Generalized working sets for segment reference strings

The working-set concept is extended for programs that reference segments of different sizes. The generalized working-set policy (GWS) keeps as its resident set those segments whose retention costs do not exceed their retrieval costs. The GWS is a model for the entire class of demand-fetching memory policies that satisfy a resident-set inclusion property. A generalized optimal policy (GOPT) is also defined; at its operating points it minimizes aggregated retention and swapping costs. Special cases of the cost structure allow GWS and GOPT to simulate any known stack algorithm, the working set, and VMIN. Efficient procedures for computing demand curves showing swapping load as a function of memory usage are developed for GWS and GOPT policies. Empirical data from an actual system are included.

A note on virtual memory indexes

In [4] Maruyama and Smith analyzed design alternatives for virtual memory indexes. The indexes investigated were those modeled on the structure of VSAM [5], which are closely related to B-trees [1]. Maruyama and Smith presented alternatives for search strategies, page format, and whether or not to compress keys. They made a comparative analysis based on the average cost of processing the index for retrieval of a key with its associated data pointer (or a sequence of keys and data pointers). The analysis showed that the optimal solution depends on machine and file characteristics, and they provided formulas for selecting the best alternative. In this brief note we propose that another alternative for the page format be considered. The basic idea is to replace the sequential representation within a page by a linked representation. The main advantage of this method is realized in the construction (and maintenance) phase of the index. Although this phase is difficult to analyze quantitatively—as alluded to in [4]—we provide data to substantiate our claim of a net saving in construction cost without any corresponding loss in the average retrieval cost. In the ensuing discussion we assume that the reader is familiar both with B-trees and with the paper of Maruyama and Smith [4].

A controlled experiment in program testing and code walkthroughs/inspections

This paper describes an experiment in program testing, employing 59 highly experienced data processing professionals using seven methods to test a small PL/I program. The results show that the popular code walkthrough/inspection method was as effective as other computer-based methods in finding errors and that the most effective methods (in terms of errors found and cost) employed pairs of subjects who tested the program independently and then pooled their findings. The study also shows that there is a tremendous amount of variability among subjects and that the ability to detect certain types of errors varies from method to method.

Right brother trees

Insertion and deletion algorithms are provided for the class of right (or one-sided) brother trees, which have O(log n) performance. The importance of these results stems from the close relationship of right brother trees to one-sided height-balanced trees, which have an insertion algorithm operating in O(log² n). Further, although both insertion and deletion can be carried out in O(log n) time for right brother trees, it appears that the insertion algorithm is inherently much more difficult than the deletion algorithm—the reverse of what one usually obtains.

Simulations of dynamic sequential search algorithms

In [3], R.L. Rivest presents a set of methods for dynamically reordering a sequential list containing N records in order to increase search efficiency. The method A_i (for i between 1 and N) performs the following operation each time that a record R has been successfully retrieved: move R forward i positions in the list, or to the front of the list if it was in a position less than i. The method A_1 is called the transposition method, and the method A_{N-1} is called the move-to-front method.
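
A minimal sketch of method A_i, assuming an unsorted Python list and a successful retrieval by key (the code is illustrative, not Rivest's):

```python
def access(records, key, i):
    """Search `records` sequentially for `key`; on success, move the
    record forward i positions (method A_i), or to the front if it sat
    in a position less than i. Returns the number of comparisons made."""
    pos = records.index(key)              # sequential search
    new_pos = max(0, pos - i)             # front of list if pos < i
    records.insert(new_pos, records.pop(pos))
    return pos + 1

lst = ['a', 'b', 'c', 'd']
access(lst, 'c', 1)                       # A_1: transposition
print(lst)                                # → ['a', 'c', 'b', 'd']

lst = ['a', 'b', 'c', 'd']
access(lst, 'c', len(lst) - 1)            # A_{N-1}: move-to-front
print(lst)                                # → ['c', 'a', 'b', 'd']
```

Frequently accessed records thereby migrate toward the front, reducing the expected number of comparisons under skewed access distributions.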

On the complexity of computing the measure of ∪[a_i, b_i]

The decision tree complexity of computing the measure of the union of n (possibly overlapping) intervals is shown to be Ω(n log n), even if comparisons between linear functions of the interval endpoints are allowed. The existence of an Ω(n log n) lower bound to determine whether any two of n real numbers are within ε of each other is also demonstrated. These problems provide an excellent opportunity for discussing the effects of the computational model on the ease of analysis and on the results produced.
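
A matching O(n log n) upper bound is achieved by the familiar sort-and-sweep computation; the following Python sketch (illustrative, not from the paper) sorts the intervals by left endpoint and merges overlapping runs in one pass:

```python
def union_measure(intervals):
    """Total length of the union of closed intervals [a_i, b_i],
    given as (a_i, b_i) pairs. O(n log n) from the sort."""
    total = 0.0
    cur_lo = cur_hi = None
    for lo, hi in sorted(intervals):        # sort by left endpoint
        if cur_hi is None or lo > cur_hi:   # disjoint: close the run
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:                               # overlap: extend the run
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:                  # close the final run
        total += cur_hi - cur_lo
    return total

# [0,2] and [1,3] merge into [0,3] (length 3); [5,6] adds 1.
print(union_measure([(0, 2), (1, 3), (5, 6)]))   # → 4.0
```

The paper's contribution is the converse: showing that no decision tree, even one branching on linear functions of the endpoints, can do asymptotically better.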
