May 1972 - Vol. 15 No. 5


Features

Research and Advances

An improved index sequential access method using hashed overflow

The Index Sequential Access Method (ISAM) is one of the most important file management systems used with movable-head disk devices. This study investigates an unconventional method of treating overflow records: hashing techniques are used to allocate space for such records. If certain conditions are satisfied, this is superior to the conventional ISAM method of chaining the overflow records via linked-list techniques. These conditions are: significant overflow producing long overflow chains; lack of tight disk space constraints; record keys which are small compared to the total record size; and significant use of the file in the indexed rather than the sequential mode. Using hashed overflow, the time to locate a record depends not on the total volume of overflow records, as in conventional ISAM, but on the percentage of the space dedicated to overflow records that is in use.
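A minimal Python sketch of the hashed-overflow idea follows; the class and method names are invented for illustration and are not from the paper. With open addressing, the cost of locating an overflow record is governed by how full the overflow area is rather than by the length of any per-track chain.

```python
# Illustrative sketch (not the paper's implementation): a fixed overflow area addressed
# by hashing the record key, with linear probing. Retrieval cost depends on the load
# factor of the overflow area, not on how long a per-track overflow chain has grown.

class HashedOverflowArea:
    def __init__(self, slots):
        self.table = [None] * slots              # each slot holds (key, record) or None

    def insert(self, key, record):
        i = hash(key) % len(self.table)
        for step in range(len(self.table)):      # linear probing
            j = (i + step) % len(self.table)
            if self.table[j] is None or self.table[j][0] == key:
                self.table[j] = (key, record)
                return
        raise RuntimeError("overflow area full")

    def find(self, key):
        i = hash(key) % len(self.table)
        for step in range(len(self.table)):
            j = (i + step) % len(self.table)
            if self.table[j] is None:            # probe sequence ends: not an overflow record
                return None
            if self.table[j][0] == key:
                return self.table[j][1]
        return None

overflow = HashedOverflowArea(slots=1000)        # kept, say, 70-80 percent full
overflow.insert("key-0042", {"payload": "..."})
assert overflow.find("key-0042") is not None
```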
Research and Advances

The Multics virtual memory: concepts and design

As experience with use of on-line operating systems has grown, the need to share information among system users has become increasingly apparent. Many contemporary systems permit some degree of sharing. Usually, sharing is accomplished by allowing several users to share data via input and output of information stored in files kept in secondary storage. Through the use of segmentation, however, Multics provides direct hardware addressing by user and system programs of all information, independent of its physical storage location. Information is stored in segments each of which is potentially sharable and carries its own independent attributes of size and access privilege. Here, the design and implementation considerations of segmentation and sharing in Multics are first discussed under the assumption that all information resides in a large, segmented main memory. Since the size of main memory on contemporary systems is rather limited, it is then shown how the Multics software achieves the effect of a large segmented main memory through the use of the Honeywell 645 segmentation and paging hardware.
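As a rough illustration of two-dimensional (segment, offset) addressing, the toy Python model below gives each segment its own length and access attributes and lets two processes address one shared segment directly; all names are invented for this sketch and do not reproduce the Honeywell 645 descriptor formats or the Multics supervisor data bases.

```python
# Toy model of segmented addressing: every reference names a (segment number, offset)
# pair, and each segment carries its own size and access attributes. Names here are
# invented for illustration; they do not mirror the 645 hardware or Multics itself.

from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    words: list                              # contents; in Multics these would be paged
    access: set = field(default_factory=lambda: {"read"})

class Process:
    def __init__(self):
        self.descriptor_table = []           # per-process table of addressable segments

    def attach(self, segment):
        """Make a (possibly shared) segment addressable; return its segment number."""
        self.descriptor_table.append(segment)
        return len(self.descriptor_table) - 1

    def load(self, seg_no, offset):
        seg = self.descriptor_table[seg_no]
        if "read" not in seg.access:
            raise PermissionError(f"no read access to {seg.name}")
        if not 0 <= offset < len(seg.words):
            raise IndexError("offset outside segment bounds")
        return seg.words[offset]

# Two processes share one segment directly, without copying it through a file.
shared = Segment("square_table", words=[0, 1, 4, 9, 16])
p1, p2 = Process(), Process()
s1, s2 = p1.attach(shared), p2.attach(shared)
assert p1.load(s1, 3) == p2.load(s2, 3) == 9
```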
Research and Advances

MUX, a simple approach to on-line computing

An on-line system operating as part of a normal batch system for the CDC 6600 computer is described. The system, although basically simple, required one man-year for the initial software implementation and provides the necessary elements to input and modify files, submit them for batch execution, and return results at the user's terminal. A multiplexer designed and developed as part of the project cost one man-year for design and checkout, and $16,000 for parts and fabrication. All aspects of the system are described, including design criteria, implementation, cost, overhead, and user reactions.
Research and Advances

A technique for software module specification with examples

This paper presents an approach to writing specifications for parts of software systems. The main goal is to provide specifications sufficiently precise and complete that other pieces of software can be written to interact with the piece specified without additional information. The secondary goal is to include in the specification no more information than necessary to meet the first goal. The technique is illustrated by means of a variety of examples from a tutorial system.
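The sketch below illustrates the flavor of such a specification for a small stack module: the interface states everything a user may rely on and nothing about the representation. The rendering in Python and the particular wording of the stated effects are ours, not the notation used in the paper.

```python
# Illustration of specifying a module by its externally visible effects only. The
# operation names and stated effects below are our own example, not the paper's notation.

class BoundedStack:
    """A stack of at most `limit` items, specified by the effects of PUSH, POP, TOP, DEPTH.

    A user may assume exactly the following and nothing else:
      - DEPTH() is 0 after creation and never exceeds `limit`.
      - after PUSH(x): TOP() == x and DEPTH() has grown by 1; error if the stack was full.
      - after POP():   DEPTH() has shrunk by 1; error if the stack was empty.
    """

    def __init__(self, limit):
        self._limit = limit
        self._items = []                 # hidden: any representation meeting the spec would do

    def DEPTH(self):
        return len(self._items)

    def PUSH(self, x):
        if self.DEPTH() == self._limit:
            raise OverflowError("PUSH on a full stack")
        self._items.append(x)

    def POP(self):
        if self.DEPTH() == 0:
            raise IndexError("POP on an empty stack")
        self._items.pop()

    def TOP(self):
        if self.DEPTH() == 0:
            raise IndexError("TOP on an empty stack")
        return self._items[-1]
```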
Research and Advances

Implementing Clenshaw-Curtis quadrature, I methodology and experience

Clenshaw-Curtis quadrature is a particularly important automatic quadrature scheme for a variety of reasons, especially the high accuracy obtained from relatively few integrand values. However, it has received little use because it requires the computation of a cosine transformation, and the arithmetic cost of this has been prohibitive. This paper is in two parts; a companion paper, “II Computing the Cosine Transformation,” shows that this objection can be overcome by computing the cosine transformation by a modification of the fast Fourier transform algorithm. This first part discusses the strategy and various error estimates, and summarizes experience with a particular implementation of the scheme.
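For orientation, the sketch below realizes the basic Clenshaw-Curtis rule on [-1, 1] with a direct cosine sum; that O(N²) sum is precisely the cost objection discussed here, and its replacement by a fast Fourier transform computation is the subject of Part II. The code assumes numpy and is an illustration, not the implementation reported in the paper.

```python
# A direct, O(N^2) realization of the basic Clenshaw-Curtis rule on [-1, 1]; the cosine
# sums below are exactly the transformation whose cost motivates the FFT-based method
# of Part II. This is an illustrative sketch, not the paper's code.

import numpy as np

def clenshaw_curtis(f, N):
    """Approximate the integral of f over [-1, 1] from N+1 Chebyshev points (N even)."""
    j = np.arange(N + 1)
    x = np.cos(np.pi * j / N)                      # Chebyshev (cosine-spaced) abscissae
    fx = f(x)
    w = np.ones(N + 1); w[0] = w[-1] = 0.5         # halve the end points in the cosine sums

    # Cosine transformation of the integrand values: Chebyshev coefficients a_k.
    k = np.arange(N + 1)
    a = (2.0 / N) * (np.cos(np.pi * np.outer(k, j) / N) @ (w * fx))

    # Integrate the Chebyshev series term by term: only even k contribute.
    a[0] *= 0.5; a[-1] *= 0.5
    k_even = np.arange(0, N + 1, 2)
    return np.sum(a[k_even] * 2.0 / (1.0 - k_even ** 2))

# Example: the integral of exp(x) over [-1, 1] is e - 1/e, approximately 2.3504.
print(clenshaw_curtis(np.exp, 16))
```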
Research and Advances

Implementing Clenshaw-Curtis quadrature, II computing the cosine transformation

In a companion paper, “I Methodology and Experience,” the automatic Clenshaw-Curtis quadrature scheme was described, and it was shown how each quadrature formula used in the scheme requires a cosine transformation of the integrand values. The high cost of these cosine transformations has been a serious drawback in using Clenshaw-Curtis quadrature. Two other problems related to the cosine transformation have also been troublesome. First, the conventional computation of the cosine transformation by recurrence relation is numerically unstable, particularly at the low frequencies which have the largest effect upon the integral. Second, in case the automatic scheme should require refinement of the sampling, storage is required to save the integrand values after the cosine transformation is computed. This second part of the paper shows how the cosine transformation can be computed by a modification of the fast Fourier transform and how all three problems can be overcome. The modification is also applicable in other circumstances requiring cosine or sine transformations, such as polynomial interpolation through the Chebyshev points.
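In modern terms, the trick can be sketched as follows: the cosine transformation of N+1 values equals (up to a factor) the real part of the discrete Fourier transform of the sequence evenly extended to length 2N. The snippet below uses numpy's FFT in place of the paper's hand-tailored modification and checks the result against the direct cosine sum.

```python
# Sketch: cosine transformation of N+1 values via one length-2N FFT of the even
# extension. numpy is assumed; this is not the paper's modified FFT routine.

import numpy as np

def cosine_transform(y):
    """Return c_k = sum''_{n=0..N} y_n cos(pi*n*k/N) for k = 0..N (end terms halved)."""
    N = len(y) - 1
    z = np.concatenate([y, y[-2:0:-1]])            # even extension of length 2N
    return np.real(np.fft.rfft(z))[: N + 1] / 2.0  # keep k = 0..N

# Check against the direct O(N^2) cosine sum on random data.
N = 8
y = np.random.default_rng(0).standard_normal(N + 1)
w = np.ones(N + 1); w[0] = w[-1] = 0.5
k, n = np.arange(N + 1), np.arange(N + 1)
direct = np.cos(np.pi * np.outer(k, n) / N) @ (w * y)
assert np.allclose(cosine_transform(y), direct)
```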
Research and Advances

Minimax nonlinear approximation by approximation on subsets

A possible algorithm for minimax approximation on an infinite set X consists in choosing a sequence of finite point sets {Xk} which fill out X and taking a limit of minimax approximations on Xk as k → ∞. Such a procedure is considered by Rice [4, pp. 12-15]. In the case of linear approximation such a procedure has been shown to converge [1, pp. 84-88]. It has been claimed by Watson [5] that the procedure works for approximation by nonlinear families.
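As a concrete instance of the procedure in the linear case, the sketch below computes the minimax (Chebyshev) polynomial approximation on each finite grid Xk by linear programming and then refines the grid; scipy and numpy are assumed, and the function names are ours. The nonlinear families at issue in this note require a different subproblem solver.

```python
# Sketch of the procedure for the *linear* case only: minimax approximation on a finite
# grid X_k posed as a linear program, with the grids X_k successively refined.

import numpy as np
from scipy.optimize import linprog

def minimax_on_grid(f, grid, degree):
    """Best polynomial of the given degree on `grid` in the max-error sense."""
    A = np.vander(grid, degree + 1)                  # basis values at the grid points
    fx = f(grid)
    m = degree + 1
    # Variables: polynomial coefficients c (free sign) and the error bound t >= 0;
    # minimize t subject to |A c - f| <= t at every grid point.
    c_obj = np.r_[np.zeros(m), 1.0]
    A_ub = np.block([[ A, -np.ones((len(grid), 1))],
                     [-A, -np.ones((len(grid), 1))]])
    b_ub = np.r_[fx, -fx]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m + [(0, None)])
    return res.x[:m], res.x[m]                       # coefficients, minimax error on X_k

# Grids X_k filling out X = [-1, 1]; the minimax errors settle down as k grows.
f = np.abs
for k in (1, 2, 3, 4):
    Xk = np.linspace(-1.0, 1.0, 8 * 2 ** k + 1)
    _, err = minimax_on_grid(f, Xk, degree=4)
    print(k, err)
```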
Research and Advances

Merge sort algorithm [M1]

Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives rise to a maximum recursion depth of [log2(n - 1) + 2]. This procedure is particularly suitable for sorting when it is not desirable to move the n objects physically in store and the sorting criterion is not simple. In that case it is reasonable to take the number of compare operations as a measure of the speed of the algorithm. When n is an integral power of 2, this number lies between (n × log2 n)/2, when the objects are sorted to begin with, and (n × log2 n − n + 1) as an upper limit. When n is not an integral power of 2, the above formulas are approximate.
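A brief Python rendering of the same idea follows (not the published ALGOL 60 text): the objects stay in place, a recursive two-way merge sorts an array of their indices, and an explicit comparison routine stands in for a sorting criterion that is not simple.

```python
# Python rendering of the idea, not the published ALGOL 60 procedure: the objects are
# never moved; a recursive two-way merge sorts an array of indices instead.

def merge_sort_indices(objects, precedes):
    """Return the indices of `objects` in sorted order; `precedes(a, b)` compares objects."""

    def sort(lo, hi):                       # sort indices lo..hi-1; depth is about log2 n
        if hi - lo <= 1:
            return list(range(lo, hi))
        mid = (lo + hi) // 2
        return merge(sort(lo, mid), sort(mid, hi))

    def merge(left, right):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if precedes(objects[right[j]], objects[left[i]]):
                out.append(right[j]); j += 1
            else:                           # ties keep the left element: the sort is stable
                out.append(left[i]); i += 1
        return out + left[i:] + right[j:]

    return sort(0, len(objects))

records = ["pear", "fig", "apple", "plum"]
order = merge_sort_indices(records, lambda a, b: a < b)   # criterion could be arbitrarily costly
print(order, [records[i] for i in order])                 # [2, 1, 0, 3] and the sorted records
```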
Research and Advances

Algorithm 428: Hu-Tucker minimum redundancy alphabetic coding method [Z]

This algorithm implements the Hu-Tucker method of variable length, minimum redundancy alphabetic binary encoding [1]. The symbols of the alphabet are considered to be an ordered forest of n terminal nodes. Two nodes in an ordered forest are said to be tentative-connecting if the sequence of nodes between the two given nodes is either empty or consists entirely of nonterminal nodes.
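The sketch below illustrates the combination step in these terms: a naive search for the tentative-connecting pair of least total weight, merged repeatedly until one node remains, yielding the level of each terminal node. The published algorithm's tie-breaking rules and its second phase, which rebuilds the alphabetic tree from those levels, are omitted, so this is an illustration of the concepts rather than a faithful rendering of Algorithm 428.

```python
# Illustrative sketch of the combination phase: tentative-connecting pairs are those with
# no terminal node strictly between them; the compatible pair of least total weight is
# merged repeatedly. Tie-breaking is simplified and the tree-rebuilding phase is omitted.

def hu_tucker_levels(weights):
    """Return the level (codeword length) assigned to each terminal node."""
    # Each node: (weight, is_terminal, original terminal indices beneath it).
    seq = [(w, True, [i]) for i, w in enumerate(weights)]
    level = [0] * len(weights)

    def tentative_connecting(i, j):         # only nonterminal nodes lie strictly between
        return all(not seq[k][1] for k in range(i + 1, j))

    while len(seq) > 1:
        best = None
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                if tentative_connecting(i, j):
                    cost = seq[i][0] + seq[j][0]
                    if best is None or cost < best[0]:
                        best = (cost, i, j)
        cost, i, j = best
        merged = (cost, False, seq[i][2] + seq[j][2])
        for t in merged[2]:                 # every terminal under the new node goes one level deeper
            level[t] += 1
        seq[i] = merged                     # the new node takes the left node's position
        del seq[j]
    return level

print(hu_tucker_levels([3, 1, 1, 3]))       # -> [2, 3, 3, 1]
```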
