October 1975 - Vol. 18 No. 10
Features
A preliminary system for the design of DBTG data structures
The functional approach to database design is introduced. In this approach the goal of design is to derive a data structure which is capable of supporting a set of anticipated queries rather than a structure which “models the business” in some other way. An operational computer program is described which utilizes the functional approach to design data structures conforming to the Data Base Task Group specifications. The automatic programming technology utilized by this program, although typically used to generate procedures, is here used to generate declaratives.
CONVERT: a high level translation definition language for data conversion
This paper describes CONVERT, a high-level, nonprocedural translation definition language that provides powerful and flexible data restructuring capabilities. Its design is based on the simple underlying concept of a form, which lets users visualize the translation process and thus makes data translation a much simpler task.
The name “CONVERT” was chosen to convey the purpose of the language; it should not be confused with any other language or program bearing the same name.
Optimizing the performance of a relational algebra database interface
An approach for implementing a “smart” interface to support a relational view of data is proposed. The basic idea is to employ automatic programming techniques so that the interface analyzes and efficiently refines the high level query specification supplied by the user. A relational algebra interface, called SQUIRAL, which was designed using this approach, is described in detail. SQUIRAL seeks to minimize query response time and space utilization by: (1) performing global query optimization, (2) exploiting disjoint and pipelined concurrency, (3) coordinating sort orders in temporary relations, (4) employing directory analysis, and (5) maintaining locality in page references. Algorithms for implementing the operators of E. F. Codd's relational algebra are presented, and a methodology for composing them to optimize the performance of a particular user query is described.
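Point (3) above can be made concrete with a small sketch. The merge join below is a generic illustration of why coordinating sort orders in temporary relations pays off: when both inputs are already sorted on the join column, the join completes in a single pass over each relation with no temporary sort. It is not taken from the paper, and the function, relation, and field names are invented for the example.

```python
# Illustrative merge join (not SQUIRAL's actual code): both inputs are assumed
# to be pre-sorted ascending on the join column, so each is scanned once.

def merge_join(left, right, key):
    """Join two lists of dicts that are both pre-sorted ascending on `key`."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lv, rv = left[i][key], right[j][key]
        if lv < rv:
            i += 1
        elif lv > rv:
            j += 1
        else:
            # Emit all pairings of the two runs of equal join-column values.
            j_start = j
            while i < len(left) and left[i][key] == lv:
                j = j_start
                while j < len(right) and right[j][key] == lv:
                    result.append({**left[i], **right[j]})
                    j += 1
                i += 1
    return result

emp  = [{"dept": 1, "name": "Ada"}, {"dept": 2, "name": "Bob"}]
dept = [{"dept": 1, "floor": 3}, {"dept": 2, "floor": 5}]
print(merge_join(emp, dept, "dept"))
```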
Implementation of a structured English query language
The relational model of data, the XRM Relational Memory System, and the SEQUEL language have been covered in previous papers and are reviewed. SEQUEL is a relational data sublanguage intended for ad hoc interactive problem solving by non-computer specialists. A version of SEQUEL that has been implemented in a prototype interpreter is described. The interpreter is designed to minimize the data accessing operations required to respond to an arbitrary query. The optimization algorithms designed for this purpose are described.
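The abstract does not spell out the interpreter's optimization algorithms, but the general goal of minimizing data accessing operations can be illustrated with a hedged sketch: the restriction and projection below are applied lazily during a single scan of the stored tuples, so no intermediate relation is materialized. The relation and column names are invented, and this is not the SEQUEL interpreter itself.

```python
# Generic single-pass query evaluation, shown only as an illustration of
# reducing data accesses; not the prototype interpreter's actual algorithms.

def select_where(relation, predicate):
    """Stream only the tuples that satisfy the predicate."""
    return (t for t in relation if predicate(t))

def project(tuples, columns):
    """Keep only the requested columns of each streamed tuple."""
    return ({c: t[c] for c in columns} for t in tuples)

employees = [
    {"name": "Ada", "dept": "D1", "salary": 12000},
    {"name": "Bob", "dept": "D2", "salary":  9000},
]

# Roughly: "find the names of employees earning more than 10000".
query = project(select_where(employees, lambda t: t["salary"] > 10000), ["name"])
print(list(query))   # [{'name': 'Ada'}]
```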
Merging with parallel processors
Consider two linearly ordered sets A, B, |A| = m, |B| = n, m ≤ n, and p, p ≤ m, parallel processors working synchronously. The paper presents an algorithm for merging A and B with the p parallel processors, which requires at most 2⌈log₂(2m + 1)⌉ + ⌊3m/p⌋ + ⌈m/p⌉⌈log₂(n/m)⌉ steps. If n = 2^β·m (β an integer), the algorithm requires at most 2⌈log₂(m + 1)⌉ + ⌈m/p⌉(2 + β) steps. In the case where m and n are of the same order of magnitude, i.e. n = km with k being a constant, the algorithm requires 2⌈log₂(m + 1)⌉ + ⌈m/p⌉(3 + k) steps. These performances compare very favorably with the previous best parallel merging algorithm, Batcher's algorithm, which requires n/p + ((m + n)/2p)log₂m steps in the general case and km/p + ((k + 1)/2)(m/p)log₂m in the special case where n = km.
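To make the comparison tangible, the short sketch below evaluates the two general-case bounds quoted in the abstract for one sample choice of m, n, and p. It assumes the bracketed terms are ceilings, and the particular values m = n = 1024, p = 32 are chosen only for illustration.

```python
# Evaluate the quoted step bounds for sample parameters (illustrative only).
from math import ceil, floor, log2

def new_bound(m, n, p):
    """General-case bound quoted for the paper's parallel merging algorithm."""
    return 2 * ceil(log2(2 * m + 1)) + floor(3 * m / p) + ceil(m / p) * ceil(log2(n / m))

def batcher_bound(m, n, p):
    """General-case bound quoted for Batcher's algorithm."""
    return n / p + ((m + n) / (2 * p)) * log2(m)

m, n, p = 1024, 1024, 32              # i.e. n = km with k = 1
print(new_bound(m, n, p))             # 2*12 + 96 + 32*0 = 120
print(batcher_bound(m, n, p))         # 32 + 320 = 352.0
```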
Horner’s rule for the evaluation of general closed queueing networks
The solution of separable closed queueing networks requires the evaluation of homogeneous multinomial expressions. The number of terms in those expressions grows combinatorially with the size of the network, so that direct summation may become impractical. An algorithm is given whose operation count does not grow combinatorially. The algorithm is based on a generalization of Horner's rule for polynomials. It is also shown how mean queue size and throughput can be obtained at negligible extra cost once the normalization constant is evaluated.
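For readers unfamiliar with the starting point of that generalization, the sketch below shows only the classical one-variable Horner's rule: a degree-d polynomial is evaluated with d multiplications and d additions rather than by forming each power separately. This is not the paper's multinomial generalization, just the familiar special case it builds on.

```python
# Classical Horner's rule for a one-variable polynomial (illustration only).

def horner(coeffs, x):
    """Evaluate c[0] + c[1]*x + ... + c[d]*x**d at x."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

print(horner([1, 2, 3], 2.0))   # 1 + 2*2 + 3*4 = 17.0
```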