November 1966 - Vol. 9 No. 11

Features

Research and Advances

Data filtering applied to information storage and retrieval applications

Manipulation of data strings is the most complex processing function in information storage and retrieval applications. Data string manipulation is discussed within the context of an interpretive processing environment controlled by the use of procedural directives. The sequence of procedural directives is derived from a job assumed to be expressed in a user-oriented source language. Each data string within the structured data environment (data bank) is explicitly or implicitly related to a format declaration residing in a format library. The processing mechanics associated with data string manipulation are developed in accordance with a generalized data filtering concept. This results in the implementation of a two-port data filter module that satisfies internal processing functions by filtering data strings through format declarations associated with its input and output ports.
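
A minimal sketch of the two-port data filter idea, in Python, under the assumption that a format declaration is simply an ordered list of fixed-width fields. The names (FormatDecl, DataFilter, the id/name/dept fields) are hypothetical illustrations, not the paper's notation: the filter parses a data string through the declaration on its input port and reassembles it through the declaration on its output port.

    class FormatDecl:
        """Ordered fixed-width field layout for a data string (assumed representation)."""
        def __init__(self, fields):
            self.fields = fields  # list of (field_name, width)

        def parse(self, s):
            """Split a data string into named fields according to this layout."""
            out, pos = {}, 0
            for name, width in self.fields:
                out[name] = s[pos:pos + width]
                pos += width
            return out

        def build(self, values):
            """Assemble a data string from named field values, padding or truncating each field."""
            return "".join(values.get(name, "").ljust(width)[:width]
                           for name, width in self.fields)

    class DataFilter:
        """Two-port filter: data strings enter through the input format
        declaration and leave through the output format declaration."""
        def __init__(self, in_fmt, out_fmt):
            self.in_fmt, self.out_fmt = in_fmt, out_fmt

        def filter(self, data_string):
            return self.out_fmt.build(self.in_fmt.parse(data_string))

    # Example: reorder and drop fields when moving a record between formats.
    src = FormatDecl([("id", 4), ("name", 8), ("dept", 3)])
    dst = FormatDecl([("name", 8), ("id", 4)])
    print(DataFilter(src, dst).filter("0042Smith   ENG"))  # 'Smith   0042'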
Research and Advances

Syntax macros and extended translation

A translation approach is described which allows one to extend the syntax and semantics of a given high-level base language by the use of a new formalism called a syntax-macro. Syntax-macros define string transformations based on syntactic elements of the base language. Two types of macros are discussed, and examples are given of their use. The conditional generation of macros based on options and alternatives recognized by the scan is also described.
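
A minimal sketch, in Python, of the string-transformation idea behind syntax macros. The placeholder notation (<x>, <n>), the crude expression pattern, and the "incr ... by ..." example are hypothetical and not taken from the paper; they only illustrate how a macro maps a recognized source pattern onto replacement text in the base language.

    import re

    class SyntaxMacro:
        """A macro is a source pattern with placeholders plus a replacement
        template; expansion rewrites each match into base-language text."""
        EXPR = r"[A-Za-z_][A-Za-z_0-9]*|\d+"   # crude stand-in for a syntactic class such as <expression>

        def __init__(self, pattern, template):
            # Turn e.g. "incr <x> by <n>" into a regex with named groups.
            parts = re.split(r"<(\w+)>", pattern)
            regex = ""
            for i, part in enumerate(parts):
                if i % 2 == 0:
                    regex += re.escape(part)          # literal text of the pattern
                else:
                    regex += f"(?P<{part}>{self.EXPR})"  # placeholder becomes a named group
            self.regex = re.compile(regex)
            self.template = template

        def expand(self, source):
            return self.regex.sub(lambda m: self.template.format(**m.groupdict()), source)

    # Example: extend the base language with an 'incr ... by ...' statement.
    macro = SyntaxMacro("incr <x> by <n>", "{x} = {x} + {n}")
    print(macro.expand("incr count by 2"))   # 'count = count + 2'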
Research and Advances

Conversion of decision tables to computer programs by rule mask techniques

The rule mask technique is one method of converting limited entry decision tables to computer programs. Recent discussion suggests that in many circumstances it is to be preferred to the technique of constructing networks or trees. A drawback of the technique as hitherto presented is its liability to produce object programs of longer run time than necessary. In this paper a modification of the technique is discussed which takes into account both rule frequencies and the relative times for evaluating conditions. This can materially improve object program run time.
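
A minimal sketch of the basic rule mask idea, in Python, assuming a limited entry table whose rule entries are 'Y', 'N', or '-' (immaterial). Every condition is evaluated first, the outcomes are packed into a bit vector, and each rule is tested by masking and comparing; the representation and names are illustrative rather than the paper's own.

    def compile_rules(table):
        """table: list of rules, each a string such as 'YN-' with one entry per condition."""
        compiled = []
        for rule in table:
            mask = value = 0
            for i, entry in enumerate(rule):
                if entry != '-':
                    mask |= 1 << i           # this condition matters for the rule
                    if entry == 'Y':
                        value |= 1 << i      # and must be true
            compiled.append((mask, value))
        return compiled

    def select_rule(compiled, condition_results):
        """condition_results: list of booleans, one per condition, all evaluated up front."""
        vector = 0
        for i, result in enumerate(condition_results):
            if result:
                vector |= 1 << i
        for rule_no, (mask, value) in enumerate(compiled):
            if vector & mask == value:       # masked comparison against the rule
                return rule_no
        return None                          # no rule satisfied (an ELSE rule in practice)

    # Example: three conditions, three rules.
    rules = compile_rules(['YY-', 'YN-', 'N--'])
    print(select_rule(rules, [True, False, True]))   # rule 1 ('YN-')

The modification discussed in the paper goes further than this sketch: rather than evaluating every condition unconditionally, it uses rule frequencies and the relative evaluation times of the conditions to decide which conditions to evaluate and in what order.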
Research and Advances

The augmented predictive analyzer for context-free languages—its relative efficiency

It has been proven by Greibach that for a given context-free grammar G, a standard-form grammar Gs can be constructed which generates the same language as G and whose rules are all of the form Z → cY1 ··· Ym (m ≥ 0), where Z and the Yi are intermediate symbols and c is a terminal symbol. Since the predictive analyzer at Harvard uses a standard-form grammar, it can accept the language of any context-free grammar G, given an equivalent standard-form grammar Gs. The structural descriptions SD(Gs, χ) assigned to a given sentence χ by the predictive analyzer, however, are usually different from the structural descriptions SD(G, χ) assigned to the same sentence by the original context-free grammar G from which Gs is derived. In Section 1, an algorithm, originally due to Abbott, is described which converts a given context-free grammar into an augmented standard-form grammar, each of whose rules is in standard form supplemented by additional information describing its derivation from the original context-free grammar. A technique for performing the SD(Gs, χ) to SD(G, χ) transformation effectively is also described. In Section 2, the augmented predictive analyzer as a parsing algorithm for arbitrary context-free languages is compared with two other parsing algorithms: a selective top-to-bottom algorithm similar to Irons' “error correcting parse algorithm” and …
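
A minimal sketch, in Python, of predictive analysis over a standard-form grammar: because every rule has the shape Z → cY1 ··· Ym, the top predicted symbol must be expanded by a rule whose leading terminal matches the next input symbol. The grammar, the backtracking search, and all names are hypothetical illustrations; neither the Harvard analyzer nor the augmentation described above is reproduced here.

    def analyze(grammar, start, tokens):
        """Return True if the token list is accepted; alternatives are explored by backtracking."""
        def step(prediction_stack, pos):
            if not prediction_stack:
                return pos == len(tokens)      # all predictions fulfilled exactly at end of input
            if pos == len(tokens):
                return False                   # predictions remain but input is exhausted
            top, rest = prediction_stack[0], prediction_stack[1:]
            for terminal, successors in grammar.get(top, []):
                if terminal == tokens[pos]:    # rule top -> terminal Y1 ... Ym, terminal matches input
                    if step(list(successors) + rest, pos + 1):
                        return True
            return False
        return step([start], 0)

    # S -> a S B | a B ;  B -> b   (standard form: each rule begins with a terminal)
    grammar = {
        'S': [('a', ['S', 'B']), ('a', ['B'])],
        'B': [('b', [])],
    }
    print(analyze(grammar, 'S', ['a', 'a', 'b', 'b']))   # True
    print(analyze(grammar, 'S', ['a', 'b', 'b']))        # False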
