Research and Advances

DIAGRAM: a grammar for dialogues

An explanatory overview is given of DIAGRAM, a large and complex grammar used in an artificial intelligence system for interpreting English dialogue. DIAGRAM is an augmented phrase-structure grammar with rule procedures that allow phrases to inherit attributes from their constituents and to acquire attributes from the larger phrases in which they themselves are constituents. These attributes are used to set context-sensitive constraints on the acceptance of an analysis. Constraints can be imposed by conditions on dominance as well as by conditions on constituency. Rule procedures can also assign scores to an analysis to rate it as probable or unlikely. Less likely analyses can be ignored by the procedures that interpret the utterance. For every expression it analyzes, DIAGRAM provides an annotated description of the structure. The annotations supply important information for other parts of the system that interpret the expression in the context of a dialogue. Major design decisions are explained and illustrated. Some contrasts with transformational grammars are pointed out and problems that motivate a plan to use metarules in the future are discussed. (Metarules derive new rules from a set of base rules to achieve the kind of generality previously captured by transformational grammars but without having to perform transformations on syntactic analyses.)
Research and Advances

Analysis of future event set algorithms for discrete event simulation

New analytical and empirical results for the performance of future event set algorithms in discrete event simulation are presented. These results provide clear insight into the factors affecting algorithm performance, permit evaluation of the hold model, and determine the best algorithm(s) to use. The analytical results include a classification of distributions for efficient insertion scanning of a linear structure. In addition, it is shown that when more than one distribution is present, there is generally an increase in the probability that new insertions will have smaller times than those in the future event set. Twelve algorithms, including most of those recently proposed, were empirically evaluated, primarily using simulation models. Of the twelve tested, four performed well, three performed fairly, and five performed poorly.
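The hold model referred to above is easiest to see as a benchmark loop. The sketch below is a minimal illustration under the usual assumption about its form (each hold removes the most imminent event and schedules a replacement at that time plus a random increment); the future event set is kept as a plain sorted list, the kind of linear structure the insertion-scanning analysis refers to, not one of the twelve algorithms evaluated.

```python
import bisect
import random

def hold(future_events, rng):
    """One hold operation: extract the earliest event, schedule a replacement."""
    clock = future_events.pop(0)              # next-event extraction
    new_time = clock + rng.expovariate(1.0)   # random time increment (illustrative distribution)
    bisect.insort(future_events, new_time)    # insertion scan of the linear structure
    return clock

def run_hold_benchmark(n_events=1000, n_holds=10_000, seed=0):
    """Drive a fixed-size future event set through repeated hold operations."""
    rng = random.Random(seed)
    future_events = sorted(rng.expovariate(1.0) for _ in range(n_events))
    for _ in range(n_holds):
        hold(future_events, rng)
    return future_events
```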
Research and Advances

A two-list synchronization procedure for discrete event simulation

The traditional mechanism for maintaining a list of pending events in a discrete event simulation is the simple linked list. However, in large-scale simulations this list often becomes cumbersome to maintain, since the number of pending events may become quite large. As a result, the execution time required by the simple linked list is often a significant portion of total simulation time. Several papers have been published suggesting improved synchronization procedures. The most efficient procedures reported are the time-indexed procedure and the two-level procedure. Both methodologies are much more efficient than simple linked lists; however, neither has been adopted by a general purpose simulation language. Further, both procedures require external parameter definition, which is a major handicap to their adoption by a general purpose language. This paper introduces a new synchronization procedure, the two-list procedure, which is much faster than simple linked lists for large pending event files. This procedure was designed for implementation in Fortran, and when properly implemented it is transparent to the user. Thus it is ideal for adoption by general purpose simulation languages.
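The abstract does not spell out how the two-list procedure works, so the sketch below shows only one plausible two-list arrangement (a short sorted near-future list plus an unsorted overflow list, with illustrative names and capacity), not necessarily the authors' procedure. It is meant to illustrate how splitting the pending event file can shorten the insertion scan that dominates the simple linked list.

```python
import bisect

class TwoListEventSet:
    """Illustrative split of a pending event file into near-future and far-future parts."""

    def __init__(self, near_capacity=64):
        self.near = []                  # sorted, most imminent event times
        self.far = []                   # unsorted, later event times
        self.near_capacity = near_capacity

    def insert(self, event_time):
        if self.near and event_time <= self.near[-1]:
            bisect.insort(self.near, event_time)   # short scan over the near list only
        else:
            self.far.append(event_time)            # constant-time append, no scan

    def pop_next(self):
        if not self.near:
            # refill the near list from the overflow list when it runs dry
            self.far.sort()
            self.near = self.far[: self.near_capacity]
            self.far = self.far[self.near_capacity:]
        return self.near.pop(0)
```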
Research and Advances

An abstract programming model

An abstract model of a processor is presented informally. The model can be used by itself to abstractly describe algorithms, or with a direct implementation to write and run programs, or as the foundation of a programming language. In the last case, a translator allows higher level descriptions of algorithms for the model.
Research and Advances

Automatic extension of an ATN knowledge base

A computer program is described that acquires much of its knowledge from conversations among operators on Morse code radio networks. The system consists of a learning component and a language understander. The learning component extends a ‘core’ augmented transition network (ATN) knowledge base by generalizing from sentences taken from scripts of actual conversations. The extensions enable the understanding component to process a large number of sentences that are syntactically and semantically similar to the examples.
Research and Advances

An on-line algorithm for fitting straight lines between data ranges

Applications often require fitting straight lines to data that is input incrementally. The case where a data range [αk, ωk] is received at each tk, t1 < t2 < … < tn, is considered. An algorithm is presented that finds all the straight lines u = mt + b that pierce each data range, i.e., all pairs (m, b) such that αk ≤ mtk + b ≤ ωk for k = 1, … , n. It may be that no single line fits all the ranges, and different alternatives for handling this possibility are considered. The algorithm is on-line, producing the correct partial result after processing the first k ranges for all k < n. For each k, the set of (m, b) pairs constitutes a convex polygon in the m-b parameter space, which can be constructed as the intersection of 2k half-planes. It is shown that the O(n log n) half-plane intersection algorithm of Shamos and Hoey can be improved in this special case to O(n).
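A minimal sketch of the half-plane view described above: each arriving range [αk, ωk] contributes the two constraints αk ≤ m·tk + b and m·tk + b ≤ ωk, and the feasible (m, b) region is clipped accordingly. The large bounding box used to initialize the region and the generic polygon clipping are simplifications for illustration; the paper's O(n) construction is more refined than this.

```python
def clip(polygon, a, c, d):
    """Intersect a convex polygon (list of (m, b) vertices) with the half-plane a*m + c*b <= d."""
    out = []
    n = len(polygon)
    for i in range(n):
        p, q = polygon[i], polygon[(i + 1) % n]
        fp = a * p[0] + c * p[1] - d
        fq = a * q[0] + c * q[1] - d
        if fp <= 0:
            out.append(p)                    # keep vertices on the feasible side
        if fp * fq < 0:                      # edge crosses the boundary line
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

class OnlineRangeFitter:
    def __init__(self, bound=1e6):
        # feasible (m, b) region, started as a large box (a simplifying assumption)
        self.region = [(-bound, -bound), (bound, -bound), (bound, bound), (-bound, bound)]

    def add_range(self, t, alpha, omega):
        # alpha <= m*t + b   rewritten as   -t*m - b <= -alpha
        self.region = clip(self.region, -t, -1.0, -alpha)
        # m*t + b <= omega
        self.region = clip(self.region, t, 1.0, omega)
        return bool(self.region)             # False once no single line pierces every range

fitter = OnlineRangeFitter()
for t, alpha, omega in [(1.0, 0.8, 1.4), (2.0, 1.9, 2.6), (3.0, 2.7, 3.5)]:
    if not fitter.add_range(t, alpha, omega):
        print("no single line pierces every range seen so far")
```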
Research and Advances

Psychology of calculator languages: a framework for describing differences in users' knowledge

This paper presents a framework for describing users' knowledge of how a simple four-function calculator operates. Differences among novices and experts in their conceptions of “what goes on inside the calculator” for various sequences of button presses are summarized. Individual differences include different views on when an expression is evaluated, different procedures for evaluating a chain of arithmetic, and different rules for evaluating unusual sequences of key presses.
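As a small illustration of what such conceptual differences can look like (our own construction, not an example taken from the paper), the sketch below evaluates the same key sequence under two different user models of a chain of arithmetic: immediate left-to-right evaluation as each operator is pressed, versus deferred evaluation at "=" with multiplication binding tighter.

```python
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def chain_left_to_right(keys):
    """One conception: the running total is updated as each operator arrives."""
    tokens = [k for k in keys if k != "="]
    total = float(tokens[0])
    for op, num in zip(tokens[1::2], tokens[2::2]):
        total = OPS[op](total, float(num))
    return total

def algebraic_precedence(keys):
    """Another conception: nothing happens until '='; * and / are applied first."""
    tokens = [k for k in keys if k != "="]
    stack, i = [float(tokens[0])], 1
    while i < len(tokens):
        op, num = tokens[i], float(tokens[i + 1])
        if op in "*/":
            stack[-1] = OPS[op](stack[-1], num)   # resolve high-precedence ops immediately
        else:
            stack.extend([op, num])               # defer + and - to a second pass
        i += 2
    total = stack[0]
    for op, num in zip(stack[1::2], stack[2::2]):
        total = OPS[op](total, num)
    return total

keys = ["3", "+", "4", "*", "5", "="]
print(chain_left_to_right(keys))    # 35.0 -- "the calculator computes as I type"
print(algebraic_precedence(keys))   # 23.0 -- "the calculator waits and applies precedence"
```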
Research and Advances

Polynomial manipulation with APL

A simple but effective method for the manipulation of polynomials of several variables in APL is presented. The method is especially advantageous in situations where more sophisticated symbolic computing systems such as SAC-1 and MACSYMA are not available. The method is shown to solve a problem not readily handled by other means.
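The paper's APL representation is not reproduced here; as a hedged illustration of the same task in another language, a polynomial of several variables can be held as a mapping from exponent tuples to coefficients, with addition and multiplication defined term by term.

```python
from collections import defaultdict

def poly_add(p, q):
    """Add two polynomials stored as {exponent-tuple: coefficient}."""
    r = defaultdict(float, p)
    for exps, coeff in q.items():
        r[exps] += coeff
    return {e: c for e, c in r.items() if c != 0}

def poly_mul(p, q):
    """Multiply term by term, adding exponent vectors and multiplying coefficients."""
    r = defaultdict(float)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            r[e] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}

# Example in variables (x, y): p = 2xy + 3, q = x - y
p = {(1, 1): 2.0, (0, 0): 3.0}
q = {(1, 0): 1.0, (0, 1): -1.0}
print(poly_mul(p, q))   # {(2, 1): 2.0, (1, 2): -2.0, (1, 0): 3.0, (0, 1): -3.0}
```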
Research and Advances

Representing super-sparse matrices with perturbed values

This paper describes a form of purposeful data perturbation in a linear programming model which pertains to uncertainties in the magnitudes of the matrix coefficients. A problem in value pool construction is described first, followed by a resolution based on a new concept, “covering lattices.” The computer representation of real values, limited by finite precision, is an example of a covering lattice. After presenting the strategy and tactical variations, the effects of resident distortion are analyzed. Several theorems are presented that measure bias under a variety of assumptions. An appendix containing the mathematical proofs is included.
Research and Advances

Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography

A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.
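The core RANSAC loop is easy to state for the simplest model, a 2-D line: repeatedly draw a minimal random sample, instantiate a candidate model from it, and keep the model with the largest consensus set. The sketch below follows that pattern; the sample size, inlier tolerance, and iteration count are illustrative parameters, not values prescribed by the paper.

```python
import random

def ransac_line(points, n_iterations=200, inlier_tol=1.0, seed=0):
    """Fit y = m*x + b to 2-D points containing gross outliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal random sample
        if x1 == x2:
            continue                                 # skip degenerate (vertical) samples
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) <= inlier_tol]
        if len(inliers) > len(best_inliers):         # largest consensus set so far
            best_model, best_inliers = (m, b), inliers
    # a final least-squares refit over best_inliers would typically follow
    return best_model, best_inliers
```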
Research and Advances

Experience with a space efficient way to store a dictionary

The paper, “Computer Programs for Detecting and Correcting Spelling Errors” by James L. Peterson [3], listed methods for checking and correcting spelling errors. One significant method, however, was not included: a probabilistic technique suggested by Carter, Floyd, Gill, Markovsky, and Wegman [1]. The present note discusses aspects of these practical space efficient algorithms for testing set membership—a simple abstraction of looking a word up in a dictionary. An implementation of one of these algorithms uses only 20 percent of the space used by the Stanford SPELL program described by Peterson.
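The specific constructions of Carter et al. are not reproduced here; as a hedged illustration of the same trade-off (accepting a small, controlled false-positive rate in exchange for far less space than storing the words themselves), the sketch below uses a Bloom filter, the best-known probabilistic membership tester, with illustrative sizing.

```python
import hashlib

class BloomFilter:
    """Approximate set membership: no false negatives, a small rate of false positives."""

    def __init__(self, n_bits=8 * 1024 * 1024, n_hashes=7):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, word):
        # derive several bit positions per word from salted hashes
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, word):
        # False => definitely not in the dictionary; True => in it, with small error probability
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(word))
```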
Research and Advances

Another spelling correction program

We read James L. Peterson's article, “Computer Programs for Detecting and Correcting Spelling Errors” with great interest. We too have recently developed a spelling correction program as part of a tutorial on programming techniques and style. Our basic modesty inhibited us from publishing the details—there was, after all, little research involved in making the program. However, on reflection, there are several features in our design that we feel merit wider circulation.
