September 1988 - Vol. 31 No. 9
Features
Program verification: the very idea
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
Learning by doing with simulated intelligent help
Intelligent advisory interfaces will afford new approaches to help and training problems; however, little is known about the usability of such facilities. This recent study indicates that although intelligent help can support users, it also raises specific potential problems.
Reading and writing with computers: a framework for explaining differences in performance
Several factors can influence the behavior of users as they read and write with computers. Recent research indicates that both the quality and quantity of performance depend upon page size, legibility, responsiveness, and tangibility.
Comparison of analysis techniques for information requirement determination
A comparison of two systems analysis techniques, the Data Flow Diagram (DFD) and part of the Integrated Definition Method (IDEF0), is conducted using a new developmental framework.
A comparison of techniques for the specification of external system behavior
The elimination of ambiguity, inconsistency, and incompleteness in a Software Requirements Specification (SRS) document is inherently difficult, due to the use of natural language. The focus here is a survey of available techniques designed to reduce these deficiencies in the documentation of a software product's external behavior; one such technique is illustrated below.
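Surveys of this kind typically cover formal notations such as the finite-state machine, which replaces natural-language requirements with a deterministic transition function. The sketch below is illustrative only and is not drawn from the article; the lockout requirement, state names, and events are hypothetical.

    # Illustrative sketch (hypothetical requirement, not from the article):
    # external behavior stated as a finite-state machine. Because every
    # reachable (state, event) pair has exactly one defined successor, the
    # statement cannot be ambiguous, inconsistent, or incomplete.

    # States 0-2 count consecutive failed logins; "LOCKED" is absorbing.
    TRANSITIONS = {
        (0, "fail"): 1, (1, "fail"): 2, (2, "fail"): "LOCKED",
        (0, "ok"): 0,   (1, "ok"): 0,   (2, "ok"): 0,
    }

    def step(state, event):
        if state == "LOCKED":
            return "LOCKED"  # no event unlocks the account in this sketch
        return TRANSITIONS[(state, event)]

    assert step(step(step(0, "fail"), "fail"), "fail") == "LOCKED"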
The input/output complexity of sorting and related problems
We provide tight upper and lower bounds, up to a constant factor, for the number of inputs and outputs (I/Os) between internal memory and secondary storage required for five sorting-related problems: sorting, the fast Fourier transform (FFT), permutation networks, permuting, and matrix transposition. The bounds hold both in the worst case and in the average case, and in several situations the constant factors match. Secondary storage is modeled as a magnetic disk capable of transferring P blocks, each containing B records, in a single time unit; the records in each block must be input from or output to B contiguous locations on the disk. We give two optimal algorithms for the problems, which are variants of merge sorting and distribution sorting. In particular, we show for P = 1 that the standard merge sorting algorithm is an optimal external sorting method, up to a constant factor in the number of I/Os. Our sorting algorithms use the same number of I/Os as does the permutation phase of key sorting, except when the internal memory size is extremely small, thus affirming the popular adage that key sorting is not faster. We also give a simpler and more direct derivation of Hong and Kung's lower bound for the FFT for the special case B = P = O(1).
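For reference, the tight sorting bound the abstract refers to is usually quoted in the following form, where N is the number of records and M is the internal memory size in records (a parameter implicit in the abstract); this is a sketch of the well-known statement, not a substitute for the paper's own:

    \[
      \text{I/Os for sorting N records}
        \;=\; \Theta\!\left( \frac{N}{PB} \cdot
              \frac{\log (N/B)}{\log (M/B)} \right)
    \]

Setting P = 1 and reading the logarithm in base M/B recovers the familiar external merge-sort cost of roughly (N/B) log_{M/B}(N/B) block transfers.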
Improving locality of reference in a garbage-collecting memory management system
Modern Lisp systems make heavy use of a garbage-collecting style of memory management. Generally, the locality of reference in garbage-collected systems has been very poor. In virtual memory systems, this poor locality of reference generally causes a large amount of time to be wasted waiting on page faults or requires excessively large amounts of main memory. An adaptive memory management algorithm, described in this article, allows substantial improvement in locality of reference. Performance measurements indicate that page-wait time is typically reduced by a factor of four with constant memory size and disk technology. Alternatively, the size of memory can typically be reduced by a factor of two with constant performance.
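The article's adaptive algorithm is not reproduced here, but the locality effect it targets can be seen in any copying collector: live objects are relocated into contiguous memory in traversal order, so related objects tend to end up on the same pages. The minimal Cheney-style sketch below (all names illustrative, and a deliberate simplification of anything the article describes) shows that relocation step.

    # Minimal Cheney-style copying collector over a toy object graph.
    # Illustrative only; the article's adaptive algorithm is more
    # sophisticated. Copying live objects into a contiguous "to-space"
    # in breadth-first order is what groups related objects together.

    class Obj:
        def __init__(self):
            self.children = []   # references to other Obj instances
            self.forward = None  # forwarding pointer, set once copied

    def collect(roots):
        to_space = []  # stands in for contiguous to-space memory

        def copy(obj):
            if obj.forward is None:          # first visit: evacuate
                obj.forward = Obj()
                obj.forward.children = obj.children  # fixed up below
                to_space.append(obj.forward)
            return obj.forward

        new_roots = [copy(r) for r in roots]
        scan = 0
        while scan < len(to_space):          # Cheney scan pointer
            o = to_space[scan]
            o.children = [copy(c) for c in o.children]
            scan += 1
        return new_roots, to_space

After a collection, the order of to_space is the breadth-first order of the live graph, which is why objects that reference one another land near each other.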