December 1987 - Vol. 30 No. 12
Features
Generality in artificial intelligence
My 1971 Turing Award Lecture was entitled "Generality in Artificial Intelligence." The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.
I am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.
It was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.
Another symptom is that no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.
Here are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.
Issues in the pragmatics of qualitative modeling: lessons learned from a xerographics project
The photocopier is one of the most complex machines because xerography involves many types of physical phenomena. ARIA is a qualitative simulation of xerography that is intended to teach technicians the reasons behind some of the subtle problems that occur in copiers. This effort to model xerography exposed shortcomings in the techniques of qualitative modeling as applied to complex systems and led to a better understanding of the impact of certain basic modeling decisions.
Variations on UNIX for parallel-processing computers
Porting a familiar UNIX environment to today's parallel-processing computers is challenging, but options abound.
The duality of database structures and design techniques
Attempting to pair database structures with database design techniques that seem incompatible yields some fascinating concepts about the world of databases, including foreign keys and multiple relationships.
Economies of scale in computing: Grosch’s law revisited
Does Grosch's law apply in the 1980s? A look at the price and performance of computer systems concludes that there are no economies of scale in computing. Computer technology is characterized by constant returns to scale, a fact that has important implications for future centralization/decentralization decisions.
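As background (this is the standard statement of the law, not quoted from the article), Grosch's law asserts that computing power grows as the square of system cost, which is what would make large, centralized machines a bargain:

    P = k\,C^{2} \qquad\Longrightarrow\qquad \frac{C}{P} = \frac{1}{kC}

that is, cost per unit of computing power falls as systems grow (economies of scale). The constant returns to scale reported here correspond instead to a roughly linear relationship, P \approx k'C, under which cost per unit of power is independent of system size.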
Self-organizing search lists using probabilistic back-pointers
A class of algorithms is presented for maintaining self-organizing sequential search lists, where the only permutation applied is to move the accessed record of each search some distance towards the front of the list. During searches, these algorithms retain a back-pointer to a previously probed record in order to determine the destination of the accessed record's eventual move. The back-pointer does not traverse the list, but rather it is advanced occasionally to point to the record just probed by the search algorithm. This avoids the cost of a second traversal through a significant portion of the list, which can be a substantial savings when each record access may require a new page to be brought into primary memory. Probabilistic functions for deciding when to advance the pointer are presented and analyzed. Under these functions, average-case measures such as asymptotic cost and rate of convergence are similar to those of the more common list-update algorithms in the literature. In cases where the accessed record is moved forward a distance proportional to the distance to the front of the list, the use of these functions may save up to 50 percent of the time required for permuting the list.
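The permutation rule described in this abstract can be made concrete with a small sketch. The Python below is a minimal illustration only, assuming a singly linked list, a "reinsert just after the back-pointer" move, and an advance probability of 1/i at the i-th probe; the paper's own advance functions and analysis are more general.

    import random


    class Node:
        """Singly linked list node holding one record key."""

        def __init__(self, key):
            self.key = key
            self.next = None


    class SelfOrganizingList:
        """Self-organizing sequential search list using a probabilistic
        back-pointer. The advance rule used here (advance at the i-th probe
        with probability 1/i) is an illustrative choice, not necessarily
        one of the functions analyzed in the paper."""

        def __init__(self, keys, advance_prob=lambda i: 1.0 / i):
            self.advance_prob = advance_prob
            self.head = None
            for key in reversed(keys):
                node = Node(key)
                node.next = self.head
                self.head = node

        def search(self, key):
            """Single-pass search. On a hit, the found record is unlinked
            and reinserted immediately after the back-pointer, so no second
            traversal of the list is needed."""
            back = None   # back-pointer: some previously probed record, or None
            prev = None   # predecessor of the record currently being probed
            cur = self.head
            i = 0         # 1-based index of the current probe
            while cur is not None:
                i += 1
                if cur.key == key:
                    if prev is not None and back is not prev:
                        prev.next = cur.next       # unlink the found record
                        if back is None:           # promote it to the front
                            cur.next = self.head
                            self.head = cur
                        else:                      # splice it in after `back`
                            cur.next = back.next
                            back.next = cur
                    return cur
                # Occasionally jump the back-pointer to the record just probed.
                if random.random() < self.advance_prob(i):
                    back = cur
                prev = cur
                cur = cur.next
            return None

With the 1/i advance probability, the back-pointer's final position is uniformly distributed over the records probed so far, so the accessed record moves forward, on average, about half the distance to the front, all within a single traversal; other advance functions trade off how far the record moves against how often the pointer is updated.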