"The fox knows many things, but the hedgehog knows one big thing." Philosophers have used this line, which is attributed to the ancient Greek poet Archilochus, to capture the notion that a person can be either a generalist or a specialist. While the former may be broad-minded and capable of coping reasonably with many different kinds of problems, the latter is someone who can apply deep expertise in an important, though perhaps narrow, domain.
This distinction exists in software, too. Programs can be written to be either general-purpose or special-purpose. General-purpose code has the advantage of being usable in a variety of situations, whereas special-purpose code might be written to take advantage of unique characteristics of the execution environment, gaining efficiency at the cost of reusability. Perhaps the classic example is matrix multiplication. While this may be considered a toy problem, it is instructive nonetheless. The general-purpose algorithm for matrix multiplication is easy to write and handles any two matrices as inputs; however, a significantly more efficient algorithm would be preferable if we happen to know that one of the matrices is sparse. The problem, of course, is that we write and compile the code during an early stage, when we do not yet know what the inputs look like; it is only in the later execution stage that the inputs become available.
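To make the contrast concrete, here is a plain Scala sketch (all names, such as multiplyGeneral, are illustrative rather than taken from any library): a general-purpose multiply that handles any inputs, next to a specialized variant whose inner loop runs only over the nonzero entries of its left operand.

```scala
object MatMul {
  type Matrix = Array[Array[Double]]

  // General-purpose: handles any inputs, and does O(n*k*m) work
  // regardless of their content. (Dimension checks omitted.)
  def multiplyGeneral(a: Matrix, b: Matrix): Matrix = {
    val n = a.length; val k = b.length; val m = b(0).length
    val c = Array.ofDim[Double](n, m)
    for (i <- 0 until n; p <- 0 until k; j <- 0 until m)
      c(i)(j) += a(i)(p) * b(p)(j)
    c
  }

  // Specialized: if we happen to know `a` is sparse, skipping its
  // zero entries avoids the innermost loop almost everywhere.
  def multiplySparseLeft(a: Matrix, b: Matrix): Matrix = {
    val n = a.length; val k = b.length; val m = b(0).length
    val c = Array.ofDim[Double](n, m)
    for (i <- 0 until n; p <- 0 until k if a(i)(p) != 0.0; j <- 0 until m)
      c(i)(j) += a(i)(p) * b(p)(j)
    c
  }
}
```

Knowing which variant to call, of course, requires exactly the kind of information that is unavailable until execution.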
On the surface, this might appear to be just a matter of trading between programmer effort and computational efficiency. But a key part of what makes well-written software well written is that it makes as few assumptions about the external environment as possible. Unfortunately, software written with such generality can be too costly in terms of computational efficiency. Thus, programmers are often forced to write specialized code that takes advantage of the particulars of a specific set of assumptions. We can ask the question: Is it possible to write code so that it is general-purpose but then have it automatically specialize itself to the situation at hand during execution?
This is a question that researchers have thought about for many years. While ideas related to "self-modifying code" have been around since the earliest days of computing, one of the first significant demonstrations that automatic code specialization might have important benefits on a practical scale was given in 1988 by Pu, Massalin, and Ioannidis. In that work, they presented the Synthesis operating system and showed it could take advantage of dynamic data values to generate, at runtime, specialized kernel routines. This enabled a system design with exceptionally clean, high-level interfaces, yet with the performance of a far more complicated system. On the other hand, the implementation of Synthesis itself was difficult, and in general, the problem of writing programs that can generate specialized code at runtime has continued to prove challenging. Variations on the Lisp-like concepts of program quotation and macro expansion have proved both limiting and error-prone.
As a result, an important direction of research has been the search for language and compiler technology that allows programmers to write general-purpose code that is then correctly and efficiently turned into high-performance specialized code at runtime. This would allow, for example, a programmer to write a general matrix-multiply routine and still get much better performance if one of the inputs turns out to be sparse. One of the core ideas in this line of research is the concept of staging. We imagine the execution of a program proceeding in a series of stages, each one computing values that are used by later stages. What we seek, then, is to write the program code so that these stages are made apparent. If this is accomplished, we can arrange for the later-stage code to be compiled into code generators that optimize against the results of earlier-stage computations.
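The classic toy illustration of staging is exponentiation, where the exponent is an early-stage value and the base arrives only later. The plain Scala sketch below models "generated code" as a closure; no staging library is involved, and powerGen is a hypothetical name.

```scala
object Staged {
  // Stage 1: given the exponent n, build a function specialized to it.
  // The recursion over n happens now; the returned closure contains
  // only multiplications, with the loop over n already unrolled.
  def powerGen(n: Int): Double => Double =
    if (n == 0) (_: Double) => 1.0
    else {
      val rest = powerGen(n - 1)  // early-stage computation
      (x: Double) => x * rest(x)  // residual later-stage code
    }

  def main(args: Array[String]): Unit = {
    val cube = powerGen(3) // stage 1: specialize to n = 3
    println(cube(2.0))     // stage 2: prints 8.0
  }
}
```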
This is easier said than done, of course. Among the many approaches that have been proposed in recent years, in my view the most promising are those that use sophisticated type systems to manage the identification of stages, as Tiark Rompf and Martin Odersky propose in the following paper on Lightweight Modular Staging (LMS). By using a type system, the programmer can indicate her intent with regard to program staging and have much of that intent typechecked for various kinds of consistency, for example, to make sure that later-stage computations really do come after earlier-stage computations. Furthermore, programs are not restricted to specialized code generation at just compile time or runtime. In fact, LMS programs can express arbitrarily many computation stages, obtaining the benefits of code generation at every stage. No matter how many stages there are, the type system guarantees that, in every stage, all values needed for code generation will have been computed in time.
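To convey the flavor of that type discipline, here is a toy sketch (this is not the actual LMS API, only an imitation of its central idea): plain types such as Int mark values known in the current stage, while a wrapper type Rep[T] marks values that will exist only in a later stage, so computing with them can do nothing but build code.

```scala
object MiniStaging {
  // Next-stage values are expression trees, not numbers.
  sealed trait Rep[T]
  case class Const[T](v: T) extends Rep[T]
  case class Mul(a: Rep[Double], b: Rep[Double]) extends Rep[Double]
  case object Input extends Rep[Double] // the later-stage input

  // `n: Int` is known now; `x: Rep[Double]` is not. The types let us
  // recurse on n, but x can only be mentioned in emitted code.
  def power(x: Rep[Double], n: Int): Rep[Double] =
    if (n == 0) Const(1.0) else Mul(x, power(x, n - 1))

  def main(args: Array[String]): Unit =
    // Prints Mul(Input,Mul(Input,Mul(Input,Const(1.0)))): the loop
    // over n has disappeared from the generated code.
    println(power(Input, 3))
}
```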
Besides providing guarantees on the consistency of staging, the language also supports good program structure and modularity. In particular, through language features such as method inheritance and overriding, important aspects of code specialization can be encapsulated neatly in program libraries, as the sketch below illustrates.
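As a rough illustration of that modularity (the names here are hypothetical, merely echoing the trait-mixing style that LMS builds on), an optimization can live in its own library layer by overriding a "smart constructor," without touching any of the code that calls it:

```scala
trait Arith {
  sealed trait Exp
  case class Const(v: Double) extends Exp
  case class Mul(a: Exp, b: Exp) extends Exp
  // Base layer: the smart constructor just builds the node.
  def mul(a: Exp, b: Exp): Exp = Mul(a, b)
}

// Optimization layer: constant folding and identity elimination,
// added by overriding rather than by editing callers.
trait ArithOpt extends Arith {
  override def mul(a: Exp, b: Exp): Exp = (a, b) match {
    case (Const(x), Const(y)) => Const(x * y)
    case (Const(1.0), e)      => e
    case (e, Const(1.0))      => e
    case _                    => super.mul(a, b)
  }
}

object Demo extends ArithOpt {
  def main(args: Array[String]): Unit =
    println(mul(Const(2.0), Const(3.0))) // prints Const(6.0)
}
```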
With ideas such as LMS, we get closer to the day when all of our programs can be written with elegant generality and yet take advantage of runtime information for the best possible efficiency and user experience.