In 2002, the U.S. Defense Advanced Research Projects Agency (DARPA) launched a major initiative in high-productivity computing systems (HPCS). The program was motivated by the belief that utilization of the coming generation of parallel machines would be gated by the difficulty of writing, debugging, tuning, and maintaining software at petascale.
As part of this initiative, DARPA encouraged work on new programming languages, runtimes, and tools. It believed that programmer productivity might improve if parallel constructs were easier to express, runtime models were matched to the heterogeneous processor architectures under development, and powerful integrated development tools were provided. This is a reasonable conjecture, but we sought to go beyond conjecture to actual measurement of productivity gains.
It is little known that the late Ken Kennedy initiated the idea of MPI as a runtime library that his parallelizing compilers could use in their implementation, never intending that such a crude library would be exposed to humans. His request to colleagues for this library took on a life of its own, and decades of HPC programmers have since been required to use a very low-level parallel programming style, thus setting back the development of more productive approaches to parallel programming. For use by humans, MPI has grown into a very large library with an enormous API.