Scalability is the capability of a parallel program to speed up its execution as we provide it with more CPUs. Back in 1967, Gene Amdahl noticed that the sequential part of a parallel program has a disproportionate influence on scalability.1 Suppose that some program takes 100 s to run on a sequential processor. Now, let's run it on a parallel computer. If we are able to parallelize, say, 80% of the code, then with enough CPUs that 80% would take essentially zero time. However, the remaining sequential portion will not run any faster; this means the parallel program will always take at least 20 s to run, a maximum speedup of only 5X. If we are able to parallelize 95% of the code, the speedup is still limited to 20X, even with an infinite number of CPUs! This back-of-the-envelope calculation, known as Amdahl's Law, does not take into account other factors, such as increased memory size, but it remains an important guideline.
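The arithmetic can be made explicit. In the standard formulation of Amdahl's Law (writing p for the parallelizable fraction and N for the number of CPUs, our notation rather than the original's), the speedup is

\[
S(N) = \frac{1}{(1 - p) + p/N},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}.
\]

With p = 0.8 the limit is 1/0.2 = 5X, and with p = 0.95 it is 1/0.05 = 20X, matching the figures above.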
In 1967, parallelism was a niche topic, but that is no longer the case. Improving program performance on today's clusters, clouds, and multicore computers requires the developer to pay serious attention to scalability. The inherent scalability of an interface is the focus of the following paper.