
Simple HPC Wins

Posted by Microsoft Research Director Daniel Reed

In the 19th century, writing about his work on mechanical calculating devices, Charles Babbage noted, “The most constant difficulty in contriving the engine has arisen from the desire to reduce the time in which the calculations were executed to the shortest which is possible.” Roughly a century later, Daniel Slotnick wrote retrospectively about the ILLIAC IV parallel computing design, “By sacrificing a factor of roughly three in circuit speed, it's possible that we could have built a more reliable multi-quadrant system in less time, for no more money, and with a comparable overall performance.”

Babbage’s design challenged the machining and manufacturing capabilities of his day, though recently others were able to build a functioning system using parts fabricated to tolerances achievable with 19th-century processes. Similarly, Slotnick’s design challenged electronics and early semiconductor fabrication and assembly. Today, of course, parallel computing designs embodying tens of thousands of processors are commonplace, leveraging inexpensive commodity hardware.

Technology Lessons

There is a lesson here that systems designers repeatedly ignore at their peril. Simple designs usually triumph, and the artful exploitation of mainstream technologies usually bests radical change. Or, as Damon Runyon once archly observed, “The race may not always be to the swift, nor the victory to the strong, but that's how you bet.”

All of which is to say that incrementalism wins repeatedly, right up to the point when a dislocating phase transition occurs. There are, of course, many paths to failure. One can be too early or too late. Or to put it another way, you want to be the first person to design a successful transistorized computer system, not the last person to design a vacuum tube computer. The same is true of design approaches such as pipelining, out-of-order issue and completion, superscalar dispatch, cache design, system software, and programming tools.

Innovator’s Dilemma

Any designer’s challenge is to pick the right technologies at the right time, recognizing when inflection points (maturing, disruptive technologies) are near. This is the essence of Clayton Christensen’s well-documented innovator’s dilemma.

The shift from largely proprietary high-performance computing (HPC) designs to predominantly commodity clusters a decade ago was only the most recent such transition. Arguably, we are near another disruptive technology point. The embedded hardware ecosystem offers one intriguing new performance-power-price point, particularly as we consider trans-petascale and exascale designs that are energy constrained. The experiences of cloud providers in building massive-scale infrastructures for data analytics and on-demand computing suggest another.

As I frequently told my graduate students at Illinois, the great thing about parallel computing is that the question never changes ("How can I increase performance?"), but the answers do. Babbage would have understood.
