Communications of the ACM

ACM TechNews

Exascale Computing: The View From Argonne


IBM's Sequoia Supercomputer

Lawrence Livermore National Laboratory's Sequoia supercomputer is a 16.32 petaflops IBM machine built from 96 racks containing 98,304 computing nodes and 1.6 million cores.

Credit: NNSA

In an interview, U.S. Argonne National Laboratory leaders Rick Stevens, Michael Papka, and Marc Snir discuss the challenges and opportunities of developing exascale supercomputing systems.

Snir stresses that an exascale system cannot be built simply by stitching together many petascale computers, and argues that exascale computing is needed to run the complex models required to match hypotheses to evidence in increasingly complex systems. "As we transition to the exascale era the hierarchy of systems will largely remain intact, so the advances needed for exascale will influence petascale resources and so on down through the computing space," Papka says.

Snir anticipates a window of at least 10 years before exascale systems are deployed, and he notes that Argonne "is heavily involved in exascale research, from architecture, through operating systems, runtime, storage, languages and libraries, to algorithms and application codes."

Papka says the U.S. Department of Energy exascale initiative opted for a development approach emphasizing co-design to ensure that the delivered exascale resources fulfill the requirements of the domain researchers and their applications. Stevens agrees that "we will not reach exascale in the near term without an aggressive co-design process that makes visible to the whole team the costs and benefits of each set of decisions on the architecture, software stack, and algorithms."

From HPC Wire

Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA
