
Communications of the ACM

ACM TechNews

Supercomputer Visuals Without Graphics Chip


[Image: Core Collapse (Argonne National Laboratory)]

The rapidly accelerating processing speed of supercomputers is hastening the obsolescence of separate graphics-processing clusters. Scientists at Argonne National Laboratory are crafting software that allows visualization to run on the thousands of processors inside the supercomputer itself. Argonne researcher Tom Peterka has written software for the Intrepid supercomputer that he says "allows us to [visualize experiments] in a place that's closer to where data reside--on the same machine."

Visualization and post-processing of data produced by Intrepid normally requires a separate graphics-processing cluster, but Peterka observes that storage capacity and storage bandwidth upgrades are not keeping pace with processing speed. This means that separate graphics-processing units (GPUs) may be unaffordable for future supercomputing facilities. Los Alamos National Laboratory researcher Pat McCormick says that Peterka's direct data visualization effort is important because "these machines are getting so big that you really don't have a choice."

Peterka, McCormick, and Hank Childs of Lawrence Berkeley National Laboratory anticipate a future in which supercomputers perform in-situ processing: visualizing simulations as they run, bypassing file input/output entirely.
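The in-situ idea can be illustrated with a minimal sketch (all names here are hypothetical, not from the article): instead of writing each timestep to disk for a separate graphics cluster to post-process, the simulation hands its data directly to an analysis routine running on the same machine, while the data are still in memory.

```python
def render_in_situ(step, field):
    """Stand-in for a visualization kernel (hypothetical): reduce the
    field to a small summary as the simulation runs, instead of
    writing the full field to disk for later post-processing."""
    return (step, min(field), max(field))

def simulate(n_steps=3, n_cells=8):
    """Toy simulation loop with an in-situ visualization hook."""
    frames = []
    field = [0.0] * n_cells
    for step in range(n_steps):
        # Advance the simulation state (toy update rule).
        field = [x + (i % 3) for i, x in enumerate(field)]
        # In-situ hook: analyze/visualize while data are in memory,
        # bypassing the file I/O a separate graphics cluster would need.
        frames.append(render_in_situ(step, field))
    return frames

frames = simulate()
```

This is only a sketch of the coupling pattern; real in-situ pipelines interleave rendering with the simulation across thousands of processors.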

Peterka envisions a migration away from processors specialized for specific functions as desktops follow supercomputers and GPUs into the domain of multicore and massively parallel processing.

From Technology Review

Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA



