IBM is developing a new supercomputing architecture equipped with more co-processors and accelerators to increase computing speed and power efficiency. The aim is to boost data processing at the storage, memory, and input/output levels, according to IBM's Dave Turek.
He says the new architecture will help break down parallel computational tasks into small chunks, reducing the compute cycles required to solve problems.
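The article does not describe IBM's implementation, but the idea of decomposing a parallel task into small independent chunks can be sketched in a few lines. The function names here (`chunk_sum`, `parallel_sum_of_squares`) are illustrative, not from IBM:

```python
# Illustrative sketch only: split one large computation into small,
# independent chunks so each worker needs fewer compute cycles.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(values):
    """Process one small chunk of the overall problem."""
    return sum(v * v for v in values)

def parallel_sum_of_squares(data, n_chunks=4):
    """Decompose `data` into chunks, farm them out, combine the results."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```

A thread pool keeps the sketch self-contained; a real system would distribute the chunks across accelerators or nodes, but the decompose-compute-combine shape is the same.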
"When we are working with petabytes and exabytes of data, moving this amount of data is extremely inefficient and time-consuming, so we have to move processing to the data," Turek says. "We do this by providing compute capability throughout the system hierarchy."
He notes the size of the data sets can be reduced by decomposing information in storage; the reduced data can then be moved to memory. "We see a hierarchy of storage and memory including nonvolatile RAM, which means much lower latency, higher bandwidths, without the requirement to move the data all the way back to central storage," Turek says.
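The "move processing to the data" idea can be sketched as a filter-and-summarize step that runs where the records reside, so only a small result crosses the storage-to-memory boundary. All names below (`storage_level_reduce`, the sensor records) are hypothetical illustrations, not IBM APIs:

```python
# Illustrative sketch only: reduce a data set in place, near storage,
# instead of shipping every raw record to a central processor.

def storage_level_reduce(records, predicate, summarize):
    """Run the filter and the summary 'near the data'; only the small
    summary value ever moves up the storage/memory hierarchy."""
    selected = (r for r in records if predicate(r))
    return summarize(selected)

# Example: out of 10,000 readings, only one aggregate travels upward.
readings = [{"sensor": i % 8, "value": float(i)} for i in range(10_000)]
summary = storage_level_reduce(
    readings,
    predicate=lambda r: r["sensor"] == 3,
    summarize=lambda rs: sum(r["value"] for r in rs),
)
print(summary)  # → 6248750.0
```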
Turek says IBM is now looking at optimizing entire supercomputing workloads, which involve modeling, simulation, visualization, and complex analytics, on massive data sets. "Our own research shows that many classic [high-performance computing] applications are only moderately related to the measure of LINPACK," a benchmark based on floating-point operations, he says.
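For context, LINPACK's core kernel is solving a dense linear system Ax = b and scoring the run in floating-point operations per second. The toy version below (plain Gaussian elimination, not the official HPL benchmark) shows where the classic (2/3)n³ operation count comes from:

```python
# Back-of-the-envelope LINPACK-style kernel: solve a dense system Ax = b
# and report a flop rate. Not the official benchmark implementation.
import random
import time

def solve_dense(a, b):
    """Gaussian elimination with partial pivoting; returns x with Ax ≈ b."""
    n = len(a)
    a = [row[:] for row in a]   # work on copies
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))  # pivot row
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

random.seed(0)
n = 100
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]
t0 = time.perf_counter()
x = solve_dense(a, b)
elapsed = time.perf_counter() - t0
flops = (2 / 3) * n ** 3 + 2 * n ** 2  # classic LINPACK operation count
print(f"~{flops / elapsed:.3e} FLOP/s for n={n}")
```

Turek's point is that this dense-solve flop rate captures only part of what data-heavy modeling, visualization, and analytics workloads actually demand.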
From IDG News Service
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA