Communications of the ACM

ACM TechNews

More Chip Cores Can Mean Slower Supercomputing, Sandia Simulation Shows

Simulations at Sandia National Laboratories have shown that increasing the number of processor cores on individual chips may actually worsen the performance of many complex applications. The Sandia researchers simulated key algorithms for deriving knowledge from large data sets. The simulations revealed a significant speedup when moving from two cores to four, a negligible gain from four to eight, and a slowdown beyond eight cores. Sixteen cores performed barely better than two, and performance declined sharply as further cores were added. The drop is caused by limited memory bandwidth and contention among processors for the shared memory bus. The lack of immediate access to individualized memory caches slows the process down once the number of cores exceeds eight, according to the high-performance computing simulations by Sandia researchers Richard Murphy, Arun Rodrigues, and Megan Vance. "The bottleneck now is getting the data off the chip to or from memory or the network," Rodrigues says.
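The scaling pattern described above can be illustrated with a simple analytical model (not the Sandia simulator): per-core compute time shrinks as cores are added, while a contention penalty for the shared memory bus grows with the core count. The constants `work` and `contention` here are arbitrary illustrative values, chosen only to reproduce the qualitative shape of the reported results.

```python
def model_time(n_cores, work=64.0, contention=0.5):
    # Compute work divides evenly across cores, but each added core
    # increases contention for the single shared memory bus.
    return work / n_cores + contention * (n_cores - 1)

def speedup(n_cores):
    # Speedup relative to a single core under this toy model.
    return model_time(1) / model_time(n_cores)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} cores: speedup {speedup(n):.2f}")
```

With these constants the model shows a clear gain from two to four cores, little gain from eight to sixteen, and a decline at thirty-two, matching the qualitative trend the researchers report.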

The challenge of boosting chip performance while limiting power consumption and excessive heat continues to vex researchers. Sandia and Oak Ridge National Laboratory researchers are attempting to solve the problem using message-passing programs. Their joint effort, the Institute for Advanced Architectures, is working toward exaflop computing and may help solve the multicore problem.

From Sandia National Laboratories
