The Graph500 executive committee recently announced new specifications for a more representative way to rate large-scale data analytics in high-performance computing.
Graph500 rates machines on their ability to solve complex problems with seemingly infinite numbers of components, rather than ranking them simply on how fast they solve those problems. Graph500 executive committee member and Georgia Tech professor David A. Bader says the latest benchmark "highlights the importance of new systems that can find the proverbial needle in the haystack of data." The new specification will measure the shortest distance between two points in a network, such as the smallest number of people separating two random members of the LinkedIn network, says Sandia National Laboratories researcher Richard Murphy.
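The "degrees of separation" query Murphy describes is, in graph terms, a shortest-path search. As an illustrative sketch only (not the Graph500 reference code, and using a made-up toy network), a breadth-first search over an unweighted graph finds the fewest hops between two vertices:

```python
from collections import deque

def shortest_hops(adj, source, target):
    """Breadth-first search: fewest edges between two vertices
    in an unweighted graph (e.g., degrees of separation)."""
    if source == target:
        return 0
    dist = {source: 0}          # hop count from the source
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w not in dist:   # first visit is the shortest route
                dist[w] = dist[v] + 1
                if w == target:
                    return dist[w]
                queue.append(w)
    return None                  # target unreachable from source

# Hypothetical toy social network (illustrative only):
# Alice knows Bob, Bob knows Carol, Carol knows Dave.
network = {
    "Alice": ["Bob"],
    "Bob": ["Alice", "Carol"],
    "Carol": ["Bob", "Dave"],
    "Dave": ["Carol"],
}
print(shortest_hops(network, "Alice", "Dave"))  # prints 3 (three hops)
```

Here the path Alice -> Bob -> Carol -> Dave is three hops, meaning two people stand between Alice and Dave. Benchmark-scale runs apply the same idea to graphs with billions of vertices, which is why memory access patterns, not floating-point speed, dominate performance.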
Large data problems are especially important in cybersecurity, medical informatics, data enrichment, and social and symbolic networks.
"A machine on the top of this list may analyze huge quantities of data to provide better and more personalized health-care decisions, improve weather and climate prediction, improve our cybersecurity, and better integrate our online social networks with our personal lives," Bader says.
From Sandia National Laboratories