Indiana University researchers addressed the challenge of moving massive amounts of data to supercomputing facilities for analysis by transferring data over an experimental 100 Gbps network, a link 10 times faster than most currently in use.
The network was established to support testing by several universities during the SCinet Research Sandbox, a component of the recent SC11 conference. A full cluster and file system operated at each end of the 2,300-mile 100 Gbps link between Indianapolis and Seattle, and the Indiana team achieved a peak throughput of 96 Gbps on network benchmarks and 5.2 Gbps with a combination of eight real-world application workflows. Indiana's entry employed the Lustre file system, which can support distributed applications.
Indiana's Stephen Simms says the network "will provide much needed and exciting new avenues to manage, analyze, and wrest knowledge from the digital data now being so rapidly produced."
The network also features tools for cross-administrative collaboration using multi-site workflows and distributing data from instruments to compute resources. "With a centralized file system serving thousands of computational resources around the world, user data can be available everywhere, all of the time," says Indiana's Robert Henschel.
From Indiana University
Abstracts Copyright © 2011 Information Inc., Bethesda, Maryland, USA