
The Cloud Enables HPC for All

As organizations start to tackle big data issues and their computing needs increase, the demand for supercomputing capabilities is growing.

As a software-as-a-service (SaaS) provider whose business runs around the clock, the last thing Commissions Inc. needed was downtime on the Fourth of July. Yet that is exactly what appeared to happen in 2012, when the third-party vendor that monitors the software the startup provides to real estate brokers and agents across the U.S. and Canada erroneously reported the SaaS provider was having connectivity issues.

The problem actually resided with that third-party vendor, and a 4 a.m. call to Commissions Inc.’s cloud provider, Rackspace, quickly nipped the matter in the bud. Commissions’ Chief Software Architect Matthew Swanson says having a system "that is fast, vast, and accurate is of utmost importance" so its customers can connect agents and brokers with homebuyers and sellers.

Commissions Inc. is hardly alone. As organizations tackle big data problems and their computing needs increase, demand for supercomputing capability is growing; many of today’s applications require far more computing capacity than in the past in order to analyze and mine ever-larger amounts of data. Companies like Rackspace, Amazon, and Microsoft have answered the call by offering massive-scale cloud computing capacity, and Rackspace earlier this month announced the availability of new Performance Cloud Servers the company says offer greater speed, throughput, and reliability. IBM has also gotten into the act, announcing its Watson technology will be available as a development platform in the cloud for the first time, with the goal of enabling application developers of all sizes, across a variety of industries, to build innovative new cognitive apps.

"Things that five years ago were impossible, right now are possible because of the amount of compute and data that exists at your fingertips due to cloud computing," says Paul Rad, open technology strategy vice president at Rackspace, and director of research for Cloud and Big Data at the University of Texas, San Antonio.

As the big data phenomenon continues, demand for high-performance computing (HPC) will only grow. Sixty-seven percent of respondents to IDC’s 2013 Worldwide Study of HPC End-User Sites said they perform big data analysis on their HPC systems, with an average of 30 percent of available computing cycles devoted to big data analysis work. IDC forecasts revenue for high-performance data analysis (HPDA) servers will grow robustly, from $743.8 million in 2012 to nearly $1.4 billion in 2017, while HPDA storage revenue will approach $1 billion by 2017, the research firm projects.

The types of work cloud-based HPC is attracting include molecular dynamics, which is used across many sectors beyond the life sciences and is well suited to "embarrassingly parallel" work, says Steve Conway, research vice president for high-performance computing/data analysis at IDC. When drug-discovery software maker Schrodinger wanted to test the accuracy of its drug-screening algorithm in 2012, it assembled a 50,000-core supercomputer on Amazon’s cloud to run the algorithm against 21 million drug candidates, he notes.
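Such screening jobs scale so cleanly because each candidate molecule is scored independently of every other; no worker ever waits on another’s result. A minimal Python sketch of the pattern follows, with score_candidate a hypothetical stand-in for real docking or molecular-dynamics code; the same structure spreads across a laptop’s cores or a 50,000-core cloud cluster without restructuring.

```python
# Sketch of an embarrassingly parallel screening job. score_candidate() is a
# hypothetical placeholder for a real, CPU-heavy scoring routine; each call
# is independent, so the work partitions cleanly across all available cores.
from multiprocessing import Pool

def score_candidate(molecule_id: int) -> tuple[int, float]:
    # Dummy deterministic "score" standing in for real chemistry.
    return molecule_id, (molecule_id * 2654435761 % 2**32) / 2**32

if __name__ == "__main__":
    candidates = range(1_000_000)   # Schrodinger's run screened 21 million
    with Pool() as pool:            # one worker process per available core
        results = pool.map(score_candidate, candidates, chunksize=10_000)
    best = max(results, key=lambda r: r[1])
    print(f"best candidate: {best[0]} (score {best[1]:.4f})")
```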

Cloud HPC also tends to be used for R&D projects that companies want kept isolated until they can be put into day-to-day production, Conway says, and in the oil and gas industry by companies doing seismic processing, once security concerns are addressed. It is also used by many smaller companies and startups, as well as by departments within larger companies that lack access to on-premise HPC machines and so turn to the cloud.

On-premise HPC systems, by contrast, tend to run at well over 90 percent utilization, Conway adds, and are often oversubscribed, leaving little spare capacity to give out. The 2013 IDC study also found the proportion of sites using cloud computing to address parts of their HPC workloads climbed from 13.8 percent in 2011 to 23.5 percent in 2013, with public and private clouds about equally represented among the 2013 sites.

Rackspace’s Performance Cloud Servers offer the high input/output performance needed for a wide variety of scientific and business applications, including sorting and searching of large data sets, parallel computing with large data inputs, and data analysis and mining, says Rad. "This is the next step in making cloud computing a universal technology to accommodate transaction-oriented, as well as high performance computing applications."

In early November, a company called Cycle Computing said it ran a project simulating the properties of 205,000 molecules in 18 hours on Amazon’s cloud servers, a computation that would have taken roughly 264 years on a single conventional server. The cost to the client, the University of Southern California, was $33,000, a pittance compared with the millions of dollars it would have cost to purchase hardware that often sits idle. According to Cycle Computing, projects like USC’s are well suited to the kind of processing cloud servers offer.
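A quick back-of-envelope calculation, using only the figures quoted above (the derived numbers are illustrative estimates, not Cycle Computing’s), shows the scale of parallelism those figures imply:

```python
# Back-of-envelope check on the Cycle Computing run, using the article's
# figures; the derived numbers below are illustrative estimates only.
serial_hours = 264 * 365 * 24            # ~264 years on one server
cloud_hours = 18                         # elapsed time on Amazon's cloud
print(f"effective speedup: ~{serial_hours / cloud_hours:,.0f}x")  # ~128,000x

job_cost = 33_000                        # dollars billed to USC
print(f"~${job_cost / serial_hours:.3f} per serial-server-hour")  # ~$0.014
```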

Rad concurs. When companies were building their own data centers 10 years ago, he notes, it cost tens of millions of dollars to house 1,000 servers, and building them took a couple of years. "Now you can go to Rackspace and do the same thing in a period of days or even hours, to spin up that many machines as needed on Rackspace’s cloud, and it costs only a fraction of what it did 10 years ago." With the utility consumption model cloud providers offer, says Rad, when a project is completed someone else uses the servers, so they don’t sit idle.
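The lease-use-release rhythm Rad describes maps naturally onto a resource-scoping idiom. The sketch below assumes a hypothetical client object; boot() and delete() are illustrative names, not a real Rackspace or OpenStack SDK:

```python
# Hedged sketch of the utility consumption model: acquire a fleet for the
# life of one job, then release it back to the provider's shared pool.
# The client's boot()/delete() methods are hypothetical, not a real SDK.
from contextlib import contextmanager

@contextmanager
def leased_fleet(client, n_servers, flavor):
    servers = [client.boot(flavor) for _ in range(n_servers)]
    try:
        yield servers              # run the job on the leased machines
    finally:
        for s in servers:          # returned capacity is immediately
            client.delete(s)       # available to other tenants

# Usage, given any object exposing boot() and delete():
#   with leased_fleet(client, n_servers=1000, flavor="performance-1") as fleet:
#       run_job(fleet)             # pay only for the hours actually used
```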

HPC is attractive to providers because such projects take up a lot of real estate in the cloud, Conway says, pointing to the Schrodinger example, in which 50,000 cores were used on Amazon; non-HPC jobs might use only three to 10 cores.

The cost of using the cloud can be five to seven times that of doing the computing in-house, according to Conway, but for companies that don’t have their own data centers, the cloud is far less expensive, he says.

Esther Shein is a freelance technology and business writer based in the Boston area.
