Redesigning the Data Center

Faced with rising electricity costs, leading companies have begun revolutionizing the way data centers work, from the hardware to the buildings themselves.

Ambient air flows through Facebook's data center in Prineville, OR, and cools the servers inside the 334,000-square-foot facility.

Late last year, Stanford University researcher Jonathan Koomey released a report detailing a few surprising trends about the energy squanderers known as data centers. Previous estimates had suggested that electricity consumption in massive server farms would double between 2005 and 2010. Instead, it rose by 56% worldwide, and by just 36% in the U.S. The slower-than-expected growth stemmed from a number of factors, including a stagnant economy and the rise of virtualization software.

Yet experts say a more fundamental change is also starting to take effect—one that could lead to much greater improvements in efficiency. Over the past seven or so years, leading companies have begun revising the way they design, maintain, and monitor data centers, from the physical building all the way down to the hardware doing the computation. Recently, Google, Facebook, and other major companies have begun releasing details on the efficiency of their facilities, and revealing a few of the technological tricks they have devised to achieve those gains.

Still, these leaders are the exception rather than the rule. There are no solid estimates of the total number of data centers in the U.S., and the Silicon Valley giants are secretive about exactly how many they operate, but they hardly dominate from an energy standpoint. In all, U.S. facilities consume between 65 and 88 billion kilowatt hours per year, and Google, for instance, accounts for less than 1% of that figure.

The fact remains that the average data center is still largely inefficient. The standard measure of a data center’s efficiency is its PUE, or power usage effectiveness. PUE is the total energy used to operate a data center divided by the amount devoted to actual computing. That total includes lighting, fans, air conditioners, and even electrical losses as power is transferred from the grid to physical hardware. Ideally, a data center would run at a PUE of 1.0, and all of the electricity would go toward computing. Yahoo!, Facebook, and Google have all touted facilities scoring below 1.1. Across industries, though, these numbers are hardly common. “What happens in the typical data center is that it’s more like 2.0,” explains Koomey.
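
To make the arithmetic concrete, here is a minimal sketch of the PUE calculation. The facility figures below are invented for illustration; only the 1.1 and 2.0 benchmarks come from the article.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: all energy entering the facility
    (computing, cooling, lighting, electrical losses) divided by the
    energy that reaches the computing equipment itself."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures, not taken from any real facility:
print(pue(1_100_000, 1_000_000))  # 1.1, the range Yahoo!, Facebook, and Google have touted
print(pue(2_000_000, 1_000_000))  # 2.0, closer to the typical data center, per Koomey
```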


Until recently, most companies did not even bother measuring PUE. They had little sense of how and where energy was used or lost within the facility. “The primary reason all of this happens is because there’s not great accounting of the energy in data centers,” says Raju Pandey, the chief technical officer of Synapsense, a Folsom, CA-based company that performs data center optimizations. “There’s an incredible amount of wastage.”


Heating up

If you had walked into the average data center 10 years ago, you would have needed a sweater. The American Society of Heating, Refrigerating and Air-Conditioning Engineers recommended these facilities be maintained at temperatures between 60 and 65 degrees Fahrenheit to prevent the equipment inside from overheating. And the machines that cool the space are often inefficient. “Traditional data centers basically have the same air-conditioning unit you’d put in your house,” says Bill Weihl, Facebook’s manager of energy efficiency and sustainability.

The rationale was that warmer temperatures could lead to hardware failures, but several experts doubted this was actually the case. When Google began planning a new breed of data center in 2004, the company started testing the temperature limits of its hardware. “We started running our servers warmer and monitoring the failure rates,” says Joe Kava, Google’s director of data center operations. In the end, Google simply did not see any major problems. “The servers ran just fine,” he adds, “and if you know your servers can run at 80 degrees, you can redesign your cooling system entirely.”

Google found that it could avoid relying on giant air-conditioning units, as did other companies. The most efficient data centers now hover at temperatures closer to 80 degrees Fahrenheit, and instead of sweaters, the technicians walk around in shorts. Facebook’s data centers in Luleå, Sweden, and Prineville, OR, have no mechanical chillers at all. “We don’t need them,” says Weihl, who was previously Google’s energy efficiency czar.

At Facebook’s Prineville facility, ambient air flows into the building, passing first through a series of filters to remove bugs, dust, and other contaminants, then into a long corridor. On hot days, when the outside temperature rises above 80 degrees Fahrenheit, the air moves through a fine mist of micron-sized droplets of water suspended in the corridor. Some of the mist evaporates on contact with the warmer outside air. This reduces the temperature, and the mildly chilled air then passes through another filter, which captures the droplets of water that did not evaporate. The end result is a rush of cool air flowing into the building.

When it is too cold outside, the control system automatically mixes in some of the 85 to 90 degree Fahrenheit air coming out of the back of the servers to bring the incoming air up to the right temperature. “We don’t want 20 degree Fahrenheit air going into our servers,” Weihl says. Drastic changes in temperature could cause components to expand and contract, creating mechanical stresses that might lead to permanent damage.
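
The intake-air handling Weihl describes amounts to a simple decision procedure. The following is an illustrative sketch, not Facebook's actual control software; the temperature thresholds echo the figures quoted above, while the target setpoint and function name are assumptions.

```python
def condition_intake_air(outside_temp_f: float, target_min_f: float = 65.0) -> list[str]:
    """Sketch of the intake-air steps described in the article.

    The 80-degree misting threshold and the 85-90 degree exhaust air
    come from the article; the 65-degree target minimum is an assumed
    placeholder, not a published Facebook setpoint.
    """
    steps = ["filter out bugs, dust, and other contaminants"]
    if outside_temp_f > 80.0:
        # Hot day: evaporate a fine mist into the air stream, then
        # catch whatever droplets did not evaporate.
        steps += ["pass air through micron-sized water mist",
                  "filter out unevaporated droplets"]
    elif outside_temp_f < target_min_f:
        # Cold day: blend in warm server exhaust so frigid air never
        # reaches the hardware directly.
        steps.append("mix in 85 to 90 degree server exhaust air")
    steps.append("deliver conditioned air to the server intakes")
    return steps

print(condition_intake_air(95.0))   # hot summer afternoon
print(condition_intake_air(20.0))   # cold winter morning
```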


Smart Monitoring

The source of the cool air in traditional data centers is only part of the problem. Companies have also begun demonstrating the importance of managing circulation within the space. When Synapsense audits a facility, its technicians install wireless sensors throughout the building to measure temperature, pressure, humidity, and more. Pandey says Synapsense often identifies intense hot spots—warmer areas that force the fans and mechanical chillers to work harder to manipulate the temperature, thus increasing energy usage. “You might have enough cool air, but it’s not going to the right places,” he explains. “There might be mixing of the air or there might be areas where it’s leaking.”


In one case, Synapsense installed 3,674 sensors throughout a 100,000-square-foot data center. The sensors fed Synapsense’s control system a stream of data on temperature, pressure, and humidity, and the company’s software built a live-updated map of these metrics throughout the facility. With this data, Synapsense was able to figure out how to optimize energy use by turning up certain fans or shutting down specific air conditioning units. It ended up saving the company 8,244 megawatt hours per year—or $766,000 in annual electrical bills.
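
As a rough check on those numbers (a derived figure, not one quoted in the article), the reported savings imply an electricity price of a little over nine cents per kilowatt-hour:

```python
saved_mwh_per_year = 8_244          # energy savings reported by Synapsense
saved_dollars_per_year = 766_000    # corresponding reduction in the electric bill

saved_kwh = saved_mwh_per_year * 1_000
implied_rate = saved_dollars_per_year / saved_kwh
print(f"implied electricity price: ${implied_rate:.3f} per kWh")  # about $0.093/kWh
```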

In other instances, the changes can be simpler to identify. “In many data centers,” Pandey says, “the hot side of a server might be blowing air into a cool side.” When this happens, the chilled air rising from the floor is partially wasted, so Synapsense and others advocate arranging data centers into hot and cold aisles. In one aisle, you will be faced with the backs of the racks on both sides expelling warm air, whereas the two adjacent aisles will be comparatively cool, with only the front ends of the hardware facing out. Weihl says Facebook installs plastic panels around its hot aisles, creating a corridor that ferries hot air straight to the ceiling. From there, the hot air is either exhausted to the outside or mixed with incoming, colder ambient air to bring that intake air up to the ideal temperature.

Focusing on the air flow and conditions within the hardware itself has proven critical as well. Both Google and Facebook advocate simpler, stripped-down server hardware without the standard plastic or metal plates that often bear a manufacturer’s logo. “The more obstructions you put in the way of the air flow, the harder the fans have to work and the more energy you use,” Weihl says.

Those vanity plates are only one problem with standard hardware. “If you take a standard off-the-shelf server there are probably quite a few things that need to be improved to have it work more efficiently,” says Bianca Schroeder, a computer scientist at the University of Toronto and the co-author of a recent paper on data center efficiency (see the Further Reading list). For example, Schroeder notes that the standard machine won’t have internal temperature sensors to monitor whether one of its hard disk drives might be overheating. On the other hand, Facebook’s Open Vault, a freely accessible server hardware design, has 10 thermal gauges spaced throughout. These sensors link to a self-monitoring system that can adjust the speed of six fans that help to ensure the server stays cool. Furthermore, the fans themselves consume less energy than the industry standard.
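
To give a sense of what such a self-monitoring loop might look like, here is a sketch of a temperature-driven fan controller. The ten sensors and six fans mirror the Open Vault description above, but the thresholds, the linear ramp, and the function itself are illustrative assumptions rather than the published design.

```python
def fan_duty_cycle(sensor_temps_f: list[float],
                   low_f: float = 75.0, high_f: float = 95.0) -> float:
    """Map the hottest thermal-sensor reading to a fan duty cycle (percent).

    Below low_f the fans idle; above high_f they run flat out; in
    between, speed ramps linearly. The thresholds are assumed values
    for illustration, not Open Vault's actual control policy.
    """
    hottest = max(sensor_temps_f)
    if hottest <= low_f:
        return 20.0
    if hottest >= high_f:
        return 100.0
    return 20.0 + 80.0 * (hottest - low_f) / (high_f - low_f)

# Ten hypothetical readings, e.g. from drive bays and chassis inlet/outlet:
readings = [78.0, 81.5, 84.0, 79.2, 82.1, 80.0, 83.3, 77.8, 85.6, 81.0]
print(f"run all six fans at {fan_duty_cycle(readings):.0f}% duty cycle")
```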


Efficiency for All

The Open Vault design is part of a larger Facebook effort, the Open Compute Project, which makes the company’s data center-related efficiency tricks publicly available. Weihl says Facebook released this information in part because it does not see much of a competitive advantage in locking up its energy-saving secrets. “We’ve done a lot of cool things,” he says, “and the conclusion here was that we should figure out how to work with the industry to be as efficient as possible.”

Facebook’s goal is to have a larger impact on data center energy consumption as a whole. And Weihl says the company was thrilled to see that server manufacturers like Dell and Hewlett-Packard have incorporated some of its recommendations, such as the removal of vanity plates. Such changes could translate into warmer data centers, and more savings on the cost of cooling the huge buildings.


Despite the evidence, and examples from the efficiency leaders, many companies are still afraid to turn up the thermostat, says Schroeder. Her own research suggests that this fear is unjustified. “We can safely say that increasing the temperature by a few degrees will not significantly increase failure rates,” she explains, “and increasing temperature even a few degrees will save significant amounts in cooling.”

Koomey argues a number of roadblocks remain. For instance, traditional data centers can last for 15 to 20 years, preventing the wholesale adoption of the more efficient new designs, and many of these older facilities are filled with “comatose” servers that suck up power but no longer handle any computation. “There’s still a long way to go,” he says.


Further Reading

Barroso, L.A., and Hölzle, U.
The Data Center as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Morgan & Claypool Publishers, San Francisco, CA, 2009.

El-Sayed, N., Stefanovici, I., Amvrosiadis, G., Hwang, A., and Schroeder, B.
Temperature management in data centers: Why some (might) like it hot, Proceedings of the 12th ACM SIGMETRICS/Performance Joint International Conference on Measurement and Modeling of Computer Systems, June 11-15, 2012, London, England.

Google, Inc.
Google’s Hamina data center, http://www.youtube.com/watch?v=VChOEvKicQQ&feature=player_embedded, May 23, 2011.

Hamilton, J.
Perspectives, a blog by Amazon Web Services infrastructure efficiency expert James Hamilton, http://perspectives.mvdirona.com/.

Koomey, J.
Growth in Data Center Electricity Use 2005 to 2010, Analytics Press, Oakland, CA, 2011.

