An FAQ about green data centres (3-Sep-07)


MI Summary

Full article: An FAQ about green data centres (3-Sep-07)

As the cost of electricity continues to increase, it becomes more important to care about having a “green” data centre. The most important step is to find a way to measure the efficiency of your facility, which can be achieved through a professional analysis; once the inefficient areas are known, they can be addressed to reduce power consumption and, with it, the cost of energy.

It is also worth increasing the efficiency of the IT equipment within the data centre; the biggest savings in this area come from server consolidation using virtualisation technology.

The largest potential savings in the data centre’s cooling and mechanical systems come from airflow optimisation. Airflow blockages have been found to cause substantial losses; the problem can be addressed with measures such as hot-aisle/cold-aisle designs and variable-speed fans.

Combined, these initiatives cover some of the main areas in which energy can be saved in the data centre.

Text of Article

Green computing is a hot-button issue right now, but not all the ideas out there are practical for data centers. “It’s 90% hype,” says Ben Stewart, senior vice president of facilities planning at Terremark Worldwide Inc. He’s dubious about solar and wind power, for example. But Stewart says 10% of the ideas are win-win: Done right, certain green initiatives can increase energy efficiency, reduce carbon emissions and yield savings.

According to Steve Sams, vice president at IBM Global Technology Services, there’s only one way to evaluate green energy options. “If I spent the money, where would I get the best return? That’s the question to ask,” says Sams. The key is knowing where to start. These four questions and answers can help you develop a plan.

Why should I care about having a green data center?

Data center managers who have run out of power, cooling or space are already motivated to move to greener practices. But many others don’t care because they put reliability and performance first — and they don’t see the power bills, says Peter Gross, CEO at New York-based EYP Mission Critical Facilities Inc. That’s likely to change as electricity consumption continues to rise. “Our data centers are a small fraction of our square footage but a huge percentage of our total energy bill,” says Sams.

The cost of electricity over a three-year period now exceeds the acquisition cost of most servers, says Gross. “I don’t know how anybody can ignore such an enormous cost. It is the second-largest operating cost in data centers after labor,” he says. Gross says that every CIO, facility manager and CEO he meets expresses concern about data center energy efficiency.
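As a rough sanity check on that three-year claim, here is a back-of-the-envelope sketch in Python; the server price, power draw, overhead factor and electricity rate are illustrative assumptions, not figures from the article.

    # Back-of-the-envelope check on the three-year electricity claim.
    # All figures below are assumed for illustration only.
    server_price_usd = 2000.0    # assumed purchase price of a commodity server
    server_draw_kw = 0.40        # assumed average draw at the plug
    overhead_factor = 2.0        # assumed: cooling and distribution roughly double the bill
    rate_usd_per_kwh = 0.10      # assumed electricity rate

    hours = 3 * 365 * 24
    three_year_cost = server_draw_kw * overhead_factor * hours * rate_usd_per_kwh
    print(f"Three-year electricity cost: ${three_year_cost:,.0f} "
          f"vs ${server_price_usd:,.0f} acquisition cost")
    # Prints roughly $2,102 vs $2,000, in line with the point Gross makes.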

“My CEO is beating the drum about cutting power consumption,” says John Engates, chief technology officer at hosting company Rackspace Inc. in San Antonio. He says just 50% of power coming into the data center goes to the IT load. The rest is consumed by surrounding infrastructure, including power, cooling and lighting. “If you’re using less power, you’re spending less money. It’s just good business,” Engates says.
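The 50% figure Engates quotes corresponds to a power usage effectiveness (PUE) of about 2.0. A minimal Python sketch of that metric, using assumed round numbers that match the 50/50 split he describes:

    # PUE (power usage effectiveness) = total facility power / IT load power.
    total_facility_kw = 1000.0   # everything entering the building (assumed)
    it_load_kw = 500.0           # what reaches servers, storage and network gear (assumed)

    pue = total_facility_kw / it_load_kw
    overhead_kw = total_facility_kw - it_load_kw   # cooling, power conversion, lighting

    print(f"PUE: {pue:.2f}")                            # 2.00
    print(f"Infrastructure overhead: {overhead_kw:.0f} kW")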

Returns on investment can be difficult to determine, however, because in most cases, the IT staff in a data center doesn’t see the power bill. “The single most important step is to find ways to measure efficiency in your facility,” says Gross. “You cannot control what you cannot measure.”

One way to determine overall data center energy efficiency and provide a benchmark is to hire professionals to do an analysis. An inspection by IBM Global Technology Services costs $50,000 to $70,000 for a 30,000-square-foot data center, says Sams.

But just a one- or two-day engagement might get you most of the benefits for a lot less money, says Rakesh Kumar, an analyst at Gartner Inc.

What steps can I take to increase the efficiency of my data center’s IT equipment?

The biggest savings come from server consolidation using virtualization technology. Not only does this remove equipment from service, but it also helps raise server utilization rates from the typical 10% to 15% load today, increasing energy efficiency.
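A rough Python sketch of the arithmetic behind that claim; the server count, per-server draw and target utilization are assumptions for illustration, not figures from the article:

    # Consolidation arithmetic: the same work fits on far fewer hosts
    # if each can be driven well above the typical 10-15% utilization.
    physical_servers = 20         # assumed starting fleet
    avg_utilisation = 0.12        # ~10-15% typical load, per the article
    watts_per_server = 400.0      # assumed draw per server

    target_utilisation = 0.60     # assumed load on the virtualization hosts
    hosts_needed = physical_servers * avg_utilisation / target_utilisation   # 4 hosts

    power_before_w = physical_servers * watts_per_server   # 8,000 W
    power_after_w = hosts_needed * watts_per_server         # 1,600 W
    print(f"Hosts needed after consolidation: {hosts_needed:.0f}")
    print(f"IT power saved: {power_before_w - power_after_w:.0f} W, before counting cooling")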

Consolidating onto new servers brings an additional benefit. Power-supply efficiencies for servers purchased more than 12 months ago typically range from 55% to 85%, says Gross. That means 15% to 45% of incoming power is wasted before it hits the IT load. Newer servers operate at 92% or 93% efficiency, and most don’t drop below 80%, even at lower utilization levels.
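To make those percentages concrete, a short Python sketch of wall-side power versus IT load at the efficiencies Gross quotes; the 300 W load and the 70% midpoint for an older supply are assumptions:

    # Wall-side power needed to deliver a given IT load through a power supply.
    def wall_power(load_w, psu_efficiency):
        return load_w / psu_efficiency

    it_load_w = 300.0                        # assumed load served by the power supply
    old_psu_w = wall_power(it_load_w, 0.70)  # midpoint of the 55-85% band -> ~429 W
    new_psu_w = wall_power(it_load_w, 0.92)  # newer supply -> ~326 W

    print(f"Older supply: {old_psu_w:.0f} W drawn, {old_psu_w - it_load_w:.0f} W lost as heat")
    print(f"Newer supply: {new_psu_w:.0f} W drawn, {new_psu_w - it_load_w:.0f} W lost as heat")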

Using virtualization, Affordable Internet Services Online Inc. in Romoland, Calif., consolidated 120 servers onto four IBM xSeries servers. “Now we don’t have the power use and cooling needs we had before,” says CTO and co-founder Phil Nail.

Using networked storage can also keep energy costs in check. Direct-attached storage devices use 10 to 13 watts per disk. In an IBM BladeCenter, for example, 56 blades can use 112 disk drives that consume about 1.2 kilowatts of power. Those can be replaced with a single 12-disk Serial Attached SCSI storage array that uses less than 300 watts, says Scott Tease, BladeCenter product manager.
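The BladeCenter numbers can be reproduced in a few lines of Python; the per-disk wattage is picked from within the quoted 10-to-13-watt range so the total matches the article’s 1.2-kilowatt figure:

    # Direct-attached storage vs a shared SAS array, per the figures above.
    das_disks = 112               # two local disks per blade across 56 blades
    watts_per_das_disk = 10.7     # assumed, within the quoted 10-13 W range

    das_power_w = das_disks * watts_per_das_disk   # ~1,198 W, i.e. "about 1.2 kilowatts"
    sas_array_power_w = 300.0                      # single 12-disk SAS array, per the article

    print(f"Direct-attached disks: {das_power_w:.0f} W")
    print(f"Networked SAS array:   {sas_array_power_w:.0f} W")
    print(f"Saving per chassis:    {das_power_w - sas_array_power_w:.0f} W")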

IT managers should demand more energy-efficient designs for all data center equipment, says Engates. He says his company standardized on Brocade Communications Systems Inc. switches in part because of their energy efficiency and “environmental friendliness.”

How can I get more out of my data center’s cooling and mechanical systems?

Getting back to basics is key, says Dave Kelley, manager of application engineering at Columbus, Ohio-based Liebert Precision Cooling, a division of Emerson Network Power Co. “You have to go back and look at a lot of the things that you didn’t worry about 10 years ago.”

The biggest potential savings come from airflow optimization. For every kilowatt of load, each rack in a data center requires 100 to 125 cubic feet of cool air per minute. Airflow blockages under the floor or air leaks in the racks can cause substantial losses, says Kelley. The typical response to such problems has been to turn the air conditioning down to a colder setting, and that’s a big energy-waster.
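Applying that rule of thumb to a hypothetical rack, as a quick Python sketch (the 5 kW load is an assumption):

    # Cool-air requirement from the 100-125 CFM-per-kilowatt rule of thumb.
    rack_load_kw = 5.0   # assumed rack load

    cfm_low = rack_load_kw * 100
    cfm_high = rack_load_kw * 125
    print(f"A {rack_load_kw:.0f} kW rack needs roughly {cfm_low:.0f}-{cfm_high:.0f} CFM at the intakes")
    # If tiles or cable cutouts leak, the CRAC units must move and chill far
    # more air than this to deliver the same amount where it is actually needed.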

Simple steps such as implementing hot-aisle/cold-aisle designs, sealing off cable cutouts, inserting blanking plates and clearing underfloor obstructions make a big difference. With greater airflow efficiency, air conditioning output temperatures can be raised.

After performing a computerized airflow analysis of its data centers, San Francisco-based Wells Fargo & Co. did exactly that. “In many data centers, you can hang meat in there, they’re so cold. With computerized control and better humidification systems, we’ve raised the set point of our data centers so we’re not overcooling them,” says Bob Culver, senior vice president and manager of facilities for Wells Fargo’s technology information group.

At Pacific Gas and Electric Co. (PG&E), cable races under the floor were blocking 80% of the airflow. The utility expects to save 15% to 20% in energy costs by rewiring under the floor, redesigning the return-air plenum and carefully choosing and placing perforated tiles in the cold aisles. Choosing the right perforated tile — a seemingly small consideration — can actually make a big difference. “There are better tiles out there that will give you more efficient distribution of cool air,” says Jose Argenal, PG&E’s data center manager. The changes also allowed PG&E to avoid adding chillers, pumps and piping — and piping is a potential problem in its older, basement-level data center.

Data center managers can also optimize air conditioning systems by using variable-speed fans, says Ken Baker, data center infrastructure technologist at Hewlett-Packard Co. “AC runs at 100% duty cycle all the time, and the fans have one speed: on,” he says. HP’s Dynamic Smart Cooling initiative uses rack-mounted temperature sensors and variable-speed fans to let the power consumption of air conditioning units vary with the IT equipment load. Intelligent control circuitry manages both fan speed and temperature settings on the air conditioners.

It’s relatively easy to retrofit existing fans, Baker says, and the approach has two major benefits. One is that cutting fan speed dramatically reduces energy use. A 10-horsepower fan uses 7,500 watts of power at full speed but just 1,000 watts at half speed, he says. The increased efficiency also allows the temperature of the cool air supply to be automatically raised from the typical 55 degrees Fahrenheit to between 68 and 70 degrees, he says.
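Those numbers follow from the fan affinity laws, under which fan power scales roughly with the cube of fan speed. A small Python sketch using the full-speed figure quoted above:

    # Fan affinity law: power scales roughly with the cube of speed.
    full_speed_power_w = 7500.0   # ~10 hp fan at 100% speed, as quoted

    def fan_power(speed_fraction, p_full=full_speed_power_w):
        return p_full * speed_fraction ** 3

    print(f"At 50% speed: {fan_power(0.5):.0f} W")   # ~938 W, close to the 1,000 W quoted
    print(f"At 80% speed: {fan_power(0.8):.0f} W")   # even a modest slowdown saves a lot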

“The biggest low-hanging fruit is just turning the thermostat up,” Baker says. People keep the temperature set too low because they fear that the equipment will overheat after a power interruption before the air conditioning system can get the room temperature back under control. “The truth is that the temperature won’t rise that rapidly,” Baker says.

Managers of data centers in colder locales can also save money by designing air conditioning systems with economizers, which use outside air to cool the facility during the winter. Wells Fargo implemented such a system in its Minneapolis data center. The technology makes the most sense when designing new data centers.

Are there changes I can make to my power distribution system that will increase efficiency and save money?

Data centers use many uninterruptible power supplies. In fact, when it comes to energy consumption, UPSs are second only to air conditioning systems among components of the data center infrastructure, and they represent one of the biggest areas for potential savings, says Sams. While servers tend to be refreshed every three or four years, data center UPS equipment tends to be much older. The units are often oversized for the load and were never designed to operate efficiently when running at low utilization rates. While older units might run at 70% efficiency at low utilization levels, newer UPSs run at 93% to 97% efficiency even at low utilization levels, Sams says.
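As a rough illustration of what those efficiency figures mean in practice, a short Python sketch; the 200 kW IT load and the 95% figure for a newer unit are assumptions:

    # UPS input power needed to deliver a given IT load at a given efficiency.
    it_load_kw = 200.0                 # assumed IT load behind the UPS

    old_ups_in_kw = it_load_kw / 0.70  # older unit at low utilization -> ~286 kW
    new_ups_in_kw = it_load_kw / 0.95  # newer unit, within the quoted 93-97% -> ~211 kW

    print(f"Older UPS losses: {old_ups_in_kw - it_load_kw:.0f} kW dissipated as heat")
    print(f"Newer UPS losses: {new_ups_in_kw - it_load_kw:.0f} kW")
    # The waste heat must itself be removed by the cooling plant, so the
    # real saving is larger than the raw difference in kilowatts.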

Rather than buying traditional UPSs, Terremark Worldwide went with greener technology. It replaced all of its battery-backed UPSs in its Miami data center with rotary UPSs. These use a spinning flywheel to deliver transitional power during the time interval between when power is lost and when generators come online. Stewart says flywheels aren’t necessarily more energy-efficient than modern battery-backed UPSs, and the units can be heavy. But they take up less floor space and are greener because there are no lead-acid batteries to dispose of. Today, Terremark’s Miami data center fits 6 megawatts of generators and UPS equipment into a 2,000-square-foot room. “To do that with a static UPS, you’d need three to five times the space just for the batteries,” Stewart says.

Efficiencies can also be gained in the power distribution system. Most data centers step voltage down several times, from 480 to 208 volts and then to 120 volts. Kelley says you can reduce conversion losses by bringing 480 volts directly to the racks and stepping it down from there. Stewart says he is considering moving Terremark’s system to higher European-standard voltage for the same reason. Most IT equipment already supports a 240-volt feed. He expects to see a 4% efficiency gain. “Our power bill is $400,000 a month, so that adds up pretty quickly,” Stewart says.
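The rough value of that 4% gain is easy to work out from the bill Stewart mentions; a short Python sketch:

    # What a 4% distribution-efficiency gain is worth on the quoted power bill.
    monthly_bill_usd = 400_000.0
    efficiency_gain = 0.04

    monthly_saving = monthly_bill_usd * efficiency_gain
    print(f"Roughly ${monthly_saving:,.0f} per month, about ${monthly_saving * 12:,.0f} per year")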

The best green options will vary with the configuration of each data center. The key to success is to focus on the big picture when assessing overall power and cooling needs, says Gross. “Know what you have, benchmark it, figure out where the low-hanging fruit is, and start one element at a time,” he says.
