LinuxWorld 2007 goes green with Green Grid consortium (10-Aug-07)

From Lauraibm

MI Summary

Full article: LinuxWorld 2007 goes green with Green Grid consortium (10-Aug-07)

  • Rackable offers 40U racks which use energy-efficient DC power distribution within the rack.
    • A large amount of power is wasted before it even touches the server when it has to go through a separate UPS and power distribution system that outputs AC, which then has to be converted back to DC by the server's power supply.
  • Sun's energy-efficient BlackBox project is a data centre in a standard shipping container. What makes the BlackBox efficient is that it uses water as the heat-exchange medium. Since water is about 7 times more efficient at heat exchange than air, it reduces the amount of power consumed for cooling. You basically pump cold water into the BlackBox and warm water comes out.
  • SMEs with server rooms may need a little motivation to conserve power. These IT shops often run a mixture of pedestal and rack-mount servers, and they often never see the electricity bill, since that may be handled by departmental budgets. Businesses should seriously consider moving the power budget under the IT department; then IT might have some motivation to contain energy costs through more efficient server rooms and desktop computers. Until then, all the cries of green computing seem to fall on deaf ears.

Text of Article

Next-generation data centers and green computing were common themes at LinuxWorld 2007, and the Green Grid panel brought many of these issues to light. Pictured above is a panel of individuals who represent companies in the Green Grid consortium. The discussion mostly centered on containing power consumption, corporate sales pitches, a battle of words between Sun and IBM, and even something about the environment. See LinuxWorld 2007 hardware gallery.

Rich Lechner raised a very interesting point that the greenest computer is the one that doesn't exist, but then he went on to give his obligatory spiel about IBM mainframes (presumably zSeries) and how they can consolidate a hundred conventional Linux servers (compiled for zSeries) onto a single mainframe. Others, however, challenged Lechner on the fact that while the software may be open, the hardware is proprietary. I also have to question whether it's really cheaper to buy a big proprietary mainframe when relatively cheap commodity quad-socket quad-core systems from Intel and AMD can easily host 32 or 64 virtual servers running Linux, BSD, or Windows compiled for generic x86 or x64.
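To put some very rough numbers behind that question, here is a back-of-the-envelope sketch in Python; the server count, consolidation ratios, and prices are purely illustrative assumptions, not figures anyone quoted on the panel.

  # Rough comparison of consolidation targets; every figure here is an
  # assumption chosen for illustration, not a number from the panel.
  LEGACY_SERVERS = 100          # lightly loaded Linux servers to consolidate
  VMS_PER_COMMODITY_BOX = 32    # quad-socket quad-core x86/x64 host
  VMS_PER_MAINFRAME = 100       # the hundred-servers-per-mainframe claim

  commodity_hosts = -(-LEGACY_SERVERS // VMS_PER_COMMODITY_BOX)  # ceiling division
  mainframes = -(-LEGACY_SERVERS // VMS_PER_MAINFRAME)

  # Hypothetical list prices, only to show how the cost comparison is set up.
  COMMODITY_HOST_PRICE = 25_000
  MAINFRAME_PRICE = 500_000

  print(f"x86/x64 hosts needed: {commodity_hosts}, "
        f"approx. cost ${commodity_hosts * COMMODITY_HOST_PRICE:,}")
  print(f"mainframes needed:    {mainframes}, "
        f"approx. cost ${mainframes * MAINFRAME_PRICE:,}")

Whether the mainframe comes out ahead depends entirely on where those assumed numbers land for a given shop, which is exactly why the question is worth asking.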

Rackable's Colette LaForce explained to me after the panel discussion that Rackable offers 40U racks that can support a total of 80 1U half-depth 2-socket quad-core x86/x64 servers using energy-efficient DC (Direct Current) power distribution within the rack. A large amount of power is wasted before it even touches the server when it has to go through a separate UPS (Uninterruptible Power Supply) and power distribution system that outputs AC (Alternating Current), which then has to be converted back to DC by the server's power supply. Rackable mostly sells its products to large data centers, and customers usually order servers by the rack, pre-cabled, rather than individually. Unlike HP's c-Class blade servers, Rackable's designs aren't meant for you to fill the rack as you grow; they're meant for you to fill the datacenter aisles with more pre-configured racks as you go. This design isn't as flexible, but it's more cost effective for larger operations.
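A simple efficiency chain makes the point about wasted conversion stages concrete; the per-stage efficiencies below are illustrative assumptions, not Rackable's published figures.

  # Illustrative power-conversion chains; the stage efficiencies are assumptions.
  def delivered_fraction(stage_efficiencies):
      """Multiply per-stage efficiencies to get the fraction of utility power
      that actually reaches the server electronics."""
      fraction = 1.0
      for eff in stage_efficiencies:
          fraction *= eff
      return fraction

  # Conventional path: double-conversion UPS, PDU, then the server's own
  # AC-to-DC power supply.
  ac_path = delivered_fraction([0.90, 0.98, 0.80])

  # Rack-level DC distribution: one shared rectifier, then a simple DC-to-DC
  # stage at the server.
  dc_path = delivered_fraction([0.92, 0.92])

  print(f"AC path delivers ~{ac_path:.0%} of input power")
  print(f"DC path delivers ~{dc_path:.0%} of input power")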

Lechner also made a notable comment that perhaps the first thing you do after you finish converting 3000 servers to 30 mainframes is to fire the guy whose bright idea it was to architect the 3000 servers. Lechner argued that consolidating 3000 servers onto 30 mainframes would shift CPU utilization from less than 10% on average to near full capacity, and this would waste less energy because you don't have 3000 servers doing mostly nothing. Andrew Kutz of the Burton Group, acting as moderator, gave a good rebuttal that maybe that guy tried consolidating a bunch of things onto a single server and it blew up in his face. As someone who worked on the front lines of IT for many years, I can attest that the dirty little secret of consolidation is the increase in interdependencies, which makes IT management far more difficult.
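The energy argument behind the 3000-to-30 scenario is simple arithmetic; the wattages below are assumed values just to show the shape of the calculation, not anything Lechner quoted.

  # Back-of-the-envelope energy comparison for the 3000-to-30 scenario.
  # The wattages are assumptions chosen for illustration.
  SERVERS = 3000
  SERVER_WATTS = 300            # assumed draw of a mostly idle 1U server
  MAINFRAMES = 30
  MAINFRAME_WATTS = 10_000      # assumed draw of a heavily loaded mainframe

  HOURS_PER_YEAR = 24 * 365

  scattered_kwh = SERVERS * SERVER_WATTS * HOURS_PER_YEAR / 1000
  consolidated_kwh = MAINFRAMES * MAINFRAME_WATTS * HOURS_PER_YEAR / 1000

  print(f"3000 underutilized servers: {scattered_kwh:,.0f} kWh/year")
  print(f"30 consolidated machines:   {consolidated_kwh:,.0f} kWh/year")
  print(f"ratio: {scattered_kwh / consolidated_kwh:.1f}x")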

While things have improved substantially with the arrival of cheap virtualization and cheap multi-core hardware, issues still remain. Virtualization might solve some of the problems inherent in consolidation because it affords us logical separation in software, but we still have to be aware that the hardware interdependencies multiply. Hardware downtime due to failure or maintenance now means IT has to contact every department that the tens of virtual servers touch and get approval from each one of them for the most convenient time to take the server down. The problem is that the most convenient time for department A may not be the most convenient time for department B, and it only gets more complicated as you share more and more hardware resources. These problems aren't insurmountable, but they need to be acknowledged and dealt with. The recent shift to cheap hardware and cheap virtualization has changed the economics in favor of consolidation, but there was a good reason why IT departments used to run everything on separate servers: it meant the least interdependency and the least impact whenever there was a hardware failure.
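A minimal sketch, with made-up host, VM, and department names, shows how one shared host turns a single maintenance window into a multi-department negotiation.

  # Once many departments' VMs share one host, powering that host down needs
  # sign-off from every one of them.  All names here are hypothetical.
  host_to_vms = {
      "esx-host-01": ["payroll-db", "crm-web", "build-server", "intranet"],
  }
  vm_to_department = {
      "payroll-db": "Finance",
      "crm-web": "Sales",
      "build-server": "Engineering",
      "intranet": "HR",
  }

  def approvals_needed(host):
      """Departments that must agree on a window before this host goes down."""
      return sorted({vm_to_department[vm] for vm in host_to_vms[host]})

  print(approvals_needed("esx-host-01"))
  # ['Engineering', 'Finance', 'HR', 'Sales'] -- four schedules to reconcile
  # for what used to be four independent boxes.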

Sun's David Douglas, who was sitting next to Lechner, wasn't about to be outdone, and he reminded the audience that Sun had just released the new UltraSPARC T2 (codenamed Niagara 2). The UltraSPARC T2 allows 64 legacy SPARC-based servers to be consolidated onto a single T2 system through Solaris Containers or LDOMs (Logical Domains). Douglas also boasted that Sun was one of the first to get the power company to offer energy-efficiency rebates for Niagara-based servers.

Note: The T2 chip has eight 1.4 GHz CPU cores, two pipelines per core, four threads per pipeline, eight crypto off-loaders, a 10-Gigabit Ethernet controller, and four memory controllers on a single monolithic 342 mm² die built on a 65 nm process.
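The 64-way consolidation figure lines up with the chip's hardware thread count, which follows directly from the numbers in the note above.

  # Hardware threads on the UltraSPARC T2, from the figures in the note.
  cores = 8
  pipelines_per_core = 2
  threads_per_pipeline = 4

  hardware_threads = cores * pipelines_per_core * threads_per_pipeline
  print(hardware_threads)  # 64 -- one hardware thread per migrated legacy server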

Douglas also mentioned Sun's energy-efficient BlackBox project, which is a datacenter in a standard shipping container. What makes Sun's BlackBox efficient is that it uses water as the heat-exchange medium. Since water is about 7 times more efficient at heat exchange than air, it reduces the amount of power consumed for cooling. You basically pump cold water into the BlackBox and warm water comes out.
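The cooling-loop arithmetic is the standard Q = m_dot x c_p x delta_T relation; in the sketch below the container's heat load and the water temperature rise are assumptions, and only the specific heat of water is a physical constant.

  # Water flow needed to carry away a given heat load: Q = m_dot * c_p * delta_T.
  # The heat load and temperature rise are assumed figures.
  HEAT_LOAD_KW = 200            # assumed IT load inside the container
  CP_WATER = 4.186              # kJ/(kg*K), specific heat of water
  DELTA_T = 10.0                # K, assumed inlet-to-outlet temperature rise

  flow_kg_per_s = HEAT_LOAD_KW / (CP_WATER * DELTA_T)
  print(f"~{flow_kg_per_s:.1f} kg/s (~{flow_kg_per_s * 60:.0f} L/min) of water "
        f"removes {HEAT_LOAD_KW} kW at a {DELTA_T:.0f} K rise")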

Fred Stack of Emerson Network Power raised the bar even higher on cooling efficiency by saying that their chillers use the refrigerant R-134a. While water might be seven times more efficient than air, 134a is seven times more efficient than water, and it's non-conductive, which is a very desirable property if the coolant ever leaks onto the servers.

Someone in the audience asked when corporations will stop being so beholden to the almighty dollar and start prioritizing the environment more. One of the panelists answered that everything has to be driven by economics or else it is a nonstarter, and I agree. “Green computing” has to mean a saving of green pieces of paper, or else corporations and consumers won't adopt it. Fortunately, consolidation through virtualization and the power savings it translates to is a win/win for everyone because it does save money and it lowers energy consumption.

Large datacenters are keenly aware of energy consumption issues out of necessity because their power bills are through the roof, and sometimes they're up against an absolute ceiling on power utilization. Small and medium-size businesses and organizations with server rooms may need a little motivation to conserve power. These IT shops often run a mixture of pedestal and rack-mount servers, and they often never see the electricity bill, since that may be handled by the facilities budget. Businesses should seriously consider moving the power budget under the IT department; then IT might have some motivation to contain energy costs through more efficient server rooms and desktop computers. Until then, all the cries of green computing seem to fall on deaf ears.
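A quick estimate shows the size of the bill IT never sees; the server count, wattage, cooling overhead, and tariff below are assumptions for illustration only.

  # Rough annual electricity cost for a small server room.  Every figure is an
  # assumed value, not measured data.
  SERVERS = 10
  WATTS_EACH = 400              # assumed draw per pedestal or rack server
  COOLING_OVERHEAD = 1.8        # assumed multiplier for cooling and distribution
  PRICE_PER_KWH = 0.10          # assumed tariff in $/kWh

  kwh_per_year = SERVERS * WATTS_EACH * COOLING_OVERHEAD * 24 * 365 / 1000
  print(f"~{kwh_per_year:,.0f} kWh/year, roughly ${kwh_per_year * PRICE_PER_KWH:,.0f} "
        "a year on the electricity bill")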
