How to Run an Efficient Data Centre (1-Nov-07)

MI Summary

Full story: How to Run an Efficient Data Centre (1-Nov-07)

Text of Article

Today's business environment runs around the clock. Customer demand and the "follow the sun" realities of global operation mean that access to transactions and services is needed 24 hours a day, seven days a week, and downtime has a massive business impact.

Meanwhile, data-retention legislation means that businesses are forced to store ever-larger volumes of information. Green concerns mean they have to do this while minimising energy consumption. As if that were not enough, businesses also expect to be able to react quickly to their markets and rapidly roll out software to support new business processes.

Because datacentres are under intense pressure to perform with exceptional reliability, good design and management are vital. Where once the datacentre was a bunch of servers in a room, pressures on space, power supply and cooling capacity - plus the need to reconfigure quickly to meet new business demands - mean that design has to be extremely well thought out.

At the same time, hardware and software assets must be managed, and the requirements of business continuity must be met. So, what are the key planks of datacentre design and operations that can ensure 24x7 operations?

First, it is worth stepping back a little and defining what the datacentre is meant to do so that we can determine its key characteristics.

The key aim of any datacentre should be to cost-effectively support the technology needs of the business, says James Staten, principal analyst at Forrester Research.

"What this means to datacentre design is making sure the right level of facility is matched to the needs of the business," Staten says.

"For example, if the business requires the datacentre to provide 24x7, high-performance internet-based application availability, that has strong implications on the reliability, availability and serviceability features of the datacentre, and how many datacentres are needed to accomplish this goal. If the demands are less, the integrity of the datacentre can be lower."

The key purpose of a datacentre is to support the operations of the business, so it is critical for the IT department to work side by side with the business to understand its strategy and ensure the infrastructure can scale with it.

At the same time, the datacentre has to be able to change to meet the demands of the business as the market environment changes. Simplicity that can enable flexibility and agility needs to be built in from the start, says Guy Bunker, chief scientist at Symantec.

"The design needs to be able to react quickly and efficiently to changing business needs. Time equals money, and by creating a standardised infrastructure, management costs can be reduced. Complexity is the enemy of the IT administrator, and no more so than in the datacentre," Bunker says.

For that reason, datacentre experts recommend building as much standardisation and modularity into the hardware inventory as possible. This makes sense because when compared with high-cost silos, standardisation allows a greater degree of commonality and more rapid change to meet business needs.

Modularity of hardware can also help to optimise power and cooling, which is a vital consideration for the datacentre, as not only do efficient servers cost less, they also perform more reliably, says Ian Brooks, vertical marketing manager for the UK and Ireland at Hewlett-Packard.

"Indications show that it already costs more to power and cool a server over its lifetime than it does to buy the server. With multiplying numbers of servers, higher densities and hotter processors, it is clear that IT facilities are running out of cooling capacity and power. This is a problem regardless of platform - rack, tower, blade - all datacentres need to address it," Brooks says.

Power and cooling problems in datacentres are a product of the increasing power densities of modern IT hardware. Introducing the most efficient power distribution and cooling mechanisms will result in the lowest cost to the organisation for running the datacentre.

Traditional raised-floor datacentres using forced cool air can no longer provide enough cooling power for the highest power densities, says Mark Blackburn, chief technologist with management software supplier 1E.

"A 19-inch rack full of blade servers would require, on average, eight perforated floor tiles in front of it to maintain the requisite airflow to keep them cool," Blackburn says.

"Advances in CPU efficiencies with the advent of multi-core and smaller die sizes through further miniaturisation of logic circuitry alleviates this somewhat, but the trend for higher and higher densities forced by Moore's Law leads to requiring new power and cooling designs to increase efficiency."


Innovations in cooling

There are a number of innovations in cooling that provide alternatives to computer-room air chillers, such as using ducted cool air from the outside environment, or using water-based heat exchangers.

Making your datacentre a lights-out environment can also cut cooling requirements. Using remote access, you can ensure doors are not opening and closing all the time, which means the air conditioning does not have to work as hard to keep the room temperature within acceptable limits.

Cooling is also aided by rearranging your racks to create hot and cool aisles, says Bill Allen, director of product management with remote access software supplier Avocent.

"Position your racks so that one row has the front of equipment facing the other and the next row is the backs of the racks facing inward," he says. "This can help with airflow, especially if the air conditioning ductwork overhead is aimed directly on the 'hot' aisles. This exercise will also force you to be smarter about cabling issues which can impede good airflow," he says.

It is also possible to reduce power consumption and cooling needs by making sure applications are running with the correct number of servers. Blackburn says, "Many applications are over-served, and turning off unused capacity can be a quick and easy win. Also, turn off unused systems – there are many legacy systems that are no longer utilised and can be decommissioned, thereby saving power."

Metered and switched power-management devices also help match power to actual processing needs. That way you can power down specific non-critical machines during off-hours to lessen the power draw on each rack.
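A minimal sketch of how such switched power management might be scripted follows. The SwitchedPDU class, outlet labels and quiet hours are hypothetical stand-ins for a vendor's own management interface, and the power-up routine brings outlets back in sequence to avoid a large simultaneous draw.

  # Minimal sketch of off-hours power management for non-critical machines.
  # SwitchedPDU, the outlet labels and the quiet window are hypothetical;
  # a real deployment would use the PDU vendor's own management interface.
  import time
  from datetime import datetime

  NON_CRITICAL = ["test-01", "test-02", "build-03"]   # illustrative outlet labels

  class SwitchedPDU:                # stand-in for a vendor PDU client
      def power_off(self, outlet): print(f"powering off {outlet}")
      def power_on(self, outlet):  print(f"powering on {outlet}")

  def enter_off_hours(pdu):
      for outlet in NON_CRITICAL:
          pdu.power_off(outlet)

  def leave_off_hours(pdu, stagger_seconds=30):
      # Bring machines back in sequence rather than all at once,
      # so the rack does not see a massive draw at boot-up.
      for outlet in NON_CRITICAL:
          pdu.power_on(outlet)
          time.sleep(stagger_seconds)

  if __name__ == "__main__":
      pdu = SwitchedPDU()
      hour = datetime.now().hour
      if hour >= 22 or hour < 6:    # assumed quiet window: 22:00 to 06:00
          enter_off_hours(pdu)
      elif hour == 6:
          leave_off_hours(pdu)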

Powering up machines in a sequence rather than all at once can also help prevent a massive power draw at boot-up.

After dealing with the physical design and layout of the datacentre, the next step is to consider managing the datacentre and its hardware and software assets.

At the highest level, where the computing infrastructure meets the physical infrastructure, Blackburn recommends an interface between the IT and facilities departments.

"Organisations should handle the datacentre holistically, and integrate the IT and facilities staff by bringing them under a single management structure," Blackburn says.

"Making the person responsible for buying the servers also responsible for paying the electricity bill will ensure that more thought is given to datacentre efficiency as a whole. Attempt, wherever possible, to integrate building-management and systems-management tools. This gives you an insight into how operations are impacting power consumption and it is then possible to make small changes to systems operations."

Then there are the software tools that can help manage the datacentre's hardware and software. Suppliers such as HP, Tivoli and BMC offer tools that provide single-screen, real-time, supplier-agnostic views of:

  • Power and cooling requirements
  • Failover and redundancy for disaster prevention
  • System health monitoring and alerting for disaster prevention (a simple sketch follows this list)
  • Audit logs and reporting for company- or government-mandated compliance requirements
  • Access and control of virtual servers as well as physical servers.
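As a rough illustration of the system health monitoring and alerting item above, the sketch below compares a set of readings with fixed thresholds. The metric names, limits and sample values are made up for illustration and are not taken from any supplier's tool.

  # Minimal sketch of threshold-based health alerting; metric names, limits
  # and sample readings are illustrative only.
  THRESHOLDS = {"inlet_temp_c": 27.0, "power_draw_kw": 4.5, "cpu_util_pct": 90.0}

  def check_rack(rack_name, readings):
      """Return an alert string for every metric that exceeds its threshold."""
      alerts = []
      for metric, limit in THRESHOLDS.items():
          value = readings.get(metric)
          if value is not None and value > limit:
              alerts.append(f"{rack_name}: {metric} = {value} exceeds {limit}")
      return alerts

  # Example reading for one rack (values are made up):
  print(check_rack("rack-12", {"inlet_temp_c": 29.4, "power_draw_kw": 3.8, "cpu_util_pct": 95.0}))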

Such management tools may sometimes provide aids to change management too, although dedicated point products are also available. One key task these tools assist with is the provisioning of new server environments.

Automating server provisioning can have a dramatic impact on the length of time it takes to get a new system operational, and can provide many benefits in large, complex, interdependent environments, especially in the test and development environments where many servers may need to be repeatedly rebuilt.
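As a minimal sketch of what template-driven provisioning involves, the example below rebuilds a batch of test servers from a single profile. The profile fields, host names and provisioning steps are assumptions for illustration, not any particular vendor's product.

  # Illustrative sketch of template-driven server provisioning.
  # The profile fields and provisioning steps are assumptions, not a real tool.
  TEST_PROFILE = {
      "os_image": "rhel-base",            # assumed golden image name
      "packages": ["httpd", "openjdk"],   # assumed software baseline
      "vlan": 120,
      "monitoring_agent": True,
  }

  def provision(hostname, profile):
      """List the steps a provisioning tool would automate for one server."""
      steps = [
          f"deploy image {profile['os_image']} to {hostname}",
          f"install packages: {', '.join(profile['packages'])}",
          f"attach to VLAN {profile['vlan']}",
          "register host in monitoring" if profile["monitoring_agent"] else "skip monitoring",
          "record the build in the CMDB",
      ]
      for step in steps:
          print(step)   # a real tool would execute these steps, not just print them

  # Rebuilding a batch of test servers becomes a loop rather than manual work:
  for host in ["test-01", "test-02", "test-03"]:
      provision(host, TEST_PROFILE)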


Keeping track of changes

Another vital aid to change management is the configuration management database (CMDB), says Symantec's Guy Bunker. "A CMDB is rapidly becoming an essential element in tracking and controlling systems in the datacentre.

"This can be coupled into an automated provisioning system, so that machines, especially blades, can be quickly and efficiently re-provisioned and therefore re-used on multiple tasks. CMDBs are also essential in tracking 'configuration drift', which becomes critical when looking at disaster recovery planning," he says.

Closely allied with change management is preventative maintenance, which nowadays is more about software than hardware. Hardware has achieved mean time between failure figures in the hundreds of thousands or even millions of hours, so good preventative maintenance is nowadays synonymous with good patch management.

This can be handled fairly easily within the datacentre through planned downtime and a good change and configuration management system.

Last but by no means least, there is the training and skills aspect of datacentre management. Here, IT departments need to ensure they have effective Information Technology Infrastructure Library (ITIL)- and IT Service Management-compliant procedures in place. These provide industry best practices and standard operating procedures for IT deployment that can be adapted to any IT organisation.

Simplifying your environment and judicious use of outsourcing can also help, says Quocirca's Clive Longbottom. "Keep it simple. Consolidate to a single transport such as TCP/IP, rationalise applications and services, provide first-level support yourself – such as forgotten passwords – and outsource more technical aspects," Longbottom says.

"Also think about moving basic technology skills into more of a business-technical translator capability to bridge the gap between the business and the IT capability."
