Category Archives: Data center infrastructure management

Measurement, Control and Efficiency in the Data Center

Guest Post by Roger Keenan, Managing Director of City Lifeline

To control something, you must first be able to measure it.  This is one of the most basic principles of engineering.  Once there is measurement, there can be feedback.  Feedback creates a virtuous loop in which the output changes to better track the changing input demand.  Improving data centre efficiency is no different.  If efficiency means meeting the organisation’s demands for lower energy consumption, better utilisation of assets and faster response to change requests, then the very first step is to measure those things, and to use the measurements to provide feedback and thereby control.

So what do we want to control?  We can divide it into three areas: the data centre facility, the use of compute capacity, and the communications between the data centre and the outside world.  The relative importance of these will differ from one organisation to another.

There are all sorts of data centres, ranging from professional colocation facilities to the server-cupboard-under-the-stairs found in some smaller enterprises.  Professional data centre operators focus hard on the energy efficiency of the total facility.  The most common measure of energy efficiency is PUE, defined originally by The Green Grid organisation.  The metric is simple: the total energy going into the facility divided by the energy used to power the electronic equipment.  It is sometimes abused – a nice example is the data centre that powered its facility lighting over PoE (Power over Ethernet), thus making the lighting part of the ‘electronic equipment’ – but it is widely understood and used world-wide.  It provides visibility and focus for the process of continuous improvement.  It is easy to measure at facility level, as it only needs monitors on the mains feeds into the building and monitors on the UPS outputs.
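As a minimal illustration of the metric just described, PUE can be computed directly from the two meter readings mentioned above; the figures here are invented for the example.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: mains-feed meters record 1,500 kWh into the building over a day,
# while UPS output meters record 1,000 kWh delivered to the IT equipment.
print(pue(1500.0, 1000.0))  # -> 1.5
```

A perfectly efficient facility would score 1.0; everything above that is power spent on cooling, lighting and other overheads.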

Power efficiency can be managed at multiple levels:  at the facility level, at the cabinet level and at the level of ‘useful work’.  This last is difficult to define, let alone measure, and there are various working groups around the world trying to decide what ‘useful work’ means.  It may be compute cycles per kW, revenue generated within the organisation per kW or application run time per kW, and it may be different for different organisations.  Whatever it is, it has to be properly defined and measured before it can be controlled.

DCIM (data centre infrastructure management) systems provide a way to measure the population and activity of servers, and particularly of virtualised machines.  In large organisations, with potentially many thousands of servers, DCIM provides a means of physical inventory tracking and control.  More important than the question “how many servers do I have?” is “how much useful work do they do?”  A large data centre will typically have around 10% ghost servers – servers which are powered and running but doing nothing useful.  On those alone, DCIM can justify its cost and the effort needed to set it up.
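A hypothetical sketch of how a DCIM-style inventory might flag ghost servers, assuming each server record carries a power state and basic utilisation figures; the field names, thresholds and data are all illustrative, not any real DCIM schema.

```python
# Illustrative inventory records (invented): a ghost server here is one that
# is powered on but shows negligible CPU and network activity over the window.
servers = [
    {"name": "web-01",  "powered": True,  "avg_cpu_pct": 35.0, "net_mb_per_day": 5200},
    {"name": "old-db",  "powered": True,  "avg_cpu_pct": 0.4,  "net_mb_per_day": 2},
    {"name": "batch-7", "powered": False, "avg_cpu_pct": 0.0,  "net_mb_per_day": 0},
]

def ghost_servers(inventory, cpu_threshold=1.0, net_threshold=10):
    """Return the names of powered servers doing no apparent useful work."""
    return [s["name"] for s in inventory
            if s["powered"]
            and s["avg_cpu_pct"] < cpu_threshold
            and s["net_mb_per_day"] < net_threshold]

print(ghost_servers(servers))  # -> ['old-db']
```

Each flagged machine is a candidate for decommissioning, and at roughly 10% of the estate the power saving adds up quickly.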

Virtualisation brings its own challenges.  It has taken us away from the days when a typical server operated at 10-15% utilisation, but we are still a long way from most data centres operating efficiently with virtualisation.  Often users will over-specify server capacity for an application, using more CPUs, memory and storage than are really needed, just to be on the safe side and because they can.   Users see the data centre as a sunk cost – it’s already there and paid for, so we might as well use it.  This creates ‘VM sprawl’.  The way out of this is to measure, quote and charge.  If a user is charged for the machine time used, that user will think more carefully about wasting it and about piling contingency allowance upon contingency allowance ‘just in case’, which leads to inefficient stranded capacity.  And if the user is given a real-time quote for the costs before committing to them, they will think harder about how much capacity is really needed.
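A toy sketch of the ‘measure, quote and charge’ idea: pricing a requested VM up front so that over-specification has a visible cost before the user commits.  The rates, function and resource names here are invented for illustration, not any real tariff.

```python
# Hypothetical per-resource rates (invented, in pounds).
RATES = {"vcpu_hour": 0.04, "gb_ram_hour": 0.01, "gb_disk_month": 0.10}

def monthly_quote(vcpus: int, ram_gb: int, disk_gb: int, hours: int = 730) -> float:
    """Estimated monthly cost for a VM of the requested size (730 h ~ one month)."""
    compute = vcpus * RATES["vcpu_hour"] * hours
    memory = ram_gb * RATES["gb_ram_hour"] * hours
    storage = disk_gb * RATES["gb_disk_month"]
    return round(compute + memory + storage, 2)

# An over-specified "just in case" request versus a right-sized one:
print(monthly_quote(8, 32, 200))  # -> 487.2
print(monthly_quote(2, 8, 100))   # -> 126.8
```

Shown a quote like this in real time, the gap between ‘safe’ and ‘sufficient’ becomes a concrete number the user can weigh.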

Data centres do not exist in isolation.  Every data centre is connected to other data centres and often to multiple external premises, such as retail shops or oil rigs.  Often those have little redundancy and may well not operate efficiently.  Again, to optimise efficiency and reliability of those networks, the first requirement is to be able to measure what they are doing.  That means having a separate mechanism at each remote point, connected via a different communications network back to a central point.  The mobile phone network often performs that role.

Measurement is the core of all control and efficiency improvement in the modern data centre.  If the organisation demands improved efficiency (and if it can define what that means) then the first step to achieving it is measurement of the present state of whatever it is we are trying to improve.  From measurement comes feedback.  From feedback comes improvement and from improvement comes control.  From control comes efficiency, which is what we are all trying to achieve.

Roger Keenan, Managing Director of City Lifeline

Roger Keenan joined City Lifeline, a leading carrier neutral colocation data centre in Central London, as managing director in 2005.  His main responsibilities are to oversee the management of all business and marketing strategies and profitability. Prior to City Lifeline, Roger was general manager at Trafficmaster plc, where he fully established Trafficmaster’s German operations and successfully managed the $30 million acquisition of Teletrac Inc in California, becoming its first post-acquisition Chief Executive.

Euro Data Centre Viewpoint: 2013 to be a Year of Growth, Uncertainty

Guest Post by Roger Keenan, Managing Director of City Lifeline

The data centre industry forms part of the global economy and, as such, it is subject to the same macro-economic trends as every other industry.  For 2013, those continue to be dominated by uncertainty and fear.  The gorilla in the room, of course, is the ongoing problem in the Eurozone.  This time last year, many commentators predicted that this would come to a head in 2012, with either the central monetary authorities accepting fiscal union and central control across the Eurozone, or the Eurozone starting to break up.  In the event, neither happened; the situation remains unresolved and will continue to drive uncertainty in 2013.

One major uncertainty has been resolved with a convincing win for Barack Obama in the US presidential elections and the removal of the possibility of a lurch to the right.  However, the “fiscal cliff” remains and will cause a massive contraction in the US economy, and hence the world economy, if it goes ahead at the end of 2012.  For the UK, predictions are that interest rates will stay low for the next two to three years as the banks continue to rebuild their strengths at the expense of everyone else.

So the macro-economic environment within which the data centre industry operates is likely to stay uncertain and fearful in 2013.  Companies have massive cash reserves, but they choose to continue to build them rather than invest.  Decision-making cycles in 2013 are likely to be as they are now – slow.  Companies will not invest in new projects unless they have the confidence that their customers will buy; their customers think the same, and so the cycle goes round.

At a more specific industry level, the ongoing trend towards commoditisation of infrastructure is likely to continue.  Whereas data centres five years ago were specific and unique, new entrants to the market have made data centre capacity more available than it was and driven up technical standards.  Older facilities have upgraded to match new builds, which ultimately benefits the industry and its customers.  The new builds and rebuilds vary in quality and in the veracity of their claims: some are excellent, while others claim tier levels and other standards which are simply not true, or claim to be in central London whilst actually being somewhere else – perhaps following the example of London Southend Airport?  Even in a more commoditised market, quality, connectivity, accessibility and service still stand out, and well-run established data centres will always be first choice for informed customers.

The next part of the consolidation process is probably networks; new entrants are coming into a market where prices continue to fall at a dizzying rate.  There is no end of small new entrants to the marketplace, some of which will succeed and some of which will fall by the wayside.  At the larger end, consolidation continues.  In City Lifeline’s central London data centre alone, Abovenet has become Zayo (and consequently moved from the very top of everyone’s alphabetical list to the very bottom, possibly not causing joy in Abovenet’s marketing department), Cable and Wireless/Thus has become part of Vodafone, PacketExchange has become part of GTT and Global Crossing has become part of Level 3.

Data Centre Infrastructure Management (DCIM) systems may establish themselves more in 2013.  DCIM was predicted to have a massive impact, with Gartner stating publicly in 2010 that penetration would reach 60% by 2014.  In the event, penetration at the end of 2012 is only 1%.  DCIM is hard and laborious to implement, but it offers serious benefits to larger organisations in terms of the management of their physical assets, power, space and cooling, and it can quickly repay its investment by answering the basic question “how many servers can I have for the capacity I am paying for?”  DCIM deserves more success than it has had to date, and perhaps 2013 will be the year it takes off.

Power densities will continue to increase in 2013.  Five years ago, many racks drew 2 kW (around 8 amps).  Now 8 amp racks are becoming unusual and 16 amp racks are the norm.  Five years ago 7 kW racks (about 30 amps) were unusual; now they are common, and 20 kW racks are starting to appear.  The trend to higher and higher performance and power densities will continue.
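As a sanity check on those rack figures, kilowatts convert directly to supply current given a voltage.  The sketch below assumes a UK nominal 230 V single-phase feed; the round numbers in the text correspond to these values, give or take nominal-voltage rounding.

```python
VOLTAGE = 230.0  # volts, UK nominal single-phase supply (assumption)

def kw_to_amps(kw: float) -> float:
    """Current drawn by a rack of the given power: amps = watts / volts."""
    return kw * 1000.0 / VOLTAGE

for kw in (2, 7, 20):
    print(f"{kw} kW rack -> {kw_to_amps(kw):.1f} A")
# 2 kW rack -> 8.7 A
# 7 kW rack -> 30.4 A
# 20 kW rack -> 87.0 A
```

A 20 kW rack, at nearly 90 amps, shows why power distribution and cooling have to be engineered alongside the rise in density.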

The data centre industry continues to grow, driven by the move to Cloud.  By the end of 2013, an estimated 23% of all data centre space will be in commercial colocation operations.  The leading market segments are likely to be Telecoms and Media, with 24%, Healthcare and Education, with 21%, and Public Sector, also with 21%.  In-house data centre capacity is likely to continue to decrease and the commercial colocation market to grow, in spite of the uncertain macro-economic environment.


Report: Green Data Center Market $45 Billion by 2016

The combination of rising energy costs, increasing demand for computing power, environmental concerns, and economic pressure has made the green data center a focal point for the transformation of the IT industry as a whole. According to a recent report from Pike Research, a part of Navigant’s Energy Practice, the worldwide market for green data centers will grow from $17.1 billion in 2012 to $45.4 billion by 2016 – at a compound annual growth rate of nearly 28 percent.
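The headline growth rate quoted above is easy to verify from the two market-size figures, taking the four annual compounding periods from 2012 to 2016:

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1
start_bn, end_bn, years = 17.1, 45.4, 4
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(round(cagr * 100, 1))  # -> 27.6, i.e. "nearly 28 percent"
```

The arithmetic matches the report’s stated compound annual growth rate of nearly 28 percent.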

“There is no single technology or design model that makes a data center green,” says research director Eric Woods. “In fact, the green data center is connected to the broader transformation that data centers are undergoing—a transformation that encompasses technical innovation, operational improvements, new design principles, changes to the relationship between IT and business, and changes in the data center supply chain.”

In particular, two powerful trends in IT are shaping the evolution of data centers, Woods adds: virtualization and cloud computing. Virtualization, the innovation with the greatest impact on the shape of the modern data center, is also recognized as one of the most effective steps toward improving energy efficiency in the data center. In itself, however, virtualization may not lead to reduced energy costs. To gain the maximum benefits from virtualization, other components of the data center infrastructure will need to be optimized to support more dynamic and higher-density computing environments. Cloud computing, meanwhile, has many efficiency advantages, but new metrics and new levels of transparency are required if its impact on the environment is to be adequately assessed, the report finds.

The report, “Green Data Centers”, explores global green data center trends with regional forecasts for market size and opportunities through 2016. The report examines the impacts of global economic and political factors on regional data center growth, along with newly adopted developments in power and cooling infrastructure, servers, storage, and data center infrastructure management software tools across the industry. The research study profiles key industry players and their strategies for expansion and technology adoption. An Executive Summary of the report is available for free download on the Pike Research website.