
Data centres: To buy or not to buy – that is the question


By Nick Razey, CEO, Next Generation Data

When the CIO of a company recently contacted us about data space for housing 25 racks with the possibility of growing to 80 racks, he mentioned in passing that he was thinking about building his own data centre as an alternative. “We’re not experts in data centres but it would be much cheaper than buying from you,” he added.

Well, on the face of it, he could be right: assume a build cost of £2.5m, depreciate it over 10 years, and you have a cost of £250K per annum, or £3,125 per rack per year across the 80 racks. That does sound cheap.

But how about the cost of capital, say an average of £100K pa, and equipment maintenance – perhaps another £50K pa? And of course rent and rates, say £90K pa, plus staffing, where even a bare minimum will need £150K for salaries including all uplifts. So now the total is £640K pa, or £8,000 per rack. Not so cheap after all.

But that’s the least of his problems. Remember he only wanted 25 racks initially. Those 25 racks now work out at an average of £25,600 pa each. Still, within a couple of years the data centre will be full, so his price per rack will come back down to the lower level. But what if he keeps growing? What if he needs 85 racks? Where do the extra five racks go? Does he build another 80-rack data hall for a further £2.5m?
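The arithmetic above can be sketched as a quick back-of-the-envelope model. All the figures are the article’s illustrative estimates, not real quotes or NGD pricing:

```python
# Rough own-build cost model using the article's illustrative figures.

BUILD_COST = 2_500_000      # £2.5m build cost
DEPRECIATION_YEARS = 10
COST_OF_CAPITAL = 100_000   # £ pa, average
MAINTENANCE = 50_000        # £ pa, equipment maintenance
RENT_AND_RATES = 90_000     # £ pa
STAFF = 150_000             # £ pa, bare minimum incl. all uplifts

def annual_cost():
    """Total fixed running cost per annum, including depreciation."""
    return (BUILD_COST / DEPRECIATION_YEARS
            + COST_OF_CAPITAL + MAINTENANCE + RENT_AND_RATES + STAFF)

def cost_per_rack(occupied_racks):
    """Fixed costs spread over however many racks are actually in use."""
    return annual_cost() / occupied_racks

print(f"Total annual cost: £{annual_cost():,.0f}")            # £640,000
print(f"At full 80 racks:  £{cost_per_rack(80):,.0f}/rack")   # £8,000
print(f"At initial 25:     £{cost_per_rack(25):,.0f}/rack")   # £25,600
```

The point the model makes plain is that the costs are almost entirely fixed, so cost per rack is dictated by occupancy, not by the headline build price.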

The build option looks cheaper than colocation only if the costs of real estate and staff are ignored and full occupancy is assumed

The foregoing analysis also assumes that he has an available site which is suitable (secure, safe from flooding and flightpaths), planning permission, a source of power, connectivity options, a quality design and build contractor and plenty of cash to invest.

By comparison, housing the same number of initial and potential additional racks in a modern Tier 3 UK colocation data centre will cost between £5,000 and £10,000 per rack pa depending on location (with London/inner M25 locations being at the higher end of the scale). This includes space, power, cooling and associated infrastructure.

This illustration shows that the own-build option looks cheaper than colocation only if the costs of real estate and staff are ignored and full occupancy is assumed. Some of the new single-site ‘mega’ data centres – over 250,000 square feet – can further reduce the cost per rack by delivering even greater economies of scale. This is achieved by all data hall construction taking place within one building and the utilisation of a common facilities infrastructure (power supply, HVAC plant, fibre cross connects, security).

So why do so many companies consider building their own data centres? A charitable explanation is that they feel more secure with an in-house data centre and it’s more efficient for staff if the data centre is on-site. A more cynical explanation is that a data centre is the ultimate vanity project.

But as the saying goes, “Revenue is vanity, profit is sanity.”

Opinion: The inverted pyramid of IT infrastructure

By Nick Razey, CEO, Next Generation Data

If we ever lose a bid for a data centre contract, it is for one of two reasons.

Either the customer insists on being close to London and is happy to pay the premium for doing so, or the customer is obsessed with paying the lowest possible price and will accept any quality of data centre service to achieve this. While I struggle to understand both points of view, it is the latter approach which is totally incomprehensible in the modern world.

I see the IT infrastructure as an inverted pyramid. At the bottom is the cheapest element, the data centre service.

It is staggering to think that the cost of this per rack (at NGD at least) is only £20 per day or less. That £20 rack supports perhaps £50k to £100k of hardware and software (the next layer of the pyramid), which in turn supports the business applications, systems and processes that are fundamental to the efficient running of a business worth millions (the top level of the pyramid).

It’s staggering to think a £20 rack supports perhaps £50k to £100k of software

That’s an awful lot of trust to place in a service costing £20 per day. But responsible operators like NGD (and, to be fair, most of the London DCs) offer a service that IS trustworthy. A well-designed and built Tier 3 infrastructure, properly maintained with ISO-approved processes and highly trained staff, will ensure that outages are vanishingly rare.

However, some customers are so focused on price that they will seek out the cheap and nasty data centre – the N+0 infrastructure with zero security and 9-to-5 rather than 24/7 technical support.

With a service of this quality it is a question of when it fails, not if it fails. And when the data centre fails, the whole IT pyramid will come crashing down, with the attendant loss of revenue and reputation.

All to save a few pounds per day.


Why physical security is essential to combating the ever present and growing threat to data centres

Nick Razey, CEO and Co-founder of Next Generation Data

Although recognised as important, the absolute criticality of the data centre is often underestimated and this is possibly due to its relative cost in comparison to other elements of the IT stack. While a rack footprint might cost £10K pa, the hardware might cost over £50K and the managed service £100K.

However, if the data centre fails the lost business can reach millions of pounds per day. So, while the data centre is considered the least important element when it is working, it immediately becomes the most important element if it fails.

While this is of course recognised by data centre designers, who focus on building a resilient mechanical and electrical infrastructure – normally with the objective of achieving “concurrent maintainability” – a comparable focus on security measures is often overlooked.

Targeting physical infrastructure is commonplace in time of war. Ports, airports and road arteries have been the traditional targets of those seeking to disrupt and destroy.

While the data centre is considered the least important element when it is working, it immediately becomes the most important if it fails

In today’s world, where data is the oxygen that drives almost everything we do and there is hardly a business in the land without some form of information technology at its core, it is unsurprising that the locations which store active data – the data centres – are now recognised as being under threat.

In their research paper Predicts 2013: Infrastructure Services Threatened as New Structural, Political, Competitive and Commercial Challenges Emerge, Gartner note that the flip side of greatly enhanced cyber security through the use of mind-bending algorithms is that it pushes those with an axe to grind to consider a plan B where a physical assault makes the most sense. The London riots of 2011 might have created just such a spark had the rioters wished to target a particular company or government department on which to vent their frustration.

Gartner’s report forewarns senior IT decision makers that by 2016 it is likely government regulation will dictate minimum physical levels of security for data centre infrastructure. No longer will it be acceptable to store data in facilities that do not have rigorous security protocols so executives must prepare now for this change.

In the current world this lack of focus on the physical aspects of security is somewhat surprising as not only have we seen the destruction of the World Trade Center (which housed many data centres) and the London bombings of 2005, but as Gartner point out, going back further there is an even more pertinent example:

The bombing of the financial district of London’s Docklands in February 1996 demonstrated the vulnerability of data centre buildings and surrounding areas to major disruption caused by a terrorist bomb.

The emphasis that most data centre operators have placed on London is understandable. The vast majority of companies have their headquarters in London and the financial sector is almost exclusively based there. In the days when IT was unreliable and communication links were expensive it made sense to house equipment close to the main office – to be a “server hugger”. The focus on maintaining cheaper communications links meant that companies’ IT equipment began to cluster together in a few mega data centres in London’s Docklands. However, this clustering has created a concentration of risk, as Gartner point out:

As large data centers serve more and more end-user organizations, their potential as a target for criminal and terrorist activity increases.

And according to Gartner it is also the risk to critical infrastructure that means the Government might be required to legislate:

“Targeting critical infrastructure is a well-established strategy in times of war, and by terrorists (international and domestic alike). Data centers are critical infrastructure for the effective operations of most world economies. Information security measures continue to make “getting inside” harder for those with malicious intentions, thus requiring a reversion to the oldest form of gaining access — kick down the door!

The emphasis that most data centre operators have placed on London is understandable

“Such actions could be taken by disenfranchised domestic groups, contracted corporate espionage agents, or even the result of international conflict. Although the actual success of such efforts may be limited, the mere attempt will be enough to cause many governments to recognize data centers as being vital critical infrastructure requiring regulations potentially on par with those found in nuclear facilities.”

Fortunately, thanks to increasingly sophisticated remote control and monitoring, and cheap fibre, the data centre no longer needs to be tightly shackled to the head office. Responsible CIOs should therefore focus on priorities other than convenience and consider data centre locations which are more secure than central London.

So why is a more remote out of town location more secure? Firstly, good security requires space – space enough for double layer fencing, space enough for the data centre building to be at least 25m from the road. In congested and expensive London this is very difficult to achieve whereas NGD’s facility in South Wales, for example, has a 25 acre site with military grade fences. Secondly, a secure site should have a low footfall – remote from highly populated focal points for crowd unrest or riots, a location where unwanted strangers are easy to spot. And of course, the data centre should not be in the vicinity of natural terrorist targets such as Canary Wharf, flight paths or flood plains.

In summary, the UK government has already begun to recognise the growing security threat facing data centres. In order to minimise risk, buyers will in turn need to assess physical security capabilities far more rigorously than at present and look toward the selection of redundant delivery centres that are more geographically dispersed.

Assessing the balance of power for data centre operators

By Nick Razey, CEO, Next Generation Data

Achieving the lowest possible power usage effectiveness (PUE) rating should not be the only objective of a data centre operator. After all, the most energy efficient data centre is one which has lost all power! Furthermore there is a considerable grey area around the method of PUE calculation.

Effectively consolidating space and power constrained legacy data centres into more energy efficient ‘PUE-friendly’ environments ultimately requires their migration into modern facilities. These can offer the space, power and infrastructure necessary for supporting future compute requirements over the medium to longer term; five, ten, even fifteen years.    

But for a realistic data centre PUE to be calculated, it should include the power consumed by offices, general lighting, security systems and so on. As a minimum it should include transformer and UPS losses. Ideally it should be measured over a 12-month period or, if calculated, based on worst-case conditions (i.e. the hottest day of the year). Even so, many companies will look at the energy consumption of the air conditioning units under ideal conditions and include only this in their calculations.
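The calculation itself is simple – PUE is total facility power divided by IT load – and the argument is really about what goes into the numerator. A minimal sketch, with purely illustrative figures rather than measured ones:

```python
def pue(it_load_kw, cooling_kw, ups_loss_kw=0.0,
        transformer_loss_kw=0.0, lighting_office_security_kw=0.0):
    """PUE = total facility power / IT equipment power.

    A realistic figure includes all facility overheads (UPS and
    transformer losses, offices, lighting, security), not just the
    air conditioning under ideal conditions.
    """
    total = (it_load_kw + cooling_kw + ups_loss_kw
             + transformer_loss_kw + lighting_office_security_kw)
    return total / it_load_kw

# Illustrative 1 MW IT load (assumed numbers, not NGD measurements):
optimistic = pue(1000, cooling_kw=150)                 # cooling only: 1.15
realistic = pue(1000, cooling_kw=250, ups_loss_kw=60,
                transformer_loss_kw=30,
                lighting_office_security_kw=40)        # all overheads: 1.38
```

The gap between the two results is the grey area the paragraph above describes: the same facility can report a markedly better PUE simply by leaving overheads out of the numerator.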

Without doubt the PUE can be improved by stripping out levels of resiliency – N+1 is more efficient than N+N, for example – but some customers will demand the reliability of N+N regardless of PUE. Reliability is still king, and this necessitates a resilient energy supply which must not be unduly compromised in pursuit of the ultimate in PUE ratings.

While a PUE of 1 is not possible – it would mean that no power is consumed other than that used by the IT load – it is feasible in a modern facility to get close. The processes and procedures to lower the PUE should focus particularly on reducing power for cooling. This will require a combination of close-coupled CRACs, hot and cold aisle containment, higher data hall temperatures and, assuming sufficiently low ambient temperatures are available, fresh air cooling.

Although the use of fresh air cooling drastically reduces the energy consumption there are some losses which cannot be avoided. For example, getting electricity from the incoming feed to the rack wastes power due to transformer losses, cable resistances and UPS inefficiencies. Even the fresh air cooling will consume some energy due to the fans needed to circulate the air. Furthermore, regardless of cooling requirements, best practice dictates that the data centre fresh air is changed regularly.

Server virtualisation

When taking steps to reduce power consumption, server virtualisation is essential. An average server used to run at only 10% to 15% of capacity, but a single virtualised host can now run 20 to 40 virtual machines.
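The consolidation arithmetic implied above can be sketched as follows. The server count, per-machine power draw and the 30-VMs-per-host figure are all assumptions chosen for illustration (the article only gives the 20 to 40 range):

```python
import math

# Illustrative consolidation sketch: lightly loaded physical servers
# folded onto virtualised hosts. All figures are assumptions.

physical_servers = 120   # legacy servers running at 10-15% utilisation
vms_per_host = 30        # assumed mid-range of the 20-40 VM figure

# Each legacy server becomes one VM, so hosts needed is a ceiling division.
hosts_needed = math.ceil(physical_servers / vms_per_host)
print(hosts_needed)      # 4 hosts replace 120 servers

# Rough power impact at an assumed 400 W draw per machine (in practice
# a virtualisation host draws more than a lightly loaded legacy server,
# so the real saving is smaller but still substantial):
before_kw = physical_servers * 0.4   # 48 kW
after_kw = hosts_needed * 0.4        # 1.6 kW
```

Even allowing for heavier hosts, an order-of-magnitude reduction in powered machines is why virtualisation comes first in any consumption-reduction plan.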

But organisations must consider the power implications of the type of rack hardware used for running the required applications, or they may be forced into using more racks and more power than actually necessary. There is a common misconception that running low-density racks instead of higher-density ones will be less costly when it comes to power, but the reverse is actually the case.

Running fewer high-density racks rather than many lower-density ones will yield a lower total cost of ownership: they deliver far superior compute capability while consuming significantly less data centre resource – switchgear, UPS, power distribution, cooling towers and pumps, chillers, lighting and so on.
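A hypothetical sketch of that density argument, serving the same total IT load with low- versus high-density racks. The load, densities and per-rack facility overhead are illustrative assumptions, not NGD pricing:

```python
import math

def racks_needed(total_it_kw, kw_per_rack):
    """Racks required to host a given total IT load."""
    return math.ceil(total_it_kw / kw_per_rack)

TOTAL_LOAD_KW = 100
# Assumed fixed facility overhead per rack pa (space, UPS share,
# cooling share, lighting, security) - illustrative only.
FACILITY_COST_PER_RACK_PA = 8_000

low_density = racks_needed(TOTAL_LOAD_KW, 4)     # 4 kW racks -> 25 racks
high_density = racks_needed(TOTAL_LOAD_KW, 10)   # 10 kW racks -> 10 racks

print(low_density * FACILITY_COST_PER_RACK_PA)   # £200,000 pa
print(high_density * FACILITY_COST_PER_RACK_PA)  # £80,000 pa
```

The IT power drawn is the same in both cases; the difference is the per-rack facility overhead, which falls away as the rack count shrinks.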

The problem is that the latest, more efficient high-density racks consume over 5kW, and a growing number more than 10kW: few data centres can actually supply this level of power per rack today, and the problem is only going to get worse.

Integrated energy management

Central to optimising overall data centre energy efficiency must be an advanced energy monitoring and management platform, capable of integrating the building management system, the electronic management system, PDUs and SCADA; data centres have historically used disparate systems, which is considerably less efficient.

With an integrated energy management system there is the means to closely monitor energy usage, highlighting any areas of concern where consumption is running at unexpected levels. This can then be addressed quickly and efficiently, ensuring a better service for customers of the data centre and potentially saving the operator and its customers thousands of pounds through reduced electricity costs and of course minimising the environmental impact of their operations.