All posts by NGD

How to clear the final hurdle to public cloud adoption

(c)iStock.com/Michael Chamberlin

As demand for public cloud services continues to grow rapidly, the major providers are busy developing hyperscale clouds supported by regional data centres. Both Amazon and Microsoft have announced UK-based cloud services, with the goal of meeting sovereignty needs while also helping organisations achieve their digital transformation objectives, allowing them to once and for all unshackle themselves from the constraints and costs of legacy IT systems.

All well and good, but as Gartner’s recent cloud adoption survey underlined, many enterprises remain reluctant to move forward with public cloud services until they have more assurances about performance and security. While Gartner and others clearly don’t dispute the continuing meteoric rise of public cloud, there is still much to be done to remove the fear, uncertainty and doubt surrounding it, convincing users and boardrooms that it is safe and robust, even though in the majority of cases it is already more secure than what they are using today.

So what more can CIOs and service providers do to deliver the missing ‘X Factor’ that many users and boardrooms still demand before fully embracing public cloud services, let alone mixing these with their private cloud and legacy systems?

Connectivity to these cloud services is increasingly a key part of the solution. It can no longer be an afterthought and must be seriously considered from the outset, particularly for applications which are sensitive to latency issues.

Companies cannot always rely on the vagaries of the public internet, which can be the weakest link of any public cloud offering. They must invest in secure circuits and MPLS networks as they move to using cloud services.

Independent cloud gateways and exchanges for accessing cloud services are a relatively new development in this area, as they allow end users to separate their connectivity provider from their cloud provider. This allows greater flexibility and more control over costs than purchasing all aspects of the solution from one provider.


Cloud gateways allow fast, highly secure virtual private network connections directly into the global public cloud network infrastructures via services such as Microsoft’s Azure ExpressRoute. Without them, it’s rather like investing in a Ferrari that is powered by a Morris Minor engine.

Seamlessly plugging into these global public cloud infrastructures – comprising subsea cables and terrestrial networks and bypassing the public internet – will increase security, reduce latency and optimise bandwidth in one fell swoop. Furthermore, with multiple interfaces, private connectivity to multiple cloud locations can be achieved, improving resilience.
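To make that comparison concrete, here is a minimal Python sketch of the kind of measurement worth taking when weighing a private gateway against the public internet: it times TCP handshakes to the same cloud endpoint from two local egress interfaces. The endpoint and interface addresses are invented placeholders, not details of any particular provider’s service.

# Hypothetical sketch: compare TCP connect times to a cloud endpoint over two
# local egress paths (an internet breakout vs. a private circuit/gateway).
import socket
import statistics
import time

ENDPOINT = ("10.50.0.4", 443)            # assumed cloud endpoint, placeholder only
SOURCE_IPS = {
    "internet_breakout": "192.0.2.10",   # example interface address
    "private_circuit":   "10.10.0.10",   # example interface address
}

def connect_time(src_ip, dest, samples=5):
    """Median TCP handshake time in milliseconds from a given source interface."""
    results = []
    for _ in range(samples):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        start = time.perf_counter()
        try:
            s.bind((src_ip, 0))          # pin the socket to one egress interface
            s.connect(dest)
            results.append((time.perf_counter() - start) * 1000)
        except OSError:
            results.append(float("inf"))  # unreachable over this path
        finally:
            s.close()
    return statistics.median(results)

for name, ip in SOURCE_IPS.items():
    print(f"{name}: {connect_time(ip, ENDPOINT):.1f} ms median connect time")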

Certainty and predictability

As many enterprise organisations consider a move to public cloud services, they are also looking at their on-premise and colocation data centres and at how to reduce costs and improve efficiencies. They must consider how to meet compute and storage capacity requirements in the brave new world of cloud services without compromising on security and network performance. Rightly so: when it comes to cloud adoption, it is the data centre’s resilience and connectivity that can make or break any cloud model.

Very often hybrid cloud deployments are the answer to these questions, and with them comes the choice of the data centre in which to deploy the private cloud. Combining colocation, private and public cloud is increasingly coming into play, making public services more ‘palatable’ to the ‘non-believers’. It gives users the best of both worlds, offering the comfort blanket of retaining core legacy software systems and IT equipment with the added flexibility of easy access to the public cloud for non-core applications and services.

With proven security accreditations, uptime track records and SLA histories all readily available when evaluating today’s modern facilities, the new ‘holy grail’ for data centres must now be delivering cloud providers and their users consistency and certainty: connectivity that is secure and always on, with latency and response times that are uniformly consistent, whether the cloud model supports a few hundred users nationwide or several thousand spread across the globe. In other words, it can scale without degradation.

Only by data centres bypassing the public internet with private connections can public and hybrid cloud users expect to be on the same level as private cloud. At that point any pre-existing customer concerns over security and consistency will quickly disappear, as their users will no longer be able to tell the difference, whichever variety or combination of cloud they are using. This is when that missing ‘X Factor’ will have truly arrived.

Hung up on hybrid: The rise of cloud 2.0 and what it means for the data centre

(c)iStock.com/Spondylolithesis

By Steve Davis, marketing director, NGD

We’ve seen it many times before: first generation technology products creating huge untapped marketplaces but eventually being bettered either by their originators or by competitors. Think VCRs and then CDRs, both usurped by DVRs and streaming, or the first mobile phones becoming the smartphones of today – the list goes on.

Cloud computing is no exception. The original ‘product’ concept remains very much in vogue but the technology and infrastructure holding it together keeps on getting faster, more functional, more reliable – put simply, better. Growing user and cloud service provider maturity is seeing to that. After 10 years of cloud, the industry and users have learned valuable lessons on what does and doesn’t work. They still like it and want much more of it but there’s no longer room for a one size fits all approach.

With this evolution, cloud “1.0” has morphed into “2.0” over the past year or so; while the name has been around for a few years, 451 Research among others has recently put it back at the forefront. The two core varieties, public and private, have ‘cross-pollinated’ and given rise to hybrid, an increasingly ‘virulent’ strain. This is because companies are realising that they need many different types of cloud services in order to meet a growing list of customer needs.

For the best of both worlds, hybrid cloud combines a private cloud with public cloud services, together creating a unified, automated and well-managed computing environment.

Economics and speed are the two greatest issues driving this market change. Look at the numbers. According to RightScale’s 2016 State of the Cloud Report, hybrid cloud adoption rose from 58% in 2015 to 71% thanks to the increased adoption of private cloud computing, which rose to 77%. Synergy Research’s 2015 review of the global cloud market found public IaaS/PaaS services had the highest growth rate at 51%, followed by private and hybrid cloud infrastructure services at 45%.

It’s an incestuous business. Enterprises using public clouds for storing non-sensitive data and for easy access to office applications and productivity tools automatically become hybrid cloud users as soon as they connect any of these elements with private clouds, and vice versa. Many still prefer the peace of mind of retaining private cloud infrastructure for managing core business applications, as well as embracing those still valuable on-premise legacy systems and equipment which just can’t be virtualised.

Equally, a company might want to use a public cloud development platform that sends data to a private cloud or a data centre-based application, or move data from a number of SaaS (software as a service) applications into private cloud or data centre resources. A business process is therefore designed as a service so that it can connect with these environments as though they were a single environment.
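As a rough illustration of that ‘single environment’ idea (and nothing more than an illustration: the classes, key names and routing rule below are hypothetical), a thin abstraction layer can route requests to private or public storage without the calling application caring which is which.

# Illustrative sketch only: treating private and public cloud storage as one
# environment behind a single interface. In-memory dicts stand in for real SDKs.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class PrivateCloudStore(ObjectStore):
    """Would talk to storage in the private cloud / on-premise estate."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

class PublicCloudStore(ObjectStore):
    """Would wrap a public cloud object storage SDK in a real deployment."""
    def __init__(self):
        self._objects = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

class HybridStore(ObjectStore):
    """Routes sensitive keys to the private side, everything else to public."""
    def __init__(self, private: ObjectStore, public: ObjectStore):
        self.private, self.public = private, public
    def _backend(self, key):
        return self.private if key.startswith("sensitive/") else self.public
    def put(self, key, data): self._backend(key).put(key, data)
    def get(self, key): return self._backend(key).get(key)

store = HybridStore(PrivateCloudStore(), PublicCloudStore())
store.put("sensitive/customers.csv", b"...")   # lands in the private cloud
store.put("reports/q1.pdf", b"...")            # lands in the public cloud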

Hybrid and the data centre

So where do cloud 2.0 and the rise of hybrid leave the data centre? Clearly, the buck must ultimately stop with the data centre provider, as it is the rock supporting any flavour of cloud – public, private or hybrid. Whether you are a service provider, systems integrator, reseller or the end user, you will want to be sure the data centres involved have strong physical security, sufficient power supply on tap for the high density racks that allow scaling of services at will and, of course, diverse high speed connectivity for reliable anyplace, anytime access.

But for implementing hybrid environments, the devil is in the detail. Often what isn’t considered is how to connect public and private clouds. And don’t forget that some applications may still remain outside cloud-type infrastructure altogether. Not only is there the concern around latency between these three models; the cost of connectivity also needs to be built into the business plan.

Location of the public and private clouds is a primary concern and needs careful consideration. The time taken to cover large geographical distances must be factored in, so the closer the environments can be positioned the better. The security of the connections and how they are routed also need to be examined. If the links between the two clouds were impacted, how might this affect your organisation?

Customers who are actively building hybrid solutions increasingly demand that their private clouds be as close to the public cloud as possible. This is because using public internet connections to reach the public cloud can expose end users to congestion and latency, while direct connections do not come cheap. Latency between private and public cloud can be reduced, but at a cost: caching can sometimes help and the use of traffic optimisation devices is well proven, but each adds more complexity and cost to what should be a relatively straightforward solution. Developers need to be conscious that moving large amounts of data between private and public cloud will introduce latency, and applications will sometimes need to be redesigned purely to get around latency problems.
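A minimal sketch of the caching idea mentioned above, assuming a placeholder fetch_from_public_cloud() call in place of a real SDK: a read-through cache kept on the private side means repeated reads avoid the cross-cloud round trip.

# Illustrative read-through cache: the remote call is simulated with a sleep.
import time
from functools import lru_cache

def fetch_from_public_cloud(key: str) -> bytes:
    time.sleep(0.040)                  # pretend each cross-cloud read costs ~40 ms
    return f"payload-for-{key}".encode()

@lru_cache(maxsize=1024)
def cached_fetch(key: str) -> bytes:
    return fetch_from_public_cloud(key)

start = time.perf_counter()
cached_fetch("invoice-123")            # cold read: pays the cross-cloud latency
cold = time.perf_counter() - start

start = time.perf_counter()
cached_fetch("invoice-123")            # warm read: served locally from the cache
warm = time.perf_counter() - start

print(f"cold read {cold*1000:.1f} ms, warm read {warm*1000:.1f} ms")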

In a perfect world, a single facility would host both public and private cloud infrastructure, using the various backup solutions available for data stored in each. This would reduce latency and connectivity costs and provide a far higher level of control for the end user. Obviously the location would have to be a scalable, highly secure data centre with good on-site engineering services available to provide remote hands as necessary. And thanks to the excellent quality of modern monitoring and diagnostics tools, much of the technical support can now be done remotely by the provider or the user.

How watertight is your case for keeping data safe and dry?

(c)iStock.com/xenotar

By Steve Davis, marketing director, NGD

It is 2016 and Britain is on flood alert – again. The latest terrible flooding suffered by residents and business owners in the North of England was made even worse by happening in the run-up to Christmas and through the New Year period.

Flooding of course is not limited to the North. It is a nationwide phenomenon, bearing in mind the dreadful events down in Somerset a couple of years ago, in the same winter that saw the Thames burst its banks just a few miles short of central London. Had it not been for the Thames Barrier, things could have been truly devastating.

With such events becoming more frequent and more severe, and the Environment Agency saying climate change will make existing defended areas more vulnerable over this century, it is clear that businesses using on-premise or third-party data storage facilities located on or near floodplains are at increased risk. The resulting impact on business continuity, either directly from water ingress or from knock-on effects such as power outages, should not be underestimated.

When it comes to data centre operations, the name of the game is to protect data and ensure it is safe and secure. That must surely include protection from flood water, yet such logical thinking appears at odds with reality judging by the many existing facilities and new builds still being located near rivers and on floodplains, London Docklands being a prime example. While plenty of time and resource is spent on cyber and physical defences, risk from acts of God seems to take a back seat.

Some continue to argue there is no other choice as we are, after all, living on an island with numerous rivers. But there are many other areas of the UK to choose from, and this argument certainly doesn’t wash in the face of the near-millisecond latency over several hundred miles now widely available nationwide, and at very low cost, with fibre connectivity costing under a fiver per mile.

Nine times out of ten the low latency available is more than adequate for all but the most time-critical financial trading applications. Furthermore, powerful remote diagnostics have also removed the case for server hugging. You really don’t need to be on site any more to check the lights are on.
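For the sceptics, the kind of lightweight remote health polling that removes the need to be on site can be as simple as the sketch below; the hostnames and ports are invented, and a real monitoring stack would of course do far more.

# Illustrative remote reachability check over TCP.
import socket

HOSTS = {"app-server-1.example.net": 443, "db-server-1.example.net": 5432}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in HOSTS.items():
    state = "UP" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} is {state}")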

All things considered, 2016 could well be the year many more businesses make their data strategy more watertight, in all senses of the word. Well-connected, carrier-neutral data centre locations well away from rivers, floodplains and coastlines should become increasingly popular.

A moral tale: The bank, the insurance company, and the ‘missing’ data

(c)iStock.com/MarianVejcik

By Steve Davis, Marketing Director, NGD

Last week, a well-known insurance firm published a report berating business owners for not safeguarding themselves against cyber attacks and for failing to take out sufficient cyber insurance cover. At the same time, another leading insurer was found to have lost a portable data storage device containing thousands of customer files belonging to a major bank.

As for the first insurance company: sure, cyber data loss insurance is a line of defence, but it is somewhat after the fact considering the damage and chaos that occurs in such circumstances. As for the other insurance company, if it and a bank can’t look after their data between them, who or what can?

Well, there are plenty of alternatives these days for storing and transferring data that avoid the need to carry it around. How about trying secure networks connected to servers in high-security data centres?

For me, physical and digital security should go hand in hand: from prison-grade perimeter fencing, security guards, CCTV, infra-red detection and lockable rack cabinets, to the latest, most sophisticated anti-malware and antivirus software available. Clearly, even the largest organisations cannot consistently expect to attain such rigorous levels of security all on their own, and it’s certainly out of reach for most small and medium-sized firms to attempt it. Sheer cost and keeping up with the ever-changing technology landscape see to that.

This is precisely why colocation data centres like NGD have been in the ascendancy for several years now. Organisations, of all sizes, choose to use them for one primary reason – to keep their data safe and continuously available. 

Modern data centres have multiple levels of physical and digital security in place to deter and prevent all perpetrators, from opportunists hoping to walk in and ‘lift’ a computer server, storage device or whatever, through to the highly organised and systematic cyber terrorist variety. Peace of mind is both available and affordable for all, from those customers requiring just a quarter or half rack up to others looking to run hundreds of racks. 

Although there’s irony in last week’s revelations, there’s also a moral to the tale. No matter what, keep your data out of harm’s way at all times – or it may well come back and bite you! Prevention is far preferable to finding a cure.

Feeling insecure? It’s time to find a data centre

(c)iStock.com/4x-image

By Steve Davis, marketing director, NGD

It seems hardly a day goes by without a market survey uncovering yet more shock-horror findings about companies of all shapes and sizes believing or knowing they are insecure when it comes to their data security. Yet many still admit to avoiding best practice even though threat levels are higher than ever.

Equally surprising, many organisations still view colocation data centres as being out of reach and soldier on with their servers and critical data on the premises. On the contrary, colocation data storage and hosting is more accessible than ever to businesses in this country thanks to the tumbling cost of high speed fibre network connectivity. This is allowing more data centre operators to locate well away from the traditional data centre heartlands of London and inside the M25, to places where real estate and labour costs are lower. In turn this is creating more competition, which is being reflected in lower customer pricing.

Some very large and modern data centres now have the economies of scale to go even further by combining the less costly location benefits with a much reduced minimum size threshold for data hosting and storage. This also opens the door for start-ups and smaller businesses to take advantage. For example, an offer of rack space, cooling, power and connectivity infrastructure for under £25 per day in a Tier 3 data centre was simply off the radar a year ago.

All aboard

So with the growing choice of good quality, affordable colocation facilities now available, companies should be deciding which colo partner to pick and how to differentiate between them. It is no longer a case of ‘how’ to keep data and businesses safe and secure, as the solution to the problem is already out there.

But when it comes to data centre pricing it’s rather like booking a flight and comparing a budget airline or charter flight with a major scheduled carrier. On the face of it the budget and charter operators will most likely appear cheaper and better value. That’s until you click through to the next page and see all the catches, such as additional compulsory and optional extras. Things like inconvenient flight times, lower frequency, hold baggage weight limits, costs to sit together or for more legroom, meals and refreshments… it all starts to add up.

Certainly sufficient space for now and the future, proven certifiable security and operational credentials, and high levels of resilience and power, along with DR and business continuity contingencies, are all very important criteria to evaluate carefully. But watch out for the small print, any hidden costs, and get-out clauses.

Similarly, to attract smaller customers into colo, some providers will not be as transparent as others. Be sure to look beyond the headline deals especially on rack power and connectivity, infrastructure level and official certifications, and type of service level agreements on offer.
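One simple way to keep that comparison honest is to total up the all-in monthly figure rather than the headline rate, as in the illustrative sketch below. All the figures and extras are invented placeholders, not real quotes.

# Hypothetical comparison of headline vs. all-in colocation pricing.
def monthly_total(headline: float, extras: dict) -> float:
    """Headline rack price plus all chargeable extras, per month."""
    return headline + sum(extras.values())

quote_a = monthly_total(headline=450.0, extras={
    "metered power overage": 120.0,
    "cross-connects":        90.0,
    "remote hands":          60.0,
    "24/7 access fee":       30.0,
})
quote_b = monthly_total(headline=600.0, extras={})   # all-inclusive offer

print(f"Quote A all-in: £{quote_a:,.2f}/month")
print(f"Quote B all-in: £{quote_b:,.2f}/month")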

There are also practical things to consider, such as the availability of (free) parking and meeting rooms, as well as transportation and installation of your existing and new IT systems – often known as server migration. Ideally you will want a provider who can move your IT in for you, leaving you without the headache of setting things up.

A few colo providers will be all-inclusive, some limited, and others will offer pretty much everything but charge for it all on top. Whichever way round, going that extra mile of due diligence on who to select as your partner will ensure a safe and secure stay at your chosen data centre destination – without any unforeseen surprises!

The power game: Ensuring the right data centre relationship

(c)iStock.com/4x-image

By Steve Davis, Marketing Director, Next Generation Data

Britain’s internet demand is expanding so fast that it could consume the nation’s entire power supply by 2035, scientists told the Royal Society earlier this month.

Hopefully it won’t come to that but Andrew Ellis, professor of optical communications at Aston University, told the “Capacity Crunch” conference that data storage and transmission on the internet, along with devices such as PCs and TVs, are already consuming at least 8% and as much as 16% of Britain’s power — and doubling every four years.

This problem is already hitting some data centres, and as a result they are limiting the power available to the racks hosted in their facilities, in some cases to as little as 2 kW. This was fine a few years ago, but the higher density racks now being installed demand much more power. And while virtualisation and other new technologies allow huge improvements in IT efficiency, more power to each rack is a prerequisite to run them.

Because of this it is not unusual for some data centres to force customers to take, and pay for, more space than necessary purely to deliver the power required.

This problem is only going to get worse. Make sure you choose a data centre which can deliver the right levels of power now, has plenty in reserve for the future, and won’t penalise you by making you take more space than you really need.

Also ensure your data centre provider can actually deliver the amount of power it has contracted to. There have been cases where power availability has been ‘oversold’ because some providers have gambled on not all customers using their full allocation of power at the same time.
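A quick back-of-the-envelope check makes both points clear: a low per-rack power cap inflates the number of racks (and floor space) you must pay for, and summing contracted power against site capacity shows whether a facility is oversold. The figures below are purely illustrative.

# Illustrative figures only: rack count under different per-rack power caps,
# and a simple oversubscription check.
import math

it_load_kw = 40                        # total IT load to host

for power_per_rack_kw in (2, 8):
    racks_needed = math.ceil(it_load_kw / power_per_rack_kw)
    print(f"At {power_per_rack_kw} kW/rack you need {racks_needed} racks for {it_load_kw} kW")

contracted_kw = [120, 300, 80, 250, 150]   # hypothetical customer contracts
site_capacity_kw = 750
ratio = sum(contracted_kw) / site_capacity_kw
print(f"Contracted/available = {ratio:.2f} (above 1.0 means power has been oversold)")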

A data centre relationship should last for many years. Apart from risking business continuity and competitive edge through unreliable or insufficient power, having to make an unplanned move from one data centre provider to another will be a time consuming and costly exercise.

For data centre users it is essential to do your homework sooner rather than later because increasingly, when it comes to the power game, there will be facilities that have it and facilities that don’t.

Winds of change: Growing data centre industry brings new opportunities

(c)iStock.com/typhoonski

By Simon Taylor, Chairman, Next Generation Data

In this digital age, all businesses are effectively IT businesses no matter what they actually make or sell, whether in retail, professional services, manufacturing, and so on. IT underpins their systems and processes and, more importantly, will usually provide the key competitive edge: managing and paying personnel, controlling suppliers and their costs, managing prospects, analysing customers’ buying habits, not to mention monitoring sales and powering websites.

But the click-of-a-button efficiencies and competitive edge made possible by all of this sometimes obscure the realities of life for organisations of all shapes and sizes: the web and cloud-based computing solutions their business ecosystems increasingly depend on are only as good as the quality and reliability of the servers, networks and data centres supporting them.

If these break down, suffer a security breach (digital or physical) or a natural disaster such as fire or flood, some or all business operations are likely to be affected, often with serious consequences for them and their service providers.

More often than not it’s been too costly for the majority to relocate to Tier 3 or higher data centres as these have typically been built in and around London, where real estate and labour costs are at a premium. To compound the problem further there’s been an acute shortage of similar calibre facilities available in the regions.

This has meant many cloud and hosting providers, SMBs and larger entities have had little choice other than to keep their valuable IT equipment and data on-site or in poor quality data facilities converted out of office buildings, when it would be far more prudent to relocate them to modern purpose-built facilities – offering optimised data centre environments and maximum security.

On the move

Fortunately there is an increasing wind of change blowing through the data centre services industry, and it’s heading out into the regions. This is largely thanks to the tumbling cost of the high speed fibre networks provided by telecom carriers and ISPs, which are essential for carrying data practically anywhere between everyone and everything. Their relatively low cost now makes it much more viable for data centre operators to take the kind of next generation data centre facilities traditionally clustered around London, close to the telecom exchanges, and replicate them much further afield.

At last, larger, more scalable and higher calibre colocation data centres need no longer remain the exclusive preserve of very large businesses; they will become increasingly accessible and affordable for small and growing businesses as well as the service provider channel.

NGD, for example, has been one of the first operators to open a very large Tier 3+ data centre well outside of London and the M25. At 750,000 square feet, the facility has the economies of scale as well as the power and resilient infrastructure necessary for accommodating and future proofing any company’s data storage and processing requirements, large or small. It also functions as a major regional hub for multiple international telecom carriers and Internet Service Providers (ISPs) to ensure customers can be connected far and wide, including millisecond latency to London.

Channel benefits

As the strategic value of IT to businesses of all kinds continues to grow, along with demand for more secure and resilient data facilities, it is only a matter of time until more data centre operators establish large, more affordable, state-of-the-art facilities well outside the M25, furthering NGD’s vision of establishing a serious alternative to London.

Apart from the security and business continuity benefits this new breed of mega data centre can undoubtedly provide compared to the alternatives discussed, it also brings significant business opportunities for IT resellers, cloud providers and smaller systems integrators requiring much more scalable data centre capacity, often initially requiring just a few racks. For larger users there are also considerable cost savings on energy usage and carbon emissions taxes due to major investments in the very latest energy optimisation and cooling systems and, in a few cases, a total commitment to renewable green energy.    

The wind of change blowing through the data centre industry can only be good news for the continued security and future prosperity of businesses everywhere. Equally, in our digital age, the ‘magnetic’ effect of regionally located world-class data centres on local economies should not be underestimated in their ability to attract more businesses and fresh talent from far and wide – much like the motorways and railways achieved in the previous two centuries.

Combining the physical with the digital: Joining the data self-preservation society

(c)iStock.com/4X-image

By Simon Taylor, Chairman, Next Generation Data    

As businesses continue to recognise the strategic importance of IT and data to their very existence, let alone their performance, we can be under no illusion about the absolute necessity of keeping data safe and being alert to all the associated potential risks – from the inherent ‘fragility’ of web and cloud infrastructure to things altogether more sinister, such as cyber or even physical terror attacks.

Whether your data is on your premises, stored in a colo data centre, in the cloud or otherwise, a comprehensive preventative data loss management and security strategy is essential. This means knowing exactly where and how your data is used, stored and secured, as well as being totally satisfied your organisation or your service provider has the ability to recover seamlessly from disasters we all hope will never happen.

Data loss prevention (DLP) strategies and software solutions are essential for making sure that users do not send sensitive or critical information outside the corporate network. IT administrators can then control what data can and cannot be transferred by monitoring, detecting and blocking sensitive data while it is in use, in transit or archived. The latter is very important, as stored sensitive and valuable data is often especially vulnerable to outside attack.
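To give a flavour of the monitoring and blocking side, here is a deliberately simplified sketch of a pattern-based check on outbound content. Real DLP products go much further (content fingerprinting, context analysis, scanning archived data), and the patterns shown are illustrative only.

# Simplified illustration of pattern-based outbound content screening.
import re

PATTERNS = {
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "payment_card":          re.compile(r"\b\d(?:[ -]?\d){12,18}\b"),
    "confidential_marker":   re.compile(r"\bCONFIDENTIAL\b", re.I),
}

def classify(message: str):
    """Return the list of sensitive-data rules a message trips."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(message)]

outbound = "Attached is the CONFIDENTIAL customer export, card 4111 1111 1111 1111."
hits = classify(outbound)
if hits:
    print(f"BLOCK outbound transfer, matched rules: {', '.join(hits)}")
else:
    print("ALLOW outbound transfer")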

But effective protection from data loss, and ensuring its security, cannot be limited only to data loss management and monitoring activities, or to the implementation of backup, firewall, intrusion detection and anti-malware software – as is all too often the case.

Getting physical

There are equally critical, often overlooked, physical factors in ensuring your data security and business continuity: the supply of reliable and stable power, diversity of fibre network options and sufficient cooling and environmental services all need to be carefully considered, along with perhaps mirroring data on servers at a remote ‘second site’.

Over the past twenty years or so larger businesses have typically addressed all or some of these issues by building their own data centres close to their office premises to house their mission critical servers and storage equipment. But this approach has had its own problems, not least the considerable capital expenditure involved in construction and the headache of keeping up to date with the latest hardware and software developments.

With this in mind many businesses are increasingly outsourcing their IT operations and data storage to modern specialist ‘colocation’ data centre operators. These can provide customers with space, power, and infrastructure to more securely house and manage their own IT operations, or alternatively manage these for them.

While cloud providers can most certainly offer business users many benefits in terms of pay-as-you-go data storage and access to the very latest business applications, these services still depend on the reliability and security of servers in data centres somewhere. It is therefore prudent to find out from your cloud provider which data centres they are using and where they are located, and to have them report on the data management and security credentials and procedures in place.

It is also highly advisable to establish with them what happens to access to your data should a third party go into administration. Having a legal escrow agreement in place at the outset of the relationship will help ensure you can retrieve your data from their premises more easily. Without such assurances, storing mission-critical data in the cloud can be risky.

Taking an integrated and holistic approach to data loss prevention and security will ensure both its security AND its continuous availability. But maximum peace of mind that your data is always available, safe and where you expect it to be also requires physical security to be given as much consideration as the digital aspects.

Data loss prevention considerations  

1. Security: Physical security measures are often overlooked in favour of the digital variety but can often prove to be the weakest links of all.

How physically secure is your building and IT equipment? Consider how its location may impact your business continuity and data availability – being well away from areas susceptible to flooding, large urban areas and flight paths reduces exposure to potential risks.

2. Resilience: Are sufficient data backup and replication fail-safe measures in place, along with uninterruptible power supply (UPS) systems, to mitigate unplanned downtime?

Has your data centre or computer room got access to abundant and redundant resilient power, and diverse fibre connectivity links?  Are servers being sufficiently cooled and energy optimised to ensure maximum availability?

3. Service provider credentials: If outsourcing data directly to a colo data centre or via a cloud provider, check all of the above.

Also check their security and operational industry accreditations for actual proof (ISO, PCI DSS, SSAE 16, etc.) and the calibre of on-site engineering personnel for handling technical support issues and disaster recovery situations. Tier 3 category data centres should be used as a minimum. Putting an escrow agreement in place will also ensure you have a legal right to retrieve your data in the event of their going into administration.

On the level about data centre location: How the industry is changing

(c)iStock.com/anafcsousa

By Nick Razey, CEO, Next Generation Data

In 1914, the Harrods Furniture Depository was completed on the banks of the Thames in leafy Barnes. It served as a warehouse for many years but in 2000 its conversion to 250 residential apartments was completed. With the penthouse flats selling for £6.5m, it is easy to understand why such prime real estate could no longer be wasted on warehousing.

In the last 20 years, the majority of data centre development has used some of the most expensive real estate in the UK. Given that data centres are the warehouses of the 21st century, it seems obvious that such a situation is not economically sustainable.


Just as Harrods’ depository could move out of London as the nationwide road network improved and transportation costs fell, so too have data centres – on paper at least – been set free to roam away from Docklands by plummeting fibre costs.

So, given a blank sheet of paper, where should a data centre locate? Close to reliable, high capacity power supplies, away from flood and fire risks – a location with plentiful cheap and clean land, close to transportation links but sufficiently far from threats of riot or terrorism. Of course connectivity is still required and a choice of at least three high quality carriers is essential.

Unfortunately, in reality, high capacity power supplies are generally located in areas of heavy industry, which often have contaminated land and/or high-risk neighbours. Undeveloped land is usually free for a reason, often flood-related, and remote rural land will not have the necessary fibre availability.

So ticking all these boxes is not easy, although there are already a few notable exceptions out there. While picking almost any data centre location will require some kind of a compromise on the above wish list, the fact that fibre costs and latency are now so low finally allows more UK out of town locations to be a valid option.


Of course, back in 2000 – when a 2Mb/s circuit between London and Wales cost £40,000 per annum (now it’s just £5!) and latency was a large barrier – it made sense out of necessity to cluster data centres around the carrier interconnects in London’s Docklands and the City. But now latency is practically the same between any two points unless they are thousands of miles apart. Along the M4 corridor, at 1.2 ms between London and Wales, it is more than adequate for 99% of today’s applications. With 1600 Gbit/s fibre at just £10 per kilometre, it’s also cheap at the price.
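The 1.2 ms figure is easy to sanity-check: light travels through fibre at roughly two thirds of its speed in a vacuum, about 200,000 km per second, so a route of around 240 km (an assumed length for a London to South Wales path) gives:

# Rough propagation-delay arithmetic; the route length is an assumption.
SPEED_IN_FIBRE_KM_S = 200_000          # roughly c / 1.5 for single-mode fibre
route_km = 240                         # assumed London to South Wales route length

one_way_ms = route_km / SPEED_IN_FIBRE_KM_S * 1000
print(f"One-way propagation delay over {route_km} km: {one_way_ms:.1f} ms")
print(f"Round trip: {one_way_ms * 2:.1f} ms")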

In the end, do you really care where your Harrods sofa was shipped from, and would you expect to view it in person prior to delivery? Surely not, and by the same token, thanks to low-cost fibre and excellent remote diagnostics, you no longer need to make the trip to hug your servers either.

Consider the playing field well and truly levelled.

Why growing cloud commoditisation still won’t rain on the data centre’s parade

Picture credit: iStockPhoto

By Nick Razey, CEO, Next Generation Data

Rackspace’s recent decision to discontinue its pure infrastructure as a service (IaaS) offering in favour of “managed cloud” services is further evidence that it is the cloud providers, rather than the data centre providers, who are most at risk of becoming commodities – as I originally predicted here back in May.

Take Rackspace’s exit from IaaS as a sign of things to come. Many more cloud services are already well on the way to being commoditised, and their providers will see their margins increasingly competed away.

During the past couple of years Rackspace has been attempting to compete with low-price competitors such as Amazon Web Services, Microsoft with its Azure service, and Google in the IaaS market. But now it is going back to its roots as a managed provider rather than attempting to be a carbon copy of Amazon; that is how it made it in the early days. The business model of a managed cloud provider is very different from the approach taken by AWS and Google, which offer pay-as-you-go IaaS on-demand services.

Some cloud pundits believe Rackspace’s move is a sign of broader changes in the market. The pure IaaS market is now perhaps a contest between Amazon, Google, Microsoft and IBM, which have the financial muscle to invest in research and development and new data centres, and can afford to do so without offering high-margin support services on top.

In contrast, the data centre product comes in many different ‘non-standard’ varieties: location (which affects the price of real estate and wages), tier level, type of colo space required, total power capacity and what’s available per rack, connectivity choices, service levels, and so on.

All of these directly influence both the quality of the product and its price, keeping modern, purpose-designed data centres well clear of the ‘commodity zone’. And because of the high upfront investment required for land, power, planning and construction, data centre operators expect long-term contracts with customers to ensure cast-iron returns on investment.

So for most data centre operators the forecast remains good – there’s still no sign of the cloud’s growing commoditisation raining on their parades.