Spotting the elephant in the room: Why cloud will not burst colo’s bubble just yet

When it comes to future demand for data centre colocation services, it would be easy to assume there's an elephant in the room – in the shape of a vast cloud ready to consume all before it.

From what we are seeing alongside our cloud provider hosting services, however, and in line with market forecasts, this is far from the reality. The signs are that colocation can look forward to a vibrant long-term market. CBRE, for example, recently reported that 2019 was another record year of colocation market growth in the so-called FLAP (Frankfurt, London, Amsterdam, Paris) markets. There's also a growing choice of high-quality colocation facilities thriving in regional UK locations.

Perhaps more telling, however, amid all the excitement and market growth statistics surrounding cloud, some analysts are already predicting that only about half of enterprise workloads will ultimately go into it: best practice and business pressures will see most of the remaining share gradually moving from on-premise to colo – with only a minority remaining on-premise in the long term.

The case for colo

This is because a public cloud platform, while great for scalability, flexibility and ease of access, probably won't totally satisfy all enterprise application and workload needs. Some will demand extremely high performance while others just need low-cost storage. And unless your own in-house data centre or hosting provider is directly connected to the cloud provider's network infrastructure, latency is a consideration. This will impact user experience and can also become a potential security risk. Then, of course, there are the governance and security concerns around control of company data.

At the same time, there are serious engineering challenges and costs involved in running private cloud solutions on-premise. The initial set-up is one thing, but there's also the ongoing support and maintenance. For critical services, providing 24-hour technical support can be a challenge.

Sooner or later, therefore, enterprises will inevitably have to address the implications and risks of continuing to run servers in-house for storing and processing large volumes of data and applications. Faced with rising costs, complexities and security issues, many will turn to quality colocation facilities capable of supporting their considerable requirements – from housing servers that run day-to-day applications, legacy IT systems and, in some cases, mission-critical systems, to hosting private or hybrid clouds.

On the hunt

So where's the elephant? Right now, it is most likely residing in the board rooms of many enterprise businesses. However, the real-life issues and challenges associated with a 'cloud or nothing' approach will increasingly come to light and the novelty of instant 'cloudification' will wear off. CIOs will once again be able to see the wood for the trees. Many will identify numerous workloads that don't belong in the cloud, or where the effort or cost of cloud is a barrier.

This journey and its eventual outcome are natural – an evolution rather than a sudden and dramatic revolution. It's a logical process that enterprise organisations and CIOs need to go through to finally achieve their optimum balance of highly effective, cost-efficient, secure, resilient, flexible and future-proofed computing.

Nevertheless, CIOs shouldn't assume that colocation will always be available immediately, exactly where they need it and at low cost. As the decade wears on, some colocation providers will probably need to close or completely upgrade smaller or power-strapped facilities. Others will build totally new ones from the ground up. Only larger facilities, especially those located in lower-cost areas where real estate is significantly cheaper, may be capable of the economies of scale necessary for delivering affordable and future-proofed solutions for larger workload requirements. Time is therefore of the essence when it comes to evaluating potential colocation facilities.

In summary, the cloud is not going to eat colocation's lunch. More likely, together they will evolve into the most compelling proposition for managing almost all enterprise data processing, storage and application requirements. They are complementary solutions rather than head-to-head competitors.

Back to black: How to ensure a data centre’s critical infrastructure works when needed

For cloud providers and their many customers, a robust and continuously available power supply is amongst the most important reasons for placing IT equipment in a data centre. It's puzzling, therefore, why so many data centres repeatedly fail to measure up to such a mission-critical requirement.

Only last month, for example, cloud service providers and communications companies were hit by yet another protracted power outage affecting a data centre in London. It took time for engineers from the National Grid to restore power and meanwhile many thousands of end users were impacted.

Let’s face it – from time to time there will be Grid interruptions. But they shouldn’t be allowed to escalate into noticeable service interruptions for customers. Inevitably, such incidents create shockwaves among users and cloud service providers, their shareholders, suppliers, and anyone else touched by the inconvenience.

The buck stops here

While it's clear something or someone (or both) is at fault, the buck eventually has to stop at the door of the data centre provider.

Outages are generally caused by a loss of power in the power distribution network. This can be triggered by a range of factors, from construction workers accidentally cutting through cables – very common in metro areas – to power equipment failure and adverse weather conditions, not to mention human error.

Mitigating some of these risks should be 'easy'. Don't locate a data centre on or near a flood plain, and ideally choose a site where power delivery from the utilities will not be impaired. This is a critical point. Cloud providers and their customers need to fully appreciate how power is routed to their chosen data centre through the electricity distribution network – in some cases it's pretty tortuous.

Finding the ideal data centre location that ticks all the right boxes is often easier said than done, especially in the traditional data centre heartlands. Certainly, having an N+1 redundancy infrastructure in place is critical to mitigating outages due to equipment failure.

Simply put, N+1 means there is more equipment deployed than is needed, allowing for a single component failure. The 'N' stands for the number of components necessary to run your system and the '+1' means there is additional capacity should a single component fail. A handful of facilities go further. NGD, for example, has more than double the equipment needed to supply contracted power to customers, split into two power trains on either side of the building, each of which is N+1. Both are completely separated with no common points of failure.
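
To make the arithmetic concrete, here's a minimal Python sketch of how the unit counts work out under the common redundancy schemes. The load and module sizes are illustrative assumptions only, not any provider's actual plant.

    import math

    def units_required(load_kw: float, unit_capacity_kw: float) -> int:
        """N: the minimum number of units needed to carry the load."""
        return math.ceil(load_kw / unit_capacity_kw)

    def redundancy_plan(load_kw: float, unit_capacity_kw: float) -> dict:
        """Units required under common redundancy schemes."""
        n = units_required(load_kw, unit_capacity_kw)
        return {
            "N": n,                 # no headroom: a single failure drops load
            "N+1": n + 1,           # survives one unit failure
            "2N": 2 * n,            # a fully duplicated power train
            "2(N+1)": 2 * (n + 1),  # two independent trains, each N+1
        }

    # Example: 3 MW of contracted IT load served by 1 MW generator/UPS modules.
    print(redundancy_plan(load_kw=3000, unit_capacity_kw=1000))
    # {'N': 3, 'N+1': 4, '2N': 6, '2(N+1)': 8}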

But even with all these precautions a facility still isn't necessarily 100 percent 'outage proof'. All data centre equipment has an inherent possibility of failure, and while N+1 massively reduces the risks, one cannot be complacent. After all, studies show that a proportion of failures are caused by human mismanagement of functioning equipment. This puts a huge emphasis on engineers being well trained and, critically, having the confidence and experience to know when to intervene and when to allow the automated systems to do their job. They must also be skilled in performing concurrent maintenance and minimising the time during which systems are running with limited resilience.

Rigorous testing

Prevention is always better than cure. Far greater emphasis should be placed on engineers reacting quickly when a component failure occurs rather than assuming that inbuilt resilience will solve all problems. This demands high quality training for engineering staff, predictive diagnostics, watertight support contracts and sufficient on-site spares.

However, to be totally confident in data centre critical infrastructure come hell or high water, it should be rigorously tested. Not all data centres do this regularly. Some will have procedures to test their installations but rely on simulating total loss of incoming power. But this isn't completely foolproof, as the generators remain on standby and the equipment in front of the UPS systems stays on. This means the cooling system and the lighting remain functioning during testing.

Absolute proof comes with 'Black Testing'. It's not for the faint-hearted and many data centres simply don't do it. Every six months NGD isolates incoming mains grid power and, for up to sixteen seconds, the UPS takes the full load while the emergency backup generators kick in. Clearly, we only cut the power to one side of a 2N+2 infrastructure, and it's done under strictly controlled conditions.

When it comes to data centre critical power infrastructure, regular full-scale black testing is the only way to be sure the systems will function correctly in the event of a real problem. Hoping for the best when mains power is actually lost simply isn't an option.

Uptime check list

  • Ensure N+1 redundancy at a minimum, but ideally 2N+x redundancy of critical systems to support separacy, testing and concurrent maintenance
  • Maximising MTTF (mean time to failure) and minimising repair times will deliver significant returns in backup system availability and reliability, and overall facility uptime (see the sketch after this list)
  • Utilise predictive diagnostics, ensure fit for purpose support contracts, and hold appropriate spares stock on-site
  • Regularly Black Test UPS and generator backup systems
  • Drive a culture of continuous training and practise regularly to ensure staff can spot incipient problems and respond to real-time issues – what to do, and when (and when not) to intervene
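
To put the MTTF point into numbers, the short Python sketch below shows how mean time to failure, repair time and redundancy combine into availability. It deliberately treats units as independent and identical, and all figures are illustrative assumptions only.

    def availability(mttf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability of a single component."""
        return mttf_hours / (mttf_hours + mttr_hours)

    def parallel_availability(single: float, units: int) -> float:
        """Availability when any one of 'units' independent components suffices
        (a deliberate simplification of N+1 redundancy)."""
        return 1 - (1 - single) ** units

    a_single = availability(mttf_hours=50_000, mttr_hours=24)
    print(f"single unit:        {a_single:.6f}")
    print(f"with +1 redundancy: {parallel_availability(a_single, 2):.9f}")
    print(f"halved repair time: {availability(50_000, 12):.6f}")

Even on these rough assumptions, the redundant pair and the faster repair both move the needle far more than incremental improvements to a single unit ever could.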

Squaring the hybrid cloud circle: Getting the best out of all scenarios

There is clearly a growing need and place for both public and private clouds. But users are increasingly looking for solutions that give them the best of all worlds by seamlessly interconnecting the two into hybrid solutions. In addition, many organisations need to encompass legacy IT systems so that they operate as seamlessly as possible alongside the hybrid cloud environment.

It’s a tall order. However, there are already clear signs that such transformative solutions are making the step from concept to reality.

For one, some of the major public cloud providers are stepping up to make the development and deployment of hybrid solutions more straightforward. The newly launched Microsoft Azure Stack, for example, is intended to allow organisations to run Azure IaaS and PaaS services directly within their own data centres, whether in-house or in their chosen colocation facility.

On paper, this allows organisations to enjoy the full range of public Azure services on their own hardware, while also moving private workloads seamlessly between their chosen data centre and the Azure public cloud. The major advantages here are continued ownership of core and mission-critical applications in a private cloud, while also receiving the added benefits of continuous software updates and automated backups delivered with the Azure public cloud service.
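
As a rough illustration of the 'same surface, different endpoint' idea, the Python sketch below probes the Azure Resource Manager metadata route on the public cloud and on a hypothetical Azure Stack instance. The Azure Stack URL is a placeholder for your own environment, and the metadata route and API version used here are assumptions rather than a statement of Microsoft's current API.

    import requests  # third-party: pip install requests

    ENDPOINTS = {
        "Azure public cloud": "https://management.azure.com",
        # Placeholder only - replace with your own Azure Stack ARM endpoint:
        "Azure Stack (example)": "https://management.local.azurestack.external",
    }

    def probe(base_url: str) -> None:
        # Assumed metadata route and api-version; adjust to your environment.
        url = f"{base_url}/metadata/endpoints?api-version=2015-01-01"
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            print(f"{base_url}: reachable ({len(response.json())} metadata keys)")
        except requests.RequestException as exc:
            print(f"{base_url}: unreachable ({exc})")

    for name, base in ENDPOINTS.items():
        print(f"-- {name}")
        probe(base)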

Such initiatives are clearly essential for getting hybrid clouds well and truly off the ground. There are many organisations out there, especially more heavily regulated ones, demanding the retention of private cloud infrastructures and certain legacy systems. An organisation might be happy enough using an Internet-based public cloud development platform for testing new applications, but not once it goes into production.

In practice, whether in-house or off-premise, the data centres supporting these hybrids will need to be equipped with fit-for-purpose IT infrastructure, suitable cooling and sufficient power to scale and manage the increasing draw of high-density racks. They will also need highly skilled engineering personnel on hand, as hybrid clouds are complex animals and cannot be built, tested and managed successfully without suitable facilities and training. High levels of physical and cyber security are also going to be more important than ever.

But, above all, as demand for hybrid cloud environments continues to grow, data centres must meet user expectations for application responsiveness and predictability. With considerable amounts of data moving back and forth between the public and private cloud environments, and possibly legacy systems, a hybrid approach brings both latency considerations and the cost of connectivity sharply into focus.

Taking Azure Stack as a working example, it is not designed to work on its own but rather alongside the Azure Public Cloud as a peer system. Latencies between the Azure Stack system and the Azure Public Cloud will therefore determine how fast and seamless a hybrid cloud system is once deployed.
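
One rough way to put numbers on that latency is simply to time connection set-up to the endpoints involved. The Python sketch below does this with plain TCP connects; the hostnames are placeholders and the figures it prints will obviously vary with where it is run.

    import socket
    import statistics
    import time

    def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
        """Median TCP connection set-up time in milliseconds."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                pass  # connection closes immediately; we only want the timing
            timings.append((time.perf_counter() - start) * 1000)
        return statistics.median(timings)

    # Hostnames are placeholders - substitute the endpoints you actually use.
    for host in ("management.azure.com", "example.org"):
        print(f"{host}: ~{tcp_connect_ms(host):.1f} ms to establish a connection")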

Trans-facility networking

However, few private data centres will be able to afford to run the dedicated network links necessary for assuring consistent performance on an ongoing basis for workloads that may have variable resource needs. While Microsoft offers ExpressRoute as a low-latency dedicated connection for 'standard' interlinks between existing Microsoft environments and the Azure Public Cloud, it is only available as a trunk connection to certain colocation, public cloud and connectivity operators. These can connect directly with ExpressRoute at core data centre speeds, largely eliminating latency issues and ensuring bandwidth is optimised.

For those private or colocation data centres not directly connected, the only alternative is to find an equivalently fast and predictable connection from their facility to an ExpressRoute partner end point. As such, organisations using ExpressRoute from their own private data centre will still have to deal with any latency and speed issues in the 'last mile' between their facility and their chosen ExpressRoute point of presence. This is the case even where connectivity providers offer ExpressRoute to a private or colocation facility, as they are layering their own connectivity between the ExpressRoute core at the edge of their network and the edge of the user network.
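
A simple way to reason about that 'last mile' is as a latency budget: the end-to-end delay is roughly the sum of the per-hop delays, doubled for a round trip. The Python sketch below uses purely illustrative figures to show how quickly the last mile can come to dominate the total.

    # Illustrative one-way delay per hop, in milliseconds (assumed figures).
    LATENCY_BUDGET_MS = {
        "facility -> ExpressRoute point of presence (the 'last mile')": 4.0,
        "ExpressRoute core -> Azure region": 2.0,
        "cloud-side processing and load balancing": 1.0,
    }

    def round_trip_ms(one_way_hops_ms: dict) -> float:
        """A round trip is roughly twice the sum of the one-way hop delays."""
        return 2 * sum(one_way_hops_ms.values())

    for hop, ms in LATENCY_BUDGET_MS.items():
        print(f"{ms:5.1f} ms  {hop}")
    print(f"~{round_trip_ms(LATENCY_BUDGET_MS):.1f} ms round trip per request")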

In addition, if an organisation is planning on using a colocation facility for hosting some or all of the hybrid cloud environment while keeping legacy workloads operating in its own data centre, the colo must offer a range of diverse connectivity options. Multiple connections running in and out of the facility will assure maximum performance and resilience.

In summary, the major cloud providers and data centre providers are working hard to meet growing demand for 'best of all worlds' hybrid cloud solutions. However, delivering the predictable, seamlessly interconnected public, private and legacy environments that users really want will call for fit-for-purpose trans-facility networking. This is essential for squaring the circle and enabling the unified, fully automated computing environments enterprise organisations are searching for.