Living on the edge: The changing face of the data centre and public cloud

Due in part to the increased prevalence of Internet of Things (IoT) devices, it seems that more computing capacity is moving to the edge than ever before – and a lot of this is happening in the data centres that increasingly cloud-savvy organisations now rely upon. There are many drivers behind this change, including governance, security, regulation and IP protection.

Reducing latency

Moving data centres closer to the edge can solve various issues, not least of which is latency. Looking to the near future, technologies such as artificial intelligence (AI) will need powerful analytics solutions located at the point of consumption, because network latency can be costly, and even fatal, to businesses.

Imagine the problems that latency could cause if we reach a point where AI is being used to support medical surgery, or where robotics becomes a critical part of vehicle navigation. These scenarios require real-time action and response — something that latency makes impossible.

A time to co-exist

That’s not to say we should forget about cloud entirely. In conversations with peers, it is clear that edge computing and the centralised cloud will co-exist for some time to come, and that a balance will be struck based on use cases and business scenarios.

Moving to the edge doesn’t signal the end for core data centres. It’s about an appropriate use of edge and public cloud based on specific business requirements, rather than one being better than the other. The answer here lies in hybrid solutions.

A significant evolution

Going forward, we will see a significant evolution of the hybrid cloud model. People have been talking about hybrid for a while now, but the services and architectures that support the model have yet to catch up with requirements.

That’s all changing now. Later this year, Microsoft will release Azure Stack, a solution squarely aimed at putting services and capabilities at the edge while ensuring that the benefits of hyper-scale computing and services in the public cloud remain available when required.

The idea of buying a private version of Azure — with all of its inherent services and capabilities — and putting it at the edge to deal with latency, governance, IoT, security and other edge requirements, all within essentially the same ecosystem, will prove to be a game changer.

Horses for courses

Businesses won’t move everything previously kept at the core out to the edge — they will pick and choose depending on circumstances. It is very much a case of horses for courses. We have lived through a period in which the public cloud has been seen as the shiny new toy, capable of solving all the ills of corporate IT. As the public cloud matures and evolves, people are naturally starting to see use cases where the edge has distinct advantages over centralised cloud scenarios.

Consider the differences between the edge and the public cloud. At the cloud end you have less control and less customisability, but better scalability and access to hyper-scale services that you could not justify building for yourself. At the edge you have more control and more customisability, lower latency, and the ability to apply tighter governance and regulation.

It’s about finding an appropriate use of these two models based on need, rather than one being better than the other. The answer lies in hybrid solutions: building true end-to-end hybrid ecosystems that give you the best of both worlds.

Embrace the change

With the release of Azure Stack, more businesses will be encouraged to put their services and capabilities at the edge, while ensuring that the benefits of hyper-scale computing and services in the public cloud are available when required. If a business can buy what is essentially a black-box appliance, supported directly by the hardware vendor and Microsoft, then all it really has to do is keep the lights on, so to speak.

This makes edge scenarios cost effective and easy to manage, with the added benefit of intelligent elasticity back to the public cloud at little, if any, additional cost or change – true hybrid cloud that takes the benefits of both the edge and the centre and combines them.

The idea of bringing private cloud capability out to the edge is one of the biggest game changers we have seen for some time. The other big public cloud players will undoubtedly evolve their own hybrid strategies to stay relevant at the edge – we know AWS is talking to VMware right now about this very possibility, so the wheels are already in motion. For now, though, it is Microsoft that has the edge.

Recovering from disaster: Develop, test, and assess

Disaster recovery (DR) forms a critical part of a comprehensive business continuity plan and can often be the difference between the success and failure of an organisation. After all, disasters do happen — whether that’s a DDoS attack, a data breach, a network failure, human error, or a natural event such as a flood.

While the importance of having such a strategy is well recognised, how many organisations actually have the right plan in place? Not many, according to the 2014 Disaster Recovery Preparedness Benchmarking Survey, which revealed that more than 60% of companies don’t have a documented DR strategy. Furthermore, the survey found that 40% of the companies that do have one said it wasn’t effective during a disaster or DR event.

Taking the above into consideration, what can businesses do to ensure their plans are not only in place, but also work as they should, allowing the organisation to recover quickly and effectively after a disaster?

One aspect to consider is using the cloud to handle your DR requirements, as it is a cost-effective and agile way of keeping your business running during and after a disaster. Cloud DR solutions, or disaster recovery as a service (DRaaS), deliver a number of benefits to the business, including faster recovery, greater flexibility, off-site data backup, real-time replication of data, excellent scalability and the use of secure infrastructure. In addition, there’s a significant cost saving, as no hardware is required — hardware that would otherwise sit idle while your business functions as normal.

Another aspect is testing. Not only should DR strategies be continuously tested, they should also be updated and adapted in line with changes in the business environment and wider technology ecosystem, as well as industry or market shifts. Again, this is seen as important, but in practice it isn’t happening as it should. According to the same benchmarking survey, only 6.7% of organisations surveyed test their plans weekly, while 19.2% test annually and 23.3% never test them at all.

The practicalities of implementation can often be challenging — from budgetary issues and CIO buy-in to the choice of solution itself. DR also means different things to different people: recovery times may be measured in minutes or in weeks, and the scope may cover just critical systems or the entire IT estate.

So where do you start?

Identify and define your needs

The first stage of defining these requirements is performing a risk assessment, often in conjunction with a business impact analysis. This means considering the age, volume and criticality of your data, and looking at your organisation’s entire IT estate. DR can be an expensive exercise, and this initial stage of strategy development helps you weigh the risk against the cost.

Your data could be hosted on site or off site; for externally hosted solutions, this means making sure your hosting provider has the right credentials (for example, ISO 27001) and the expertise to supply the infrastructure, connectivity and support needed to guarantee uptime and availability.

It is also during this phase that you should define your recovery time objective (RTO) — the time within which IT and business activities must be recovered — and your recovery point objective (RPO) — the point in time to which your backed-up data must be recovered, which in turn determines how much data loss you can tolerate.
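To make those objectives concrete, here is a minimal sketch (in Python, using hypothetical figures and class names, not tied to any particular DR product) of how an RTO and RPO can be written down and checked against what your backup schedule and last tested restore time actually achieve.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DrObjectives:
    rto: timedelta  # recovery time objective: maximum tolerable time to restore service
    rpo: timedelta  # recovery point objective: maximum tolerable window of data loss

@dataclass
class MeasuredCapability:
    backup_interval: timedelta      # how often data is backed up or replicated
    tested_restore_time: timedelta  # restore time observed in the last DR test

def meets_objectives(target: DrObjectives, actual: MeasuredCapability) -> bool:
    # The backup interval bounds the worst-case data loss (RPO);
    # the tested restore time must fit within the RTO.
    return (actual.backup_interval <= target.rpo
            and actual.tested_restore_time <= target.rto)

# Hypothetical example: a critical order system with a one-hour RTO and a 15-minute RPO.
orders = DrObjectives(rto=timedelta(hours=1), rpo=timedelta(minutes=15))
current = MeasuredCapability(backup_interval=timedelta(minutes=30),
                             tested_restore_time=timedelta(minutes=45))
print(meets_objectives(orders, current))  # False: 30-minute backups cannot meet a 15-minute RPO
```

Even a simple check like this makes the trade-off visible: tightening the RPO from 30 minutes to 15 doubles the backup frequency you have to pay for, which is exactly the risk-versus-cost conversation from the assessment stage.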

Creating your DR plan

A successful DR strategy encompasses a number of components, from data and technology to people and physical facilities. When developing the actual plan and the steps within it, you need to remember that it affects the entire organisation.

Connectivity plays a critical role here, specifically in how staff will access the recovered environment, for example through a dedicated link or VPN. Is additional connectivity needed for the strategy to work when invoked? And if so, how much will it cost?

Test, assess, test, assess

The final stage is an ongoing one and is all about testing the plan. With traditional DR it is often difficult to do live testing without causing significant system disruption. In addition, testing complex plans comes with its own degree of risk. With DRaaS, however, many solutions on the market include no-impact testing options.

At this point it is also important to assess how the plan performs in the event of an actual disaster. In this way, weaknesses or gaps can be identified, driving improvements to future iterations of the plan.
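As a rough illustration of that feedback loop, the sketch below (hypothetical step names and timings, in the same spirit as the earlier example) records the outcome of a no-impact test run and flags the gaps (failed steps, or an overall recovery time that exceeds the RTO) so they can feed into the next revision of the plan.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class StepResult:
    name: str            # plan step, e.g. "bring up application servers"
    succeeded: bool      # did the step complete during the test?
    duration: timedelta  # how long the step took

def assess_test(results: list[StepResult], rto: timedelta) -> list[str]:
    # Collect failed steps, then check the end-to-end recovery time against the RTO.
    gaps = [f"step failed: {r.name}" for r in results if not r.succeeded]
    total = sum((r.duration for r in results), timedelta())
    if total > rto:
        gaps.append(f"total recovery time {total} exceeds RTO {rto}")
    return gaps

# Hypothetical results captured from an isolated, no-impact test failover.
run = [
    StepResult("replicate data to recovery site", True, timedelta(minutes=10)),
    StepResult("bring up application servers", True, timedelta(minutes=35)),
    StepResult("switch staff VPN to recovered environment", False, timedelta(minutes=20)),
]
for gap in assess_test(run, rto=timedelta(hours=1)):
    print(gap)
# step failed: switch staff VPN to recovered environment
# total recovery time 1:05:00 exceeds RTO 1:00:00
```

However you record them, the point is the same: each test should leave behind a concrete list of gaps that the next version of the plan must close.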

Conclusion

In today’s business environment it is safe to assume that your organisation will experience a disaster or event of some kind that will affect operations, cause downtime or make certain services unavailable. Having a DR strategy in place — one that works, is regularly tested and addresses all areas of operations — will help mitigate the risk and ensure the organisation can recover quickly without the event having too much of a negative impact on customer experience, the brand or the bottom line.