All posts by graham.jarvis

How to accelerate your hyperconverged healthcare cloud: A guide

In the race for digital transformation, even in healthcare, there is a need to implement the right systems and solutions to maximise uptime, increase operational performance and to reduce downtime. However, it isn’t as simple as going out to a supermarket to buy a loaf of bread. There are so many potential solutions and systems on the market claiming that they do the job – but do they?

That’s the question that should arise whenever anyone is trying to resolve anything from latency to data storage. However, it’s not an easy question to answer, as the healthcare IT landscape is ever-changing.

Martin Bradburn, CEO of Peasoup Hosting, therefore comments: “For the healthcare sector, cloud infrastructure with its security and scalability can, and in some instances is already accelerating the development of clinical applications. Digital transformation is also changing the way healthcare operates, connecting remote clinical specialist resources directly to the patients for improved diagnosis on a worldwide scale.”

He adds that the use of cloud technology, more precisely “its ability to efficiently process and deliver data in a collaborative manner, analysing data into meaningful information… can relieve the current healthcare challenges. Equally, by using cloud IT infrastructure instead of the on-premise one, healthcare organisations pay more efficiently for what they use.”

Hyperconverged benefits

In a January article for HealthTech magazine, ‘The Benefits of Hyperconvergence in Healthcare’, freelance journalist Tommy Peterson writes: “With expansions, mergers and an increased reliance on new technologies, the healthcare landscape is changing at dizzying speed.”

That’s because IT teams are being pushed to keep pace with digital transformation, to “navigate complex logistics and help to cut costs and deliver better patient care”, he says, while citing David Lehr, CIO of Luminis Health, a new regional healthcare system in Maryland, comprising Anne Arundel Medical Center and Doctors Community Medical Center.

Lehr adds: “We need to be good stewards of the investments our communities make in us. Driving up unnecessarily high costs on complicated IT infrastructure that takes an army of people to manage isn’t a great way to live up to that expectation.” So, in essence, hyperconvergence is seen as the answer: it uses consolidation to save money and enhance reliability, and virtualisation to streamline applications; Lehr also argues that simplification makes data more useful, secure and accessible by making it easier to move on-premise applications to the cloud.

David Trossell, CEO and CTO of Bridgeworks, argues that hyperconverged systems have many benefits for smaller healthcare providers. “By consolidating down multiple technologies from multiple providers for all the separate equipment that forms a modern data centre, along with all the differing support contracts and possible interoperability issues that can occur, hyperconverged brings this all down to one or two providers,” says Trossell.

“Whilst hyperconvergence solves many day-to-day issues in the data centre, the biggest threats these days emanate from outside of it in the form of cyber-attacks, and these attacks are getting more and more sophisticated in their approach," Trossell adds. "Where once healthcare companies could revert to their backups and reload – these cyber criminals were content with just attacking online data – they have now even started to attack the backup software and backup data. This forces healthcare companies to seek new levels of data security, with protection on multiple levels and redundancy across multiple locations.”

Technology management

Hyperconvergence can also help healthcare organisations to know when they need to expand or upgrade their technology. And because everything has been tested together, whenever there is a need to speak to a supplier’s technical support, only one call is required.

Bradburn adds: “Hyperconvergence is becoming the new standard in infrastructure; there are many commercial offerings that simplify the management and provide greater control of the local infrastructure, ensuring higher availability. The physical infrastructure sizes are reduced, with lower power and cooling requirements and with less management complexity, which has obvious cost-saving benefits in the healthcare sector and reduces the risks of failure.

“A cloud service takes this reduction in risk and complexity to the next level by removing all the infrastructure management. This complements the healthcare environment, removing the traditional budget limitations of over- or under-provisioning. This also makes it easier to extend the infrastructure into the cloud to provide the elastic growth, ensuring the infrastructure is always the right size to meet the demands of the users and applications.”

Team collaboration

With applications becoming more mobile and web-based, Bradburn notes there is an increasing team collaboration trend in real-time product development, analysis and reporting. With this in mind, he argues that the cloud environment is “perfect for big data sets – archiving, pulling out and data manipulation is fast and effortless. This is vital for all services when the response is urgent.”

He adds that a cloud can provide an air gap and be bundled with cloud back-up and disaster recovery services. These services ‘minimise and mitigate the risk of cyber-attacks such as hacking or ransomware’, as he puts it.

“It's a great solution for all organisations seeking to leverage the cloud while keeping governance and privacy their highest priority. Cloud offers healthcare organisations a cost-effective way to ensure complete availability of the IT infrastructure whilst limiting vulnerabilities.”

Transporting data

“Using the cloud as an offsite data protection facility offers many advantages in cost and, if done correctly, an air-gapped depository”, says Trossell, who believes there is a need for new thinking about how data is transported. He says it’s imperative to move off-site data to a HIPAA-compliant cloud, or to multiple cloud providers.

Placing your data in one data centre or cloud is highly risky. Healthcare companies therefore need to recognise not only that their data needs to be backed up in several locations, but also “that until the last byte of backup data has been received by the cloud, there is no backup.”
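Trossell’s “last byte” point can be made concrete with a simple completeness check before the local copy is treated as protected. The sketch below is a minimal illustration, assuming a hypothetical remote_sha256() helper that returns whatever integrity digest your cloud provider exposes for an uploaded object; it is not any vendor’s API.

```python
import hashlib
from pathlib import Path

def local_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a local backup file in 1 MB chunks and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_complete(path: Path, remote_digest: str) -> bool:
    """A backup only 'exists' once the remote copy matches the local data byte for byte."""
    return local_sha256(path) == remote_digest

# Usage sketch (remote_sha256() is a placeholder for your provider's integrity check):
# if not backup_is_complete(Path("imaging-archive.tar"), remote_sha256("imaging-archive.tar")):
#     raise RuntimeError("The cloud copy is not yet a backup - keep the local copy")
```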

Trossell explains that the ability to move data in a timely fashion to the cloud is governed by three factors: latency, packet loss and the bandwidth of the WAN link. He adds: “In many cases we see the assumption that if you want to improve the way in which data moves across the WAN, the answer is to throw bandwidth at the problem.” The trouble is that the existing bandwidth may be completely adequate, and so it may just be latency and packet loss that are affecting WAN performance.
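A rough way to see why extra bandwidth often goes unused is the Mathis approximation for a single TCP stream, in which throughput is capped by segment size, round-trip time and packet loss rather than by link capacity. The figures below are illustrative only, not measurements from any particular WAN.

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Mathis et al. approximation: per-stream throughput <= MSS / (RTT * sqrt(loss))."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes / (rtt_s * sqrt(loss_rate))) * 8 / 1e6   # bytes/s -> Mbps

# A transatlantic link with an 80 ms RTT and 0.1% packet loss:
print(tcp_throughput_mbps(1460, 80, 0.001))   # ~4.6 Mbps per stream, whatever the link size
# The same loss and segment size over a 10 ms, LAN-like RTT:
print(tcp_throughput_mbps(1460, 10, 0.001))   # ~37 Mbps per stream
```

On a 1 Gbps circuit the first case leaves the vast majority of the purchased bandwidth idle, which is exactly the scenario Trossell describes.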

WAN acceleration needed

“The norm is to employ WAN optimisation to resolve the issue”, reveals Trossell. The trouble is that this commonly has little effect on WAN performance – particularly when data must be encrypted. He adds: “The other go-to technology in the WAN armoury is SD-WAN. Whilst this is great technology, it doesn’t solve the WAN latency and packet loss issues.” The answer to mitigating the effects of latency and packet loss is WAN acceleration, which is also referred to as WAN data acceleration.

Trossell adds: “WAN acceleration approaches the transport of data in a new way. Rather than trying to squash the data down, it uses parallelisation techniques controlled by artificial intelligence (AI) to manage the flow of data across the WAN, whilst mitigating the effects of latency and packet loss.”

What’s great about it, in his view, is that the solution doesn’t change the data in any way, and it can be used in conjunction with existing back-up technologies and SD-WANs. This in turn can drive up the utilisation of the WAN bandwidth to 95%. He therefore notes: “That bandwidth you currently have may just be enough for your needs.”
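The general parallelisation idea Trossell describes – keeping the link busy with many concurrent streams rather than squashing the payload – can be sketched in a few lines. This is a simplified illustration of the technique in the abstract, not Bridgeworks’ implementation, and send_chunk() is a hypothetical stand-in for whatever per-stream transport is used.

```python
from concurrent.futures import ThreadPoolExecutor

def send_chunk(chunk: bytes) -> int:
    """Hypothetical per-stream transport; in practice each call would own a WAN connection."""
    # ... transmit the chunk over one connection ...
    return len(chunk)

def parallel_send(data: bytes, streams: int = 16, chunk_size: int = 4 << 20) -> int:
    """Split the payload into chunks and push them over several concurrent streams.
    Each stream is individually throttled by latency and loss, but together they
    keep far more of the WAN bandwidth in use than a single connection would."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return sum(pool.map(send_chunk, chunks))   # total bytes handed to the streams
```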

Reducing network congestion

“Due to the nature of this sector, healthcare organisations are often multi-located with other organisations and third-party suppliers in different countries”, says Bradburn. “To improve data transfer across the entire network, including the cloud and even for mobile healthcare workers, organisations can implement WAN data accelerators.” He emphasises that the technology reduces network congestion, while optimising the traffic traversing the WAN.

“This improves the performance and acceleration of applications across the WAN considerably, enabling the real-time collaboration required for effective patient healthcare”, he explains, commenting that the healthcare sector has made a big shift in recent years to the public cloud. Machine learning and artificial intelligence are “pushing the cloud adoption further.” Many of these technologies, clouds and cloud services depend on WANs, and so they can be affected by the spectres of latency and packet loss.

Hyperconverged cloud benefits

However, Trossell believes that many of these technologies offer significant benefits to healthcare organisations. “Properly deployed and managed, cloud-based solutions offer healthcare organisations unprecedented opportunities to innovate, develop and deploy applications while still maintaining privacy, security and compliance.”

As for hyperconverged infrastructure, he says: “A hyperconverged infrastructure, by combining and virtualising the components of networking, security, storage and compute into a single box or clusters of multiple boxes, removes the complexity of a separated traditional model”.

Trossell summarises the key benefits as being:

  • The physical size of the infrastructure is reduced, saving power and cooling costs
  • The management of the infrastructure is simplified, reducing the overhead of staff management time and the different skill sets needed for each component
  • The resilience and performance are higher, as there are fewer interconnecting components that can cause bottlenecks or failures

New standard: Hyperconvergence

Hyperconvergence, Trossell finds, is rapidly becoming the new standard in performance and reliability. It also benefits healthcare organisations without impacting on management costs, maintenance and hosting.  Yet, there is no getting away from the importance of connectivity – particularly at a juncture when the uptake of cloud services is increasing.

“In some instances, the distance between on-premise servers and the cloud causes data transfer latency, which becomes the limiting factor, rather than the size of the bandwidth, especially when transferring large medical imagery," says Trossell.

“More and more healthcare organisations implement WAN accelerators to mitigate the above issues. By adding WAN data accelerators, the data can be moved at a high speed across lower bandwidth connections over substantial distances, providing faster access to information and better patient care.” 

Healthcare IT tips

In summary, Bradburn and Trossell offer their five top tips for consolidating, simplifying, increasing the performance and reducing the cost of healthcare IT:

  • Research your cloud supplier – smaller cloud providers often offer predictable and simple pricing and unlimited bandwidth, which is preferable in the private sector. Always check the small print and annexes. Some cloud providers offer a low cost per unit but additional charges for ingress and egress, which can make the whole solution twice as expensive
     
  • Deploy green technologies – we know that new technologies perform better and have a lower carbon footprint. Offsetting carbon emissions by planting more trees is one way to deal with climate change. But there are cloud suppliers that utilise ecological data centres. Some of them use a liquid technique for cooling IT infrastructure. Liquid cooling offers a much higher performance level ready for big data transfers. This is especially helpful in specific applications like x-ray or video surgeries
     
  • Consider data sovereignty – for example, GDPR compliance is one of the most important safeguards for any company. Due to the nature of the healthcare sector it’s critical that personal data is stored securely and not replicated overseas. To stay competitive, some cloud providers use foreign data centres for data replication. So don’t take the risk – check before you commit
     
  • Mitigate latency and packet loss – Use WAN Acceleration to improve WAN performance by mitigating latency and packet loss. Even SD-WANs will benefit from a WAN performance overlay
     
  • Consider the benefits of hyperconvergence and how it will improve your healthcare organisation’s operational performance

So, where next for the hyperconverged healthcare cloud? Bradburn concludes that the next steps are centred around compliance and standards. He thinks there is a need to provide a truly global healthcare service, with access to specialists across the globe and the ability to call on their expertise for remote diagnosis. For example, this could be about clinicians sharing best practice, data and techniques to prevent a coronavirus pandemic, or working together collaboratively from afar to find a cure for cancer.

For healthcare organisations, he says, “networks that enable the efficiency in data transfer, storage facilities and security are the key challenges, whilst also defining the standards to ensure data formats from clinical systems and personal health IoT devices are universally understood.”

By undertaking these steps, and with the support of WAN acceleration, Bradburn believes the true power of the cloud can be utilised in the healthcare services of the future.


Exploring WAN data acceleration: Is edge computing really necessary?

Most traditional forms of dealing with network and data latency, as well as packet loss, fall short of their promise. This includes WAN optimisation. SD-WANs are great, but their performance can also be improved by adding a WAN data acceleration layer. With the growth of the Internet of Things, therefore, organisations are looking for new ways to reduce the impact of latency and packet loss.

David Linthicum, chief cloud strategist at Cisco Systems, explains edge computing’s benefits. “By eliminating the distance and time it takes to send data to centralised sources, we can improve the speed and performance of data transport, as well as devices and applications on the edge.” This sounds great, but why are people talking about edge computing now, when the concept – which has gone by many other names – isn’t quite so new?

With technologies such as autonomous vehicles, the answer appears to lie in edge computing because it permits much of the data analysis to be conducted close to the data, while minimising latency and packet loss. However, questions should still be raised about whether this pushes data too close to the edge. One thing is true; edge computing won’t necessarily make data centres redundant. There will still be a need for data to be stored away from the edge to ensure business and service continuity.

Data processing trends

However, the new Gartner Trend Insight Report, ‘The Edge Completes the Cloud’, states: "By 2022, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud." David Trossell, CEO and CTO of Bridgeworks, thinks these findings are bang on. He says the multi-national cloud companies have made edge computing much easier to deploy.

“In the past edge was difficult, as you would have to create your own little data centre or facility, but now it’s all there in the cloud; and it’s the savings in operational expenditure (OPEX) and capital expenditure (CAPEX) that everyone loves at the moment”, he adds.

Edge computing clarity

In response to whether people are confused about what edge computing entails, Trossell says: “I don’t think there is confusion – its function and reason are straightforward. Confusion only arises when people don’t understand why it is needed or try to misuse it.”

Eric Simone, CEO of ClearBlade, a company that is promoting the benefits of edge computing, nevertheless explains that it is one of the latest ways of applying age-old concepts of distributed computing philosophy and practice.

He recently explained to IoT World Today that his definition of edge computing is about having a solution that runs on the cloud or elsewhere, and a solution that you sync completely with a device in a car, on a train or in a factory and “run it from there.” This requires some standardisation, not just of the way organisations transmit data, but also of how they configure the entire system.

Nothing new

Trossell concurs that nothing is new in computing. “Edge is just a new way of solving age-old problems," he says. "Now that the market consists of the major cloud providers and other smaller local cloud providers, edge computing is more economically viable.

“Like every aspect of computing, edge computing is not a panacea for all problems, but it’s great for low latency applications. However, if the application doesn’t need that level of response or local computing capacity – then it is not the answer.”

Speaking about the question of whether edge will lead to the demise of data centres, he says there are different levels of storage and computation required. At some point, all of the edge data needs to come together for processing. “Data centres meet those high-end requirements – this could be on-premise or in the cloud”, he suggests.  

He observes that edge computing has a specific use case – low-power, latency-critical applications – while emphasising that latency can only be mitigated with WAN data acceleration and other technologies. So, the answer is often to move the data processing closer to the end point (such as with edge computing). Despite this, WAN data acceleration is still used to improve the efficiency of the WAN connection when moving the data back to the data centre.

Tips for mitigating latency

With these points in mind, Trossell offers his top five tips for mitigating latency and reducing packet loss, with or without edge computing:

  • Assess the impact of latency and packet loss on your project because they are a fact of life
  • Remember that even though you may have solved the latency by moving control to the edge, getting data to and from the edge can still scupper the project
  • Any technology deployed at the edge to mitigate these issues must be lightweight in terms of computing and storage requirements
  • Ensure that your edge computing works for all data types, especially encrypted data
  • Consider deploying a combination of solutions, including WAN data acceleration – or even SD-WANs with a WAN data acceleration overlay

“As people move to more and more connected devices that have to respond immediately to situations, such as autonomous vehicles in motion, plant equipment or process controls, then there is an absolute need for edge computing," Trossell adds. "There is also an argument for edge-to-edge requirements, such as with autonomous vehicles. The primary edge, within the vehicle, makes the split-second decisions, and the secondary edge receives or transmits information between the central data centre and the vehicle.”

The edge is but one solution

Considering the views of the industry experts cited in this article, it is clear that in many cases edge computing is required. However, it’s not the only technology that’s necessary. A plethora of technologies may provide the complete solution, and this could include WAN data acceleration. It uses machine learning to accelerate data across WANs, and in contrast to edge computing it doesn’t require each data centre to be situated within the same circle of disruption.

This means that any data that has been gleaned at the edge, can be sent for processing and analysis thousands of miles away. It can also be backed up and then rapidly restored whenever disaster strikes. WAN data acceleration can therefore complement edge computing – giving life to new ways to tackle latency and packet loss.


SD-WAN and cloud: A marriage made in heaven or a match made in hell?

Paul Stuttard, director of Duxbury Networking, argues in South African online magazine IT Web that SD-WAN is a marriage made in heaven. Why? SD-WAN technology has the ability to optimise the cloud, while speeding up access to cloud-based applications, he claims. Stuttard further explains that just 10 years ago organisations were advised to prepare for the impact of cloud computing on their networks.

“Planning, they were told, was all-important,” he writes. “Before running significant applications in the cloud, it was vital to understand how data is sourced and stored. End-users were also warned about large-scale commitments to cloud technology and encouraged to plan and execute a carefully managed transition to the cloud.

“Fast forward 10 years. The cloud, once a disrupter in the IT firmament, has now matured, boosting the capabilities of IT departments across the globe in the process. Now the cloud is seen as a vehicle for innovation on a number of fronts.”

Facilitating digital business

“More specifically, the cloud is now facilitating digital business and, in parallel, promoting a new generation of scalable, agile solutions,” Stuttard adds. “Researchers predict the global public cloud market, expected to be worth $178 billion this year, will grow at a significant 22% per annum fuelled by global enterprises' drive to adopt digital transformation strategies.”

Such is the pace of digital transformation that he cites Mike Harris, executive vice president of research at Gartner, who believes the challenge that most organisations face today is their ability to keep up with the pace of ongoing developments. He says this requires them to constantly adapt and prepare their business and IT strategies for radical digital transformation, the all-cloud future, and for progressive change – or what Gartner calls the ‘continuous next.’

Cloudy limitations

“One of the hard facts organisations will have to face when coming to terms with the continuous next is that traditional wide area network (WAN) architectures are not designed to support new cloud-based consumption models,” Stuttard adds. “Wasted bandwidth, higher data packet loss, increased latency, elevated inefficiency levels and, most importantly, higher operational costs, await organisations that opt for this hybrid solution.”

So, in Stuttard’s opinion, this is where SD-WANs step in. However, while SD-WAN is a capable technology, it doesn’t quite mitigate the issues of network latency or help reduce packet loss in the way that WAN data acceleration can, according to David Trossell, CEO and CTO of Bridgeworks.

Valid points

Trossell neither agrees nor disagrees with Stuttard’s assessment. “He has some valid points regarding legacy WAN, latency, packet loss, wasted bandwidth, and their related costs. However, I disagree with his view that legacy WANs are not designed for cloud-based models, and he fails to say that latency, packet loss and wasted bandwidth are also present in SD-WANs.”

The thing is, he claims, SD-WANs “don’t fix the problem of latency or packet loss. However, they can lower costs and layer different WAN requirements over different WAN links. The cost reduction comes from using broadband instead of expensive MPLS (which are meant to be low latency)”.  He nevertheless agrees that latency and packet loss are the cause of poor ROI from many WAN installations.

The problem with SD-WANs on their own appears to be that, while they can segregate data over different paths to maximise the best use of the WAN connections, they are still left with the underlying problems of latency. However, they can be enhanced with a WAN data acceleration overlay, which can mitigate the effects of latency and packet loss with machine learning and by using parallelisation techniques.

Overly simplified

Trossell believes that Stuttard “has simplified the argument too much because many of the cost savings from SD-WAN come from the use of broadband. These tend to have much more latency and packet loss”, he explains, while noting that data comes from a multitude of sources – including Internet of Things (IoT) devices, the web, [and] social media.

He suggests it’s important to remember that data may not always involve a traditional data centre, as it is possible to manage and store data in the cloud. This may lead to sanitised data or results. “The cloud is now rapidly becoming another storage tier extension for the data centre”, he remarks.

Trossell adds: “Cloud has allowed organisations to have that degree of separation between the wild west of the web with all its security issues and their datacentre. This has allowed them to drive innovation in the way they interact with their customers.”

Public sector drive

Businesses aren’t the only ones being affected by the thrust towards digital transformation. Public sector and government organisations are feeling the push too. Trossell says this is driven by two key factors: “First and foremost, there’s the need to reduce costs, and then there are the expectations of our 24-hours-a-day, always-on society. This may involve technologies, such as chatbots and live chat, to offer people an alternative to waiting hours in a telephone queue to resolve an issue they need to raise, or to pay for something.

“Like every technology, SD-WAN has its place, but like many technologies it will not totally displace the existing technology," Trossell concludes. "SD-WAN is gaining market share in countries that have low bandwidth and very costly WAN connections. However, there is a competitive open market for WAN connections, and so the cost of bandwidth is dropping rapidly, whilst the availability of higher gigabit networks is rising rapidly. This could limit the expansion of SD-WANs.”

With this potential limit in mind, it may be time for organisations to look at how they can really reduce the impact of network latency and packet loss. The answer may not come from a large vendor, nor from WAN optimisation, and so adding a WAN data acceleration layer to SD-WANs might be the answer to allow faster data flows with mitigated latency and packet loss. This, with the cloud in mind, could really be a marriage made in heaven – a cost-effective and long-lasting one, too.


Is hyperconvergence really key to your data centre cloud strategy?

Vendors often like to create a new name for an existing piece of technology that is essentially made up of the same components and fulfils the same functions. This is because of factors such as the competitive pressure to keep customers interested: application service provision is more commonly known today as the cloud, while converged infrastructure has led to hyperconverged infrastructure.

Sometimes there are actual technological differences between products, but this isn’t always the case. That’s because once a technology has reached its peak, the market could potentially drop off its perch. Vendors’ claims – and even media reports – should therefore be treated with a pinch of salt and some careful scrutiny.

For example, DABCC magazine (August 2017) highlighted: “Cloud is becoming a key plank in virtually every organisation’s technology strategy. The potential benefits are now widely understood, among them the ability to save money and reduce IT management overheads, meaning more resources can be ploughed into other parts of your business.”

The article, ‘Why Data Centre Hyperconvergence is Key to Your Cloud Strategy’, points out that moving to the cloud “…won’t necessarily deliver these benefits if done in isolation: organisations also need to look at their data centre operations and streamline how these are run. There’s a need to rationalise your data centre as you move to cloud.”

Cloud: Not for everyone

Let’s face it, the cloud isn’t for everyone, but nevertheless it has its merits. Yet before you go and invest in new technology or move to it, you should examine whether your existing infrastructure is sufficient to do the job you need it for. Ask yourself: what really matters in the hyperconvergence story?

In response to this, David Trossell, CEO and CTO of data acceleration vendor Bridgeworks, notes: “We’ve been shouldering traditional system architecture for more than 50 years now”. He explains that there have only been a few significant changes along the way. Apart from the likes of IBM, which has traditionally provided a one-stop shop, companies still purchase different parts of the system from different vendors.

This approach means customers can source, from different vendors, the parts that offer the most competitive price or the best solution. However, the downside is the need to repeat the entire process of verifying compatibility, performance and so on.

“The other often unseen consequence is the time taken to learn new skill sets to manage and administer the varying products”, Trossell warns.  Yet he points out that there is increasing pressure on organisations’ budgets to spend less while achieving more for each pound or dollar.  This means that there is an expectation to deliver more performance and functionality from decreasing IT budgets.  

He adds: “With its Lego-style building blocks, where you add the modules you require knowing everything is interoperable and auto-configuring, increasing resources in an area becomes a simple task. Another key benefit is the single point of administration, which dramatically reduces the administrative workload and one product skill set.

“So, what about the cloud? Does this not simplify the equation even further?” he asks.  With the cloud, he says there’s no need to “…invest in capital equipment anymore; you simply add or remove resources as you require them, and so we are constantly told this is the perfect solution.”  To determine if it is the perfect solution, there is a need to examine other aspects of a cloud-only strategy. The cloud may be one of many approaches that’s needed to run your system.

Part of the story

Anjan Srinivas, senior director of product management at Nutanix – a company that claims to go beyond hyperconverged infrastructure – agrees that hyperconvergence is only part of the story.  He explains the history that led to this technological creation. “The origins of the name were due to the servers’ form factor used for such appliances in the early days,” he says. “The story actually hinges upon the maturity of software to take upon itself the intelligence to perform the functions of the whole infrastructure stack, all the way from storage, compute, networking and virtualisation to operations management, in a fault tolerant fashion.”

He adds: “So, it is fundamentally about intelligent software enabling data centre infrastructure to be invisible. This allows companies to operate their environments with the same efficiency and simplicity of a cloud provider. Hyperconvergence then becomes strategic, as it can stitch together the public cloud and on-premise software-defined cloud, making the customer agile and well-positioned to select multiple consumption models.”

Cost-benefit analysis

Trossell nevertheless believes that it’s important to consider the short-term and long-term costs of moving to the cloud: “You have to consider whether this is going to be a long-term or short-term process. This is about whether it is cheaper to rent or buy, and about which option is most beneficial.”

The problem is that although the cloud is often touted as being cheaper than a traditional in-house infrastructure, its utility rental model could make it far more expensive in the long term – more than the capital expenditure of owning and running your own systems.

“Sometimes, for example, it is cheaper to buy a car than to rent one”, he explains.  The same principle applies to the cloud model. For this reason, it isn’t always the perfect solution.  “Done correctly, hyper-convergence enables the data centre to build an IT infrastructure capable of matching public cloud services in terms of elements like on-demand scalability and ease of provisioning and management”, adds Srinivas.

“Compared to public cloud services, it can also provide a much more secure platform for business-critical applications, as well as address the issues of data sovereignty and compliance. A hyper-converged platform can also work out more economical than the cloud, especially for predictable workloads running over a period.”

Silver linings

“Not every cloud has a silver lining”, says Trossell. He argues that believing the hype about the cloud isn’t necessarily the way to go. “You have to consider a number of factors such as hybrid cloud, keeping your databases locally, the effect of latency and how you control and administer the systems.”

He believes that there is much uncertainty to face, since the cloud computing industry expects the market to consolidate over the forthcoming years. This means there will be very few cloud players in the future. If this happens, cloud prices will rise and the pressure to make the technology cheaper will be lost. There are also issues to address, such as remote latency and the interaction of databases with other applications.

Impact of latency

Trossell explains this: “If your application is in the cloud and you are accessing it constantly, then you must take into account the effect of latency on the users’ productivity. If most of your users are within HQ, this will affect it. With geographically dispersed users you don’t have to take this into account.

“If you have a database in the cloud and you are accessing it a lot, the latency will add up. It is sometimes better to hold your databases locally, while putting other applications into the cloud.

“Databases tend to access other databases, and so you have to look at the whole picture to take it all into account – including your network bandwidth to the cloud.”
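The cumulative cost Trossell describes is easy to quantify: every sequential query against a cloud-hosted database pays at least one WAN round trip, so chatty applications pay the latency bill thousands of times over. The figures below are illustrative only.

```python
def added_latency_seconds(round_trips: int, rtt_ms: float) -> float:
    """Minimum time an application spends waiting on the network for sequential queries."""
    return round_trips * rtt_ms / 1000.0

# 10,000 sequential queries against a local database at a 0.5 ms round-trip time:
print(added_latency_seconds(10_000, 0.5))   # 5 seconds of waiting
# The same workload against a cloud region 30 ms away:
print(added_latency_seconds(10_000, 30))    # 300 seconds - five minutes of pure latency
```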

Your existing infrastructure, within your data centre and outside of it, therefore must be part of this ‘bigger picture’. So, with regards to whether hyperconvergence is the way to go, Trossell advises you to analyse whether you’re still able to gain a return on investment (ROI) from your existing infrastructure.

“Think about whether it has a role in your cloud strategy”, he advises, before adding: “With a hybrid cloud strategy you can downsize your data centre, saving on maintenance charges too. If you are going to go hyperconverged, then some training will be required. If you are going to use your existing infrastructure, then you will already have some skillsets on-site.”

He adds: “If the licence and maintenance costs of the existing infrastructure outweigh the costs of hyperconvergence, then there is a good business case for installing a hyperconverged infrastructure. This will allow everything to be in one place – a single point of administration.”

Data protection

There is still a need to consider data protection, which often gets lost in the balancing act. The cloud can nevertheless be used for backup as a service (BUaaS) and disaster recovery as a service (DRaaS), as part of a hybrid solution. Still, he stresses that you shouldn’t depend solely on the cloud and recommends storing data in multiple places.

This can be achieved with a solution, he claims, such as PORTrockIT: “If you decide to change over to the cloud, you need to be able to move your data around efficiently and at speed, as well as restore data if required. You need to keep it running to protect your business operations.”

Not just storage

Trossell and Srinivas agree that storage shouldn’t be your only consideration. “Storage is an important aspect, but that alone does not allow enterprises to become agile and provide their businesses with the competitive edge they expect from their IT”, says Srinivas. He argues that the advantage hyper-convergence offers is “the ability to replace complex and expensive SAN technology with efficient and highly available distributed storage, [which] is surely critical.

“What is critical, is how storage becomes invisible and the data centre OS – such as that built by Nutanix – can not only intelligently provide the right storage for the right application, but also make the overall stack and its operation simple”, believes Srinivas.

“Consider backups, computing, networks – everything”, says Trossell before adding: “Many people say it’s about Amazon-type stuff, but it’s about simplifying the infrastructure. We’re now moving to IT-as-a-service, and so is hyper-convergence the way to go for that type of service?”

Technology will no doubt evolve and by then, hyper-convergence may have transformed into something else. This means that it remains an open question as to whether hyper-convergence is key to your data centre cloud strategy.

It may be now, but in the future, it might not be. Nutanix is therefore wise to ensure that the Nutanix Enterprise Cloud “…goes beyond hyperconverged infrastructure.” It would also be a good idea to consider whether there are other options which might serve your data centre needs better. That’s because some healthy scepticism can help us find the right answers and solutions.

Top tips for assessing whether hyperconverged is for you

  • Work out if there is any value in your existing infrastructure, and dispose of what no longer has any value – which may not be all of it.
  • Calculate and be aware of the effects of latency on your users, including functionality and performance
  • Run the costings of each solution out for three to five years, and examine the TCO for those periods to determine which solution is right for your business (see the sketch after this list)
  • Weigh the savings on maintenance and licensing against the cost of moving to a hyper-converged infrastructure
  • Consider a hybrid solution – and don’t lose sight of your data protection during this whole process
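As a companion to the costings tip above, a back-of-the-envelope TCO comparison can be scripted so that the same assumptions are applied to every option. All of the figures below are placeholders to be replaced with your own quotes; they simply show the rent-versus-buy crossover the article describes.

```python
def on_prem_tco(capex: float, annual_opex: float, years: int) -> float:
    """Owned infrastructure: up-front purchase plus maintenance, licences, power and staff."""
    return capex + annual_opex * years

def cloud_tco(monthly_fee: float, monthly_egress: float, years: int) -> float:
    """Rented infrastructure: recurring fees, including ingress/egress charges."""
    return (monthly_fee + monthly_egress) * 12 * years

# Placeholder figures only - substitute real quotes before drawing any conclusions:
for years in (3, 5):
    print(f"{years}-year TCO | on-prem:",
          on_prem_tco(capex=250_000, annual_opex=40_000, years=years),
          "| cloud:",
          cloud_tco(monthly_fee=6_000, monthly_egress=1_500, years=years))
```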

Anything you can do, AI can do better: Making infrastructure smarter


Data flows and artificial intelligence (AI) are changing the globalised supply chain by sending data around the world. Using the cloud as the data transport method requires more efficient data acceleration. AI is infiltrating many places, and it’s not just about replicating human performance and taking jobs, or streamlining processes. It also helps make technology smarter.

AI can be part of IT infrastructure. David Trossell, CEO and CTO of Bridgeworks, argues that it can be a good thing, and needn’t involve employees being made redundant. In his view, artificial intelligence offers a good story as it can enable organisations to better manage their IT infrastructure and improve their business intelligence, allowing them to make better decisions at a time when big data volumes are increasing at a tremendous rate. In fact, CTOVision.com claims that some experts predict that by the year 2020, the volume of digital data will reach as high as 40 trillion gigabytes. So the task of sifting through it is going to become harder and harder.

Lucas Carlson, senior vice president of strategy at Automic Software, believes it’s been an interesting summer – and for reasons most of us wouldn’t think about. He says that there are ‘5 ways artificial intelligence will change enterprise IT’ in his article for Venture Beat. He says artificial intelligence can be used for predicting software failures, detecting cyber-security issues, creating super-programmers, making sense of the Internet of Things, and managing robots in datacentres.

Rise of AI

“Just a few short years ago artificial intelligence was at the beginning of the hype cycle along with fuzzy logic and other such things, and so much was hoped for out of these technologies but they slowly faded away into the background”, says Trossell. This pushed artificial intelligence out of favour, becoming more of an academic interest than anything else.

“That was until the ground-breaking and great movie, AI, came along and shocked the world with life-like human thinking. But whilst it brought AI back to the forefront it did so with the fear that robots would displace humans as the superior race”, he explains. One could add that there have been many science fiction movies like this – such as the Terminator series of films. The films are about the battle between humans and killing machines – cyborgs – which take on a human form in order to enable them to walk unnoticed until they attack their targets with an eye on defeating humankind. Perhaps the most shocking aspect of the story is that the machines were originally built by our own race.

Trossell thinks the doom and gloom presented by these films will happen well after his lifetime. Yet hopefully, mankind will choose to make more machine intelligence for our own good, rather than for our own destruction. Nevertheless, he thinks that AI is slowly re-emerging in a positive light, not as the prophecy of the destruction of mankind but as a companion. “Many people are still concerned that this will displace jobs and lead to global unemployment with all the social upheaval and ills that accompany it, but history teaches us otherwise.”

He explains why this is the case: “At the start of the industrial revolution, James Hargreaves’ Spinning Jenny suddenly changed the lives of many cottage industry spinners, replacing many with one largely unskilled worker. Out of this there was a massive increase in employment as other industries mechanised to take opportunities with the increase in spun wool. Yes, we did go through the age of the dark satanic mills, but industry is at the heart of our civilisation and many of us enjoy the benefits created by it with an exceptional quality of life. What this shows us is that while there is a short-term displacement in employment, it will be absorbed by new industries created later.”

Expert systems

He says that AI has “the ability to create expert systems that never tire and they can augment humans.” For example, artificial intelligence is being used to spot breast cancer as it has the ability to learn – improving the cancer identification rate to up to 99.5% accuracy. AI can be employed in many other situations where it can augment experts to drive efficiency and ROI. “Data is all around us – most of us are drowning in it – but it drives our modern society”, he comments. Yet this leaves the question of where to store this growing volume of data. Another question that often needs to be answered by many organisations today is about how to move it quickly and securely around the globe.

“Data is at the heart of every organisation and many have a global presence as well as customers worldwide, and increased distances create a major bottleneck”, he warns. “Although network speeds have increased exponentially over the past few years, this has not necessarily improved the performance of transmitting data over distance”, he explains. In his view, what it has done is exacerbated the problem and delivered organisations a poor return on their investment. He says this is all caused by the ghosts that plague networks – latency and packet loss – but these can be mitigated or reduced with the help of AI.

Mitigating latency

“There are techniques that highly skilled engineers or programmers can employ to mitigate some of the effects of latency and packet loss, but the network – and more importantly wide area networks (WANs) – are a living, unstable entity to work with”, he warns. Therefore something has to be done, and preferably without much human intervention. After all, we as humans are all prone to making mistakes. Traditionally, maintaining the optimal performance of the data flowing across a WAN would require an engineer to constantly measure and tune the network, and this is where errors can be made.

With AI, the goal of maximising performance removes the need for constant human intervention – leading to the potential benefits of reduced human error, better network management, improved disaster recovery, reduced packet loss and increased accuracy. “Artificial intelligence can dump a set of rules and learn in a similar way to how it is being deployed in the breast cancer example. It can learn from its experience the way the network behaves, and it also knows how data flows across the network”, he claims. This is enabling society to develop a ‘fit and forget’ approach.
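The ‘fit and forget’ idea – a control loop that keeps re-tuning the transfer instead of an engineer doing it by hand – can be illustrated with a simple hill-climbing sketch. This is a toy example of the general approach, not the learning method of any particular product, and measure_throughput() here is a synthetic stand-in for a real probe of the WAN.

```python
import random

def measure_throughput(streams: int) -> float:
    """Synthetic probe standing in for a real measurement: gains flatten beyond ~24 streams."""
    return min(streams, 24) * 40.0 * random.uniform(0.9, 1.1)

def autotune(initial_streams: int = 4, max_streams: int = 64, rounds: int = 20) -> int:
    """Hill-climb the parallel stream count, keeping a change only if throughput improves.
    Without a loop like this, an engineer would have to re-measure and re-tune by hand
    every time WAN conditions drifted."""
    streams = initial_streams
    best = measure_throughput(streams)
    step = 2
    for _ in range(rounds):
        candidate = min(max(1, streams + step), max_streams)
        observed = measure_throughput(candidate)
        if observed > best:
            streams, best = candidate, observed   # keep the improvement, keep moving this way
        else:
            step = -step                          # back off and probe in the other direction
    return streams

print(autotune())   # settles near the point where extra streams stop helping
```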

Two solutions that do this are PORTrockIT and WANrockIT. They help to mitigate the effects of latency and reduce packet loss. Beyond them, Trossell believes that “we have just begun to scratch the surface with the possibility of AI, and there are many other thought and time intensive processes within the IT world that it can be applied to, if only we had the courage to release control.” That must happen because we as humans are reaching the limit of our ability to manage the ever-increasing data volumes without some form of machine intelligence to support our endeavours. Could we hand over to AI the management of space, the location and the accessibility of data? Yes we can, because anything we can do, AI can often do better, and we need its support going forward.

Maximising the potential of the Industrial Internet – through the right networks


Computer-aided design (CAD) and computer-aided engineering (CAE) often use extremely large files, which make remote working and collaboration difficult to achieve. Most businesses are able to work remotely with laptops, smartphones and tablet PCs, but the processing power required to create, store and share CAD files over a wide area network is often prohibitive. Yet with the power of the internet they can work remotely and collaboratively.

The key challenge, because the files they send to their colleagues and CAD partners across the globe are so large, is about how their IT teams mitigate the effects of latency in wide area networks (WAN) to enable them to work uninterrupted by slow network connections. Strained WAN resources aren’t the only issue that concerns them. They need to deploy remote access control technology to protect their data as it flows across the internet, and from cloud to cloud, to ensure that only authorised individuals can work on any given CAD or CAE projects.

In essence, the internet and more recently cloud computing have become enablers of remotely situated design, manufacturing, construction and engineering teams. Not only can they share their skills, knowledge and expertise, but also their data. Cloud can handle the peaks in storage and computing demand, but organisations must be able to scale it up and down quickly in order to benefit from the potential cost efficiencies and the infrastructure agility that it can offer.

“With the ubiquitous access to the internet it is now possible to gather data from every part of the world and bring it back to a central hub for analysis, and you can design an aircraft or a car in one country while manufacturing it in another”, says David Trossell, CEO and CTO of data acceleration company, Bridgeworks. He points out that this means a huge amount of data is constantly being moved around and that all of the data is logged.

It’s about time

The Engineer says in its June article, ‘It’s About Time – Evolving Network Standards for the Industrial IoT’, “The Industrial Internet of Things (IIoT) promises a world of smarter, hyper-connected devices and infrastructure where electrical grids, manufacturing machines, and transportation systems are outfitted with embedded sensing, processing, control and analysis capabilities.”

The article recognises that latency can still be a problem, and it claims that:

“Much of today’s network infrastructure is not equipped to handle such time-sensitive data. Many industrial systems and networks were designed according to the Purdue model for control hierarchy in which multiple, rigid bus layers are created and optimised to meet the requirements for specific tasks. Each layer has varying levels of latency, bandwidth and quality of service, making interoperability challenging, and the timely transfer of critical data virtually impossible. In addition, today’s proprietary Ethernet derivatives have limited bandwidth and require modified hardware.”

The article adds: “Once networked together, they’ll create a smart system of systems that shares data between devices, across the enterprise and in the cloud. These systems will generate incredible amounts of data, such as the condition monitoring solution for the Victoria Line of the London Underground rail system, which yields 32 terabytes of data every day. This Big Analog Data will be analysed and processed to drive informed business decisions that will ultimately improve safety, uptime and operational efficiency.”

Commenting on the article, and specifically about the London Underground example, Trossell says: “Everything is real-time, and so the question has to be: how can we get the data back as fast as possible to analyse it and to inform the appropriate people? Some elements of this task may be handled in-house first for a quick exception analysis.” Some of this data may then be pushed to the cloud for further in-depth analysis by comparing present data with historical data to see whether anything can be learnt or improved from a maintenance and service perspective. With an unimpeded network, big data analysis from a wide range of data sources is possible, adding the ability to gain insights that were once not so easy to obtain.

From IoT to IIoT

Trossell thinks that the broader expression of the Internet of Things (IoT) is just one of the current buzzwords that everyone for a variety of reasons is getting excited about. “Most people think of this as their connected fridge, the smart meter for their utilities or the ability to control their heating system at home, but with the ever increasing diversity and the decreasing cost of sensors for industrial use, the term takes on a new level of sophistication and volume when applied to industry”, he explains.

In industry, IoT gives birth to IIoT, which involves monitoring the performance of complex machinery such as gas turbines, aircraft, ships, electrical grids and oil rigs. So it’s not just about a diversely spread group of CAD engineers working collaboratively across the globe. “IIoT has never been so diverse and in depth, with vast amounts of data being created every second. To put this in perspective, each Airbus A350 test flight can receive measurements from 60,000 separate sensors”, he claims. That’s a phenomenal amount of data that needs to be transmitted, backed up and stored, and at some point it needs to be analysed in real-time in order to have any value.

“An example of this is a company that has developed a system where the aircraft technician can download the data from the black box flight recorder and send it over the internet, where it is analysed with artificial intelligence for anomalies and exceptions and then passed to an expert for investigation”, he explains. The benefit is that this approach can engage several experts across the globe – for example from an air transport safety board, as well as manufacturers – to identify unusual pilot activity; sensor data can also be collated to enable an airline to reduce maintenance, unplanned outages and possible safety implications to a minimum, thereby improving availability and profitability.

“However, just like the consumer internet, moving vast amounts of data across the internet has its challenges – especially when the data may be half the world away”, he warns. These challenges include increased network latency due to the teams working at a distance over a WAN, and potential security breaches. He adds: “Moving files around between various data silos can be inhibitive even over a LAN – the cost of 10Gb networks is dropping considerably, but with WANs the problem is about moving data over distance because of latency.”

Yet in spite of the gremlins posed by security threats and network latency, there are many companies around the world that are established virtually thanks to the internet. They are often specialists in their chosen disciplines, and each of them can add a bit to the whole picture, but Trossell believes it’s no good collecting or generating data if you can’t use it to encourage and enable the collaboration of globally dispersed multi-disciplinary teams, to allow for innovation and the creation of efficiencies. The data – including sensor data – must get to the right people at the right time if it is to add any value, but latency can prevent this from happening, turning invaluable data into redundant, out-of-date data that adds nothing of worth or merit.

Being smart

Companies investing in IIoT and remote working therefore need to protect their businesses by investing in solutions that can mitigate the impact of network latency while enabling data to be securely sent at velocity between the various data users and analysers. With smart systems, the challenges can be harder to overcome because in a traditional Purdue system data flows up and down the model, whereas Trossell says that smart systems and IIoT data tend to flow in all directions like a web. Being smart is also about mitigating latency and reducing the potential threats to data. With this in mind, Trossell offers his top tips for ensuring that your company gains the most from its data:

  • Remember that the two biggest killers of performance in a WAN are packet loss and latency. When you have them together, you will suffer massive performance hits.
  • Adding bandwidth to your WAN will not necessarily increase performance, but it will increase costs!
  • Marshal and consolidate data if possible rather than allowing lots of individual streams, as this is a more effective use of WAN bandwidth (see the sketch after this list).
  • Use a product such as PORTrockIT to accelerate data transmissions, and use applications that pre-compress and encrypt data before it is sent to the WAN.
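The consolidation tip above amounts to batching: many small sensor readings are marshalled into larger payloads before they cross the WAN, so each round trip carries useful data rather than per-message overhead. The sketch below is a generic illustration, with send_batch() as a hypothetical stand-in for whatever transfer mechanism is in use.

```python
from typing import Iterable, List

def send_batch(batch: bytes) -> None:
    """Hypothetical WAN transfer of one consolidated payload."""
    pass

def marshal_and_send(readings: Iterable[bytes], batch_bytes: int = 1 << 20) -> int:
    """Consolidate small readings into ~1 MB payloads before sending, returning the batch count."""
    buffer: List[bytes] = []
    size = 0
    batches = 0
    for reading in readings:
        buffer.append(reading)
        size += len(reading)
        if size >= batch_bytes:           # enough data gathered: send one consolidated payload
            send_batch(b"".join(buffer))
            buffer, size = [], 0
            batches += 1
    if buffer:                            # flush whatever is left as a final partial batch
        send_batch(b"".join(buffer))
        batches += 1
    return batches
```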

In essence, by mitigating latency and improving data security, industrial organisations can maximise the potential of the industrial internet. With the growing amount of sensor data, and the growing need to work collaboratively with remote teams across the world, these challenges are going to become more and more prevalent and obstinate. Industrial organisations therefore need to act today to ensure they protect their businesses well into the future, to enable them to participate in the industrial internet of things, and to allow them to benefit from real-time big data analysis right now. In other words, there is no point in using smart technology if you aren’t being smart with it too. 

Data resilience: Why CEOs and CFOs need to understand the CIO agenda


The CIO agenda changes each year, albeit not drastically. In 2015 part of their focus was on hybrid cloud spending, but this year research and analyst firm Gartner says that it’s now about how they enhance the power of digital platforms. The company’s ‘Building the Digital Platform: Insights From the 2016 Gartner CIO Agenda Report’ says: “As digitalisation moves from an innovative trend to a core competency, enterprises need to understand and exploit platform effects throughout all aspects of their businesses.”

If organisations fail to understand this agenda their companies could face a number of issues that could inevitably hold them back. For example they won’t be able to deliver, attract and retain talent; they won’t be perceived by their customers as being value-added; they might lose the opportunity to either develop new products and services or to sell them. They could potentially lose any opportunity they might otherwise have had to create a competitive advantage too.

In summarising the findings, the report adds: “The 2016 CIO Agenda Survey data shows that digitalisation is intensifying. In the next five years, CIOs expect digital revenues to grow from 16% to 37%. Similarly, public-sector CIOs predict a rise from 42% to 77% in digital processes.”

CEOs now require their CIOs to be a first amongst equals – and CIOs should consider data resilience, rapid backup and restore to maintain business continuity

Gartner also claims the “deepening of digital means that lines are becoming increasingly blurred, and boundaries semiporous — both inside and outside the enterprise — as multiple networks of stakeholders bring value to each other by exploiting and exploring platform dynamics.” To achieve this, CEOs now require their CIOs to be a first amongst equals, and to succeed, Gartner advises CIOs to “re-think and to re-tool their approach to all layers of their business’s platforms, not just the technical one.” But what about data? Without data these platforms are as good as redundant. CIOs should therefore consider data resilience, rapid back-up and restore to maintain business continuity.

Ransomware

“Forbes’ top 10 concerns for 2016 are costs, IoT, agility, time to market and the perennial problem of increasing data volumes and where to store it”, says David Trossell, CEO of Bridgeworks. He explains that this issue brings the cloud into play, along with a whole set of new risks such as ransomware. For example, a report by The Register on 23rd March 2016, ‘Your Money Or Your Life! Another Hospital Goes Down To Ransomware’, reveals that the records of the Methodist Hospital in Kentucky, USA, were scrambled by ransomware in an attempt to extort money from the organisation.

Fortunately it called in the FBI and refused to pay, but the Hollywood Presbyterian Medical Center also found itself infected by ransomware recently and paid the malware operators $11,900 to get its documents back. Similar attacks have been reported by hospitals worldwide. “Hacking, traditional malware and viruses still pose a significant threat, but those behind them are constantly changing their tactics to develop new ways to damage data records, or to steal or prevent access to data,” he adds.

Be the disruptor

“In a recent article, ‘The Three Ds of Payment’, Peter Diamandis offers a thought-provoking line: ‘if you don’t disrupt your business, someone else will’,” says Trossell. He therefore argues that data security should be top of the CIO agenda, and that disruptive companies have the potential to offer new solutions to old and new problems. They also have the potential to change markets, although, as other high-profile examples show, they won’t often be welcomed initially by the existing players. Just take Uber: it shows that technology is a key part of the equation – including the web and the cloud – and it is changing the way people order a taxi, to the extent that traditional black cab drivers hate it.

Technology is at the heart of challenging traditional ways of operating. Trossell therefore argues that it has to be “an integral part of the revenue generation process as much as sales and marketing is – with equal budgets.” Technology isn’t just about the platforms. It’s also about how companies can protect their data, and collate and manipulate it for the betterment of their customers and themselves. CIOs are now in a position to play a leading role in achieving this goal – not alone, but in collaboration with chief financial officers (CFOs), chief executive officers (CEOs) and chief marketing officers (CMOs).

Drive change

“CIOs now have to drive change through the organisation, and so technology has to be part of the future strategy: Look how Amazon changed the way we buy, and how Netflix altered how we watch movies, how payment systems are driving cash out of our pockets and into our phones”, says Trossell. CIOs therefore need to ensure that they gain recognition by using technology to demonstrably drive and create value that can be added to the bottom line, enabling the organisation to expand. By demonstrating value they will gain the support of chief financial officers (CFOs) and other senior executives. If they are unable to demonstrate value their ability to innovate will be adversely affected.

CIOs now have to drive change through the organisation, and so technology has to be part of the future strategy

So why is data resilience important? Trossell explains: “In the end the traditional values and responsibilities are still there, and all that data is becoming more valuable, so like any asset it has to be protected. What happens if Uber dies – or loses data? The platform fails.” Without a functioning platform, revenue can’t be generated for companies such as these, and the value that’s intrinsically locked into the data will be rendered worthless. This problem should concern everyone within the C-suite – and individuals need to realise that the buck stops with them.

Collective responsibility

With the growth of digitalisation, the ransomware incidents at the two hospitals show why the whole C-suite should be concerned with protecting data. With fault-tolerant hardware it’s quite easy to become complacent, rather than recognising that the risks are ever changing. Networks, for example, remain vulnerable, while air gaps and distance introduce latency and packet loss. The CIO agenda should therefore consider how data is going to be safely transmitted and received, at speed, by mitigating the effects of latency. A service continuity, business continuity and recovery plan is therefore essential.

The C-suite as a whole should also prioritise data security and data recovery to ensure that data can be retrieved quickly whenever a human-made or natural disaster threatens the organisation’s ability to continue operating. Failing to understand the current CIO agenda, and what else should be on it, can lead to lost time and lost revenue. It should also be borne in mind that Twitter offers a platform for those who wish to complain about an organisation, and it can lead to the rapid dissemination of unfavourable information that causes reputational damage.

So, at the end of the day, data resilience is about protecting your organisation’s ability to access and exploit data so that it can prosper. This is why CFOs, CMOs and CEOs should work with CIOs to ensure that they have technologies such as WANrockIT and PORTrockIT in place. It’s better to disrupt those who would like to disrupt you, and it’s better to act now to prevent natural disasters from costing your business its income than to spend potentially far more money trying to fix the problem after the event. At no time has this been more important than today, because of increasing digitalisation.

After the flood: Why IT service continuity is your best insurance policy

(c)iStock.com/monkeybusinessimages

The severe floods that hit the north of England and parts of Scotland in December 2015 and January 2016 devastated both homes and businesses, and led to questions about whether the UK is sufficiently prepared to cope with such calamities.

On December 28, the Guardian newspaper went so far as to say that the failure to ensure that flood defences could withstand the unprecedentedly high water levels would cost at least £5bn. Lack of investment was cited as the cause of the flooding.

Even companies such as Vodafone were reported to have been affected. The IT press said that the floods had hit the company’s data centre. A spokesperson at Vodafone, for example, told Computer Business Review on January 4: “One of our key sites in the Kirkstall Road area of Leeds was affected by severe flooding over the Christmas weekend, which meant that Vodafone customers in the North East experienced intermittent issues with voice and data services, and we had an issue with power at one particular building in Leeds.”

Many reports said that the flooding restricted access to the building, which was needed in order to install generators after the back-up batteries had run down. Once access became possible engineers were able to deploy the generators and other disaster recovery equipment. However, a recent email from Jane Frapwell, corporate communications manager at Vodafone, claimed: “The effects on Vodafone of flooding were misreported recently because we had an isolated problem in Leeds, but this was a mobile exchange not a data centre and there were no problems with any of our data centres.”

While Vodafone claims that its data centres weren’t hit by the flooding, and that the media had misreported the incident, it is a fact that data centres around the world can be severely hit by flooding and other natural disasters. Floods are both disruptive and costly. Hurricane Sandy is a case in point.

Hurricane Sandy

In October 2012 Data Center Knowledge reported that at least two data centres located in New York were damaged by flooding. Rich Miller’s article, ‘Massive Flooding Damages Several NYC Data Centres’, said: “Flooding from Hurricane Sandy has hobbled two data centre buildings in Lower Manhattan, taking out diesel fuel pumps used to refuel generators, and a third building at 121 Varick is also reported to be without power…” Outages were also reported by many data centre tenants at a major data hub at 111 8th Avenue.

At this juncture it’s worth noting that a survey by Zenium Technology has found that half of the world’s data centres have been disrupted by natural disasters, and 45% of UK companies have – according to Computer Business Review’s article of June 17 – experienced downtime due to natural causes.

Claire Buchanan, chief commercial officer at Bridgeworks, points out that organisations should invest in at least two to three disaster recovery sites, but, as with most insurance policies, they often look only at the policy’s price rather than at the total cost of not being insured. This complacency can lead to a disaster, costing organisations their livelihood, their customers and their hard-fought reputations. “So I don’t care whether it’s Chennai, Texas or Leeds. Most companies make do with what they have or know, and they aren’t looking out of the box at technologies that can help them to do this”, says Buchanan.

Investment needed

Buchanan suggests that rather than accepting that the flood gates will open, depriving their data centres of the ability to operate, organisations should invest in IT service continuity.

The problem is that, traditionally, most data centres are placed within the same circle of disruption. This could lead to all of an organisation’s data centres being put out of service at once. The main reason they are placed in such close proximity to each other is the limitation of most of the technologies available on the market: placing data centres and disaster recovery sites at a distance brings latency issues. Buchanan explains: “Governed by an enterprise’s recovery time objective (RTO), there has been a requirement for organisations to place their DR centre within fairly close proximity due to the inability to move data fast enough over distance.”

She adds: “Until recently, there hasn’t been the technology available that can address the effect of latency when transferring data over distance. The compromise has been: how far away can the DR centre be without too much of a compromise on performance?” With the right technology in place to mitigate the effects of latency it should, however, be possible to situate an organisation’s disaster recovery site as far away as required – for example in green data centres in countries such as Iceland or those in Scandinavia – ensuring that no two data centres sit within the same circle of disruption.

Green data centres have many points in their favour, most notably cost, as power and land are comparatively inexpensive. The drawback has always been the distance from European hubs and the ability to move data over that distance with the available bandwidth. With 10Gb bandwidth starting to become the new normal, coupled with the ability to move data unhindered at link speed, there is no reason why enterprises cannot now take this option.
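
Some back-of-the-envelope arithmetic illustrates why. The figures below are illustrative assumptions – a roughly 2,500 km fibre route from London to Reykjavik and a 10 Gb/s link held at 90% utilisation – rather than numbers from Bridgeworks, but they show that the distance penalty is modest once data can move at close to link speed.

```python
# Rough figures only: fibre propagation is roughly 5 microseconds per km
# (light travels at about 200,000 km/s in glass), and the link is assumed idle.

def fibre_rtt_ms(route_km: float) -> float:
    """Round-trip propagation delay over a fibre route of the given length."""
    return 2 * route_km * 0.005  # 0.005 ms per km, each way

def transfer_hours(data_gb: float, link_gbps: float, utilisation: float = 0.9) -> float:
    """Time to move data_gb gigabytes over a link held at the given utilisation."""
    return (data_gb * 8) / (link_gbps * utilisation) / 3600

print(f"London-Reykjavik (~2,500 km of fibre): {fibre_rtt_ms(2500):.0f} ms round trip")
print(f"10 TB backup over 10 Gb/s at 90% utilisation: {transfer_hours(10_000, 10):.1f} hours")
```

The catch is that an unaided TCP stream rarely achieves anything like 90% utilisation once the round-trip time grows, which is why latency mitigation matters.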

Traditional approach

Clive Longbottom, client services director at analyst firm Quocirca, explains the traditional approach. “The main way of providing business continuity has been through hot replication,” he explains.  “Therefore, you need a full mirror of the whole platform in another data centre, along with active mirroring of data.  This is both costly and difficult to achieve.”

But as most companies already have the network infrastructure in place, they should be looking for solutions that won’t cost the earth. For this reason, organisations should look outside the box and consider smaller, more innovative companies to find solutions to the problems they face – solutions that can mitigate latency over the organisation’s existing infrastructure, making it unnecessary to buy new kit in order to have a dramatic impact.

“With products like WANrockIT and PORTrockIT you don’t need dark fibre or low-latency networks because the technology provides the same level of performance whether the latency is 5, 50 or 150ms”, says David Trossell, CEO of Bridgeworks. He claims that the biggest cost is the network infrastructure, but “you can reduce the costs considerably with these solutions, and it widens the scope for choosing different network providers as well.”

“CVS Healthcare, for example, wanted electronic transfer DR between two of their sites, but the latency killed performance and so they still had to use a man in a van to meet their required recovery time objective (RTO)”, explains Trossell. He adds: “They had electronic transfer to improve the RTO, but this was still too slow, and yet with WANrockIT in the middle we got the RTO down to the same or better, and we reduced the RPO [recovery point objective] from 72 hours down to 4 hours.” Before this, CVS Healthcare was doubling up on its costs by using both the man in a van and electronic transfer.

Plan for continuity

While companies need to plan for business continuity first and foremost, they also need a good disaster recovery plan. Buchanan and Trossell have found that many organisations lack adequate planning: they don’t see a need for it until disaster strikes, because, like everyone else, organisations quite often don’t think it will happen to them. For example, what would happen if the Thames Barrier failed to prevent Canary Wharf from being flooded? It is, after all, located on a flood plain, and there are many disaster recovery sites in its vicinity.

Longbottom raises a key challenge. “If flooding such as we have just seen happens – what was meant to be a once-in-a-hundred-years event – then planning for that puts the costs to the data centre owner out of reach,” he says. “Having water levels two or more metres above normal means that attempting to stop the ingress of water becomes exceedingly difficult, and pumping it out just as hard.”

He therefore advises organisations to have two plans: one for disaster recovery and one for business continuity. It’s also important to remember that IT service continuity is multi-tiered, and these two considerations are a part of it.

To ensure that they work effectively as well as efficiently together, there is a need to understand the business-related risk profile. He says this will also help organisations to define how much the business is willing to spend on continuity, and it will allow for some forethought into the types of risks that will affect the business. Disaster recovery sites may need to be located in different countries to ensure that investment in IT service continuity is the best insurance policy.

Disaster recovery: How to reduce the business risk at distance

(c)iStock.com/natasaadzic

Geographic distance is a necessity because disaster recovery data centres have to be placed outside the circle of disruption.

The meaning of this term depends on the type of disaster. This could be a natural phenomenon such as an earthquake, a volcanic eruption, a flood or a fire, but calamities are caused by human error too, so the definition of the circle of disruption varies. In the past, data centres were on average kept 30 miles apart, as this was the accepted wisdom at the time. Today, the circle’s radius can be 100 miles or more, and in many people’s view a radius of 20 or 30 miles is too close for comfort for auditors, putting business continuity at risk.

With natural disasters and global warming in mind, David Trossell, CEO of self-configuring infrastructure optimised networks (SCION) vendor Bridgeworks, ponders what an adequate distance between data centres is to ensure that business goes on, regardless of what happens in the vicinity of one of an organisation’s data centres:

“Many CIOs are faced with the dilemma of how to balance the need to have two data centres located within the same metro area to ensure synchronisation for failover capability, when in their hearts they know that both sites will probably be within the circle of disruption,” Trossell explains. He adds that, to ensure their survival, they should be thinking about the minimum distance from the edge of the circle for a tertiary DR site.

“After all, Hurricane Sandy ripped through 24 US states, covering hundreds of miles of the East Coast of the USA, and caused approximately $75bn worth of damage. Earthquakes are a major issue throughout much of the world too – so much so that DR data centres need to be located on different tectonic plates”, he explains.

A lack of technology and resources is often the reason why data centres are placed close to each other within a circle of disruption. “There are, for example, green data centres in Scandinavia and Iceland which are extremely energy efficient, but people are put off because they don’t think there is technology available to transfer data fast enough – and yet these data centres are massively competitive”, says Claire Buchanan, chief commercial officer at Bridgeworks.

Customer risk matrix

Michael Winterson, EMEA managing director at Equinix, says that as a data centre provider, his company provides a physical location to its customers. “When we are talking with any one of our clients, we usually respond to their pre-defined risk matrix, and so they’ll ask that we need to be a minimum of ‘x’ and a maximum of ‘y’ kilometres away by fibre or line of sight”, he explains. His company then vets the ‘red flags’ to identify the data centres that fall within or outside the criteria set by each customer. Whenever Equinix goes outside those criteria, research is undertaken to justify why a particular data centre site will be adequate.

“We will operate within a circle of disruption that has been identified by a client, but a lot of our enterprise account clients opt for Cardiff because they are happy to operate at distances in excess of 100 miles between data centres”, says Winterson. Referring back to Fukushima and Hurricane Sandy, he claims that all of Equinix’s New York and Tokyo data centres were able to provide 100% uptime, but some customers still experienced considerable difficulties with transportation and with access to networks and to their primary data centres.

“If you operate your IT system in a standard office block that runs on potentially three or four hours of generator power, then within that time you’re in the dark. So we saw a large number of customers who tried to physically relocate to our data centres to operate their equipment directly over the Wi-Fi network there, but quite often they had difficulty moving because of public transportation issues, as the areas were blocked”, explains Winterson. Customers responded by moving control of the systems housed in Equinix’s data centres to another remote office, so that they could access them remotely.

Responding to disasters

Since Fukushima, his company has responded by building data centres in Osaka, because Tokyo presents a risk to business continuity at the information technology and network layers: it is not only an earthquake zone, but power outages in Japan’s national grid could affect both the east and west coasts. In this case the idea is to get outside the circle of disruption. Equinix’s New Jersey-based ‘New York’ and Washington DC data centres, however, are “unfortunately” located within circles of disruption – close to their epicentres – because people elect to put their co-location facilities there.

“In the City of London, for instance, for active-active solutions our London data centres are in Slough, and they are adequately placed within 65 kilometres of each other by fibre optic cable; it is generally considered that you can run an active-active solution across that distance with the right equipment”, he says. In Europe, customers are taking a two-city approach, looking at the four hubs of telecommunications and technology – London, Frankfurt, Amsterdam and Paris – because they are roughly 20 milliseconds apart from each other over an Ethernet connection.

Internet limitations

With regard to the time and latency created by distance, Clive Longbottom, client services director at analyst firm Quocirca, says: “The speed of light means that every circumnavigation of the planet creates latency of 133 milliseconds. However, the internet does not work at the speed of light, and so there are bandwidth issues that cause jitter and collisions.”

He then explains that active operations carried out on packets of data in transit will increase the latency within a system, and says that it’s impossible to state “exactly what level of latency any data centre will encounter in all circumstances as there are far too many variables to deal with.”

Longbottom also thinks that live mirroring is now possible over hundreds of kilometres, so long as the latency is controlled by using packet shaping and other wide area network acceleration approaches. Longer distances, he says, may require a store-and-forward multi-link approach, which will need active boxes between the source and target data centres to “ensure that what is received is what was sent”.
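
Longbottom’s 133-millisecond figure is simply the circumference of the Earth divided by the speed of light in a vacuum; in fibre, where light travels at roughly two-thirds of that speed, the same trip takes nearer 200 milliseconds. The short sketch below reproduces that arithmetic for a few assumed route lengths (the routes themselves are illustrative, not ones cited by Quocirca).

```python
# Propagation delay only -- queuing, switching and retransmissions all add more.
SPEED_VACUUM_KM_S = 299_792   # speed of light in a vacuum
SPEED_FIBRE_KM_S = 200_000    # roughly two-thirds of c in optical fibre

def one_way_latency_ms(route_km: float, speed_km_s: float) -> float:
    """One-way propagation delay in milliseconds for a route of route_km."""
    return route_km / speed_km_s * 1000

for name, km in [("Earth circumnavigation", 40_075),
                 ("London-New York (~5,600 km route)", 5_600),
                 ("London-Slough metro link (~65 km)", 65)]:
    print(f"{name}: {one_way_latency_ms(km, SPEED_FIBRE_KM_S):.1f} ms in fibre, "
          f"{one_way_latency_ms(km, SPEED_VACUUM_KM_S):.1f} ms at c")
```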

Jittering networks

Trossell explains that jitter refers to packets of data arriving at slightly irregular intervals. The issue is caused, he says, by data passing through different switches and connections, which can cause performance problems in the same way that packet loss does. “Packet loss occurs when the line is overloaded – this is more commonly known as congestion – and this causes considerable performance drop-offs which don’t necessarily reduce if the data centres are positioned closer together.”

“The solution is to have the ability to mitigate latency, and to handle jitter and packet loss”, says Buchanan, who advises that this needs to be done intelligently, smartly and without human intervention to minimise the associated costs and risks. “This gives IT executives the freedom of choice as to where they place their data centres – protecting their businesses and the new currency of data”, she adds.

Mitigating latency

A SCION solution such as WANrockIT offers a way to mitigate the latency issues created when data centres are placed outside a circle of disruption and at a distance from each other. “From a CIO’s perspective, by using machine intelligence the software learns and makes the right decision in a microsecond according to the state of the network and the flow of the data, no matter whether it’s day or night”, Buchanan explains. She also claims that a properly architected SCION can remove the perception of distance as an inhibitor to DR planning.
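
Bridgeworks has not published the details of that decision-making logic, but the general idea of a self-tuning transfer engine can be illustrated with a deliberately simplified feedback loop: measure recent throughput, then add or remove parallel connections until the link is close to full. The class below is a toy sketch of that concept only, not Bridgeworks’ algorithm.

```python
# Toy illustration of a self-tuning sender: adjust the number of parallel
# streams based on measured throughput versus link capacity. This is a
# conceptual sketch, not how WANrockIT or PORTrockIT actually work.

class AdaptiveSender:
    def __init__(self, link_gbps: float, max_streams: int = 256):
        self.link_gbps = link_gbps
        self.max_streams = max_streams
        self.streams = 1

    def adjust(self, measured_gbps: float) -> int:
        """Grow the stream count while the link is under-used, back off near saturation."""
        utilisation = measured_gbps / self.link_gbps
        if utilisation < 0.85 and self.streams < self.max_streams:
            self.streams = min(self.streams * 2, self.max_streams)  # ramp up quickly
        elif utilisation > 0.95 and self.streams > 1:
            self.streams -= 1  # ease off to avoid self-inflicted congestion
        return self.streams

sender = AdaptiveSender(link_gbps=10)
for sample in [0.4, 1.5, 4.0, 8.0, 9.7]:   # throughput samples in Gb/s
    print(sample, "Gb/s ->", sender.adjust(sample), "streams")
```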

“At this stage, be cautious. However, it does have its place, and making sure that there is a solid plan B behind SCION’s plan A means that SCIONs can take away a lot of uncertainty in existing, more manual approaches”, suggests Longbottom.

One company that has explored the benefits of a SCION solution is CVS Healthcare. “The main thrust was that CVS could not move their data fast enough, so instead of being able to do a 430 GB back-up, they could just manage 50 GB in 12 hours because their data centres were 2,800 miles apart – creating latency of 86 milliseconds. This put their business at risk, due to the distance involved”, explains Buchanan.

Their interim solution was to send data offsite to Iron Mountain, but CVS wasn’t happy with this as it didn’t meet their recovery requirements. Using their existing 600Mb pipe with WANrockIT at each end of the network, CVS was able to reduce the 50 GB back-up from 12 hours to just 45 minutes, irrespective of the data type. Had this been a 10 Gb pipe, the whole process would have taken just 27 seconds. This magnitude of change in performance enabled the company to do full 430 GB back-ups on a nightly basis in just 4 hours. The issues associated with distance and latency were therefore mitigated.
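
The improvement can be sanity-checked with simple arithmetic: the effective throughput implied by the quoted volumes and durations. The calculation below uses only the figures given above; the 16x gain falls straight out of the 12-hour-to-45-minute reduction.

```python
# Effective throughput implied by the CVS figures quoted above.

def effective_mbps(data_gb: float, duration_s: float) -> float:
    """Average throughput in megabits per second for a completed transfer."""
    return data_gb * 8 * 1000 / duration_s

before = effective_mbps(50, 12 * 3600)   # 50 GB in 12 hours
after = effective_mbps(50, 45 * 60)      # 50 GB in 45 minutes
print(f"Before: {before:.0f} Mb/s; after: {after:.0f} Mb/s "
      f"({after / before:.0f}x improvement on the same 600 Mb/s pipe)")
```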

The technology used within SCION, namely machine intelligence, will have its doubters as does anything new. However, in our world of increasingly available large bandwidth, enormous data volumes and the need for velocity, it’s time to consider what technology can do to help businesses underpin a DR data centre strategy that is based upon the recommendations and best practice guidelines that we have learnt since disasters like Hurricane Sandy.

Despite all mankind’s achievements, Hurricane Sandy taught us many lessons about the extensive destructive and disruptive power of nature. Having wrought devastation across 24 states, it dramatically challenged the traditional perception of a typical circle of disruption when planning for DR. Metro-connected sites for failover continuity have to stay, due to the requirement for low-delta synchronicity, but this is not a sufficient or suitable practice for DR. Sandy has taught us that DR sites must now be located hundreds of miles away if we are to survive.

Accelerating cloud storage and reducing the effects of latency

(c)iStock.com/ktsimage

Over a number of years there has been a long, hard-fought battle to secure the ability to ‘accelerate anywhere’ – to move any data type to, from and across a cloud area network (ClAN) to allow fast access to applications, or to secure data as part of a back-up and archiving strategy. According to Claire Buchanan, chief commercial officer at self-configuring infrastructure optimised networks (SCION) vendor Bridgeworks, this battle is still ongoing: with traditional WAN optimisation techniques, the long drawn-out battle has still to be won.

“It may not be the case for long, with the advent of machine intelligence and technologies such as SCION. The problem has been that of small pipes and the inability to accelerate data. Therefore, the use of deduping and compression tools has been the only way to gain a perceived performance improvement”, she explains.

With this in mind Tony Lock, programme director at analyst firm Freeform Dynamics, advises people to closely scrutinise the available WAN acceleration solutions against their business level requirements for WAN network performance. “They need to match them to what is currently being delivered combined with assessing what improvements can be achieved”, he adds.

Yet larger network connections or ‘pipes’ of 100Mb/s, 1Gb/s and greater are becoming the norm. Buchanan therefore thinks that the main challenge has changed to one of how to fill the pipes in order to maximise the utilisation of the network, rather than minimising the amount of data sent. “With SCION this can be achieved and the network performance battle can be won”, she claims. With SCION, she argues, the traditional problems relating to WANs are flipped on their head, because the technology works “sympathetically with TCP/IP in tandem with its strengths whilst overcoming its greatest inhibitor – latency.”

Mitigating latency

Mitigating latency is a crucial challenge because latency can slow down the transmission of data to and from public, private and hybrid clouds, and it can make back-up and disaster recovery more challenging than it needs to be. Buchanan argues that this can be resolved by radically reducing the effects of latency: performance and utilisation can climb beyond 90%, allowing data to move close to the maximum capability of an organisation’s bandwidth. This in turn makes it easier for customers to move data to the cloud, and gives them the ability to spin servers up and down at will, where it is deemed appropriate.

Lock adds: “Everything depends on the nature of the applications and business services being run over the communications links, and so the tip is to ensure that you really understand what is needed to warrant that the business gets the service quality it needs from the use of the application.” He says it is also essential to make sure that IT understands how to manage and administer it over its working life, which could be many years. “The key is to put in place good management tools and processes – especially if these are new operations to IT”, he suggests.

Data deduplication

In many cases, machine learning technologies such as SCIONs will limit the need for human intervention and enable more efficient management of network performance. Yet Buchanan says deduplication has traditionally been an “acceptable way to move data where pipe size is a limitation and chatty protocols are the order of the day, but it is heavy on computational power, memory and storage.” She therefore advises organisations to ask the following questions:

  • What is the hidden cost of WAN optimisation, and what is the cost of the kit to support it? As a technology starts to peak at, for example, 1Gb/s, you have to look at the return on investment. With deduplication you have to look at the point where the technology tops out: performance flattens off and the cost-benefit ratio weakens. Sometimes it’s better to take a larger pipe with different technology to get better performance and ROI.
  • Are the traditional WAN optimisation vendors really offering your organisation what it needs? Vendors outside the WAN optimisation space are increasingly using deduplication and compression as part of their own offerings, and as it’s not possible to dedupe data that has already been deduped, traditional WAN optimisation tools simply pass that data through untouched, delivering no performance improvement.
  • What will traditional WAN optimisation tools become in the new world of larger pipes? Lock adds that “data deduplication is now reasonably mature, but IT has to be comfortable that it trusts the technology and that the organisation is comfortable with the data sets on which it is to be applied.” He also says that some industries may require sign-off by auditors and regulators on the use of deduplication for certain data sets.

Fast restoring

Organisations that want to fast-restore encrypted data from off-site facilities need to consider the network delays caused by latency. “This has coloured IT executives’ thinking with regard to the location of their secondary and tertiary datacentres, and so they have sought to minimise time and perceived risk by locating their datacentres within the circle of disruption”, says Buchanan.

She adds that distance is normally a reflection of latency as measured in milliseconds, although this isn’t always the case, depending on the network. The laws of physics don’t allow latency to be eliminated, but it can be mitigated with SCION technologies. She argues that SCIONs can enable organisations to move encrypted data just as fast as anything else, because the technology doesn’t touch the data and is therefore data agnostic.

Lock advises that many factors have to be considered, such as the location of the back-up data and the resources available (network, processors, storage platforms and so on) to perform the restoration of the encrypted data. “However, the long-term management of the encryption keys will certainly be the most important factor, and it’s one that can’t be overlooked if the organisation needs large-scale data encryption”, he explains.

With regard to SCION, he says that traditional WAN networks have been static: “They were put in place to deliver certain capacities with latency, but all resilience and performance capabilities were designed up-front, and so the ideas behind SCION – looking at making networks more flexible and capable of resolving performance issues automatically by using whatever resources are available to the system, not just those furnished at the outset – is an interesting divergence.”

Differing approaches

According to Buchanan the traditional premise has been to reduce the amount of data to send. “In contrast SCION comes from the premise of acceleration, maximising the efficiency of the bandwidth to achieve its ultimate speed”, she explains.

In her opinion, the idea is that parallelising data across virtual connections fills the pipes, while machine intelligence self-configures, self-monitors and self-manages the data from ingress to egress, ensuring optimal performance, optimal utilisation and the fastest possible throughput.
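
The reasoning behind parallelising data is the bandwidth-delay product: the amount of data that must be in flight to keep a long, fat pipe full. The sketch below uses assumed figures – a 10 Gb/s link, an 86 ms round trip (the CVS latency quoted earlier) and a 64 KB window per stream – to show why a single conventional stream cannot come close on its own; none of these numbers come from Bridgeworks’ own design.

```python
import math

def bdp_bytes(link_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return link_gbps * 1e9 / 8 * (rtt_ms / 1000)

def streams_needed(link_gbps: float, rtt_ms: float, window_bytes: int = 64 * 1024) -> int:
    """Parallel streams needed if each keeps one window of data in flight."""
    return math.ceil(bdp_bytes(link_gbps, rtt_ms) / window_bytes)

print(f"Data in flight to fill the pipe: {bdp_bytes(10, 86) / 1e6:.0f} MB")
print(f"Parallel 64 KB-window streams required: {streams_needed(10, 86)}")
```

Whether that in-flight data comes from many streams, larger windows or both is an implementation choice; the point is that filling the pipe, rather than shrinking the payload, is what recovers the wasted bandwidth.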

Cloud: Nothing special

Both Lock and Buchanan agree that there is nothing special about the cloud. In Buchanan’s view it’s just one more choice that’s available to CIOs within their global strategy. “From a data movement perspective the fact remains that whatever strategy is chosen with regards to public, private or hybrid cloud, the underlying and fundamental problem remains – that being how to get your data to and from whichever location you have chosen without impediment”, she explains.

She adds that IT is under pressure to deliver a myriad of initiatives, whether cloud, big data, IoT or digital transformation: “Couple that with the data deluge that we are experiencing – as shown by IDC’s prediction that there will be 40 ZB of data by 2020 – and there is a high mountain to climb.” For this reason she argues that organisations need to find smart ways to do things. This is crucial if organisations are going to deliver better and more efficient services over the years to come. It’s time for new approaches to old problems.

Become smarter

Most of the innovation is coming from SMEs rather than large corporate enterprises. “Small companies are doing really clever things that flip old and established problems on their heads, and this kind of innovation only really comes from SMEs that are focused on specific issues – and, as we all saw in 2008 with Lehman Brothers, long gone are the days when being big meant you were safe”, she argues.

She therefore concludes that CFOs and CIOs should look at SCION solutions such as WANrockIT from several angles, such as cost optimisation by doing more with their existing pipes. Connectivity expansion should only occur if it’s absolutely necessary. With machine intelligence it’s possible to reduce staffing costs too, because SCIONs require no manual intervention. SCION technology can enable organisations to locate their datacentres, for co-location or cloud, anywhere – without being hindered by the negative effects of network latency.

In fact, a recent test by Bridgeworks involving 4 x 10Gb connections showed data moving at 4.4GB per second, equating to 264GB per minute or 15,840GB per hour. SCIONs therefore open up a number of opportunities for CFOs and CIOs to support: in essence, they will gain a better service at a lower cost. However, Lock concludes that CFOs should not investigate this kind of proposition alone. The involvement of IT is essential to ensure that business and service expectations are met from day one of the implementation of these technologies. By working together, CFOs and CIOs will be able to accelerate cloud storage by mitigating latency.
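
Those headline figures are internally consistent and, as a quick check shows, correspond to roughly 88% utilisation of the combined links (assuming the 4 x 10Gb connections are aggregated into a single 40 Gb/s capacity, which is our reading rather than a detail Bridgeworks has stated).

```python
# Sanity check on the quoted Bridgeworks test figures.
link_capacity_gbps = 4 * 10          # four 10 Gb/s connections, treated as 40 Gb/s
measured_gbytes_per_s = 4.4          # quoted transfer rate in gigabytes per second

measured_gbps = measured_gbytes_per_s * 8
print(f"Per minute: {measured_gbytes_per_s * 60:.0f} GB, per hour: {measured_gbytes_per_s * 3600:.0f} GB")
print(f"Utilisation: {measured_gbps / link_capacity_gbps:.0%} of the aggregate link")
```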