Software-Defined WANs (SD-WANs) are, along with artificial intelligence, the talk of the town, but they have their limitations for fast cloud back-up and restore. Before SD-WANs, organisations had to cope with conventional wide area networks – just plain old WANs – with all application traffic, bandwidth contention and quality of service (QoS) handling squeezed through one pipe, using multi-protocol label switching (MPLS) to connect each branch office to one or more clouds.
The advent of the SD-WAN was a step forward to a certain extent, allowing branch offices to be connected to wireless WANs, the internet, private MPLS, cloud services and an enterprise data centre using a number of connections. In essence, SD-WANs are great for mid-sized WAN bandwidth applications, with their ability to pull disparate WAN connections together under a single software-managed WAN. Yet they don't sufficiently resolve latency and packet loss issues, which means that any performance gains are usually down to inbuilt deduplication techniques.
SD-WANs and SDNs
Some people may also think of SD-WANs as the little brother of their better-known sibling: software-defined networking (SDN). Although the two are related, since both are software-defined, the difference is that SDN is typically deployed inside a data centre at a branch or an organisation's headquarters, and is perceived as an architecture rather than a product.
In contrast, SD-WANs are a technology you can buy to help manage a WAN. This is done using a software-defined approach that allows branch office network configurations to be automated, whereas in the past they were handled manually. That traditional approach required an organisation to have an on-site technician present. So if, for example, an organisation decided to roll out teleconferencing to its branch offices, the pre-defined network bandwidth allocations would have to be manually re-architected at each and every branch location.
SD-WANs allow all of this to be managed from a central location using a graphical user interface (GUI). They can also allow organisations to buy cheaper bandwidth while maintaining a high level of uptime. Yet much of the SD-WAN technology isn't new, and organisations have had the ability to manage WANs centrally in the past. SD-WANs are essentially an aggregation of technologies that creates the ability to dynamically share network bandwidth across several connection points; what's new is how they package those technologies together into a whole new solution.
However, buying cheaper bandwidth won't often solve the latency and packet loss issues. Nor will WAN optimisation sufficiently mitigate the effects of latency and packet loss, or improve an organisation's ability to back up data to one or more clouds. So how can this be addressed? The answer is that a new approach is required: by adding a WAN data acceleration overlay, it becomes possible to tackle the inherent WAN performance issues head on. WAN data acceleration can also handle encrypted data, and it allows data to be moved at speed over distance across a WAN.
This is because WAN data acceleration takes a totally different approach to addressing latency and packet loss. The only hard limit is the speed of light, which is simply not fast enough – and it is what governs latency. With traditional technologies, latency decimates WAN performance over distance. This will inevitably affect SD-WANs, and adding more bandwidth won't change the impact that latency has on WAN performance.
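The physics here is easy to sanity-check. The sketch below estimates the minimum round-trip time the speed of light imposes on a fibre link of roughly the length in the case study later in this piece (~2,000 miles); the fibre refractive-index factor is a standard rule-of-thumb assumption, not a measured value.

```python
# Back-of-the-envelope: the lowest round-trip time (RTT) physics allows
# over a fibre link, before any routing or queuing overhead is added.
# Assumptions: light travels at roughly 2/3 of c inside glass fibre.

SPEED_OF_LIGHT_KM_S = 299_792          # speed of light in a vacuum
FIBRE_FACTOR = 0.67                    # rule of thumb for glass fibre
MILES_TO_KM = 1.609344

def min_rtt_ms(distance_miles: float) -> float:
    """Minimum possible round-trip time over fibre of this length, in ms."""
    one_way_km = distance_miles * MILES_TO_KM
    speed_km_s = SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR
    return 2 * one_way_km / speed_km_s * 1000

print(f"{min_rtt_ms(2000):.1f} ms")    # ~32 ms over 2,000 miles
```

Real-world paths are never straight lines, so an observed RTT such as the 86ms quoted later is entirely plausible – and no amount of extra bandwidth can reduce it.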
By using TCP/IP parallelisation techniques and artificial intelligence to control the flow of data across the WAN, it’s possible to mitigate the effects of latency and packet loss – typically customers see a 95% WAN utilisation rate. The other upside of not using compression or dedupe techniques is that WAN data acceleration will accelerate any and all data in identical ways. There is no discrimination about what the data is.
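The intuition behind parallelisation can be illustrated with the well-known Mathis model for TCP throughput, which bounds a single stream at roughly (MSS/RTT) × (C/√loss). This is a generic sketch of that model – not Bridgeworks' actual algorithm – using the latency and loss figures from the case study below and standard assumed values for MSS and the constant C.

```python
# Illustrative only: why one TCP stream stalls on a lossy, high-latency
# WAN, and why running many streams in parallel recovers the bandwidth.
# RTT (86 ms) and loss (1%) are from the article; MSS=1460 bytes and
# C=1.22 are conventional assumptions in the Mathis throughput model.
import math

def mathis_throughput_mbps(mss_bytes=1460, rtt_s=0.086,
                           loss=0.01, c=1.22) -> float:
    """Approximate upper bound on a single TCP stream's rate, in Mb/s."""
    bytes_per_s = (mss_bytes / rtt_s) * (c / math.sqrt(loss))
    return bytes_per_s * 8 / 1_000_000

single = mathis_throughput_mbps()
print(f"single stream: {single:.2f} Mb/s")            # well under 2 Mb/s
print(f"streams to fill 600 Mb/s: {math.ceil(600 / single)}")
```

Under these conditions one stream manages well under 2Mb/s, so hundreds of parallel streams are needed to saturate a 600Mb/s pipe – which is why intelligent, automated flow control matters rather than manual tuning.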
This permits it to reach Storage Area Networks (SANs), and by decoupling the data from the protocol, customers have been able to transfer data between SAN devices across thousands of miles. One such Bridgeworks customer, CVS Caremark, connected two virtual tape libraries over 2,860 miles at full WAN bandwidth – a 95-fold gain over the unaccelerated performance. So, imagine the gains that could be achieved by overlaying SD-WANs with WAN data acceleration solutions such as PORTrockIT and WANrockIT.
Making a difference
These WAN performance gains could make the difference to cloud or data centre to data centre backup times, while also improving recovery time objectives (RTOs) and recovery point objectives (RPOs). So, rather than having to cope with disaster recovery, organisations could use SD-WANs with WAN data acceleration overlays to focus on service continuity. They would also be wise to back up their data to more than one location, including to more than one cloud.
Furthermore, the voluminous amounts of data that keep growing daily can make backing up data to a cloud, or simply to a data centre, a very slow process. Restoring the data could also take too long whenever a disaster occurs, whether it be caused by human error or by a natural disaster. Another tip is to ensure that more than one disaster recovery site is used to back up and restore the data. These DR sites should be located outside each other's circles of disruption, increasing the chances of maintaining uptime when, for example, a flood affects one of them. You might also like to keep certain types of sensitive data elsewhere by creating an air gap.
Cloud backups and security
Whenever the cloud is involved in backing up and storing data – or any network connectivity for that matter – there should also be some consideration about how to keep the data safe from hackers. Cloud security has improved over the years, but it's not infallible: even the largest corporations are fighting to prevent data breaches on a daily basis, and some, including Facebook, have been hacked.
Not only can this lead to lost data, but it can also create unhappy customers and lead to huge fines – particularly since the European Union's General Data Protection Regulation (GDPR) came into force in May 2018. The other consequence of data breaches is lost reputation. So, it's crucial not just to think about how to back up data to the cloud, but also to work on making sure its security is tight.
That aside, you may also wish to move data from one cloud to another, because latency and packet loss don't only affect an organisation's ability to back up and restore data from one or several clouds. They can also make it harder for people to simultaneously share data and to collaborate on certain types of data-heavy projects, such as those that use video data. Yet CVS Healthcare has found that WAN data acceleration can mitigate latency and packet loss while increasing its ability to back up, restore, transmit, receive and share data at a higher level of performance.
Case Study: CVS Healthcare
By accelerating data with the help of machine learning, it becomes possible to increase the efficiency and performance of the data centre and to back up data to more than one cloud, improving outcomes for clients in turn. CVS Healthcare is but one organisation that has seen the benefits of WAN data acceleration. The company's issues were as follows:
• Back-up RPO and RTO
• 86ms latency over the network (>2,000 miles)
• 1% packet loss
• 430GB daily backup never completed across the WAN
• 50GB incremental taking 12 hours to complete
• Outside RTO SLA – unacceptable commercial risk
• OC12 pipe (600Mb per second)
• Excess Iron Mountain costs
To address these challenges, CVS turned to a data acceleration solution, the installation of which took only 15 minutes. As a result, it reduced the original 50GB back-up from 12 hours to 45 minutes. That equates to a 94% reduction in backup time. This enabled the organisation to complete daily back-ups of its data, equating to 430GB, in less than 4 hours per day. So, in the face of a calamity, it could perform disaster recovery in less than 5 hours to recover everything completely.
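The figures above can be sanity-checked with simple arithmetic. This sketch uses only the numbers quoted in the article (430GB daily, 50GB incremental, the OC12 pipe at 600Mb/s) and ignores real-world protocol overheads.

```python
# Rough arithmetic behind the case-study figures. All inputs are from
# the article; real-world overheads are ignored. GB is taken as
# decimal (1 GB = 8,000 megabits).

OC12_MBPS = 600                      # pipe capacity quoted in the article

def transfer_hours(gigabytes: float, rate_mbps: float) -> float:
    """Idealised time to move a payload over a link, in hours."""
    megabits = gigabytes * 8 * 1000
    return megabits / rate_mbps / 3600

# Even at full line rate, 430GB needs only ~1.6 hours -- so a daily
# backup that "never completed" implies the link ran far below capacity.
print(f"{transfer_hours(430, OC12_MBPS):.1f} h at full OC12 rate")

# The quoted improvement: 50GB incremental from 12 hours to 45 minutes.
reduction = 1 - (45 / 60) / 12
print(f"{reduction:.0%} reduction in backup time")   # ~94%, as quoted
```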
Amongst other things, the annual cost-savings created by using data acceleration amounted to $350,000. Interestingly, CVS Healthcare is now looking to merge with Aetna, and so it will most probably need to roll this solution out across both merging entities.
Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and backup performance.
Eight tips for cloud back-ups
To improve the data acceleration of SD-WANs, as well as the ability to perform a cloud backup and restore large amounts of data fast, consider the following eight best practice tips:

1. Remember the acronym PPPPP: Proper Planning Prevents Poor Performance. Apply it to any network upgrade, whether LAN or WAN.

2. Begin by defining the fall-back plan for cloud back-up, and at what stage(s) it should be invoked. Just pushing on in the hope that you will have fixed all the issues before hand-over is unwise, and no one will thank you for it.

3. Know when to invoke the fall-back plan: you can learn the lesson for next time while keeping your users and operations working as your primary focus. This may involve having more than one cloud to back up data to, and some types of sensitive data may require you to create an air gap to ensure data security remains very tight.

4. Remember that SD-WANs have great potential to manage workflow across the WAN. You can still overlay data acceleration solutions, such as WANrockIT and PORTrockIT, to mitigate the effects of latency for faster cloud back-up and restore.

5. Consider whether you can implement the fall-back plan in stages, rather than as a big-bang implementation. If it's possible, can you run both in parallel? A staged approach gives you time to learn what works and what doesn't, allowing your SD-WAN and data acceleration overlays to be refined for more efficient cloud back-up and restore.

6. Work with users and your operations team to define the data groups and hierarchy, and get their sign-off for the plan. Different types of data may require different approaches, or a combination of potential solutions, to achieve data acceleration.

7. Create a test programme to verify reliability and functionality as part of the implementation programme.

8. Monitor and feed back: is it performing as you expected? This has to be a constant process, rather than a one-off.
SD-WANs are a popular tool; they can gain marginal performance improvements by using WAN optimisation, but this does not address the underlying causes of poor WAN performance: latency and packet loss. To properly address the latency that grows with distance, organisations should consider opting for an SD-WAN with a data acceleration overlay for cloud back-ups.
To achieve business and service continuity, they should also back up their data to more than one cloud. This may require an organisation to engage with more than one cloud service provider, with each located in a different circle of disruption. Then, when one fails for whatever reason, back-ups from the other disaster recovery sites and clouds can be restored to maintain business operations.
Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.