All posts by davidtrossell

How to back up to the cloud with a WAN data acceleration layer

Software-Defined WANs (SD-WANs) are, along with artificial intelligence, the talk of the town, but they have their limitations for fast cloud back-up and restore. Before SD-WANs, organisations had to cope with conventional wide area networks – just plain old WANs – in which all applications, with their bandwidth congestion and heavy quality of service (QoS) demands, went through one pipe, typically using multi-protocol label switching (MPLS) to connect each branch office to one or more clouds.

The advent of the SD-WAN was a step forward to a certain extent, allowing branch offices to be connected to wireless WANs, the internet, private MPLS, cloud services and an enterprise data centre using a number of connections. In essence, SD-WANs are great for mid-sized WAN bandwidth applications, with their ability to pull disparate WAN connections together under a single software-managed WAN. Yet they don't sufficiently resolve latency and packet loss issues, which means that any performance gains are usually down to inbuilt deduplication techniques.

SD-WANs and SDNs

Some people may also think of SD-WANs as the little brother of their better-known sibling, software-defined networking (SDN). Although the two are related, since both are software-defined, SDN is typically used inside the data centre, at a branch or at an organisation's headquarters, and it is perceived as an architecture rather than a product.

In contrast, SD-WANs are a technology you can buy to help manage a WAN. This is done by using a software-defined approach that allows branch office network configurations to be automated, whereas in the past they were handled manually. That traditional approach required an organisation to have a technician on site, so if, for example, an organisation decided to roll out teleconferencing to its branch offices, the pre-defined network bandwidth allocations would have to be manually re-architected at each and every branch location.

SD-WANs allow all of this to be managed from a central location using a graphical user interface (GUI). They can also allow organisations to buy cheaper bandwidth while maintaining a high level of uptime. Yet much of the SD-WAN technology isn't new, and organisations have had the ability to manage WANs centrally in the past. So SD-WANs are essentially an aggregation of technologies that makes it possible to dynamically share network bandwidth across several connection points; what's new is how they package those technologies together into a single solution.

Bandwidth conundrum

However, buying cheaper bandwidth often won't solve the latency and packet loss issues. Nor will WAN optimisation sufficiently mitigate the effects of latency and packet loss, and it won't improve an organisation's ability to back up data to one or more clouds. So how can this be addressed? The answer is that a new approach is required: by adding a WAN data acceleration overlay, it becomes possible to tackle the inherent WAN performance issues head on. WAN data acceleration can also handle encrypted data, and it allows data to be moved at speed and at distance over a WAN.

This is because WAN data acceleration takes a totally different approach to the way it addresses latency and packet loss. The only limitation is the speed of light, which is simply not fast enough, yet it governs latency. With traditional technologies, latency decimates WAN performance over distance. This will inevitably affect SD-WANs too, and adding more bandwidth won't change the impact that latency has on WAN performance.
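As a rough illustration of why distance alone puts a floor under latency, the back-of-envelope sketch below works out the best-case round trip imposed by the speed of light in fibre. The 4,600km distance is an assumption chosen to roughly match the 2,860-mile link described later in this article; real measured round trips – such as the 86ms quoted in the case study below – come out higher once routing, queuing and equipment delays are added.

    # Best-case round-trip time dictated by the speed of light in fibre
    # (~200,000 km/s). The 4,600 km distance is an illustrative assumption.
    distance_km = 4_600
    speed_in_fibre_km_per_s = 200_000
    rtt_ms = 2 * distance_km / speed_in_fibre_km_per_s * 1000
    print(f"Best-case RTT: {rtt_ms:.0f} ms")   # ~46 ms before any routing or queuing delay

No amount of extra bandwidth changes that figure; only moving the endpoints closer together, or changing how the protocol behaves across that delay, does.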

TCP/IP parallelisation

By using TCP/IP parallelisation techniques and artificial intelligence to control the flow of data across the WAN, it's possible to mitigate the effects of latency and packet loss – customers typically see a 95% WAN utilisation rate. The other upside of not relying on compression or deduplication techniques is that WAN data acceleration treats any and all data identically; there is no discrimination about what the data is.
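As a minimal sketch of what TCP/IP parallelisation can look like in practice, the snippet below splits a payload across several concurrent TCP streams so that no single stream's congestion window caps aggregate throughput. The host name, ports and 4-byte chunk framing are assumptions made purely for illustration – they are not taken from any Bridgeworks product – and a matching receiver that reassembles chunks by index is assumed to be listening.

    import socket
    from concurrent.futures import ThreadPoolExecutor

    HOST = "receiver.example.com"              # hypothetical receiving appliance
    PORTS = [9001, 9002, 9003, 9004]           # one TCP stream per port

    def send_chunk(job):
        index, chunk, port = job
        with socket.create_connection((HOST, port)) as s:
            # Prefix each chunk with a 4-byte index so the receiver can reorder.
            s.sendall(index.to_bytes(4, "big") + chunk)

    def parallel_send(payload: bytes, streams: int = 4, chunk_size: int = 1 << 20):
        chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
        jobs = [(i, c, PORTS[i % streams]) for i, c in enumerate(chunks)]
        # Several connections in flight at once: one lossy, high-RTT stream
        # no longer limits the aggregate transfer rate.
        with ThreadPoolExecutor(max_workers=streams) as pool:
            list(pool.map(send_chunk, jobs))

    # parallel_send(open("nightly_backup.img", "rb").read())

Commercial implementations go much further – adapting the number of streams and their pacing on the fly – but the underlying idea is the same: keep the pipe full in spite of the round-trip delay.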

This approach also extends to Storage Area Networks (SANs): by decoupling the data from the protocol, customers have been able to transfer data between SAN devices across thousands of miles. One Bridgeworks customer, CVS Caremark, connected two virtual tape libraries over 2,860 miles at full WAN bandwidth, achieving a performance gain of 95 times the unaccelerated rate. So imagine the gains that could be achieved by overlaying SD-WANs with WAN data acceleration solutions such as PORTrockIT and WANrockIT.

Making a difference

These WAN performance gains could make a real difference to cloud and data centre-to-data centre backup times, while also improving recovery time objectives (RTOs) and recovery point objectives (RPOs). So, rather than having to cope with disaster recovery, organisations could use SD-WANs with WAN data acceleration overlays to focus on service continuity. They would also be wise to back up their data to more than one location, including to more than one cloud.

Furthermore, the voluminous amounts of data that keep growing daily can make backing up data to a cloud, or simply to a data centre, a very slow process. Restoring the data could also take too long whenever a disaster occurs, whether it is caused by human error or by a natural disaster. Another tip is to ensure that more than one disaster recovery site is used to back up and restore the data. These DR sites should be located outside each other's circles of disruption to increase the likelihood of maintaining uptime when, for example, a flood affects one of them. You might also like to keep certain types of sensitive data elsewhere by creating an air gap.

Cloud backups and security

Whenever the cloud is involved in backing up and storing data – or any network connectivity for that matter – there should also be some consideration of how to keep the data safe from hackers. Cloud security has improved over the years, but it's not infallible: even the largest corporations are fighting to prevent data breaches on a daily basis, and some, including Facebook, have been hacked.

Not only can this lead to lost data, but it can also create unhappy customers and lead to huge fines – particularly since the European Union's General Data Protection Regulation (GDPR) came into force in May 2018. The other consequence of data breaches is lost reputation. So it's crucial not just to think about how to back up data to the cloud, but also to work on making sure its security is tight.

That aside, you may also wish to move data from one cloud to another for other reasons, because latency and packet loss don't only affect an organisation's ability to back up and restore data from one or several clouds. They can also make it harder for people to simultaneously share data and to collaborate on certain types of data-heavy projects, such as those that use video. Yet CVS Healthcare has found that WAN data acceleration can mitigate latency and packet loss while increasing its ability to back up, restore, transmit, receive and share data at a higher level of performance.

Case Study: CVS Healthcare

By accelerating data with the help of machine learning, it becomes possible to increase the efficiency and performance of the data centre and to back up data to more than one cloud, thereby improving efficiency and performance for clients too. CVS Healthcare is but one organisation that has seen the benefits of WAN data acceleration. The company's issues were as follows:

• Back-up RPO and RTO

• 86ms latency over the network (>2,000 miles)

• 1% packet loss

• 430GB daily backup never completed across the WAN

• 50GB incremental taking 12 hours to complete

• Outside RTO SLA – unacceptable commercial risk

• OC12 pipe (600Mb per second)

• Excess Iron Mountain costs

To address these challenges, CVS turned to a data acceleration solution, the installation of which took only 15 minutes. As a result, it reduced the original 50GB back-up from 12 hours to 45 minutes. That equates to a 94% reduction in backup time. This enabled the organisation to complete daily back-ups of its data, equating to 430GB, in less than 4 hours per day. So, in the face of a calamity, it could perform disaster recovery in less than 5 hours to recover everything completely.
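To see why the 430GB backup never completed despite a 600Mbps pipe, the rough sketch below applies the Mathis et al. approximation for steady-state TCP throughput to the figures listed above. The MSS is a typical Ethernet assumption and the model ignores many real-world factors, but it shows how 86ms of latency and 1% packet loss pin a single conventional TCP stream to a tiny fraction of the available bandwidth – exactly the behaviour that parallelisation and smarter flow control are designed to overcome.

    from math import sqrt

    # Mathis et al. approximation: rate ≈ (MSS / RTT) * (C / sqrt(loss)), C ≈ 1.22
    mss_bytes = 1460        # typical Ethernet MSS (assumption)
    rtt_s = 0.086           # 86 ms, from the case study
    loss = 0.01             # 1% packet loss, from the case study

    rate_bps = (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss))
    print(f"Single TCP stream: ~{rate_bps / 1e6:.1f} Mbps")   # ~1.7 Mbps of a 600 Mbps pipe

    backup_bits = 430 * 8e9                                    # the 430GB daily backup
    print(f"Days to move it on one stream: {backup_bits / rate_bps / 86400:.0f}")   # ~24 days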

Amongst other things, the annual cost-savings created by using data acceleration amounted to $350,000. Interestingly, CVS Healthcare is now looking to merge with Aetna, and so it will most probably need to roll this solution out across both merging entities.

Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and backup performance.

Eight tips for cloud back-ups

To improve the data acceleration of SD-WANs, as well as the ability to perform a cloud backup and restore large amounts of data quickly, consider the following eight best practice tips:

  • Defer to the acronym PPPPP – Proper Planning Prevents Poor Performance – when undertaking network upgrades, whether they be LAN or WAN upgrades.

  • Begin by defining the fall-back plan for cloud back-up, and at what stage(s) this should be invoked. Just pushing on with the hope that you will have fixed all the issues before it is time to hand it over is crazy, and no one will thank you for it.

  • Know when to invoke the fall-back plan: keeping your users and operations working is the primary focus, and you can learn the lessons for next time. This may involve having more than one cloud to back up data to, and some types of sensitive data may require you to create an air gap to ensure data security remains very tight.

  • Remember that SD-WANs have great potential to manage workflow across the WAN. You can still overlay data acceleration solutions, such as WANrockIT and PORTrockIT, to mitigate the effects of latency for faster cloud back-up and restore.

  • Consider whether you can implement the fall-back plan in stages, rather than as a big bang implementation. If it's possible, can you run both in parallel? By implementing a fall-back plan in stages, you can take time to learn what works and what doesn't, allowing you to refine how SD-WAN and data acceleration overlays are combined to improve cloud back-up and restore efficiency.

  • Work with users and your operations team to define the data groups and hierarchy, and to get their sign-off for the plan. Different types of data may require different approaches, or a combination of potential solutions, to achieve data acceleration.

  • Create a test programme to ensure reliability and functionality as part of the implementation programme.

  • Monitor and feedback – is it performing as you expected? This has to be a constant process, rather than a one-off.

SD-WANs are a popular tool; they can achieve marginal performance gains by using WAN optimisation, but this does not address the underlying causes of poor WAN performance: latency and packet loss. To properly address the increasing latency caused by distance, organisations should consider opting for an SD-WAN with a data acceleration overlay for cloud back-ups.

To achieve business and service continuity, they should also back up their data to more than one cloud. This may require your organisation to engage with more than one cloud service provider, with each one located in a different circle of disruption. So, when one fails for whatever reason, back-ups from the other disaster recovery sites and clouds can be restored to maintain business operations.

Why 2018 will be a year of innovation and the ‘cloud on edge’

During much of 2017, it was possible to read many articles predicting the end of cloud computing in favour of edge computing. However, there is also the view that edge computing and cloud computing are extensions of one another; in other words, the two technological models are expected to work together. Cloud computing therefore has much life in it yet.

With the increasing use of artificial intelligence, machine learning, biometric security and sensors to enable everything from connected and autonomous vehicles to facial and iris recognition in smartphones such as Apple's 10th anniversary iPhone X, questions are also arising about whether Big Brother is taking a step too far into our private lives. Will the increasing use of body-worn video cameras, sensors and biometrics mean that our every daily movement is watched? That's a distinct possibility, and it will concern many people who like to guard their lives like Fort Knox.

Arguably, the use of biometrics on smartphones isn't new, though; some Android handsets have been using iris recognition for a while now. Yet, with the European Union's General Data Protection Regulation now less than five months away at the time of writing, the issue of privacy and how to protect personal data is on everyone's lips. However, for innovation to occur, there must sometimes be a trade-off: some of today's mobile technologies rely upon location-based services to indicate our whereabouts and to determine our proximity to points of interest, while machine learning is deployed to learn our habits to make life easier.

Looking ahead

So even Santa has been looking at whether innovation will reside in the cloud or at the edge in 2018. He thinks his sleigh might need an upgrade to provide autonomous driving. Nevertheless, he needs to be careful, because Rudolph and the other reindeer might not like being replaced by a self-driving sleigh. Yet, to analyse the data and the many opportunities that will arise from autonomous vehicles as time marches on, he thinks much of the data analysis should be conducted at the edge.

By conducting the analysis at the edge, it becomes possible to mitigate some of the effects of latency, and there will be occasions when connected and autonomous vehicles will need to function without any access to the internet or to cloud services. The other factor that is often considered, and why an increasing number of people are arguing that innovation will lie in edge computing, is the fact that the further away your datacentre is located, the more latency and packet loss traditionally tend to increase. Consequently, real-time data analysis becomes impossible to achieve.

Foggy times

However, the myriad of initiatives, such as edge computing, fog computing and cloud computing, that have emerged over the past few years to connect devices together have created much confusion. They are often hard to understand if you are somebody looking at the IT world from the outside. You could therefore say we live in foggy times because new terms are being bounced around that often relate to old technologies that have been given a new badge to enable future commercialisation.

I’ve nevertheless no doubt that autonomous vehicles, personalised location-aware advertising and personalised drugs – to name but a few innovations – are going to radically change the way organisations and individuals generate and collect data, the volumes of data we collect, and how we crunch this data. Without doubt too, they will have implications for data privacy. The perceived wisdom, when faced with vast new amounts of data to store and crunch, is to therefore run it from the cloud. Yet, that may not be the best solution. Therefore, organisations should consider all the possibilities out there in the market – and some of them may not emanate from the large vendors. That’s because smaller companies are often touted as the better innovators.

Autonomous cars

Autonomous cars, according to Hitachi, will create around 2 petabytes of data a day. Connected cars are also expected to create around 25 gigabytes of data per hour. Now consider that there are currently more than 800 million cars in the USA, China and Europe. So, if there were 1 billion cars in the near future, with about half of them fully connected and each used for an average of 3 hours per day, around 37,500,000,000 gigabytes of data would be created every day.
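A quick sanity check of that figure, using only the assumptions stated above:

    # All inputs are the article's assumptions, not measurements.
    connected_cars = 1_000_000_000 // 2     # half of a notional 1 billion cars
    gb_per_hour = 25
    hours_per_day = 3

    daily_gb = connected_cars * gb_per_hour * hours_per_day
    print(f"{daily_gb:,} GB per day")       # 37,500,000,000 GB – roughly 37.5 exabytes a day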

If, as expected, most new cars are autonomous by the mid-2020s, that number will look insignificant. Clearly, not all of that data can be shipped back to the cloud instantaneously without some level of data verification and reduction. There must be a compromise, and that is what edge computing can offer in support of technologies such as autonomous vehicles.

Storing the ever-increasing amount of data is going to be a challenge from a physical perspective; data size sometimes does matter, of course, and with it comes the financial question of cost per gigabyte. For example, while electric vehicles are being touted as the flavour of the future, power consumption is bound to increase. So too will the need to ensure that personal or device-created data doesn't fall foul of data protection legislation.

Data acceleration

Yet, as much of the data from connected and autonomous vehicles will need to be transmitted to a cloud service for deeper analysis, back-up, storage and data-sharing with an ecosystem of partners, from vehicle manufacturers to insurers, some of the data still needs to be able to flow to and from the vehicles. In this case, to mitigate the effects of network and data latency, there may be a need for data acceleration with solutions such as PORTrockIT.

Unlike edge computing, where data is analysed close to its source, data acceleration permits the back-up, storage and analysis of data at speed and at distance, using machine learning and parallelisation to mitigate packet loss and latency. By accelerating data in this way, it becomes possible to alleviate much of the pain that organisations feel. CVS Healthcare is but one organisation that has seen the benefits of taking such an innovative approach.

The company's issues were as follows: back-up RPO and RTO; 86ms latency over the network (>2,000 miles); 1% packet loss; a 430GB daily backup that never completed across the WAN; a 50GB incremental taking 12 hours to complete; being outside its RTO SLA – an unacceptable commercial risk; an OC12 pipe (600Mb per second); and excess Iron Mountain costs.

To address these challenges, CVS turned to a data acceleration solution, the installation of which took only 15 minutes. As a result, it reduced the original 50GB back-up from 12 hours to 45 minutes. That equates to a 94% reduction in back-up time. This enabled the organisation to complete daily back-ups of its data, equating to 430GB, in less than 4 hours per day. So, in the face of a calamity it could perform disaster recovery in less than 5 hours to recover everything completely.

Amongst other things, the annual cost-savings created by using data acceleration amounted to $350,000. Interestingly, CVS Healthcare is now looking to merge with Aetna, and so it will most probably need to roll this solution out across both merging entities.

Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and back-up performance.

Data value

Moving away from healthcare and autonomous vehicles and back to GDPR, the trouble is that there are too many organisations that collate, store and archive data without knowing its true value. Jim McGann, VP Marketing & Business Development at Index Engines, says most organisations find it hard to locate personal data on their systems or in paper records.

This issue makes it impossible to know whether the data can be kept, modified, deleted permanently or rectified – making it harder to comply with GDPR, and I would argue it also makes it harder to know whether the data can be used to legitimately drive innovation. So, instead of being able to budget for innovation, organisations in this situation may find that they need to spend a significant amount of money on fines rather than on developing themselves.

He explains: “Much of this is very sensitive and so many companies don’t like to talk on the record about this, but we do a lot of work with legal advisory firms to help organisations with their compliance.” Index Engines, for example, completed some work with a Fortune 500 electronics manufacturer which found that 40% of its data no longer contained any business value. So the company decided to purge it from its datacentre.

Limited edge

Organisations are therefore going to need infrastructure that provides a limited level of data computation and data sieving at the edge – perhaps in expanded base stations – with the results then shipped back to, or served from, the cloud. This may, for example, involve a hybrid cloud-edge infrastructure. Does this solve everything? Not quite. Some fundamental problems remain, such as the need to think about how to move vast amounts of data around the world – especially if it contains personal, encrypted data.

More to the point, for innovation to lie anywhere, it’s going to continue to be crucial to consider how to get data to users at the right time, and to plan now how to store the data well into the future.

Is edge computing set to blow away the cloud?

Just about every new piece of technology is considered disruptive, to the extent that it is expected to replace older technologies. Sometimes, as with the cloud, old technology is simply re-branded to make it more appealing to customers and thereby to create the illusion of a new market. Let's remember that cloud computing had previously existed in one shape or another: at one stage it was called on-demand computing, and then it became 'application service provision'.

Now there is edge computing, which some people are also calling fog computing and which some industry commentators feel is going to replace the cloud as an entity. Yet the question has to be: will it really? The same viewpoint was voiced when television was invented – its arrival was meant to be the death of radio. Yet people still tune into radio stations in their thousands each and every day.

Of course, there are some technologies that are really disruptive in that they change people’s habits and their way of thinking. Once people enjoyed listening to Sony Walkmans, but today most folk listen to their favourite tunes using smartphones – thanks to iPods and the launch of the first iPhone by Steve Jobs in 2007, which put the internet in our pockets and more besides.

Levine’s prophecy

So why do people think edge computing will blow away the cloud? This claim is made in many online articles. Clint Boulton, for example, writes about it in his Asia Cloud Forum article, ‘Edge Computing Will Blow Away The Cloud’, from March this year. He cites venture capitalist Peter Levine, a general partner at Andreessen Horowitz, who believes that more computational and data processing resources will move towards “edge devices” – such as driverless cars and drones – which make up at least part of the Internet of Things. Levine prophesies that this will mean the end of the cloud as data processing moves back towards the edge of the network.

In other words, the trend up to now has been to centralise computing within the data centre, whereas in the past it was often decentralised or localised nearer to the point of use. Levine sees the driverless car as a data centre in its own right: it has more than 200 CPUs working to enable it to operate without going off the road and causing an accident. The nature of autonomous vehicles means that their computing capabilities must be self-contained, and to ensure safety they minimise any reliance they might otherwise have on the cloud. Yet they don't dispense with it.

Complementary models

The two approaches may in fact end up complementing each other. Part of the argument for bringing data computation back to the edge comes down to increasing data volumes, which lead to ever more frustratingly slow networks. Latency is the culprit. Data is becoming ever larger, so there is going to be more data per transaction, more video and more sensor data, and virtual and augmented reality will play an increasing part in this growth too. With it, latency will become more challenging than it was previously. Furthermore, while it might make sense to put data close to a device such as an autonomous vehicle to eliminate latency, a remote way of storing data via the cloud remains critical.

The cloud can still be used to deliver certain services too, such as media and entertainment. It can also be used to back up data and to share data emanating from a vehicle for analysis by a number of disparate stakeholders. From a data centre perspective, and moving beyond autonomous vehicles to a general operational business scenario, creating a number of smaller data centres or disaster recovery sites may reduce economies of scale and make operations less efficient. Yes, latency might be mitigated, but the data may also be held within the same circles of disruption, with disastrous consequences when disaster strikes; so, for the sake of business continuity, some data may still have to be stored or processed elsewhere, away from the edge of a network.

In the case of autonomous vehicles, and because they must operate whether a network connection exists or not, it makes sense for certain types of computation and analysis to be completed by the vehicle itself. However, much of this data is still backed up via a cloud connection whenever one is available. So edge and cloud computing are likely to follow more of a hybrid approach than a standalone one.

Edge to cloud

Saju Skaria, senior director at consulting firm TCS, offers several examples of where edge computing could prove advantageous in his LinkedIn Pulse article, ‘Edge Computing Vs. Cloud Computing: Where Does the Future Lie?’. He certainly doesn’t think that the cloud is going to blow away.

“Edge computing does not replace cloud computing…in reality, an analytical model or rules might be created in a cloud then pushed out to edge devices… and some [of these] are capable of doing analysis.” He then goes on to talk about fog computing, which involves data processing from the edge to a cloud. He is suggesting that people shouldn’t forget data warehousing too, because it is used for “the massive storage of data and slow analytical queries.”

Eating the cloud

In spite of this argument, Gartner's Thomas Bittman seems convinced that the ‘Edge Will Eat The Cloud’. “Today, cloud computing is eating enterprise datacentres, as more and more workloads are born in the cloud, and some are transforming and moving to the cloud… but there’s another trend that will shift workloads, data, processing and business value significantly away from the cloud. The edge will eat the cloud… and this is perhaps as important as the cloud computing trend ever was.”

Later on in his blog, Bittman says: “The agility of cloud computing is great – but it simply isn’t enough. Massive centralisation, economies of scale, self-service and full automation get us most of the way there – but it doesn’t overcome physics – the weight of data, the speed of light. As people need to interact with their digitally-assisted realities in real-time, waiting on a data centre miles (or many miles) away isn’t going to work. Latency matters. I’m here right now and I’m gone in seconds. Put up the right advertising before I look away, point out the store that I’ve been looking for as I drive, let me know that a colleague is heading my way, help my self-driving car to avoid other cars through a busy intersection. And do it now.”

Data acceleration

He makes some valid points, but he falls into the argument that has often been used about latency: that data centres and users have to be close together. The truth, however, is that wide area networks will always be the foundation stone of both edge and cloud computing. Secondly, Bittman clearly hasn't come across data acceleration tools such as PORTrockIT and WANrockIT. While physics is certainly a limiting and challenging factor that will always be at play in networks of all kinds – including WANs – it is possible today to place your datacentres at a distance from each other without suffering the usual performance penalty imposed by data and network latency. Latency can be mitigated, and its impact can be significantly reduced, no matter where the data processing occurs and no matter where the data resides.

So let’s not see edge computing as a new solution. It is but one solution, and so is the cloud. Together the two technologies can support each other. One commentator says in response to a Quora question about the difference between edge computing and cloud computing that “edge computing is a method of accelerating and improving the performance of cloud computing for mobile users.” So the argument that edge will replace cloud computing is a very foggy one. Cloud computing may at one stage be re-named for marketing reasons – but it’s still here to stay.

How data acceleration will make the blockchain even more secure

Michael Salmony, executive adviser at Equens, spoke about ‘Making Tea on the Blockchain’ at the Financial Services Club in January 2017. He argued that blockchain is but one solution, and that other options for making smart and secure transactions should also be considered. He even goes so far as to ask on LinkedIn: ‘Blockchain – not for payments?’

Questions like this one are important because so many people have jumped onto the Bitcoin and blockchain bandwagon over the last few years with the view that it's the answer to all of their prayers. In some cases it might be; in others it might not.

In his blog he refers to Satoshi Nakamoto, the creator of Bitcoin, who described in one of his papers “how to make remote payments ‘without a trusted third party’ (like a bank) that are ‘computationally impractical to reverse’.” The problem is that much of the hype around Bitcoin has gone away as it has been subsumed by massive episodes of fraud and loss, and by internal political wrangles. The analyses of the European Banking Authority and other organisations have also laid bitcoin bare, to the point that its volatility became too much. Their scrutiny has exposed further opportunities for manipulation, and so, he claims, its flaws have become very apparent.

Applying caution

“However, the underlying distributed consensus algorithm now popularly called blockchain still is the subject of heated discussion and massive investments of time and money”, he says before adding: “Normally I am very much in favour of innovations, but in this case I lean towards…being cautious about blockchain for payments (although the discussion around this may be anything but short-lived).”

His caution is based on the following:

  • Blockchain is a solution looking for a problem. He argues that innovation should always put the customer first and not the technology. “People are madly trying to work out what this Blockchain solution could be used for”, he explains.
  • It isn’t new. He says that Blockchain is often praised for its novelty, but points out that distributed ledgers have been around for decades.
  • It isn’t good technology. He notes that some people say Blockchain is a much better system than what has previously existed, arguing that it may drastically improve cost, security, speed and user friendliness compared to existing systems.

He explains that this often isn’t the case: “Increasingly experts agree that there is little evidence for this – especially regarding the public blockchain that would be needed for global payments. Its major pitfall is that it does not scale well (transaction limits, latency, storage explosion), uses extraordinary amounts of resources (energy, processing power), has severe security concerns and is even surprisingly bad at privacy. Should we really base a critical infrastructure like payments on this?”

He therefore argues that it is becoming increasingly clear “that central entities are actually required in real world blockchain implementations.” This won't please the ideologists who favour a system without central points of trust – such as banks – and it leads to a lack of compliance and to the technology being favoured by darknet users looking to undertake a number of dubious activities. However, banks and financial services institutions have to live in the real world and ensure that they are compliant with Know Your Customer (KYC) and anti-money laundering (AML) legislation and regulations.

Subsequently, they need to provide an infrastructure they can all depend upon. This need for compliance may in turn lead to more technological leadership in the financial services community, with companies and individuals prioritising their own agendas and making business decisions about issues and technologies such as blockchain, APIs, NFC, quantum computing, identity, authentication, wearables and so forth. In essence, what matters is whether a certain technology works in terms of scalability, compliance, cost effectiveness and proven capabilities.

Changing the world

In contrast to Salmony's personal viewpoint, I think that blockchain technology will likely, maybe, or possibly change the world. It will revolutionise the financial systems overnight, taking away the need for the slow incumbent banks and their stuffy 300-year-old way of doing things. Anyone who has been in the IT industry for the last 20 years will know and recognise the hype cycle that tends to happen when a new technology emerges – just look at the internet.

Hang on, I hear you say. The internet has changed the world of information, social interaction, consumer purchasing and so on. In reality it has shrunk and transformed the world beyond all recognition, but those of us who are 'long in the tooth' will remember the hype that went into it in the early days. This involved a lot of money and speculative ventures with mind-boggling valuations that really didn't have a solid business idea or plan at their centre.

Some of these were just too far ahead of their time. Take, for example, what author Steve Blank highlighted in his book “The Four Steps to the Epiphany: Successful Strategies for Products That Win”. In it he wrote about the home grocery delivery company Webvan, which focused so much on its back-end functions that it neglected gaining customers – or perhaps the market just wasn't ready yet.

Technology déjà vu

So what has this all got to do with blockchain? Well, it has a bit of a déjà vu feeling, similar to that of the internet, where just maybe the hype doesn't quite match reality. Over the past 300 or so years, the world's banking infrastructure has grown up on a basis of trust, with traceability, governance, dispute resolution and so on. This suggests that, in reality, there is little need or appetite for dispensing with that trust model within the core business of banking and finance. In fact, how is one going to regulate such subjects as money laundering and local reporting requirements? The list of compliance issues is quite extensive.

However, the proponents of blockchain technology – or distributed ledger technology (DLT) as it is now known – talk of the speed of transactions compared to traditional banking processes. The norm for confirming transactions on the original bitcoin DLT is around 10 minutes, which is a minuscule amount of time compared to foreign currency movements through our traditional banking system, which can take up to five days. Well, in our heart of hearts, we all know that this is not the fault of the underlying technology in the banking system, but of percentage skimming by the banks.

When you consider the sheer amount of money passing through banks each day, holding it for a couple of days before processing it can create some considerable bonuses. Now consider the number of transactions each day on a DLT, which is increasing as performance increases. It will never reach the numbers required for, say, the Visa network, which is quoted as handling around 47,000 transactions per second, or Nasdaq's potential 1M tps. All of this brings its scalability into question. But does this mean that there are no uses for DLT? Far from it.
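For a sense of the gap, here is the commonly cited back-of-envelope comparison; the block size, transaction size and block interval are rough public figures rather than precise protocol constants.

    # Rough, commonly cited figures for the original bitcoin DLT.
    txs_per_block = 1_000_000 // 250        # ~1 MB blocks, ~250 bytes per transaction
    block_interval_s = 600                  # one block roughly every 10 minutes

    bitcoin_tps = txs_per_block / block_interval_s
    print(f"Bitcoin: ~{bitcoin_tps:.0f} tps vs the ~47,000 tps quoted for Visa")   # ~7 tps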

Let's go back and review that wonderful Gartner hype cycle, where they talk about the Innovation Trigger, the Peak of Inflated Expectations and that all-important Trough of Disillusionment. I would like to rename those phases: Throw VC Money At It (which people do in the belief that it will solve everyone's problems), the Peak of Oversold Over-Promises, the Trough of Reality and, finally, the Gentle Slope of Making It Work in the Real World.

Dot-com peak

Now, with these in mind, cast your minds back to the turn of the century, when the dot-com peak turned itself into real businesses. I think we are nearly there with distributed ledger technology, despite the many detractors who claim that this is a solution looking for a problem. To understand this, it's important to take a step back and gain a more pragmatic viewpoint: blockchain could well be at the heart of many of the world's trusted transactions because it has some highly valuable features. Much depends on whether it is used in the right manner.

DLT (blockchain) has some valuable practical uses in market segments outside the core banking world where traceability is vital, such as diamond trading (this could possibly bring an end to blood diamonds), fine art, medical records, and land transactions and deeds. In fact, the US grocery giant Walmart is about to start trials of a DLT for tracking foods from the point of origin all the way through distribution and inspection to the shelf in the store. Should anything raise suspicion about the safety of the items, they can be traced across every single store and the whole supply chain. With this, Walmart is looking to provide a greater level of food safety, not only reducing the cost of possible litigation but also minimising food wastage.

Security and consensus

Great emphasis has been put on the security of the blockchain, with its encryption and its distributed consensus model. For the technology to work on a global scale in a commercial world, its performance and flexibility have to improve dramatically. One of the concerns with the consensus model is the computing power required, which can lead to mining nodes being concentrated in one region – as happened to Bitcoin in China, where 80% of the mining nodes are based.
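The computational burden comes from the proof-of-work style race at the heart of such consensus models. The toy example below – which uses a trivially low difficulty and is in no way Bitcoin's actual algorithm or parameters – shows why the work grows so quickly: every extra leading zero demanded of the hash multiplies the expected number of attempts by 16.

    import hashlib
    import time

    def mine(data: bytes, difficulty: int) -> int:
        """Find a nonce whose SHA-256 hash of data+nonce starts with
        `difficulty` zero hex digits (a toy proof-of-work)."""
        nonce = 0
        target = "0" * difficulty
        while True:
            digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    start = time.time()
    nonce = mine(b"block of transactions", 5)       # ~a million attempts on average
    print(nonce, f"found in {time.time() - start:.1f}s")

Scale the difficulty up to what a real network demands, multiply by thousands of competing miners, and the energy and hardware concentration concerns described above follow naturally.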

As this technology spreads into other walks of life, not only could the size of the chains increase, but the number of blockchains could also multiply rapidly around the globe – and within a single organisation such as Walmart, which has a diverse supply infrastructure. At that point it is no longer just a computational burden but also a data transportation problem.

To maximise security and performance, the greater the number and diversity of nodes or miners the better, but this leads to the problem of how to transfer this encrypted data around the world in a performant and efficient way. In the past we have used compression to improve the performance of data transmission over large distances, but trying to compress encrypted data is very inefficient. What is needed instead is a technology such as PORTrockIT, to enable data to travel more securely and faster than before – making it harder for hackers to gain unauthorised access even to blockchain data. This is a need that customers want addressed, and it can now be met.
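The point about compressing encrypted data is easy to demonstrate: well-encrypted data is statistically indistinguishable from random bytes, so a compressor has nothing to work with. The snippet below stands in random bytes for ciphertext, which is a fair proxy for any modern cipher's output.

    import os
    import zlib

    text = b"the quick brown fox jumps over the lazy dog " * 25_000   # ~1.1 MB of prose-like data
    ciphertext_like = os.urandom(len(text))                           # stands in for encrypted data

    print(len(zlib.compress(text)) / len(text))                       # a small fraction – compresses well
    print(len(zlib.compress(ciphertext_like)) / len(ciphertext_like)) # ~1.0 – no gain at all

This is why approaches that depend on squeezing the payload stall on encrypted traffic, while techniques that work on the flow of packets rather than their contents do not.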

Building your data castle: Protecting from ransomware and restoring data after a breach

The data centre is the castle. You can pull up the drawbridge, fill up the moat, or pull down the portcullis. But at some point, you have to let data in and out, and this opens up the opportunity for ransomware attacks.

Circumventing and exposing an organisation's security is no longer just a matter of pride and peer recognition in the hacker community; with ransomware it has become a fully-fledged industry in its own right – one that cybersecurity company Herjavec Group estimated would top $1 billion in 2016. In the past, those under siege could flood the moat, pull up the drawbridge and drop the portcullis to protect themselves, but the lifeblood of the modern organisation is the movement of data in and out of the data centre.

The question now is not just how organisations can protect themselves from ransomware, but also what the best practices and policies are for recovery if an attack gets through. Data has to flow in and out, and that opens a route for security breaches – of which ransomware is the most profitable. So can it be prevented from ever occurring, and how? Prevention is, as always, better than cure, and the first line of defence has to involve firewalls, email virus scanners and other such devices. The problem is that the writers of malicious code are always one step ahead of the security companies that offer solutions to protect their customers, because the industry tends to be reactive to new threats rather than proactive.

With so many devices connecting to the corporate network, including bring-your-own devices (BYOD), there will always be an attack that gets through, especially as many end users are not savvy about how viruses and other scams can be attached to emails while masquerading as normal everyday files. A certain amount of end-user education will help, but one attack will always get through. So, to protect themselves, organisations need back-up plans and policies to deal with the situation when it does happen, because the drawbridge cannot stay up forever.

Is ransomware new?

So how long have ransomware attacks been around? Well, excluding the viruses written by governments for subversion, we have always had viruses that hackers write for fun, for notoriety, or to use infected machines as bots in a denial-of-service attack or as an email relay. With the coming of Bitcoin, payments can be received anonymously, and as the Herjavec Group's estimates show, ransomware can be very lucrative while also being very costly to the organisations that are attacked. This is why companies should be creating their very own data castles, and they should only drop their drawbridges whenever it is absolutely safe or necessary to do so. Due diligence at all times is otherwise crucial.

One of the key weapons against ransomware is the creation of air gaps between data and any back-ups. A solid back-up system is the Achilles heel of any ransomware, as has been proven many times over, such as in the case of Papworth Hospital. However, with the ever-increasing sophistication of ransomware and the use of online back-up devices, it won't be long before it turns its attention to those devices as well. It's therefore important to have back-up devices and media that have an air gap between themselves and the corporate storage network; this is going to be crucial in the future. When you think about it, there is a lot of money at stake on both sides if ransomware becomes back-up aware. So it's important to think and plan ahead, and it's perhaps a good idea to make back-ups less visible to any ransomware that might be programmed to attack them.

Disaster recovery

So what is the most effective way to recover from an attack? Any and every back-up strategy should be based around the recovery strategy for the organisation. Once the offending program and all of its copies have been removed, the key systems should obviously be recovered first, though this will depend on the range and depth of the attack. One thing that is easily overlooked in a recovery plan – and in recovery scenario tests – is the ability to reload the recovery software itself using standard operating system tools.

The key is to have a back-up plan. In the future that ransomware will, rather than blasting its way through the file systems, work silently in the background encrypting files over a period of time so that these files become a part of the back-up data sets. It is therefore important to maintain generations of data sets, not only locally but offsite in a secure location. Remember the old storage adage that your data is not secure until you have it in 3 places and in 3 copies.

I’d also recommend the following top 5 tips for protecting your organisation against ransomware:

  • Educate your end-users to make them more aware of the implications of ransomware and how it is distributed
  • Ensure that you deploy an up-to-date firewall and email scanners
  • Air gap your backups and archives from the corporate network
  • Maintain good generation controls for backups
  • Remember that backup is all about recovery; it’s better to prevent the need to recover by planning ahead for disasters such as a ransomware attack to maintain business continuity

These principles don’t change for enterprises that are based in the cloud. Whilst the cloud provides some resilience through the economies of scale that many could not afford in their own data centre, one should not assume that the data is any more secure in the cloud than in your own data centre.  Back-up policies for offsite back-ups and archive should still be implemented.

Inflight defence

But how can you prevent an attack while data is inflight? Whilst we have not seen this type of attack yet, it is always a strong recommendation that data inflight is encrypted preferably with your own keys before it hits your firewall. However, as many companies use WAN optimisation to improve their performance over WAN networks transporting encrypted files means little or no optimisation is possible. This can affect those all-important offsite DR, backup and archive transfers.  Products such as PORTrockIT can, however, enable organisations to protect their data while mitigating the effects of data and network latency. Solutions like this can enable you to build and maintain your data castle. 

Building your data castle: Protecting from ransomware and restoring data after a breach


The data centre is the castle. You can pull up the drawbridge, fill up the moat, or pull down the portcullis. But at some point, you have to let data in and out, and this opens up the opportunity for ransomware attacks.

Circumventing and exposing an organisation's security is no longer just a matter of pride and peer recognition in the hacker community; with ransomware it has become a fully-fledged industry in its own right, one that cybersecurity company Herjavec Group estimated would top $1 billion in 2016. In the past, those under siege would flood the moats, pull up the drawbridges and drop the portcullis to protect themselves, but for the modern data centre an organisation's lifeblood is the movement of data in and out.

The question now is not just how organisations can protect themselves from ransomware, but also what the best practices and policies are for recovery if an attack gets through. Data has to flow in and out, and that opens up a route for security breaches, of which ransomware is the most profitable. So can it be prevented from ever occurring, and how can that be achieved? Prevention is, as always, better than cure, and the first line of defence has to involve firewalls, email virus scanners and other such devices. The problem is that the writers of malicious code are always one step ahead of the data security companies that offer solutions to protect their customers, because the industry tends to be reactive to new threats rather than proactive.

With so many devices connecting to the corporate network, including bring your own device (BYOD) hardware, there will always be an attack that gets through, especially as many end users are not fully aware of how viruses and other scams can be attached to emails while masquerading as normal everyday files. A certain amount of end-user education will help, but eventually something will slip past. So, to protect themselves, organisations need back-up plans and policies to deal with the situation when it does happen, because the drawbridge can't stay up forever.

Is ransomware new?

So how long have ransomware attacks been around? Excluding the viruses written by governments for subversion, there have always been viruses that hackers write for fun, for notoriety, or to turn machines into robots for denial of service attacks or email relays. With the arrival of Bitcoin, payments can be received anonymously, and as the Herjavec Group's estimates show, ransomware can be very lucrative for the attackers while being very costly to the organisations under attack. This is why companies should be building their very own data castles, dropping their drawbridges only when it is absolutely safe or necessary to do so. Due diligence is crucial at all times.

One of the key weapons against ransomware is the creation of air gaps between data and any back-ups. A solid back-up system is the Achilles heel of any ransomware, and this has been proven many times over, such as in the case of Papworth Hospital. However, with the ever-increasing sophistication of ransomware and the use of online back-up devices, it won't be long before attackers turn their attention to those devices as well. It is therefore important to have back-up devices and media that keep an air gap between themselves and the corporate storage network; this is going to be crucial in the future. There is a lot of money at stake on both sides if ransomware becomes back-up aware, so it's important to think and plan ahead, and it's perhaps a good idea to make back-ups less visible to any ransomware that might be programmed to attack them.
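To make the air-gap idea concrete, here is a minimal sketch in Python, assuming a removable backup volume at a hypothetical mount point rather than any particular product's tooling: the copy only runs while the volume is attached, and the script checks that the volume is unmounted again afterwards so the back-up is unreachable from the network for the rest of the time.

```python
import os
import shutil
import subprocess
from datetime import datetime

# Hypothetical locations: adjust to your own environment.
BACKUP_MOUNT = "/mnt/airgap_backup"   # removable/offline backup volume
SOURCE_DIR = "/data/critical"         # data to protect


def is_mounted(path: str) -> bool:
    """Return True if 'path' is currently a mount point."""
    return os.path.ismount(path)


def run_airgapped_backup() -> None:
    if not is_mounted(BACKUP_MOUNT):
        raise RuntimeError("Backup volume is not attached; nothing copied.")

    # Copy into a dated generation directory so older generations survive.
    target = os.path.join(BACKUP_MOUNT, datetime.now().strftime("%Y-%m-%d_%H%M%S"))
    shutil.copytree(SOURCE_DIR, target)

    # Flush and unmount so the copy is offline again (requires privileges).
    subprocess.run(["sync"], check=True)
    subprocess.run(["umount", BACKUP_MOUNT], check=True)

    if is_mounted(BACKUP_MOUNT):
        raise RuntimeError("Backup volume is still mounted; air gap not restored.")


if __name__ == "__main__":
    run_airgapped_backup()
```

The point is simply that the back-up target spends most of its life disconnected; the same principle applies to tape, removable disk or any other offline media.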

Disaster recovery

So what is the most effective way to recover from an attack? Every back-up strategy should be built around the organisation's recovery strategy. Once the offending programs and all of their copies have been removed, the key systems should obviously be recovered first, although this will depend on the range and depth of the attack. One thing that is easily missed in a recovery plan, and often overlooked in recovery scenario tests, is the ability to reload the recovery software itself using standard operating system tools.

The key is to have a back-up plan. Future ransomware, rather than blasting its way through file systems, is likely to work silently in the background, encrypting files over a period of time so that the encrypted files become part of the back-up data sets. It is therefore important to maintain generations of data sets, not only locally but also offsite in a secure location. Remember the old storage adage: your data is not secure until you have it in three copies, in three places.
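As an illustration of maintaining generations, the sketch below (with illustrative retention counts and a hypothetical date scheme, not a recommendation from any particular vendor) applies a simple grandfather-father-son rule to a list of back-up dates, keeping recent dailies plus weekly and monthly generations so that silently encrypted files do not overwrite every good copy.

```python
from datetime import datetime, timedelta

DAILY_KEEP, WEEKLY_KEEP, MONTHLY_KEEP = 7, 4, 12  # illustrative retention counts


def select_generations(backup_dates):
    """Return the dates to retain under a simple grandfather-father-son
    scheme: recent dailies, Sunday weeklies, and first-of-month monthlies."""
    backup_dates = sorted(backup_dates, reverse=True)
    keep = set(backup_dates[:DAILY_KEEP])

    weeklies = [d for d in backup_dates if d.weekday() == 6]   # Sundays
    keep.update(weeklies[:WEEKLY_KEEP])

    monthlies = [d for d in backup_dates if d.day == 1]        # 1st of the month
    keep.update(monthlies[:MONTHLY_KEEP])
    return keep


if __name__ == "__main__":
    today = datetime(2017, 1, 31).date()
    history = [today - timedelta(days=i) for i in range(400)]
    retained = select_generations(history)
    print(f"{len(retained)} generations retained out of {len(history)}")
```

Anything not in the retained set would be pruned; the older monthly generations are the ones that save you if the most recent back-ups already contain encrypted files.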

I’d also recommend the following top 5 tips for protecting your organisation against ransomware:

  • Educate your end-users to make them more aware of the implications of ransomware and how it is distributed
  • Ensure that you deploy an up-to-date firewall and email scanners
  • Air gap your backups and archives from the corporate network
  • Maintain good generation controls for backups
  • Remember that backup is all about recovery; plan ahead for disasters such as a ransomware attack so that business continuity can be maintained

These principles don't change for enterprises that are based in the cloud. Whilst the cloud provides some resilience through economies of scale that many could not afford in their own data centre, one should not assume that data is any more secure in the cloud than in your own data centre. Policies for offsite back-ups and archives should still be implemented.

Inflight defence

But how can you prevent an attack while data is in flight? Although this type of attack has not yet been seen, it is always strongly recommended that in-flight data is encrypted, preferably with your own keys, before it passes your firewall. However, many companies use WAN optimisation to improve performance over the WAN, and transporting encrypted files means little or no optimisation is possible, which can affect those all-important offsite DR, backup and archive transfers. Products such as PORTrockIT can, however, enable organisations to protect their data while mitigating the effects of network latency. Solutions like this can help you build and maintain your data castle.
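As a sketch of encrypting data with your own keys before it leaves your network, the example below uses the widely available Python cryptography package's Fernet recipe; the file names are hypothetical and the key handling is deliberately simplified, since in practice the key would live in your own key management system.

```python
from cryptography.fernet import Fernet

# Simplified for illustration: a real deployment would load the key from
# your own key management system rather than generating it inline.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical archive name: the back-up set to be sent offsite.
with open("backup_archive.tar", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("backup_archive.tar.enc", "wb") as f:
    f.write(ciphertext)

# The encrypted file can now be replicated offsite; only holders of 'key'
# can recover the plaintext with cipher.decrypt(ciphertext).
```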

How to support video with mitigated latency


nScreenMedia claims that “data from Ericsson and FreeWheel paints a rosy picture for mobile video. Mobile data volume is set to increase sevenfold over the next six years, with video’s share increasing from 50% to 70%. The smartphone looks to be in the driver’s seat.”

To top this, Forbes reported in September 2015 that “Facebook users send on average 31.25 million messages and view 2.77 million videos every minute, and we are seeing a massive growth in video and photo data, where every minute up to 300 hours of video are uploaded to YouTube alone.”

Cisco also finds that “annual global IP traffic will pass the zettabyte ([ZB]; 1000 exabytes [EB]) threshold by the end of 2016, and will reach 2.3 ZB per year by 2020. By the end of 2016, global IP traffic will reach 1.1 ZB per year, or 88.7 EB per month, and by 2020 global IP traffic will reach 2.3 ZB per year, or 194 EB per month.” The firm also predicts that video traffic will grow fourfold from 2015 to 2020, a CAGR of 31 percent.

More recently, a blog post by FPV Blue claims to have tackled part of the latency problem that can dog marketers and consumers alike, stating that ‘glass to glass video latency is now under 50 milliseconds’.

Previously, the company announced that this video latency figure stood at 80 milliseconds. To reduce this latency, the firm needed to undertake a hardware revision.

Its blog post nevertheless questions the industry standard for measuring First-Person View latency (FPV latency).

Defining latency

FPV Blue defines latency as follows:

“Before measuring it, we better define it. Sure, latency is the time it takes for something to propagate in a system, and glass to glass latency is the time it takes for something to go from the glass of a camera to the glass of a display.

However, what is that something? If something is a random event, is it happening in all of the screen at the same time, or is restricted to a point in space?

If it is happening in all of the camera’s lenses at the same time, do we consider latency the time it takes for the event to propagate in all of the receiving screen, or just a portion of it? The difference between the two might seem small, but it is actually huge.”
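To make that distinction concrete, here is a minimal sketch with purely illustrative stage timings (they are not FPV Blue's figures): it tallies the delay until an event first appears anywhere on the receiving glass versus the delay until the whole screen has shown it, which is where the "point versus whole screen" difference appears.

```python
# Illustrative per-stage delays in milliseconds (hypothetical numbers).
STAGES_MS = {
    "sensor_exposure_and_readout": 12.0,
    "encode": 8.0,
    "transmit": 10.0,
    "decode": 6.0,
}
DISPLAY_SCANOUT_MS = 16.7  # time to refresh the full panel at roughly 60 Hz

pipeline_ms = sum(STAGES_MS.values())

# Latency until the event first appears anywhere on the display glass...
first_pixel_latency = pipeline_ms
# ...versus latency until the whole receiving screen has shown it.
full_frame_latency = pipeline_ms + DISPLAY_SCANOUT_MS

print(f"first pixel: {first_pixel_latency:.1f} ms")
print(f"full frame:  {full_frame_latency:.1f} ms")
```

The gap between the two figures is roughly one display refresh, which is why the choice of definition matters when vendors quote a single glass-to-glass number.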

Therefore, whether the video is being used for flying drones or for other purposes, people need to consider how they can accurately measure and mitigate the effects of video latency, because video traffic in general is increasing exponentially.

Cisco’s Visual Networking Index claims: “It would take more than 5 million years to watch the amount of video that will cross global IP networks each month in 2020. Every second, a million minutes of video content will cross the network by 2020.”

Cisco’s findings also reveal that video will account for 82% of all Internet Protocol (IP) traffic from both businesses and consumers.

Video: The TV star

Michael Litt, CEO and co-founder of Vidyard, also claims that the future of the internet is television because more and more people are using streaming services for entertainment, which means that the major broadcasters are also having to play catch-up.

At this juncture, it’s worth noting that BBC 3 has moved online to meet the demands of a younger, digital device-savvy audience.

Election coverage

Talking of Facebook, Mashable reports that its livestream coverage of the third US presidential debate had one big advantage over everyone else.

“Facebook was delivering its stream at a 13 second delay, on average, compared to radio”, writes Kerry Flynn. The network with the highest latency was Bloomberg, at an arduous 56 seconds.

She rightly adds that the disparity between the different networks should worry the traditional broadcast networks: “Watching the debate on Facebook meant that a viewer not only did not have a TV or pay for cable, they also had the fastest stream accompanied by real-time commentary and reactions.”

The surprise was that Facebook, according to the findings of Wowza Media Systems, managed to – pardon the pun – trump the satellite and cable networks for some viewers.

“Facebook’s livestream setup isn’t that different from what other companies use. Cable systems, however, tend to outsource livestreaming to content delivery networks (CDNs) that are easy to integrate and reliable — but also relatively slow”, writes Flynn.

With large volumes of streaming data, you have to bring the CDN closer to the viewers to improve the user experience, which leaves you with the problem of getting the content to the CDN in the first place.

When the CDN sits a long way from the centralised source, the latency will be considerably higher, which in turn reduces the data throughput to the CDN. And because this rich media is already compressed, traditional WAN optimisation techniques are ineffective.
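As a rough illustration of why distance hurts throughput, the sketch below computes the ceiling a single TCP stream hits when it is limited by its window size and the round-trip time (throughput is at most the window divided by the RTT); the window size and RTT values are illustrative.

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP stream: window size divided by round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000


WINDOW = 64 * 1024  # a common default receive window of 64 KB

for rtt in (5, 40, 120):  # local, regional and intercontinental RTTs in ms
    print(f"RTT {rtt:4d} ms -> at most {max_tcp_throughput_mbps(WINDOW, rtt):7.1f} Mbit/s")
```

With a 64 KB window the ceiling falls from roughly 105 Mbit/s at 5 ms to about 4 Mbit/s at 120 ms, regardless of how much bandwidth has been purchased, which is why getting content to a distant CDN is a latency problem rather than a bandwidth problem.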

The problem: Latency

With the increasing proliferation of video content, why should anyone be concerned about the volume of video that is being produced, or about latency?

High viewing figures can, after all, lead to higher advertising revenues for broadcasters.

From a competitive advantage perspective, increasing volumes of video data mean that there is more noise to cut through in order to get marketing messages across to one’s target audiences.

So there is more pressure on the internet and on content delivery services from increasing demand and higher-quality playout, although on the whole these capabilities have been addressed, even down to seamlessly stitched advertising services.

If latency impinges on livestream services, too, then the viewer is likely to choose the network with the fastest stream.

The key problem is that video and audio can be impeded by the effects of network latency. Slow networks can leave the reputations of customers – whose own ‘consumers’ use video for a variety of reasons – tarnished.

In a commercial situation, this could lead to lost business, whereas a fast network from any data centre will engender confidence. The transmission itself cannot simply be sped up, because signals already travel at a fixed speed; it is the effect of latency that has to be mitigated.

This applies to video in general. There are so many different applications for video, and all of them can be affected by bandwidth or latency, or both. How we produce, consume and store information has changed dramatically over the past few years as the YouTube and Facebook generation has grown up.

Supporting video

To support video, companies using video for broadcasting, advertising, video conferencing, marketing or other purposes need to avoid settling for traditional WAN optimisation.

Instead, they should employ more innovative solutions driven by machine intelligence, such as PORTrockIT, which accelerates data while reducing packet loss and mitigating the effects of latency.

Adexchanger offers some more food for thought about why this should concern marketers in particular, noting that “video on a landing page can increase conversion rates by 80%” and that “92% of mobile video consumers share videos with others”.

Marketers should therefore ask their IT departments to invest in solutions that enable them to deliver marketing messages without their conversations being interrupted by network latency.

Similarly, broadcasters should invest in systems that mitigate the impact that latency can have on their viewers to maintain their loyalty.

High viewing figures can, after all, lead to higher advertising revenues for the many broadcasters, social media networks and publishers who offer video content as part of their service.

They may also need to transfer and back up large and uncompressed video files around the world quickly – that’s a capability which WAN optimisation often fails to deliver, but it can be achieved with the right solution.

It is therefore important to review the alternative options that exist on the market.