All posts by davidrichards

Four nines and failure rates: Will the cloud ever cut it for transactional banking?

Banks looking to take their cloud plans to the next level are likely to have returned to the drawing board following the latest Amazon Web Services outage, which disrupted the online activities of major organizations from Apple to the US Securities and Exchange Commission. One estimate suggests US financial services companies alone lost $160 million – in just four hours. It’s been a timely reminder that any downtime is too much in an always-on digital economy, certainly for financial services.

The sobering point is that AWS was still delivering within the terms of its service-level agreement (SLA), which promises 99.99% service and data availability – otherwise known as "four nines" availability. That may be good enough for a lot of things, but it won't do for banking.

Over a year that 0.01% scope for unavailability equates to roughly 53 minutes of unplanned outage – and that's on top of any planned downtime for maintenance or updates. Combine the two and you can easily be looking at several hours of service loss across a 12-month period. It's hardly a recommendation for banks to move critical, live data into the cloud – however compelling the business drivers.

Banks need five nines (99.999%) service and data availability – the level they aim for on their own premises. That's a downtime tolerance of little more than five minutes per year. And public cloud services are not set up to match that. It would be uneconomical.
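The arithmetic behind those figures is straightforward: permitted downtime is simply one minus the availability fraction, multiplied by the length of the year. A minimal Python sketch, assuming a 365-day year:

# Approximate unplanned downtime allowed per year at each availability level,
# assuming a 365-day year (31,536,000 seconds).
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def allowed_downtime_minutes(availability_percent: float) -> float:
    """Unplanned downtime permitted per year, in minutes."""
    return (1 - availability_percent / 100) * SECONDS_PER_YEAR / 60

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {allowed_downtime_minutes(pct):.1f} minutes per year")

# three nines (99.9%): 525.6 minutes (~8.8 hours)
# four nines (99.99%): 52.6 minutes
# five nines (99.999%): 5.3 minutes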

The end doesn’t justify the means

Moving real-time transactions into the cloud is the final frontier for traditional regulated financial institutions. And there's no question that they need, and want, to do this. It's vastly more cost-efficient, and it's the only way they can hope to compete with nimble financial upstarts, whose agility owes everything to being able to crunch huge numbers at high speed using someone else's top-of-the-range server farms.

Financial authorities such as the UK's Financial Conduct Authority have already accepted the cloud, which on the face of it gives banks the green light to be more ambitious. In practice it doesn't, because the guidance issued does nothing to close the gap that matters: cloud service levels remain inadequate for anything beyond data archiving or disaster recovery.

In data archiving and backup applications, the cloud’s appeal hinges on its cost-efficiency, scalability and durability. But durability should not be confused with availability. Even if data is tightly safeguarded, and can be brought back online efficiently after a system crash or other crisis, this adds no value in a live-data scenario. If there is any chance that at some point access may be interrupted, the other merits of cloud don’t matter in this context.

And that’s why banks haven’t made the final leap to using cloud in a production environment – because these otherwise very viable on-demand data centres can’t offer them the very high availability assurances they need.

Lost market opportunity

So banks are stuck. The inability to move core systems and live data into the cloud is costing them competitively in lost market opportunity.

If they could make the leap, it would pave the way for advanced customer analytics, intelligent service automation, complex stock correlations, and predictive fraud detection: data-intensive applications that demand massive compute power – at a scale that their proprietary data centres simply can't deliver.

But AWS and other mainstream cloud infrastructure providers have designed their services and service level agreements to meet the needs of the majority: where the risk of interrupting a morning’s business, social feeds or even hedge fund activity, though costly, is at least partly offset by huge infrastructure savings.

Remaining open to new options

Banks absolutely need to be more ambitious and creative in their use of the cloud. Their future differentiation depends on having access to the same compute power, speed and flexible resources as their more nimble, less risk-averse competitors. But they are not going to make the transition until the service levels they rely on for core systems can be delivered.

Inadequate service levels are a significant stumbling block, but lessons will be learnt each time a high-profile cloud service is compromised. In the meantime, barriers to what banks need to do can be overcome. Solving the data availability issue comes down to the way data is synchronized between sites (e.g. primary servers and secondary data centres), so that live data is always available in more than one place at the same time. It sounds impossible, but it isn’t.
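The article doesn't spell out the mechanism, but one common way of keeping live data available in more than one place at the same time is to acknowledge a write only once a majority of sites has durably applied it. The sketch below is purely illustrative – the Site class and quorum_write function are hypothetical stand-ins, not any vendor's product or API:

# Illustrative quorum write: a change is committed only when a majority of
# sites (e.g. a primary server plus secondary data centres) acknowledge it,
# so the data stays readable even if one site goes down.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    store: dict = field(default_factory=dict)
    online: bool = True

    def apply(self, key: str, value: str) -> bool:
        if not self.online:
            return False
        self.store[key] = value
        return True

def quorum_write(sites, key, value) -> bool:
    """Commit the write only if a majority of sites acknowledge it."""
    acks = sum(site.apply(key, value) for site in sites)
    return acks > len(sites) // 2

sites = [Site("primary"), Site("dc-secondary-1"), Site("dc-secondary-2")]
sites[2].online = False                                   # one data centre is down
print(quorum_write(sites, "acct:1001", "balance=250"))    # True: 2 of 3 acknowledged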

Achieve this (and at WANdisco we have) and the nines will take care of themselves.

Is hybrid cloud in danger of becoming lost in translation?

(c)iStock.com/DundStock

In the last year hybrid cloud adoption has ramped up as both cloud users and cloud vendors have matured. Yet there is still confusion in the market about what it means to go truly hybrid, with many CIOs unable to agree on the true definition of hybrid cloud.

According to the National Institute of Standards and Technology (NIST), “[Hybrid] cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community or public) that remain unique entities, but that are bound together by standardised or proprietary technology that enables data and application portability (e.g. cloud bursting for load balancing between clouds).” This original definition seems to have been lost in vendor marketing jargon. Why? Because the ability to manage and move data and applications across cloud and non-cloud infrastructure environments is complicated.

If data is batch, static or archival, it is relatively easy to move that data between on-premises and the cloud. A much harder problem is how to move active data – data which is continually changing – across different storage environments. In a hybrid cloud model the remote application’s transactions must stay in sync with the on-premises application’s while avoiding any inconsistencies in transaction processing. As a result, many companies find they have some workloads running against data in the cloud and others that run against data on-premises, because they can’t guarantee complete consistency between the two.
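One way to picture the consistency requirement is a single ordered change log that both environments replay in the same sequence; if either side misses a change or applies one out of order, the two copies diverge. A minimal, purely illustrative sketch – the log format and apply_change helper are assumptions for illustration, not any product's interface:

# Both environments replay the same ordered change log; applying the identical
# sequence to each copy is what keeps on-premises and cloud data consistent.
change_log = [
    ("set", "customer:42", {"status": "active"}),
    ("set", "customer:42", {"status": "suspended"}),
    ("delete", "customer:17", None),
]

def apply_change(store, op, key, value):
    if op == "set":
        store[key] = value
    elif op == "delete":
        store.pop(key, None)

on_premises, cloud = {}, {}
for op, key, value in change_log:
    apply_change(on_premises, op, key, value)
    apply_change(cloud, op, key, value)

assert on_premises == cloud   # same ordered log, same end state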

Yet in today’s market there is an answer to this problem. Active Data Replication™ gives continuous, consistent connectivity to data as it changes, wherever that data is located. It ensures access to data anytime and anywhere, with no downtime and no disruption. This matters because those with active data simply cannot afford the downtime traditionally associated with moving their changing data to the cloud.

Yet they still need to be able to take advantage of the economies, elasticity and efficiencies a hybrid cloud infrastructure can offer: the ability to retain sensitive data behind the firewall while exploiting the lower cost and flexibility of the cloud; improved scalability and provisioning at reduced cost; the ability to resource short-term projects far more cheaply than by upgrading on-premises infrastructure; and the important advantage of being able to undertake burst-out processing on demand for real-time analytics, using the wide range of applications available in the cloud that would be impossible to deploy and maintain on-premises without additional hardware and staff.

The benefits of running a hybrid cloud infrastructure using Active Data Replication™ are tangible. For instance, with continuous access to the latest information across multiple geographies, a bank is able to detect credit card fraud effectively and undertake timely business and consumer loan risk analysis. Similarly, a utility company can improve its engineering operations and sell data-related products to its partners with continuous access to smart meter data. In the field of healthcare, where real-time access to data can be a matter of life and death, Active Data Replication™ can enable patients to be monitored remotely whether they are at home, in hospital or on the move. IDC estimates that by 2020 organisations able to analyse all relevant data and deliver actionable information will achieve an extra $430 billion in productivity benefits over their less analytically oriented peers.

Vendors need to stop hiding the fact that they can’t guarantee complete consistency between on-premises and the cloud. Their distortion of the hybrid cloud definition leads many companies to buy more cloud hardware and software expecting both efficiency and cost savings, only to end up with little added value.

Amazon, IBM, Microsoft and Google offer a solution which not only guarantees complete consistency between on-premises and the cloud but also enables their customers to avoid any vendor lock-in. For a successful hybrid cloud infrastructure, CIOs need to remember what was at the heart of the original definition – exactly the same data on-premises and in the cloud, with guaranteed consistency, no downtime and no disruption – something that is only possible with Active Data Replication™.

What the Brexit vote will mean for data sovereignty

(c)iStock.com/john shepherd

The only certainty about Brexit, the UK’s departure from the European Union, is that it is going to create uncertainty in terms of data sovereignty, particularly in the field of cloud computing. Data sovereignty refers to data being held in a country in adherence to the laws of that state. That is fine if your company is based in a single location and single market but it becomes trickier if you have diverse locations and lots of different laws with which you need to comply.

While the UK is part of the European Union it has the same data sovereignty laws as other countries in the EU, but when the UK breaks away those laws could change. In time, companies operating in Europe may have to manage one set of data laws for the UK and another for EU member countries. By voting to leave the European Union, the UK fractured what was becoming a single digital market into potentially two or more jurisdictions for technology issues.

If the UK wants to participate in the free flow of data across European borders after leaving the EU, it will have to adopt the same data-protection standards as the EU’s new General Data Protection Regulation. As the UK’s Information Commissioner’s Office has stated, “international consistency around data protection laws and rights is crucial, both to businesses and organisations and to consumers and citizens”. Unless the UK follows the new EU rules, foreign companies may lose the ability to process European consumer data in the UK. This has ramifications for companies that want to use data centres in the UK – even just as backups – if their data centres in other EU countries go down.

People are waking up to the fact that the cloud is ultimately about data centres built on land under national laws. Cloud and managed service providers may need to offer additional options for customers to host data across Europe, and enterprise end users may need to reconsider where their data is stored and ask themselves: “If I move my data to your cloud, where will it be stored and which sovereign laws will it be subject to?” For UK and international companies moving data in and out of Europe this could become a minefield – but it doesn’t have to be.

The fact is that cloud computing companies are getting used to dealing with issues of data sovereignty. Gavin Jackson, Amazon Web Services (AWS) managing director for the UK and Ireland, said while speaking at the recent AWS Summit in London that, in spite of the referendum result, Amazon was still committed to opening a new data centre in the UK by the end of this year. Whilst in the past restricting the types of data that can be stored in specific locations hampered providers’ flexibility to move data from one data centre to another, patented technology now solves that problem.

By opening a data centre in the UK, Amazon can guarantee UK data will remain in the UK whilst other data can still be made available to the rest of Europe to be shared and processed accordingly. How is this possible? In a shameless plug for my own company, WANdisco has patented “active transactional data replication” technology, which led us to become one of Amazon’s partners (along with IBM, Microsoft and Google). The advantage of our WANdisco Fusion technology is that it doesn’t have to replicate all the data, which allows data controllers to apply security controls to the data that is replicated. This means that cloud computing companies can quickly control where data is shared and ensure that data sovereignty requirements are met.
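The point about applying security controls to what gets replicated can be pictured as a simple policy filter: records flagged as UK-restricted stay in the UK data centre, while everything else is also replicated to an EU region. This is a hypothetical sketch of such a rule, not the actual configuration of WANdisco Fusion or any cloud provider:

# Hypothetical sovereignty filter: only records free of UK-only restrictions
# are replicated from the UK data centre to an EU region.
records = [
    {"id": 1, "uk_restricted": True,  "payload": "..."},
    {"id": 2, "uk_restricted": False, "payload": "..."},
    {"id": 3, "uk_restricted": False, "payload": "..."},
]

def may_replicate_to_eu(record):
    """Replicate only data that carries no UK-only sovereignty restriction."""
    return not record["uk_restricted"]

eu_copy = [r for r in records if may_replicate_to_eu(r)]
print([r["id"] for r in eu_copy])   # [2, 3] – record 1 stays in the UK only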

The hybrid cloud model can also help with issues of data sovereignty, as companies keep sensitive data on site behind the firewall and only move certain data-processing activities to the cloud. Our technology means businesses can migrate data with no downtime, potentially saving thousands in lost revenue or capital outlay.

Ultimately Brexit doesn’t have to mean the balkanisation of a company’s data strategy but clarity is needed so the right measures can be put in place. 

Hybrid cloud and software defined data centres: How companies can have it both ways

(c)iStock.com/4X-Image

Amazon Web Services (AWS) is the fastest-growing enterprise company the world has ever seen – a testament to the fact that huge numbers of businesses are moving their data to the cloud, not only to save on costs but to be able to analyse their data more effectively. Netflix, Airbnb, the CIA and other high-profile organisations now run large portions of their businesses on AWS. Yet with lingering concerns about a ‘big bang’ move to cloud, many businesses are adopting a hybrid cloud approach to data storage.

Hybrid cloud is the middle ground: private storage with lower infrastructure overheads plus a superfast, low-latency connection to the public cloud. Companies increasingly have their own mix of cloud storage systems, reflecting their legacy IT infrastructure, current budgets, and current and future operational requirements. As a result, many CIOs are having to think about how data moves back and forth between various on-premises systems and cloud environments. This can be challenging when the data is transactional and the data set changes frequently. To move this data without interruption, active-active replication technology like WANdisco Fusion is required, so the data can be moved yet still fully utilised, with business operating as usual.

Software defined data centres (SDDC) – with virtualisation, automation, disaster recovery and applications and operations management – are making it easier for businesses to build, operate and manage a hybrid cloud infrastructure. Such a system enables businesses to move assets wherever they need to whilst maintaining security and availability.

As a result, according to an IBM study, “Growing up Hybrid: Accelerating digital transformation”, many organisations that currently leverage hybrid cloud and use it to manage their IT environment in “an integrated, comprehensive fashion for high visibility and control” say they have already gained a competitive advantage from it. In many cases software defined data centres with hybrid cloud are accelerating the digital transformation of organisations, as well as making it easier for companies to use cognitive computing such as predictive intelligence and machine learning.

Although hybrid cloud is rapidly becoming the option of choice by reducing IT costs and increasing efficiency, this shift is creating new concerns, as CIOs must ensure a seamless technology experience regardless of where the company’s IT infrastructure resides. Whilst businesses are increasingly comfortable transitioning business-critical computing processes and data between different environments, it is vital that on-premises infrastructure, private clouds and public clouds are monitored, as changes can occur in any of these segments without notice.

In January this year HSBC saw a failure in its servers that left UK online banking customers unable to log in to their accounts for nine hours. It took more than a day to identify the cause of the issue; customers vented their anger on social media and the case made national headlines. Failures that cannot be quickly identified have the potential to cause huge financial losses as well as significant reputational damage. Businesses must have real-time visibility across a hybrid cloud environment so they can head off or respond to issues as they happen.
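What real-time visibility looks like in practice varies, but one basic building block is polling a health endpoint in every environment and flagging a failure the moment it appears. A minimal sketch using only the Python standard library – the endpoint URLs are hypothetical placeholders, not real services:

# Minimal health-check loop across hybrid cloud environments.
import urllib.request

ENDPOINTS = {
    "on-premises core systems": "https://onprem.example.internal/health",
    "public cloud analytics":   "https://analytics.example.com/health",
    "private cloud backup":     "https://backup.example.internal/health",
}

def check(name, url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = "OK" if resp.status == 200 else f"HTTP {resp.status}"
    except Exception as exc:        # unreachable, timed out, TLS error, ...
        status = f"FAILED ({type(exc).__name__})"
    print(f"{name}: {status}")

for name, url in ENDPOINTS.items():
    check(name, url)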

With companies needing to maintain legacy IT infrastructure indefinitely, a software defined data centre supporting a hybrid cloud can give a business the best of both worlds – the cost-effectiveness and elastic expansion and contraction of public cloud computing, and the security of a private cloud. If you want to future-proof your business and remain at the cutting edge of innovation in your sector, hybrid cloud and software defined data centres are what you need to access public cloud resources, test new capabilities quickly and get to market faster without huge upfront costs.