All posts by bill.mew

What is cyber insurance truly worth? Analysing the risks and responses

Analysis Cyber risk has overtaken financial risk as the greatest threat that we all face, according to PwC’s 2019 global crisis survey. There are also concerning parallels between the global financial crisis of 2009 and the current cyber threat landscape.

The question is, to what extent is cyber insurance the answer?

Currently, most companies don’t have any cyber insurance: coverage is only 40% in the US and 10% in the UK, and elsewhere it’s even lower. Many cyber insurers boast that they can provide an insurance quote in under an hour. If they are able to provide cover for such a complex policy in such a short period of time, this should ring alarm bells. You should be concerned about their ability not only to assess your risk position accurately, but also to price the policy accurately.

Some insurers base their risk assessment on cyber security risk ratings. Some of these ratings are produced by firms that use web crawlers that check externally facing endpoints for known vulnerabilities. This is a fairly crude method, but it’s probably still the best way to address the mass market at low cost. The problem is, it’s a bit like evaluating fire-safety risk by looking at a photograph of a building taken from across the street. You can get an idea of the building’s shape and size, but you can’t tell if there’s flammable material inside, or if the building is equipped with fire alarms, or sprinkler systems. A photo like this is better than nothing; but it still provides only a basic, limited idea of the real risk.
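As a toy illustration of how such outside-in ratings work, here is a minimal sketch in Python. The signal names and weights are invented for illustration and do not reflect any real ratings vendor's model; the point is that the score is built entirely from what is visible "from across the street".

```python
# Toy outside-in cyber risk rating, built only from externally observable
# signals. Signal names and weights are hypothetical, for illustration only.

def crude_risk_score(signals: dict) -> int:
    """Return a 0-100 'risk' score from externally visible signals.

    Like judging fire safety from a photo of the building, this sees only
    the outside: it says nothing about internal controls, staff training,
    or insider threats.
    """
    score = 0
    if signals.get("tls_version", 1.3) < 1.2:         # outdated TLS visible externally
        score += 30
    score += 10 * len(signals.get("open_ports", []))  # each exposed port adds risk
    if not signals.get("spf_record", True):           # missing email anti-spoofing record
        score += 20
    if signals.get("known_cves", 0) > 0:              # banner-matched known vulnerabilities
        score += 40
    return min(score, 100)

# Many external red flags: capped at the maximum score
print(crude_risk_score({"tls_version": 1.0, "open_ports": [21, 3389],
                        "spf_record": False, "known_cves": 2}))
# A clean exterior scores zero, however bad things are inside
print(crude_risk_score({"tls_version": 1.3, "open_ports": [],
                        "spf_record": True, "known_cves": 0}))
```

Note that the second organisation scores zero even if its internal controls are non-existent, which is exactly the limitation of the "photograph from across the street" approach.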

The reason some insurers can afford to base premiums on such crude risk metrics is probably that cyber insurance policies often include a host of provisions and exclusions that, in effect, make it almost impossible to claim for any incident at all. If an insurer wants to refuse to pay out, it will probably find a way of justifying this. Indeed, almost the only reason it would pay out at all is to encourage other clients to sign up.

So if there is a global cyber crisis, insurers may well refuse to pay out on any policies and consider withdrawing from the market entirely.

Examples of common cyber insurance terms or exclusions are as follows:

  • Policies tend to only cover 'a hacker who specifically targets you alone'. Unfortunately, cyberattacks are rarely focused on a single victim. Often either the same attack vector is used on many victims in a scattergun approach (phishing attacks) or malware is used that is contagious in nature (WannaCry)
  • Policies tend not to cover 'any failure…by a cloud/infrastructure provider…unless you own the hardware and software'. Unfortunately, this would not only exclude almost all cloud use, but also exclude almost anything other than hosted services which exclusively use kit you own
  • Policies tend not to cover incidents involving a 'third party…not unduly restricted or financially limited by any term in any of your contracts'. This is meant to ensure that the insurer is able to pursue any third party involved for unlimited damages. Unfortunately, this excludes almost all service providers, as they themselves tend to specify some limitation to damages in their contracts, such as damages being limited to the value of the contract. No service provider these days offers unlimited liability
  • Policies tend not to cover incidents involving 'any individual hacker within the definition of you'. Unfortunately, this would exclude all insider threats
  • Policies tend not to cover 'the use by you of any software or systems that are unsupported by the developer'. This clause rarely specifies that the unsupported software needs to be part of the attack vector, which means that you could be excluded if you had a single instance of something like Windows XP on your technology estate, even if this was not part of the attack at all
  • Policies tend not to cover incidents 'attributable to any failure…by the Internet Service Provider (ISP) that hosts your website, unless such infrastructure is under your operational control'. Unfortunately, this would exclude all incidents involving any ISP as it is unheard of for ISP infrastructure to be under your operational control
  • Policies tend not to cover 'acts of foreign enemies, terrorism, hostilities or warlike operations (whether war is declared or not)'
  • Policies tend not to cover 'any error or omission arising out of the provision of negligent professional advice or design'. Unfortunately, if at any time you have tested or assessed your security (as is required under GDPR) but then failed to implement all the resulting recommendations, your cover could be void. So if you have had penetration testing or certification audits (for ISO 27001 or PCI DSS, say), you need to address every single recommendation or you risk voiding your cover
  • Policies tend not to cover 'anything likely to lead to a claim, loss or other liability under this section, which you knew or ought reasonably to have known about before we agreed to insure you'. This is the pre-existing condition provision. It means that if, in the business case your team makes for adopting cyber insurance, you cite potential vulnerabilities as reasons for adoption, those very vulnerabilities could then be excluded from any cover

For these reasons, we have already seen some claims go unpaid. For example, several major insurers declined to pay for damages caused by the NotPetya ransomware attack a few years ago, arguing that it was a “hostile or warlike action” and therefore not covered.

On top of this, other claims have been paid only in part. For example, Norsk Hydro received an insurance payout of $3.6 million, only about 6% of the overall damage, which was estimated at as much as $71 million. It covered the cost of the technical fix, but that was it.

Cyber insurance, while important, simply isn’t a substitute for prevention or for crisis preparedness. You need to have all three.

Here are a few measures to consider:

We need increased adoption of cyber insurance cover, with organisations being far more discerning about the policies they adopt:

  • Clients need to understand their risk appetite – you could spend an almost infinite amount on cybersecurity, but you don’t necessarily need to do so
  • They need to be far more aware of the exclusions in the policies on offer and to base their choice on the nature of the cover rather than purely on price – there’s no point in paying for a cheap policy that won’t pay out
  • They need to choose policies that are appropriate for their business and for their risk position – specialist brokers can help you find a policy that is right for you
  • They also need to consider separate specialist incident response cover if this is not included in their cyber insurance policy (most don’t include it) – while an elite team could save you from disaster, the wrong team won’t just fail to fix the problem, they could actually make it worse

What we tend to find is that organisations with incident response cover tend to call in the experts straight away, while those without it often attempt a DIY fix before calling for help. By the time they do call, it’s often too late (the impact and exposure have magnified significantly) and, with no time to select the right experts, they call in the wrong people.

Almost worse than a policy that won’t pay out is one that won’t provide top quality incident response. Whether your insurer is footing the bill or you are, here’s what you will really need:

  • The technical fix: Get expert help from a specialist security response team to identify and fix the problem(s), and to carry out forensics to diagnose the cause and full scope. Getting an immediate fix to resolve the problem, stem any data loss and recover any systems is essential. Any delay will magnify the impact of the incident and the damages incurred
  • The legal defence: Seek expert advice in cyber and data law to rapidly develop a legal strategy and a legally defensible narrative based on the forensics. Having the right legal strategy and narrative are both essential to limit legal and regulatory exposure
  • The brand defence: Get expert cyber comms support to help your internal and agency teams deal with the added complexity and enhanced comms workload. The standard PR approach to crisis management simply won’t work in a cyber incident and may even make things worse
  • Social response: Get top global privacy/security influencers to act as trusted voices. When your own credibility is at an all-time low, you’ll need the support of authoritative opinion leaders in privacy and security to counter misinformation with authority, and hysteria with reach and credibility

Part of the reason that you need specialists is the fact that traditional crisis tactics don’t work in a cyber crisis.

With most conventional crises or crimes, the criminals get the blame and the company and its customers are both seen as victims. The conventional PR tactic in a crisis scenario is to contain the issue until it becomes public and then to show empathy for your customers, in order to gain sympathy from the press and general public for both you and your clients. It tends to work well.

A cyber incident is different. You’re likely to be on the back foot: a cyber incident could well be public before you even become aware yourselves. What’s more, cyber incidents aren’t instantaneous: the average breach occurs long before it is detected.

Unfortunately, cybercrime is about the only crime where the victim gets the blame. However much you have spent on cybersecurity, the press and public will blame you, not the hackers. You need to be prepared to face the regulators, a hostile press, and inevitable hysteria and misinformation. Containment is not possible due to GDPR disclosure obligations, and showing empathy won’t gain you any sympathy; it’ll simply put your executives in the firing line.

Crisis preparedness is also critical. Scenario planning and realistic simulation exercises are essential, and indeed testing and assessment are mandated under GDPR. So if companies don’t do it and then have an incident, the regulatory action will be far harsher.

For companies of any size, it’s probably not a matter of ‘if’ they’ll get hit, but ‘when’. And since the average breach takes more than six months to detect, it may well already have happened.

If ever there was a time to make the case to the board for cyber insurance and crisis preparedness, it is now, with a looming pandemic. The last crisis may have been financial and the current one may be health related, but the chances are that the next one will be a cyber crisis. We all need to be prepared for this.

Interested in hearing industry leaders discuss subjects like this and share their experiences and use cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Think of data as the new uranium rather than the new oil – and treat it like it’s toxic

In May 2017, The Economist famously ran with a front-page headline proclaiming that “The world’s most valuable resource is no longer oil, but data.” It focused on big tech’s collection and use of data and argued that the data economy demands a new approach to antitrust rules.

I agree that data is now the world’s most valuable resource, but would suggest that it is more like uranium: it has power and energy, but too much of it can be potentially explosive. Indeed, thinking about data as if it were uranium might be a good way to approach data protection.

Handling data

You would not expect your staff to handle uranium without caution or without the right protective gear. Nobody treats nuclear fuels the way that Homer Simpson does! Likewise, you need to educate your staff to handle data with equal care and need to equip them with the tools that they need to do so. 

Numerous studies have found that the greatest data protection threat to a business is the one that walks out of the business at the end of each day – your staff. The insider threat, as it is known, outweighs all others.

If your staff were handling nuclear fuel, you’d expect them to do so with the utmost care. But with data, even after extensive education and training programs, the temptation can be to take short cuts or overlook the proper procedures. For this reason, ease of use (making it as easy to do the right thing as it is to do anything else) is as important in security terms as functionality.

The problem is that the cybersecurity arena is exceedingly fragmented, and we are typically expected to understand how to use a number of different tools.

Thankfully, organisations like Lenovo are focusing on exactly this challenge, bringing a selection of best-of-breed security tools from the likes of Intel and Microsoft together into a single integrated portfolio called ThinkShield and making it easy to use.

Unfortunately, the reality is that you can’t always trust users to know the right thing to do. Nor can you oversee their every move. But with ThinkShield, you not only get comprehensive and customisable end-to-end IT security that you can trust to significantly reduce the risk of being compromised, but it’s also in a package that is easy for users to understand and use. It means less business interruption for your staff and less work for your IT admins.

Data concentration

Much of the focus in The Economist was on how much data certain players were collecting and the risks that go with this. It argued that new antitrust rules were needed to address the concentration of data and of power in the hands of a few giant players. 

Again, this makes data far more like uranium than oil – after all nuclear fuels are relatively safe in small quantities. It is only once you have a critical mass that it becomes potentially explosive.

In a recent interview, Edward Snowden suggested that GDPR had been a step in the right direction, but that the real threat came not from data protection, but from data concentration.

Elizabeth Warren’s threats to break up some of the tech giants may never happen, but further regulation in both the EU and the US is likely, and it will focus on ensuring nuclear safety in the digital economy.

Cyber response

Anyone in the nuclear industry will be familiar with scenario planning and simulation exercises. They run regular drills to train staff on how to deal with catastrophes such as leakage of nuclear waste. Few firms realise that GDPR mandates “a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.” In other words, if you don’t do scenario planning or run simulation exercises to test how you’d respond to a data breach, then you’re not GDPR compliant.

Obviously, most organisations have in-house information security teams, just as they have legal and PR teams, but when a breach does occur your in-house teams are going to need help – they’re unlikely to have the specialist skills to deal with everything. As a result, it is best to work with specialists – my latest venture, The Crisis Team, is a good example – that work alongside your internal teams, offering world-leading expertise. After all, when things get serious, you don’t want the B team.

It is also worth letting these experts support your scenario planning and simulation exercises. This leverages their expertise and ensures that you develop a mutual understanding and are able to practice working together – something that will come in handy if or when the worst does occur.

Considering all of this, maybe treating your data as if it is toxic, and as if it were uranium, might be a good approach.

How to avoid the big upcoming cloud storage problem – which could run you down

When organisations migrate to the cloud they have an application problem: deciding which apps to migrate and in what order, as well as which ones to reconfigure as cloud-native.

Once in the cloud they have a data problem: budgets that are flat or in decline and data volumes that are growing exponentially.

Where people go wrong is thinking ONLY about the application problem in advance. All too often when we cross the road we look left or right when we should be looking both left AND right.

It is wrong to think of cloud as a commodity. Cloud price wars have eased and the pace of decline in prices for cloud compute and storage has slowed almost to a halt. The reason for this shift is market maturity: people have more faith in the cloud model than they once did. 451 Research analysts have said that the cloud has not yet become a commodity and, as such, the cloud market is "not highly price-sensitive" at the moment, despite businesses wanting to get the best deals they possibly can.

CIOs are often overly focused on the cost of compute, even though compute prices are no longer falling as fast as they did during the height of the price war. They should also be focusing on the cost of object and block storage. Storage prices may have more scope to fall than compute prices, but if you’re being charged per unit of data and your data is growing exponentially, then you have a problem.

Few, if any, organisations are throwing away any of their old data and new data is being added at an exponential rate – a rate that will only increase with 5G and IoT. This exponential explosion in the volume of data is a real problem.
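To see how quickly this bites, here is a back-of-envelope sketch of exponential data growth colliding with a flat budget. The figures (40% annual growth, $0.02 per GB-month, a flat $120k annual budget) are illustrative assumptions, not quotes from any provider:

```python
# Sketch: compounding data growth against a flat storage budget.
# All figures are hypothetical, for illustration only.

def years_until_overrun(data_gb, growth_rate, price_per_gb_month, annual_budget):
    """Return the first year in which annual storage spend exceeds the budget."""
    year = 0
    while data_gb * price_per_gb_month * 12 <= annual_budget:
        data_gb *= (1 + growth_rate)  # data compounds every year
        year += 1
    return year

print(years_until_overrun(data_gb=200_000,       # 200 TB today
                          growth_rate=0.40,      # 40% growth per year
                          price_per_gb_month=0.02,
                          annual_budget=120_000))  # flat budget
```

With these assumptions, an estate that costs well under half the budget today blows through it in just three years, which is why flat unit prices are no comfort when volumes compound.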

Many of us are some way down the cloud path. Most of the initial gains that we experienced from moving to the cloud came from the low hanging fruit. Such gains came from transformational projects that could deliver immediate improvements in service or reductions in cost, or that addressed the most immediate challenges at hand.

Typically, though, we put off the biggest challenges, those that would require either organisational transformation, including interdepartmental collaboration and structural reform, or technological transformation, including re-engineering or refactoring applications from the ground up. 

For many organisations, the easy gains have already been realised and the real challenges lie ahead.

Indeed, many of the easy gains came from virtualised applications that could easily be ‘lifted and shifted’ to the cloud and connected to cloud-based block storage. Now, with budgets that are flat or in decline and data volumes that continue to grow, there is a looming crisis relating not only to the ongoing cost of data storage, but also to the cost of both ingress and egress [the cost of moving data and applications into the cloud (ingress), out of the cloud (egress), or even between regions].

Things should be fine for those that ‘looked both ways’ and ensured that such costs were calculated in advance and built into the business case. However, those that ‘only looked left’ will have been hit from the right by unexpected costs that are outpacing the growth of their budgets. Indeed, all too many CIOs have gone from being unintelligent in their use of data in legacy environments to unintelligent in their use of data in the cloud.

If, like many, you have been overly focused on the cost of infrastructure and compute, the easy savings have all been realised already, you have started experiencing exponential data growth and with it cost, and you’re locked in by egress charges, then you’ve got a BIG problem. Even if you are using existing commercial infrastructure or commodity cloud services to cap infrastructure costs, if your data use is unintelligent or your data is growing fast (both of which are true in many organisations), then your costs will be spiralling in the wrong direction.

The only way out is to 1) rationalise your data and 2) find an intelligent longer-term solution for your data storage that doesn’t lock you in to a single cloud provider and doesn’t include egress or ingress charges.
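A rough way to weigh up option 2 is a break-even calculation: pay your current provider's one-off egress fee to move the whole estate now, versus staying put and continuing to pay per-GB egress on ongoing data movement. A sketch with hypothetical prices and volumes:

```python
# Break-even on escaping egress charges. All prices and volumes here are
# hypothetical assumptions, not any provider's actual tariff.

def breakeven_months(data_gb, egress_price_gb, monthly_moved_gb):
    """Months until the one-off migration fee pays for itself, assuming
    you currently pay egress on monthly_moved_gb of data movement."""
    one_off = data_gb * egress_price_gb            # full-estate move, charged once
    monthly_saving = monthly_moved_gb * egress_price_gb
    return one_off / monthly_saving

# 500 TB estate at $0.09/GB egress, currently moving 50 TB/month between clouds
print(breakeven_months(500_000, 0.09, 50_000))
```

Under these assumptions the one-off move pays for itself in ten months; the heavier your ongoing inter-cloud data movement, the sooner the bullet is worth biting.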

Thankfully there are multi-cloud storage solutions, like HPE’s new Cloud Volumes service, that not only include AI to maximise the intelligence with which you manage your storage, but also provide a direct link to your own on-prem systems as well as all the public cloud providers, and are free of ingress and egress charges (once you’ve bitten the bullet and paid the initial one-off charge from your current cloud provider for moving any existing data onto the new platform).

For better or for worse: Why your brand reputation is hitched to your ability to manage and protect data

The potential benefits of technology to change and improve lives are clear for all to see. At an individual level, wearable devices can help better manage health, home sensors can reduce your energy use and costs, and analytics can hone services to meet your every need. At an organisational level, digital transformation can not only boost efficiency and productivity, but it can change the way that whole industries operate and allow organisations (including governments) to deliver new kinds of services for citizens and consumers.

Organisations, however, need to be conscious of the kind of impact they (and their use of technology) are having on all aspects of society. This can include public concerns about the environmental impact of energy use, the societal impact of jobs lost to automation, the economic impact of online retail over bricks and mortar, and even the personal impact of indiscriminate data collection and mismanagement.

Many organisations employ corporate social responsibility (CSR) programs in order to benefit society while also seeking to boost their own brands. By embarking on philanthropy or volunteering they are not only able to promote worthy agendas or causes, but are also able to gain positive brand association.

In recent years the main issues that CSR programs have focused on have been climate change and diversity, but in recent months a new issue has emerged that has eclipsed all others in the minds of consumers: privacy. For software and technology companies, the link between data privacy and corporate responsibility is relatively straightforward. Even in non-tech industries, however, privacy has become a major issue.

No matter what industry you work in, more products are becoming connected. Mattel released a Wi-Fi-connected Hello Barbie in 2015 and researchers promptly uncovered several vulnerabilities that showed it could be hacked into a secret listening device. At the same time companies from all industries process and store both customer and employee data that must be kept secure. Not only have customer data breaches grabbed headlines, but regulations now mandate prompt disclosure of data protection failures and companies can be liable for massive fines – or even worse, they can be told that they are no longer allowed to process customer data. On top of this, the reputational damage of such an incident can be monumental.

For the very first time, industry analyst firm Gartner has listed digital ethics and data privacy as one of the top 10 tech trends for the year ahead. On top of this, research by FleishmanHillardFishburn has shown that the issues that consumers currently care most about are data security and privacy. It is these issues that consumers now want brands to be talking about, rather than their diversity or sustainability efforts.

For better or for worse…?

So how open should brands be about their CSR efforts in the good times – explaining their support for digital ethics and data privacy when things are going well – at the risk of a backlash in the bad times – when they invoke crisis management plans in the event of a data breach?

As Nick Andrews, senior partner and EMEA reputation lead, commented in the FleishmanHillardFishburn report: “In an increasingly hashtag driven world, though, do you support the movement and risk a backlash, or stay quiet and disappoint? Only companies with a clear sense of purpose, who use this as a yardstick against which to measure their actions, will demonstrate the consistency and clarity of view which people expect. For those that do, the rewards will be great.”

There are essentially three possible courses of action with organisations falling into one of the three following groups:

Group 1: Business as usual, with no real emphasis on digital ethics and data privacy: 80% of UK consumers surveyed by FleishmanHillardFishburn have stopped using the products and services of a company because the company’s response to an issue did not support their personal views.

With digital ethics and data privacy topping the list of issues that consumers currently care most about, your brand is going to be at a competitive disadvantage to your Group 2 rivals that advocate strong support for digital ethics. And without making data privacy and security a strategic priority, you’re going to be more likely than Group 3 rivals to suffer a data breach and be impacted by the consequent reputational damage.

Group 2: Strong support for digital ethics and data privacy, without any real cultural change: If you aren’t genuinely committed to privacy, you’re going to be more likely than Group 3 rivals to suffer a data breach and be impacted by the consequent reputational damage. In addition, the reputational damage will be amplified as your claims of strong support for digital ethics and data privacy will be shown to have been inauthentic, and you risk being accused of ‘greenwashing’ or ‘astroturfing’.

Group 3: Wholehearted adoption of digital ethics and data privacy as a strategic priority: There is an expectation among consumers that companies will take these issues seriously and enact robust data privacy measures above and beyond the legal requirements. Realising this, Group 3 firms will see it as imperative to act now and maintain strong leadership in this field, or else risk the consequences of consumer discontent. This will only be possible if digital ethics and data privacy are made a strategic priority that leads to true cultural change throughout the company.

Let’s not forget that GDPR affects any organisation handling the personal data of EU citizens no matter where the company is located, meaning that even U.S. companies which process the personal data of individuals residing in the EU have to comply. And if regulatory compliance with the threat of massive fines were not motive enough, the fact that privacy is now the number one issue for customers across all sectors means that not aiming to be in Group 3 is sheer folly.

Why data sovereignty is the only truly safe path to avoid Privacy Shield turmoil

Privacy is not just a legal obligation, it is an ethical commitment and a demonstration that you care about your customers’ privacy as much as they do.

Many people will be surprised to hear that although the EU General Data Protection Regulation (GDPR) took effect on May 25, many companies are not yet GDPR-compliant. The regulation requires organisations to comply, and our Information Commissioner has signalled that organisations need to be actively continuing efforts to achieve (and maintain) compliance.

Of course, those organisations that have an ethical commitment to privacy, and that wish to demonstrate that they care about their customers’ privacy as much as their customers do, will be among the cohort that is already compliant. And they will do everything in their power to remain compliant.

Potential fines for violating the GDPR are significant: up to four percent of an organisation’s annual global turnover or €20 million (approximately $23 million), whichever is greater. The fines are not the only thing to worry about, though. The Information Commissioner’s Office (ICO) can also revoke an organisation’s right to process data, a sanction that could be crippling. And then there is the reputational damage associated with any data breach. Ethical, customer-centric organisations will be acutely aware of customer opinion and loyalty, and this will be foremost in their minds, far ahead of the actual fines.
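The GDPR's maximum fine for the most serious infringements is simply the greater of the two figures, which is what makes it bite hardest for large firms:

```python
# The GDPR ceiling for the most serious infringements: the greater of
# 20 million euros or 4% of annual worldwide turnover (Article 83(5)).

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in euros for a given turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(100_000_000))    # 100m euro turnover: the 20m euro floor applies
print(max_gdpr_fine(2_000_000_000))  # 2bn euro turnover: 4% dominates, 80m euros
```

For any firm with turnover below €500 million, the €20 million floor is the binding figure; above that, the 4% figure takes over and scales without limit.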

The data sovereignty dilemma

A storm on the horizon is the current status of the data sharing framework between the EU and the US called Privacy Shield. This is used by many organisations to demonstrate adequate levels of personal data protection, permitting transfer of such data between the EU and the US.

Privacy Shield was adopted in July 2016 as a replacement for Safe Harbor, which a 2015 decision of the European Court of Justice had determined provided inadequate privacy protection.

The EU and US authorities then quickly introduced Privacy Shield as a replacement legal framework. Under the Privacy Shield certification process, companies must self-certify their commitment to compliance with the Privacy Shield requirements. Oversight has been somewhat more rigorous in the EU, where privacy is seen as a human right, than in the US where there has been minimal commitment to enforcing the framework.

Numerous concerns, including the abuse exposed by the Cambridge Analytica scandal, have led European privacy organisations and agencies to call for the suspension and/or outright revocation of Privacy Shield. Similar concerns and challenges have been levelled against the “Standard Contractual Clauses”, which are another mechanism to ensure the compliant transfer of EU personal data out of the EEA to jurisdictions that the European Commission has not deemed to be “adequate”.

The continuing legal uncertainty about transferring personal data out of the EU has led many global companies, in particular those from the US, to establish data processing and storage capabilities within the EU, and in some cases specifically within the UK.

This enables the global giants to avoid the data transfer issues but does not in itself address concerns about data jurisdiction. Foreign sovereign powers can and do demand access to data if the company holding that data is subject to the foreign jurisdiction. In the absence of any specific agreements between the EU and US about these kinds of data transfers, question marks remain over GDPR compliance, and there are further serious implications for Privacy Shield’s future.

How should ethical, customer-centric organisations respond?

All organisations operating in the EU and holding or processing personal data will need to be actively continuing efforts to achieve (and maintain) GDPR compliance. Those that also transfer data across the Atlantic and currently rely on Privacy Shield to demonstrate adequate data transfer protections will also need to monitor developments regarding Privacy Shield and consider additional and alternative methods of proving compliance. Those organisations that pride themselves on being particularly ethical and customer-centric may want to take further measures, such as ensuring data sovereignty for all personal data.

Example: the NHS

Guidance from NHS Digital on the off-shoring and the use of public cloud services states that:

NHS and Social care providers may use cloud computing services for NHS data. Data must only be hosted within the European Economic Area (EEA), a country deemed adequate by the European Commission, or in the US where covered by Privacy Shield.

With the risks of revocation or suspension of Privacy Shield now escalating, reliance on Privacy Shield alone is inadvisable. Trusts could consider using the EU Standard Contractual Clauses, although these are also being challenged in the European courts, or prepare for whatever other mechanisms are approved by the EU regulatory authorities following the Privacy Shield review. A more certain (lower-risk) course of action would be to opt for complete data sovereignty for patient data by retaining the data in the UK and using a UK-based service provider for these workloads.

Firms that operate in the US are subject to US law, including FISA and the CLOUD Act, neither of which is easily reconciled with any future version of Privacy Shield. While such firms can offer a level of data residency (offering to keep your data in the UK), the CLOUD Act removes protection for data stored overseas and leaves them with no legal recourse to withhold data from the NSA and other US law enforcement bodies, meaning that they cannot guarantee data sovereignty.

Recent research by the Corsham Institute highlighted increasing patient awareness of data privacy issues with a growing public desire for more information on data storage in the NHS. 88% of adults said that it is important to know where and how their patient data is stored and 80% said that it is important to know whether patient data is hosted by companies whose headquarters are outside of the UK.

While public confidence in the NHS is currently high, the significant increase in privacy awareness means that there’s a real risk that any incident, such as a repeat of the WannaCry malware, could expose weaknesses in sovereignty, efficiency and data security, leading to a potential patient backlash. Further details of the Corsham Institute research can be found here.

With many Trusts already opting to ensure data sovereignty by placing patient data and workloads with UK-based cloud service providers, there is no reason that other Trusts should not follow suit. After all, there is no real need to move patient data offshore or to use foreign service providers. Nor is there any need for Trusts to expose themselves to risks relating to the potential revocation or suspension of Privacy Shield, or to a potential patient backlash in the event of future incidents.

Other customer-centric organisations might also be wise to follow the example of these Trusts and accelerate their move to the cloud in order to enhance operational efficiency, but do so without neglecting data sovereignty.

How hybrid, multi-cloud and community clouds are coming together for the best of all worlds

What you look for in a cloud provider depends to a large extent on the drivers and challenges that you are experiencing.

People with large legacy estates, for instance, tend to be looking for a hybrid cloud solution that can support both their old legacy workloads and their new cloud ones. Some see this as a transitional arrangement to cover the period in which workloads are migrated to the cloud, but many realise that there are certain workloads for which migration will never be either technologically possible or economically practical.

Many people with heterogeneous environments, on the other hand, tend to be looking for a multi-cloud solution. They may be doing this by design, such as in moving their Oracle workloads onto an Oracle cloud environment and their Microsoft ones to an Azure cloud environment. There may also be an element of shadow IT, with some workloads strategically moved to SaaS environments like Salesforce while a host of other SaaS options may also have been adopted by individual departments.

Others, keen to collaborate with peers or partners in the cloud, tend to be looking for community clouds. In the USA, the main public cloud providers have set up dedicated regions as community clouds that allow US government agencies at the federal, state and local level, along with contractors and educational institutions, to collaborate on sensitive workloads and data sets while meeting specific regulatory and compliance needs. Meanwhile in the UK, UKCloud has created a community cloud for the public sector and healthcare that has succeeded in attracting over 220 projects, capturing over a third of G-Cloud IaaS workloads.

Other sectors where such collaboration is becoming increasingly common include manufacturing with data sharing across the logistical supply chain, in public services and transportation where logistical and geospatial data is shared, and in health and social care where access to patient records or genomic sequencing data is shared.

There is no reason, however, why you cannot have the best of all worlds. New appliances, such as Cloud@Customer from Oracle and Azure Stack from Microsoft, have been designed to enable seamless hybrid environments. These hybrid environments don’t need to operate in isolation: heterogeneous environments can be created with hybrid appliances to support both Oracle and Microsoft workloads. Combining these options with cloud-native technologies like OpenStack, and with container management as well, creates a cross-over between hybrid and multi-cloud. Indeed, some providers are now starting to offer this kind of heterogeneous cloud, with an array of technology stacks, all within dedicated community clouds: a combination of hybrid and multi-cloud within a sector-specific community cloud, giving you the best of all worlds.

There are many compelling advantages to this ‘have-it-all’ approach:

  • Customer-centricity: As a technology matures, vertical-industry expertise and talent becomes the ultimate differentiator as customers want to know that their technology suppliers are just as committed to their industry and its specific needs as the customer itself is. In effect technology wizardry becomes table stakes, while customer expertise trumps all. And we are now seeing this in the cloud arena.

    With global public cloud providers, you can be treated a bit like a number, but the sector-specific nature of community clouds enables them to be very customer-centric, centred around key workloads and data sets. Adding a multi-cloud dimension then allows you to use API calls to access advanced functionality in the public cloud, in areas like Artificial Intelligence and Machine Learning. Multi-cloud also allows customers to create rich heterogeneous solutions that address a wider set of requirements than is possible using only cloud-native technologies or any single cloud platform, while maximising choice and flexibility and minimising lock-in.

  • The clustering effect – partners: Such sector-specific community clouds can spark a clustering effect, where, as more customers from a particular sector join, it attracts specialist application providers, both software as a service (SaaS) providers and independent software vendors (ISVs), which in turn then attract more customers in what becomes a virtuous circle.
  • Minimising latency: appliances such as Oracle Cloud@Customer and Azure Stack are part of a movement away from big centralised clouds towards clouds that sit closer to their data origins and help cut down on latency. This is taking two forms: fog computing and intelligent edge computing. Latency can occur either between users and the workload they are accessing, or between different workloads and datasets that need to work together but are often based on different technology platforms. In the first case, the appliance can be located as close to the main user groups as possible in order to minimise latency. In the second, it is better to locate the appliance within a community cloud alongside as many as possible of the key datasets, workloads and platforms that need to interoperate, and, where possible, to provide connectivity to this community cloud via secure, high-performance networks.
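
The multi-cloud API pattern described above, where an application running in a community cloud calls out to a public cloud for AI functionality, can be sketched in a few lines of Python. The endpoint URL, model name and authorisation header below are hypothetical placeholders, not any real provider’s API:

```python
import json
from urllib import request

# Hypothetical public-cloud AI endpoint: the URL, model name and API-key
# scheme are illustrative placeholders, not a real provider's interface.
AI_ENDPOINT = "https://ai.example-cloud.invalid/v1/analyze"

def build_sentiment_request(text: str, api_key: str) -> request.Request:
    """Package data held in the community cloud as a JSON POST request
    to an external AI service, keeping the outbound payload explicit."""
    payload = json.dumps({"model": "sentiment-v1", "input": text}).encode("utf-8")
    return request.Request(
        AI_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Build (but do not send) a request for a sample document
req = build_sentiment_request("Patient feedback text...", api_key="demo-key")
print(req.get_method())               # POST
print(json.loads(req.data)["model"])  # sentiment-v1
```

Keeping the request construction explicit like this has a side benefit for the sovereignty concerns discussed earlier: it is easy to audit exactly which data leaves the community cloud and which provider receives it.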

Whatever your current situation, bringing together the best aspects of Hybrid Cloud and Multi-Cloud and combining them within a Community Cloud can create the best of all worlds – especially if you work within a sector where collaboration between partners and peers is important.

For example, an NHS Trust in the UK may have a collection of legacy workloads that are Microsoft or Oracle based, along with a few newer cloud-native applications. It might also have legacy systems that cannot be moved to the cloud but could be hosted in a secure facility, and it might want to access cloud-based applications offered by leading health providers (either SaaS or ISV), as well as core data sets like the 100,000 Genomes Project database. Ideally the Trust would want as much of this as possible available in a single community cloud, with close proximity between systems to minimise latency. The Trust would also want to be able to access this heterogeneous environment via HSCN, and to connect onwards to peripheral workloads hosted elsewhere, or even to public clouds via API calls for things like artificial intelligence. Fortunately for UK healthcare and the public sector, this is all available today.

So why just focus on looking for hybrid cloud or multi-cloud or community cloud – when it is possible to have it all?

Read more: UKCloud partnership with Microsoft and Cisco pushes forward multi-cloud for public sector