How to improve privileged users’ security experiences with machine learning

Bottom line: One of the primary factors motivating employees to sacrifice security for speed is the frustration of repeatedly having to re-authenticate themselves just to get more work done and stay productive.

How bad security experiences lead to a breach

Every business is facing the paradox of hardening security without sacrificing users’ login and system access experiences. Zero Trust Privilege is emerging as a proven framework for thwarting privileged credential abuse by verifying who is requesting access, the context of the request, and the risk of the access environment across every threat surface an organisation has.

Centrify’s recent survey Privileged Access Management In The Modern Threatscape found that 74% of data breaches start with privileged credential abuse. Forrester estimates that 80% of data breaches have a connection to compromised privileged credentials, such as passwords, tokens, keys, and certificates. On the Dark Web, privileged access credentials are a best-seller because they provide the intruder with “the keys to the kingdom.” By leveraging a “trusted” identity, a hacker can operate undetected and exfiltrate sensitive data sets without raising any red flags.

Frustrated by the time wasted on the many account lock-outs, re-authentication procedures, and login errors that outmoded Privileged Access Management (PAM) systems impose, IT Help Desk teams, IT administrators, and admin users freely share privileged credentials, which often end up for sale on the Dark Web.

The keys to the kingdom are in high demand

18% of healthcare employees are willing to sell confidential data to unauthorised parties for as little as $500 to $1,000, and 24% of employees know of someone who has sold privileged credentials to outsiders, according to a recent Accenture survey. State-sponsored and organised crime organisations offer to pay bounties in bitcoin for privileged credentials for many of the world’s largest financial institutions on the Dark Web. And with the typical U.S.-based enterprise losing on average $7.91M from a breach, more than double the global average of $3.86M according to IBM’s 2018 Data Breach Study, it’s clear that improving admin user experiences to reduce the incidence of privileged credential sharing needs to happen now.

How machine learning improves admin user experiences and thwarts breaches

Machine learning is making every aspect of security experiences more adaptive, taking into account the risk context of every privileged access attempt across any threat surface, at any time. Machine learning algorithms continuously learn and generate contextual intelligence that is used to streamline verified privileged users’ access while thwarting many potential threats ― the most common of which is compromised credentials.

The following are a few of the many ways machine learning is improving privileged users’ experiences when they need to log in to secure critical infrastructure resources:

  • Machine learning makes it possible to provide adaptive, personalised login experiences at scale by risk-scoring every access attempt in real time: security strategies can flex to the risk context of each request, assessing every access attempt across every threat surface and generating a risk score in milliseconds, all of which contributes to improved user experiences.

    Being able to respond in milliseconds, in real time, is essential for delivering excellent admin user experiences. The “never trust, always verify, enforce least privilege” approach to security is how many enterprises from a broad base of industries, including leading financial services and insurance companies, are protecting every threat surface from privileged access abuse.

    CIOs at these companies say that adopting Zero Trust Privilege corporate-wide is redefining legacy Privileged Access Management, using cloud-architected controls to secure access to infrastructure, DevOps, cloud, containers, big data, and other modern enterprise use cases. Taking a Zero Trust approach to security lets their departments roll out new services across every threat surface their customers prefer to use without having to customise security strategies for each.
     

  • Quantify, track and analyse every potential security threat and attempted breach, applying threat analytics to the aggregated data sets in real time to thwart data exfiltration attempts before they begin. Adaptive control is a cornerstone of Zero Trust Privilege: machine learning algorithms continually “learn” by analysing users’ behaviour and looking for anomalies across every threat surface, device, and login attempt.

    When any user’s behaviour falls outside the thresholds defined for threat analytics and risk scoring, additional authentication is requested immediately and access to the requested resources is denied until the identity can be verified. Machine learning makes these adaptive, preventative controls possible (a rough sketch of the idea follows this list).
     

  • When every identity is a new security perimeter, machine learning’s ability to personalise every access attempt on every threat surface at scale is essential to enabling a company to keep growing. Businesses that are growing the fastest often face the greatest challenges when it comes to improving their privileged users’ experiences.

    Getting new employees productive quickly rests on four foundational elements: verifying the identity of every admin user, knowing the context of their access request, ensuring it comes from a clean source, and limiting both access and privilege. Taken together, these pillars form the foundation of Zero Trust Privilege.
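To make the adaptive-control idea above concrete, below is a minimal, hypothetical sketch of how a risk score might gate a privileged login with step-up authentication. The features, weights, thresholds and decision labels are illustrative assumptions: a simple rules-style stand-in rather than the machine learning models described above, and not any vendor’s actual implementation.

```python
# Hypothetical sketch: score a privileged access attempt and decide whether to
# allow it, require step-up MFA, or deny it. All weights/thresholds are invented.
from dataclasses import dataclass

@dataclass
class AccessAttempt:
    user_id: str
    device_known: bool           # has this device been seen for this user before?
    geo_distance_km: float       # distance from the user's usual login location
    hour_of_day: int             # 0-23, local time of the request
    resource_sensitivity: float  # 0.0 (low) to 1.0 (e.g. domain controllers)

def risk_score(attempt: AccessAttempt) -> float:
    """Toy risk score in [0, 1]; a real system would use a trained model."""
    score = 0.0
    if not attempt.device_known:
        score += 0.35
    score += min(attempt.geo_distance_km / 5000.0, 0.25)   # impossible-travel proxy
    if attempt.hour_of_day < 6 or attempt.hour_of_day > 22:
        score += 0.15                                       # off-hours access
    score += 0.25 * attempt.resource_sensitivity
    return min(score, 1.0)

def access_decision(attempt: AccessAttempt) -> str:
    """Map the score to allow / step-up MFA / deny, in line with a Zero Trust policy."""
    score = risk_score(attempt)
    if score < 0.3:
        return "allow"        # low risk: no extra friction for the admin
    if score < 0.7:
        return "step_up_mfa"  # medium risk: ask for one more factor
    return "deny"             # high risk: block and alert

if __name__ == "__main__":
    attempt = AccessAttempt("admin42", device_known=False, geo_distance_km=4200,
                            hour_of_day=3, resource_sensitivity=0.9)
    print(access_decision(attempt))  # prints "deny" for this high-risk attempt
```

The point of the sketch is the shape of the flow: score every attempt and add friction only when the context warrants it, which is what lets low-risk admin logins proceed without the lock-outs and repeated prompts described above.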

Conclusion

Organisations don’t have to sacrifice security for speed when they’re relying on machine learning-based approaches for improving the privileged user experience. Today, a majority of IT Help Desk teams, IT administrators, and admin users are freely sharing privileged credentials to be more productive, which often leads to breaches based on privileged access abuse. By taking a machine learning-based approach to validate every access request, the context of the request, and the risk of the access environment, roadblocks in the way of greater privileged user productivity disappear. Privileged credential abuse is greatly minimised.


Google Cloud scores FA digital transformation partnership


Connor Jones

31 May, 2019

The English Football Association (The FA) has partnered with Google Cloud to digitally transform its St. George’s Park national training centre used by 28 national teams.

Google Cloud is now the official cloud and data analytics partner to the FA, and during the multi-year partnership it aims to put G Suite at the heart of everything, shifting coaches of all the teams from siloed working to a more collaborative approach.

“The first step in our transformation at St. George’s Park was to unify the way our coaches train and develop our 28 national teams to increase productivity,” says Craig Donald, CIO at the FA. “We needed the ability to collaborate and share across the coaches and team managers. G Suite allowed us to do that and was the first part of our Google Cloud partnership.”

The FA has terabytes of data stored in Google Cloud, collected from tracking player activity, and the analysis team will use the tools provided by Google Cloud Platform, such as smart analytics, machine learning, AI and BigQuery, to unearth new insights from the data.
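As a rough illustration of the kind of analysis that toolset enables, the hedged Python sketch below queries a hypothetical player-tracking table with the BigQuery client library. The project, dataset, table and column names are invented; The FA’s actual Player Profile System schema is not public.

```python
# Hypothetical example: summarise recent training load per player in BigQuery.
# All identifiers (project, dataset, table, columns) are placeholders.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="example-fa-project")  # assumed project ID

query = """
    SELECT
      player_id,
      AVG(distance_covered_m) AS avg_distance_m,
      AVG(top_speed_kmh)      AS avg_top_speed_kmh
    FROM `example-fa-project.tracking.training_sessions`
    WHERE session_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 28 DAY)
    GROUP BY player_id
    ORDER BY avg_distance_m DESC
"""

for row in client.query(query).result():  # runs the query and waits for the rows
    print(row.player_id, round(row.avg_distance_m), round(row.avg_top_speed_kmh, 1))
```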

The organisation’s next step will be to build out its Player Profile System (PPS), a proprietary tool built on the platform, to measure performance, fitness, training and form of players at all levels.

The goal is to automate near real-time data analysis which will give the pitchside coaches a better indication as to how the players are performing in training, which could influence decisions such as player selection for matches.

The PPS will be further enhanced by Google Cloud smart analytics, data management systems and machine learning capabilities to analyse even more player data signals.

“Smart analytics and data management play a critical part in our PPS,” said Nick Sewell, the FA’s head of application development. “Everything we do at St George’s Park for this workload is built on Google Cloud.”

Over the multi-year partnership The FA aims to tackle three key areas:

  • Success: Preparing both men’s and women’s senior teams for the next World Cups.
  • Diversity: Doubling female participation in the game.
  • Inclusivity: Making football more inclusive and open to all.

“We believe technology is a key area of potential competitive advantage for our 28 teams and everything we do at St George’s Park,” said Dave Reddin, The FA’s head of team strategy and performance.

“We have progressively built a systematic approach to developing winning England teams and through the support of Google Cloud technology we wish to accelerate our ability to translate insight and learning into performance improvements.”

Salesforce launches blockchain platform for CRM


Keumars Afifi-Sabet

31 May, 2019

Salesforce is connecting a low-code blockchain platform with its customer relationship management (CRM) suite to open up new services and operations for its customers.

The cloud-powered software developer has launched the platform to allow companies to create blockchain networks, workflows and apps, in a way that’s easier and faster than traditional methods.

The Salesforce Blockchain platform is a low-code system built on Hyperledger Sawtooth, an open source blockchain technology, and is customised to fit with the company’s flagship Salesforce Lightning CRM product.

Beyond building networks, users can layer blockchain data above existing sales, service, or marketing workflows, and run artificial intelligence-powered algorithms to integrate this data into sales forecasts and other predictions.

Salesforce says that blockchain’s distributed ledger technology can help with authenticating and sharing data across multiple third parties, where traditionally this process has been clunky and slow. Principally, the company says it streamlines how transactions and documents are created and exchanged.

“Blockchain allows us to upend antiquated processes like these and rebuild them entirely with customers at the centre,” said Salesforce’s senior vice president for emerging technologies Adam Caplan.

“Data can securely flow beyond an organization’s four walls and be extended to partners. Every party in the blockchain network can verify and see each transaction in an open, transparent way.

“The information is secure, trusted, and – if the need arises – can be audited.”

Organisations across several industries can use the technology for conventional business processes like asset tracking, credentialing, and authentication of goods. Salesforce says that combining CRM with blockchain data can see firms devise new business processes and models across sales, marketing, and services.

A real-life application of Salesforce’s blockchain platform involves Arizona State University, which is using the system to design and create an education network that allows universities to verify and share information securely.

S&P Global Ratings, meanwhile, is using the service to reduce the time it takes to review and approve new business bank accounts by bringing together multiple reviews for greater transparency in this process.

The main problem Salesforce is aiming to tackle involves a greater need for businesses to harness and share massive amounts of data with an ever-growing network of partners and third parties – and to do so securely.

The firm therefore sees blockchain’s distributed ledger as a means of plugging any ‘trust gap’ that arises if companies fail to manage the increased costs and inefficiencies it says this process can introduce.

Salesforce is just the latest company to introduce a blockchain service, after its CEO Marc Benioff teased such a platform in April last year.

Amazon Web Services (AWS) and Microsoft have both released blockchain-powered services, with the former targeting the healthcare and finance sectors with its Blockchain as a Service (BaaS) templates released last year.

Salesforce Blockchain is currently available to select design partners ahead of its general release in 2020.

Linksys LAPAC2600C review: Easy cloud networking for small businesses


Dave Mitchell

31 May, 2019

An affordable Wave 2 AP that’s strong on performance and features

Price 
£183 exc VAT

Small businesses that want to move from standalone wireless networks to fully cloud-managed ones will love Linksys’ LAPAC2600C as it doesn’t get any easier. This Wave 2 wireless AP takes everything we like about the standard LAPAC2600 model and teams it up with Linksys’ Cloud Manager web portal. Its price even includes a 5-year subscription.

Signing up for a Cloud Manager account is easy: provide an email address for the designated owner, add a password, choose a domain name and create networks for each geographical location. Adding APs is equally swift. You provide each AP’s MAC address and serial number, which are found on the box and under the AP and, if you log in to its local web interface, can be copied and pasted from its system status page.

Before going further, we recommend visiting the portal’s main settings page and changing the default local admin password for all managed APs. Most AP settings aren’t available locally, but you can still log in and disable cloud management or change the AP’s LAN configuration.

Each AP takes 10 seconds to link up with Cloud Manager, and you can rename APs to something more meaningful. The portal’s overview page for the selected network provides a real-time graph of upload and download traffic for all clients, or the number of connections, and can be switched to show the last hour, day or week.

You can see the busiest clients and APs, wireless channel usage and a Google map showing the AP’s physical location. You can create an unlimited number of SSID profiles and up to eight can be assigned to each AP as ‘slots’.

Along with enabling encryption and SSID masking, you can decide which APs will broadcast the SSID and apply a single limit in Mbits/sec to overall upstream and downstream bandwidth usage. Client isolation stops users on the same SSID from seeing each other; you can also restrict the number of clients that can associate and enable 802.11k for fast roaming as users move around.

Zero-touch provisioning is achieved by creating a new network for the remote site, entering the AP’s details from the box, pre-assigning SSIDs to it and sending it to the remote location. All the user needs to do is unbox the AP and connect it to power and the internet, and it’ll do the rest.

The LAPAC2600C is a good performer, with real-world file copies over a 5GHz 11ac connection on a Windows 10 Pro desktop averaging 56MB/sec at close range, dropping to 53MB/sec at 10 metres. Coverage is also good, as the SweetSpots app on our iPad only registered a loss of signal after we got 44 metres down the main building corridor.

Each network in your account can have additional members added, who can either manage all settings or merely view them. There are no options to permit access to specific functions, but the account owner can add more users and grant them full portal access to all networks.

Guest wireless networks are swiftly created by enabling a captive portal (or ‘splash page’) on selected SSIDs, which is presented to users when they load a browser after associating. The page can be customised with a small company logo and AUP (acceptable use policy) of up to 1,024 characters, set to request a global password and used to redirect guests to a landing web page – possibly with a promotional message.

The LAPAC2600C delivers good wireless performance and features at a very reasonable price. Linksys’ cloud portal is basic but its extreme ease of use makes it ideal for small businesses that want hassle-free cloud managed wireless networks.

Hyperscaler cloud capex declines – but ‘enormous barriers’ remain to reach the top table

The spending of the cloud hyperscalers has come to a comparative halt, according to the latest note from Synergy Research.

The analyst firm found that in the first quarter of 2019 total capex across the largest cloud vendors came to just over $26 billion, a 2% downturn year on year. This excludes Google’s $2.4bn outlay on Manhattan real estate, which inflated the Q1 2018 figure even further; setting aside such exceptional items, it represents the first quarterly downturn since the beginning of 2017.

In terms of launches in 2019, Google was the most dominant vendor, opening its doors in Zurich in March and Osaka earlier this month. At the very beginning of this year, Equinix and Alibaba Cloud focused on Asia Pacific data centre launches, in Singapore and Indonesia respectively.

Last month Synergy argued that global spend on data centre hardware and software had grown by 17% compared with the previous year. This was naturally driven by continued demand for public cloud, with more extensive server configurations pushing up enterprise selling prices.

In order, the top five hyperscale spenders in the most recent quarter were Amazon, Google, Facebook, Microsoft and Apple.

“After racing to new capex highs in 2018 the hyperscale operators did take a little breather in the first quarter. However, though Q1 capex was down a little from 2018, to put it into context it was still up 56% from Q1 of 2017 and up 81% from 2016; and nine of the 20 hyperscale operators did grow their Q1 capex by double-digit growth rates year on year,” said John Dinsdale, a chief analyst at Synergy Research. “We do expect to see overall capex levels bounce back over the remainder of 2019.

“This remains a game of massive scale with enormous barriers for those companies wishing to meaningfully compete with the hyperscale firms,” Dinsdale added.


AWS launches Textract tool capable of reading millions of files in a few hours


Connor Jones

30 May, 2019

AWS has said that its Textract tool, designed to extract text and data from documents and convert it into usable formats, is now generally available to all customers.

The tool, which is a machine learning-driven feature of its cloud platform, lets customers autonomously extract data from documents and accurately convert it into a usable format, such as exporting contractual data into database forms.

The fully managed tool requires no machine learning knowledge to use and works with virtually any document. Industries that work with specialised document types, such as financial services, insurance and healthcare, will also be able to plug these into the tool.

Textract aims to expedite data entry, a laborious process that is often inaccurate even when using third-party software. Amazon claims it can accurately analyse millions of documents in “just a few hours”.

“Many companies extract text and data from files such as contracts, expense reports, mortgage guarantees, fund prospectuses, tax documents, hospital claims, and patient forms through manual data entry or simple OCR software,” the company said.

“This is a time-consuming and often inaccurate process that produces an output requiring extensive post-processing before it can be put in a format that is usable by other applications,” it added.

Textract takes data from scanned files stored in Amazon S3 buckets, reads them and returns data in JSON text annotated with the page number, section, form labels, and data types.
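As a rough sketch of that flow, the snippet below calls Textract’s synchronous analysis API on a scanned form held in S3 via boto3 and prints the detected lines of text. The bucket and object names are placeholders, and mapping the returned blocks into a database is left out.

```python
# Minimal sketch: ask Textract to analyse a scanned form stored in S3.
# Bucket, key and region are placeholders for illustration only.
import boto3

textract = boto3.client("textract", region_name="eu-west-1")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "example-bucket", "Name": "forms/claim-001.png"}},
    FeatureTypes=["FORMS", "TABLES"],  # request key-value pairs and table structure
)

# The response is JSON: a list of 'Blocks' (pages, lines, words, key-value sets, cells).
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block.get("Text", ""))
```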

PwC is already using the tool for its pharmaceutical clients, an industry that commonly uses processes that involve Food and Drug Administration (FDA) forms that would otherwise require hours to complete, according to Siddhartha Bhattacharya, director lead, healthcare AI at PwC.

“Previously, people would manually review, edit, and process these forms, each one taking hours,” he said. “Amazon Textract has proven to be the most efficient and accurate OCR solution available for these forms, extracting all of the relevant information for review and processing, and reducing time spent from hours down to minutes.”

The Met Office is another organisation that plans to implement Textract, making use of old weather records.

“We hope to use Amazon Textract to digitise millions of historical weather observations from document archives,” said Philip Brohan, climate scientist at the Met Office. “Making these observations available to science will improve our understanding of climate variability and change.”

Exposed business data rises by 50% to 2.3 billion files


Keumars Afifi-Sabet

30 May, 2019

More than 2.3 billion sensitive corporate documents, including customer data and passport scans, are thought to be sitting on publicly accessible online storage systems.

One year after researchers disclosed the scale of exposed business files hosted using technologies like the server message block (SMB) protocol and Amazon Web Services (AWS) S3 buckets, new findings reveal this figure has risen by approximately 750 million.

Data exposed via these misconfigured systems means companies across the world are at risk of handing data to cyber criminals and violating data protection laws, according to security research firm Digital Shadows, with 2,326,448,731 (2.3 billion) files exposed as of 16 May. This is in contrast with the 1.5 billion files detected in 2018.

Despite the steep rise in the total number of files left exposed, researchers did see a noticeable decline in the number of files being leaked through misconfigured AWS S3 buckets, which have in the past been responsible for some of the largest data leaks. Experian data on more than 120 million American households was exposed in 2017, while similar leaks also hit the NSA, WWE, Accenture and, most recently, a third party app built from Facebook data.

Due to changes made in November to the way S3 buckets are configured, researchers found only 1,895 exposed files on 16 May, compared with around 16 million before default encryption was added.

However, this is overshadowed by a dramatic rise in the number of files exposed through the SMB protocol, amounting to 1.1 billion, or roughly 48% of exposed business documents. This compares with 20% of files made public through misconfigured FTP services and 16% of the 2.3 billion documents exposed via rsync sites.

“Our research shows that in a GDPR world, the implications of inadvertently exposed data are even more significant,” said Photon Research analyst Harrison Van Riper.

“Countries within the European Union are collectively exposing over one billion files – nearly 50% of the total we looked at globally – some 262 million more than when we looked at last year.

“Some of the data exposure is inexcusable – Microsoft has not supported SMBv1 since 2014, yet many companies still use it. We urge all organizations to regularly audit the configuration of their public facing services.”

In their previous report, published last April, the researchers detected exposed data totalling 12,000TB hosted across S3 buckets, rsync sites, SMB servers, file transfer protocol (FTP) services, misconfigured websites (WebIndex), and network attached storage (NAS) drives. This volume of information was roughly 4,000 times greater than the Panama Papers leak three years ago.

The first set of findings were based on files detected during a three-month window between January and the end of March 2018, while their latest report has extended the observation window to between April 2018 and mid-May 2019.

Based on their most recent findings, researchers are particularly worried about a “troubling” rise in files exposed through SMB-enabled file shares, partially because they’re “not entirely sure why that’s the case”.

One potential indicator could be that AWS Storage Gateway added SMB support in June 2018, allowing file-based apps developed for Windows an easy way to store objects in S3 buckets. But the greater concern centres on ransomware, with more than 17 million ransomware-encrypted files detected across various file stores.

Elsewhere, the researchers discovered a variety of sensitive data exposed through misconfigured systems, including one server that contained all the necessary information an attacker would need to commit identity theft. The FTP server held job applications, personal photos, passport scans, and bank statements. All this data was publicly available.

Another example centred on medical data, with 4.7 million medical-related files exposed across the file stores the researchers analysed. The majority of these were medical imaging files, which doubled in volume from 2.2 million last year to 4.4 million today.

In light of its findings, Digital Shadows has advised organisations to use the Amazon S3 ‘Block Public Access’ setting to limit public exposure of buckets that are intended to be private. Logging should also be enabled to monitor for any unwanted access or potential exposure points.
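For teams acting on that advice, a minimal boto3 sketch of both steps is shown below; the bucket names are placeholders, and the equivalent switches are also available in the S3 console and CLI.

```python
# Sketch: apply S3 'Block Public Access' and enable access logging on a bucket.
# Bucket names are placeholders for illustration.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-private-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict public and cross-account access
    },
)

# Turn on server access logging so any unwanted access attempts are recorded.
s3.put_bucket_logging(
    Bucket="example-private-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-logging-bucket",
            "TargetPrefix": "access-logs/",
        }
    },
)
```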

Researchers have also advised businesses to disable SMBv1 and update to SMBv2 or v3 for systems which require the protocol. IP whitelisting, too, should be used to enable only authorised systems to access the storage systems.

NAS drives, as with FTP servers, should be placed internally behind a firewall with access control lists implemented to prevent unauthorised access.

Why the real multi-cloud motivator is choice – rather than lock-in

Multi-cloud is one of the biggest initiatives for enterprises today; however, if you analyse the justification for this trend, so much of it is driven by fear — fear of lock-in, fear of outages, fear of cost. And while those concerns may be valid, the enterprises that are realising the value of multi-cloud have an even more compelling reason for it: choice.

A few years ago, battle lines were drawn. For the most part you either used AWS, Azure, or GCP in addition to your data centre. People argued over which cloud provider was better for the business. “Cloud X is more reliable.” “Cloud Y caters to developers.” “Cloud Z has better tooling.” And switching cloud providers was a big deal. It usually meant there was a problem or some kind of secretive, exclusive business deal. More and more enterprises started picking sides, and then Adobe announced that it would use Azure and AWS as complements to each other.

This decision wasn’t because Adobe had a bad experience with, or fear of, one cloud over another. It was because Adobe viewed two clouds as better than one. Each cloud provider boasts tens of thousands of engineers who deliver over 1,000 new features each year. Why not put all those resources to work for your business? Adobe wasn’t hedging its bets by choosing both; it was simply the smartest business move for the company.

In my work with Adobe, it is abundantly clear that these clouds are not used to compete with each other (i.e. to avoid lock-in); rather, they are used to complement one another. In fact, both Azure and AWS boast about Adobe as a featured customer.

A good analogy is streaming video. Many people now subscribe to multiple streaming services such as Netflix, Hulu, YouTube TV, HBO Now, and Amazon Prime. But no one subscribes to Hulu because they’re worried about getting locked in with Netflix. They pay for both because they want to watch the unique programming (Stranger Things on Netflix) and features (live TV on Hulu) offered by both services. They want choice.

The same is true of enterprises and cloud providers. Each of the big three cloud providers has a different geographic footprint and unique offerings, ranging from robust security to AI and machine learning to Kubernetes and microservices specialisation. And each cloud provider is always innovating, so their offerings are constantly expanding and getting better.

However, the conversation about multi-cloud remains fixated on lock-in and cost — “what if you’re locked into cloud X and they raise prices?” But cost isn’t the driver for the cloud; it has always been speed and flexibility. Even when Netflix and Hulu raise their prices, subscribers don’t flee (and in fact Netflix feels so confident about this that it just raised its prices again). The reason is that subscribers don’t buy these services on cost alone; they’re willing to pay for multiple services to preserve choice.

The fear, uncertainty, and doubt (FUD) narrative about lock-in is not very relevant when considering a multi-cloud strategy. Choice is the most important reason. As simple as that distinction may be, it is the difference between having a reactive multi-cloud strategy, driven by fear and uncertainty, and a proactive multi-cloud strategy that delivers on the promise of speed and flexibility through choice.


Doubling down on disaster recovery-as-a-service – for business continuity and beyond

The prospect of an IT outage is one of the key issues that keeps IT professionals awake at night. In the past two years, 93% of organisations have experienced tech-related business disruption and, as a result, one out of five experienced major reputational damage and permanent loss of customers. From natural disasters to malicious cyber-attacks, organisations face an abundance of risks to business continuity that impact productivity, prosperity and reputation. Disaster recovery-as-a-service (DRaaS) is a mainstream use of the cloud that helps protect against outages through an infrastructure and strategy that deals with worst-case scenarios.

The benefits of cloud DRaaS over on-premise disaster recovery are well-documented. Companies don’t have to double their infrastructure investments and run parallel systems as a backup. DRaaS also offers better protection against threats such as natural disasters because there’s no physical infrastructure to protect. DRaaS is easily scalable to grow with businesses, and it offers native high availability.

A successful DRaaS implementation requires the right cloud service provider (CSP) to help develop a DR plan that meets business objectives. Here are a few pointers to help you select a CSP that fits your needs.

Factors to consider when choosing a DRaaS CSP

When choosing a CSP for DRaaS, you trust them to protect your business during the worst possible scenario. So, you need to be clear on the capabilities and the SLAs they will deliver. This includes weighing costs as part of your budget planning and reviewing regulatory and compliance factors.

With cloud services you pay only for what you consume, which is undeniably preferable to paying for on-premise systems that may never be used. Nevertheless, CSP pricing models can be complex, making it important to know the cost implications should you need to fully failover your production environment to the cloud.

Your CSP should size your environment accurately (at iland we use a tool called Catalyst to do this) for sufficient storage and resources to avoid any nasty surprises in the event of a failover. This also ensures straightforward and transparent pricing so you’re clear on the true costs of your business continuity programme.

In a disaster scenario, your IT team will be stretched and under pressure. It helps if the DR environment you choose is based on familiar structures and terminology. For example, many IT administrators are familiar with VMware, so a VMware-based cloud product that uses the same toolsets and terminology reduces the learning curve and helps them respond faster during a disaster.

It’s also important to know how much of your DR set-up will be a DIY exercise and how much support you can expect from your CSP. Will it be a concierge onboarding service, or do you need to scope extra internal resources or additional consultancy to manage set-up? Look at the support level the CSP commits to provide.  Could you ask them to press the failover button if they had to? Will they assist with failing back when the time comes?

Management is another critical factor. One of the benefits of cloud DRaaS is that in-house teams don’t have a second on-premise environment to manage. The environment is replicated without adding to the team’s administrative burden. However, visibility into the DR environment is essential and needs to be simple. Find out how your team will oversee the DR environment and what tools they will use to troubleshoot issues. Are they intuitive, or do you need to budget time and resources for training?

Finally, you need assurance that your backup environment is compliant with industry regulations, to prevent data vulnerabilities that can compromise your customers and your business. Whatever requirements your business has to meet – HIPAA, GDPR and so on – your DRaaS provider needs to guarantee compliance as well.

Automation and orchestration for testing DR plans

DRaaS solutions provide facilities to test DR plans without impacting the production environment. Incredibly, many organisations are still reluctant or even afraid to test their DR plans.

With cloud DRaaS, teams can run recovery tests in replica environments in a short time, generating a full report to detail the performance of every part of the DR plan and recovery orchestration. This gives full visibility into whether or not a business can come back online during a disaster and the order in which applications will recover. Testing in this manner is much more effective than annual testing of on-premise systems and it helps businesses develop a full disaster recovery plan with absolute confidence it will work when needed.

Added value from cloud DRaaS

Beyond its primary purpose of disaster recovery, businesses can double down on their DRaaS investment with a replica virtual environment to support on-demand security testing, system upgrades, patch testing and user acceptance testing without disrupting their production environments. The replica environment contains all the quirks and eccentricities of a live environment for a more thorough testing before going live.

Having a sound disaster recovery plan in place gives peace of mind to IT professionals. Selecting the right CSP to deliver DRaaS provides added comfort and confidence, even if that disaster never happens.


Calculating the Kube roots: Why 2019’s KubeCon represented a milestone for the industry

The latest iteration of KubeCon and CloudNativeCon, which took place in Barcelona last week, felt like something of a milestone – and not one shoehorned in for marketing purposes, either.

It is true however that Kubernetes came into being five years ago this June, so for those at Google Cloud, it was a time of reflection. From the acorn which was Eric Brewer’s presentation at Dockercon 2014, a veritable forest has grown. “We’re delighted to see Kubernetes become core to the creation and operation of modern software, and thereby a key part of the global economy,” wrote Brian Grant and Jaice Singer DuMars of Google Cloud in a blog post.

“Like any important technology, Kubernetes has become about more than just itself; it has positively affected the environment in which it arose, changing how software is deployed at scale, how work is done, and how corporations engage with big open source projects,” Grant and DuMars added.

2019’s KubeCon saw a smattering of news which represented a sense of continued maturation. In other words, various cloud providers queued up to boast about how advanced their Kubernetes offerings were. OVH claimed it was the only European cloud provider to offer Kubernetes deployment on multiple services, for instance, while DigitalOcean unveiled its managed Kubernetes service, now generally available. From Google’s side, with its Google Kubernetes Engine (GKE) managed service toolset, new products were made available, from greater control over releases, to experimentation with Windows Server containers – of which more later.

These are all clues which point to how Kubernetes has evolved since 2014 – and will continue to do so.

Analysis: The past, present and future

At the start of this year, for this publication’s 2019 outlook, Lee James, CTO EMEA at Rackspace, put it simply: “I will call it and say that Kubernetes has officially won the battle for containers orchestration.”

If 2018 was the year that the battle had been truly won, 2017 was where most of the groundwork took place. At the start of 2017, Google and IBM were the primary stakeholders; Google of course developed the original technology, while IBM held a close relationship with the Cloud Native Computing Foundation (CNCF), with IBM VP Todd Moore chairing the CNCF’s governing board. By the year’s end, Amazon Web Services, Microsoft, Salesforce and more had all signed up with the CNCF. Managed services duly followed.

Last year saw Kubernetes become the first technology to ‘graduate’ from the CNCF. While monitoring tool Prometheus has since joined it, it was a key milestone. Graduation was recognition that Kubernetes had achieved business-grade competency, with an explicitly defined project governance and committer process and solid customer credentials. According to Redmonk at the time, almost three quarters (71%) of the Fortune 100 were using containers in some capacity.

One of the key reasons why this convergence occurred was due to the business case associated with the technology becoming much more palatable. Docker first appeared on the scene in 2013 with containerised applications promising easier management and scalability for developers. Many enterprises back then were merely dipping their toes into the cloud ecosystem, agonising between public and private deployments, cloud-first eventually moving to cloud-only.

As the infrastructure became better equipped to support it, the realisation dawned that businesses needed to become cloud-native, with hybrid cloud offering the best of both worlds. More sophisticated approaches followed, as multiple cloud providers were deployed across an organisation’s IT stack for different workloads, be they identity, databases, or disaster recovery.

This need for speed was, of course, catnip for container technologies – and as Ali Golshan, co-founder and CTO at StackRox wrote for this publication in January: “Once we started using containers in great volume, we needed a way to automate the setup, tear down, and management of containers. That’s what Kubernetes does.”
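To ground that quote, here is a brief, hedged sketch of the declarative automation Kubernetes provides, using the official Python client to ask a cluster to run and maintain three copies of a container. The names and image are placeholders, and a reachable cluster with a working kubeconfig is assumed.

```python
# Toy illustration of Kubernetes' declarative automation: declare a Deployment
# and let the cluster handle setup, scaling and rescheduling of its containers.
# Names and image are placeholders; requires a reachable cluster and kubeconfig.
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # use the current kubectl context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="example-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three replicas running at all times
        selector=client.V1LabelSelector(match_labels={"app": "example-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "example-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Tear-down is just as declarative:
# apps.delete_namespaced_deployment(name="example-web", namespace="default")
```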

The Docker story is an interesting one to tie up. The company had a presence at this year’s KubeCon, announcing an extension of its partnership with Tigera around support for Kubernetes on Windows in Docker Enterprise. Consensus across many in the industry was that Docker had simply run its course. At the end of 2017, Chris Short, ambassador at the CNCF – though he was swift to point out this was not the foundation’s opinion – wrote a piece headlined simply “Docker, Inc is Dead.” In October of that year, Docker announced it was supporting Kubernetes orchestration. Short added that ‘Docker’s doom [had] been accelerated by the rise of Kubernetes.’

One area of potential, however, is Windows. In December Docker announced a collaboration with Microsoft in what was dubbed a ‘container for containers’: a cloud-agnostic tool aimed at packaging and running distributed applications and enabling a single all-in-one packaging format. Kubernetes 1.14 brought about support for Windows nodes, and Google referenced this in its Windows Server offering for GKE. “We heard you – being able to easily deploy Windows containers is critical for enterprises looking to modernise existing applications and move them towards cloud-native technology,” the company wrote.

Docker secured $92 million in new funding in October. As TechCrunch put it, “while Docker may have lost its race with Kubernetes over whose toolkit would be the most widely adopted, the company has become the champion for businesses that want to move to the modern hybrid application development and information technology operations model of programming.”

This is where things stand right now. As for the future, more use cases will come along and, much like cloud has become, Kubernetes will stop being spoken of and just ‘be’. “Kubernetes may be most successful if it becomes an invisible essential of daily life,” wrote Grant and DuMars. “True standards are dramatic, but they are also taken for granted… Kubernetes is going to become boring, and that’s a good thing, at least for the majority of people who don’t have to care about container management.”

“In other ways, it is just the start,” the two added. “New applications such as machine learning, edge computing, and the Internet of Things are finding their way into the cloud-native ecosystem. Kubernetes is almost certain to be at the heart of their success.”
