AWS ramps up SageMaker tools at Re:Invent


Bobby Hellard

4 Dec, 2019

CEO Andy Jassy announced a barrage of new machine learning capabilities for AWS SageMaker during his Re:Invent keynote on Tuesday.

SageMaker is Amazon’s big machine learning hub that aims to remove most of the heavy lifting for developers and let them use ML more expansively. Launched in 2017, the service has gained numerous features and capabilities over the years, with more than 50 added in 2019 alone.

Of the SageMaker announcements made at the company’s annual conference in Las Vegas, the biggest was AWS SageMaker Studio, an IDE that allows developers and data scientists to build, code, train and tune machine learning workflows in a single interface. Within the Studio, information can be viewed, stored and collected, and used to collaborate with others.

In addition to SageMaker Studio, the company announced a further five new capabilities: Notebooks, Experiment Management, Autopilot, Debugger and Model Monitor.

AWS SageMaker Studio interface

The first of these is described as a ‘one-click’ notebook with elastic compute.

“In the past, notebooks were frequently where data scientists would work, and each was associated with a single EC2 instance,” explained Larry Pizette, global head of the Amazon ML Solutions Lab. “If a developer or data scientist wanted to switch capabilities, so they wanted more compute capacity, for instance, they had to shut that down and instantiate a whole new notebook.

“This can now be done dynamically, in just seconds, so they can get more compute or GPU capability for doing training or inference, so it’s a huge improvement over what was done before.”
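For context, the workflow Pizette describes replacing looks roughly like the sketch below, which uses the classic notebook-instance APIs in boto3. The notebook name and instance type are hypothetical placeholders, and this is a sketch of the old resize path rather than the new Studio experience.

```python
# Illustrative sketch of the pre-Studio workflow: resizing a classic SageMaker
# notebook instance meant stopping it, updating the instance type, then
# starting it again. The notebook name and instance type are placeholders.
import boto3

sm = boto3.client("sagemaker")
NOTEBOOK = "my-notebook"  # hypothetical notebook instance name

sm.stop_notebook_instance(NotebookInstanceName=NOTEBOOK)
sm.get_waiter("notebook_instance_stopped").wait(NotebookInstanceName=NOTEBOOK)

# Switch to a GPU-backed instance type for heavier training or inference work
sm.update_notebook_instance(NotebookInstanceName=NOTEBOOK, InstanceType="ml.p3.2xlarge")

sm.start_notebook_instance(NotebookInstanceName=NOTEBOOK)
sm.get_waiter("notebook_instance_in_service").wait(NotebookInstanceName=NOTEBOOK)
```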

All of the updates to SageMaker serve a specific purpose: simplifying machine learning workflows. Experiment Management, for example, enables developers to visualise and compare ML model iterations, training parameters and outcomes.

Autopilot lets developers submit simple data in CSV files and have ML models generated automatically, while SageMaker Debugger provides real-time monitoring for ML models to improve predictive accuracy and reduce training times.
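As a rough illustration of the Autopilot workflow, the sketch below kicks off an AutoML job on a CSV stored in S3 using boto3; the job name, bucket paths, target column and role ARN are all hypothetical placeholders, and the exact parameters may differ from what a production Autopilot job requires.

```python
# Hedged sketch: launching a SageMaker Autopilot (AutoML) job on a CSV in S3.
# All names, paths and the role ARN below are hypothetical placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/churn/train.csv",
            }
        },
        "TargetAttributeName": "churned",  # the column Autopilot should learn to predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/churn/output/"},
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
)

# Autopilot explores candidate pipelines automatically; poll for status.
status = sm.describe_auto_ml_job(AutoMLJobName="churn-autopilot-demo")["AutoMLJobStatus"]
print(status)
```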

And finally, Amazon SageMaker Model Monitor detects concept drift to discover when the performance of a model running in production begins to deviate from the original trained model.

“We recognised that models get used over time and there can be changes to the underlying assumptions that the models were built with – such as housing prices, which inflate,” said Pizette. “If interest rates change it will affect the prediction of whether a person will buy a home or not.”

“When the model is initially built it keeps statistics, so it will notice what we call ‘concept drift’. If that concept drift is happening and the model gets out of sync with the current conditions, it will identify where that’s happening and provide the developer or data scientist with the information to help them retrain and retool that model.”
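Model Monitor’s internals aren’t spelled out here, but the underlying idea Pizette describes, comparing the statistics captured at training time against what the live model is seeing, can be illustrated with a toy drift check. The interest-rate numbers below are made up purely for demonstration and this is not Model Monitor’s actual implementation.

```python
# Illustrative only -- not Model Monitor's actual implementation. Compare a
# feature's training-time distribution against what production traffic looks
# like now, and flag a statistically significant shift as possible drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
training_rates = rng.normal(loc=3.5, scale=0.3, size=10_000)    # baseline captured at training time
production_rates = rng.normal(loc=4.4, scale=0.3, size=1_000)   # rates observed in production later

statistic, p_value = ks_2samp(training_rates, production_rates)
if p_value < 0.01:
    print(f"Possible concept drift (KS statistic {statistic:.3f}); consider retraining the model.")
```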

Verizon unveils 5G edge compute service at Re:Invent


Bobby Hellard

4 Dec, 2019

AWS and Verizon have partnered to deliver cloud computing services at the edge using 5G connectivity.

The deal will see Amazon’s cloud processing brought closer to mobile devices at the edge thanks to Verizon’s 5G Ultra Wideband Network and AWS Wavelength.

Speaking during an AWS keynote on Tuesday, Verizon’s CEO Hans Vestberg said that his company was “the first in the world to offer 5G network edge computing”.

However, this announcement comes a week after Microsoft and AT&T revealed their own integrated 5G edge computing service on Azure.

AWS and Verizon are currently piloting AWS Wavelength on Verizon’s edge compute platform, 5G Edge, in Chicago for a select group of customers, including video game publisher Bethesda Softworks and the NFL. Additional deployments are planned in other locations across the US for 2020.

“We’ve worked closely with Verizon to deliver a way for AWS customers to easily take advantage of the ubiquitous connectivity and advanced features of 5G,” said Jassy.

“AWS Wavelength provides the same AWS environment – APIs, management console, and tools – that they’re using today at the edge of the 5G network. Starting with Verizon’s 5G network locations in the US, customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”

The aim is to enable developers to deliver a wide range of transformative, latency-sensitive use cases such as smart cars, IoT and augmented and virtual reality, according to AWS. The service will also be coming to Europe via Vodafone sometime in 2020.

“Vodafone is pleased to be the first telco to introduce AWS Wavelength in Europe,” said Vinod Kumar, CEO of Vodafone Business. “Faster speeds and lower latencies have the potential to revolutionise how our customers do business and they can rely on Vodafone’s existing capabilities and security layers within our own network.”

AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

“If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates,” wrote The Atlantic’s Derek Thompson in October, “you’ve interacted with seven companies that will collectively lose nearly $14 billion this year.”

It is a well-worn line, and as WeWork’s collapse showed, there is plenty of pushback when it comes to the gig economy champions. Yet at the start of his re:Invent keynote today, Amazon Web Services (AWS) CEO Andy Jassy cited Uber, Lyft and Postmates, as well as Airbnb, as examples of the overall keynote theme around transformation. “These startups have disrupted longstanding industries that have been around for a long time from a standing start,” said Jassy.

An eyebrow-raising opening, perhaps. Yet, backed by the re:Invent band once more with half a dozen songs ranging from Van Halen to Queen – AWS has heard of the former even if Billie Eilish hasn’t – the rationale was straightforward. If you’re making a major transformation, then you need to get your ducks in a row; senior leadership needs to be on board, with top-down aggressive goals and sufficient training.

“Once you decide as a company that you’re going to make this transition to the cloud, your developers want to move as fast as possible,” said Jassy. This begat the now-standard discussion around the sheer breadth of services available to AWS customers – more than 175 at the most recent count – with Jassy noting that certain unnamed competitors were ‘good at being checkbox heroes’ but little else.

This was not the only jibe the AWS chief exec landed on the opposition. From transformation, another key element for discussion was modernisation. This was illustrated by a ‘moving house’ slide whose message was self-explanatory. Jassy took extra time to point out the mainframe and audit notices. While IBM and particularly Oracle have been long-term targets, the Microsoft box is an interesting addition. Jassy again noted AWS’ supremacy with regard to Gartner’s IaaS Magic Quadrant – adding that the gap between AWS and Microsoft was getting bigger.

Last year, the two big headlines were around blockchain and hybrid cloud. Amazon Managed Blockchain did what it said on the tin, but AWS Outposts aimed to deliver a ‘truly consistent experience’ by bringing AWS services, infrastructure and operating models to ‘virtually any’ on-prem facility. Google Cloud’s launch – or relaunch – of Anthos was seen as a move in the same vein, while Azure Arc was seen by industry watchers as Microsoft’s response.

This is pertinent, as plenty of the product updates could be seen as an evolution of 2018’s re:Invent announcements. Instead of storage, Jassy this time focused on compute: instances and containers.

One piece of news did leak out last week around AWS building a second-generation custom server chip – and this was the first announcement Jassy confirmed. The M6g, R6g, and C6g Instances for EC2 were launched, based on the AWS Graviton2 processors. “These are pretty exciting, and they provide a significant improvement over the first instance of the Graviton chips,” said Jassy. Another instance launch represented a similar upgrade: while AWS Inferentia was launched last year as a high-performance machine learning inference chip, this year saw Inf1 Instances for EC2, powered by Inferentia chips.

On the container side, AWS expanded its offering with Amazon Fargate for Amazon EKS. Again, the breadth of options available to customers was emphasised: Elastic Container Service (ECS) and EKS, or Fargate, or a mix of both. “Your developers don’t want to be held back,” said Jassy. “If you look across the platform, this is the bar for what people want. If you look at compute, [users] want the most number of instances, the most powerful machine learning inference instances, GPU… biggest in-memory… access to all the different processor options. They want multiple containers at the managed level as well as the serverless level.

“That is the bar for what people want with compute – and the only ones who can give you that is AWS.”

Jassy then moved to storage and databases, but did not stray too far from his original topic. Amazon Redshift RA3 Instances with Managed Storage enable customers to separate storage from compute, while AQUA (Advanced Query Accelerator) for Amazon Redshift flips the equation entirely: instead of moving the storage to the compute, users can now move compute to the storage. “What we’ve built with AQUA is a big high-speed cache architecture on top of S3,” said Jassy, noting it ran on a souped-up Nitro chip and custom-designed FPGAs to speed up aggregations and filtering. “You can actually do the compute on the raw data without having to move it,” he added.

Summing up the database side, the message was not simply one of breadth, but one that noted how a Swiss Army knife approach would not work. “If you want the right tool for the right job, that gives you different productivity and experience, you want the right purpose-built database for that job,” explained Jassy. “We have a very strong belief inside AWS that there is not one tool to rule the world. You should have the right tool for the right job to help you spend less money, be more productive, and improve the customer experience.”

While various emerging technologies were announced and mentioned in the second half of last year’s keynote, the big gotcha arrived the day before. Amazon Braket, in preview today, is a fully managed AWS service which enables developers to begin experimenting with computers from quantum hardware providers in one place, while a partnership has been put in place between Amazon and the California Institute of Technology (Caltech) to collaborate on the research and development of new quantum technologies.

On the machine learning front, AWS noted that 85% of TensorFlow running in the cloud runs on its platform. Again, the theme remained: not just every tool for the job, but the right tool. AWS research noted that 90% of data scientists use multiple frameworks, including PyTorch and MXNet. AWS subsequently has distinct teams working on each framework.

For the pre-keynote products, as sister publication AI News reported, health was a key area. Transcribe Medical is set to be used to move doctors’ notes from barely legible script to the cloud, and is aware of medical speech as well as standard conversation. Brent Shafer, the CEO of Cerner, took to the stage to elaborate on ML’s applications for healthcare.

With regard to SageMaker, SageMaker Operators for Kubernetes was previously launched to let data scientists using Kubernetes train, tune, and deploy AI models. In the keynote, Jassy also introduced SageMaker Notebooks and SageMaker Experiments as part of a wider Studio suite. The former offered one-click notebooks with elastic compute, while the latter allowed users to capture, organise and search every step of building, training, and tuning their models automatically. Jassy said the company’s view of ML ‘continued to evolve’, while CCS Insight VP enterprise Nick McQuire said from the event that these were ‘big improvements’ to AWS’ main machine learning product.

With the Formula 1 season coming to a close at the weekend, the timing was good to put forth the latest in the sporting brand’s relationship with AWS. Last year, Ross Brawn took to the stage to expand on the partnership announced a few months before. This time, the two companies confirmed they had worked on a computational fluid dynamics project; according to the duo, more than 12,000 hours of compute time were utilised to help car design for the 2021 season.

Indeed, AWS’ strategy has been to soften industry watchers up with a few nice customer wins in the preceding weeks before hitting them with a barrage at the event itself. This time round, November saw Western Union come on board, naming AWS its ‘long-term strategic cloud provider’, while the Seattle Seahawks became the latest sporting brand to move to Amazon’s cloud with machine learning expertise, after NASCAR, Formula 1 and the LA Clippers among others.

At the event itself, the largest customer win was Best Western Hotels, which is going all-in on AWS’ infrastructure. This is not an idle statement, either: the hotel chain is adopting AWS across the board, from analytics and machine learning to the standard database, compute and storage services, as well as consultancy.

This story may be updated as more news breaks.


AWS plugs leaky S3 buckets with CloudKnox integration


Bobby Hellard

3 Dec, 2019

AWS has launched a new tool to help customers avoid data leaks within its simple storage service.

The AWS IAM Access Analyzer is a new function that analyses resource policies to help administrators and security teams protect their resources from unintended access.

It comes from an integration with CloudKnox, a company that specialises in hybrid cloud access management.

It’s a strategic integration designed to protect organisations against unintended access to critical resources and mitigate the risks they face, such as overprivileged identities, according to Balaji Parimi, CEO of CloudKnox.

“Exposed or misconfigured infrastructure resources can lead to a breach or a data leak,” he said. “Combining AWS IAM Access Analyzer’s automated policy monitoring and analysis with CloudKnox’s identity privilege management capabilities will make it easier for CloudKnox customers to gain visibility into and control over the proliferation of resources across AWS environments.”

Amazon S3 is one of the most popular cloud storage services, but because of human error, it’s historically been a bit of a security liability, according to Sean Roberts, GM of Cloud Business Unit at hybrid managed services provider Ensono.

“Over the last few years, hundreds of well-known organisations have suffered data breaches as a direct result of an incorrect S3 configuration — where buckets have been set to public when they should have been private,” he said.

“When sensitive data is unintentionally exposed online, it can damage an organisation’s reputation and lead to serious financial implications. In real terms, this sensitive data is often usernames and passwords, compromising not only the business but its customers too.”

In July, more than 17,000 domains were said to have been compromised in an attack launched by the prolific hacking group Magecart that preyed on leaky S3 buckets. Looking back over the last two years, companies and organisations including NASA, Dow Jones and even Facebook have suffered breaches stemming from misconfigured S3 buckets.

With the Access Analyzer, there’s a new option in the IAM (Identity and Access Management) console. The tool alerts customers when a bucket is configured to allow public access or access to other AWS accounts, and there is also a single-click option to block public access.
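For a sense of what those two console features correspond to programmatically, the sketch below lists Access Analyzer findings and applies S3’s block-public-access settings with boto3. The analyzer ARN and bucket name are hypothetical placeholders, and this is an approximation of the console actions rather than their exact implementation.

```python
# Hedged sketch: surfacing Access Analyzer findings and blocking public
# access on a bucket with boto3. The ARN and bucket name are placeholders.
import boto3

analyzer = boto3.client("accessanalyzer")
findings = analyzer.list_findings(
    analyzerArn="arn:aws:access-analyzer:us-east-1:123456789012:analyzer/example-analyzer"
)
for finding in findings.get("findings", []):
    print(finding.get("resource"), finding["status"])

# Roughly what the console's one-click "block public access" option does.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-leaky-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```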

HPE takes on public cloud with GreenLake Central


Jane McCallion

3 Dec, 2019

GreenLake, HPE’s as-a-service initiative, now has a new component: GreenLake Central.

The product is designed to offer IT departments a similar experience controlling and provisioning on-premises and hybrid IT as they would expect when using a public cloud service.

GreenLake Central, like many of the other offerings that fall under the GreenLake “as a service” umbrella, was created in response to the acknowledgement that public cloud doesn’t serve all requirements, particularly in large enterprises.

“Part of what we are seeing within hybrid is that our clients have moved all the easy stuff off to public cloud, and there’s been a bit of a stall, especially for a bunch of the legacy applications, whether that’s because of regulatory issues, or data gravity issues, or application dependency complexity type issues,” Erik Vogel, global vice president for customer experience for HPE GreenLake, told Cloud Pro.

“What we’re providing… is a consistent experience. We’ve taken the traditional GreenLake and really enhanced it to look and feel like the public cloud. So we have now shifted that into making GreenLake operate in a way that our customers are used to getting from AWS or Azure,” he added.

HPE has also incorporated some additional capabilities, such as delivering “EC2-like functionality” to provision and de-provision capacity within a customer’s own data centre on top of their GreenLake Flex Capacity environment.

It has also bundled in some managed service capabilities to help manage a hybrid IT environment. This includes, for example, controlling cost and compliance, capacity management, and public cloud management.

“Very soon we’ll be offering the ability to point and click and add more capacity,” Vogel added. “So if they want to increase the capacity within their environment, rather than having to pick up the phone and call a seller and go through that process, they will be able to drive those purchase acquisitions through a single click within the portal, again being able to manage capacity, see their bills, see what they’re using effectively and what they’re not using effectively.”

GreenLake Central is in the process of being rolled out in beta to 150 customers, and will be generally available to all HPE GreenLake customers in the second half of 2020.

What to expect from AWS Re:Invent 2019


Bobby Hellard

2 Dec, 2019

The card tables of Las Vegas will have to make way for cloud computing this week, as AWS is in town for Re:Invent 2019.

Last year the conference took place across eight venues, with some 50,000 cloud computing enthusiasts descending on the city of sin. There was a lot to take in, too: from new blockchain and machine learning services to satellite data programmes, the announcements came thick and fast.

AWS is the leading provider of public cloud services, so naturally its annual conference is gargantuan. It can’t afford to rest on its laurels, though, as its rivals are building up their own offerings and gaining on Amazon fast. In just the last year, Microsoft has invested heavily in Azure with a string of acquisitions for migration services, IBM has changed focus with its massive Red Hat deal, and Google is ploughing so much into its cloud that it’s hurting Alphabet’s earnings.

This is without mentioning the biggest cloud computing deal of the last two years, the Pentagon’s JEDI contract, being awarded to Microsoft, despite AWS being the clear favourite for much of the bidding.

Bearing all this in mind, I do expect to see a slew of new products and services unveiled throughout the week.

Expansive keynotes

AWS tends to do marathon keynotes that run on for three hours and overflow with announcements. Last year CEO Andy Jassy fired through new products and special guest customers with the same stamina that saw Eliud Kipchoge recently break the world record over 26.2 miles.

Jassy is, of course, back again this year, for not one but two keynotes. On Tuesday he will deliver his main opening day presentation, while on Wednesday he will join the head of worldwide channels and alliances, Doug Yeum, for a fireside chat.

CTO Werner Vogels will be on stage on Thursday with an in-depth explainer on all the new products. This two-hour deep dive is definitely one for the diehard fans of cloud architecture, with all the technical underpinning you crave. For the frugal, get there early — the first 1,000 guests in the keynote line will get a special piece of “swag”, according to the website.

Machine Learning

Last year, machine learning and the AWS Marketplace took precedence and 2019’s event should hold more of the same. Recently, the company announced the launch of the AWS Data Exchange, a new hub for partners to share large datasets that customers can use for their machine learning and analytical programmes.

The customer element is key for AWS, as it often integrates and shares these innovations. Last year, head of Formula 1 Ross Brawn joined Jassy on stage during the keynote and showcased what his sport had done with AWS SageMaker and other machine learning services. Interestingly, the basic idea for the prediction models they used came from a London-based startup called Vantage Power, which developed the technology to predict the lifespan of electric batteries in buses.

Doubtless there will be some kind of machine learning update, but what it is could depend on what AWS customers have innovated. Last year the company announced a partnership with NFL app Next Gen Stats, the automation of NASCAR’s video library and multiple services with US-based ride-hailing firm Lyft. Vegas is all about gambling, but it’s a safe bet that at least one of these companies will be in attendance to talk through case studies.

AWS goes all-in on quantum computing


Bobby Hellard

3 Dec, 2019

AWS unveiled its plans to aid and accelerate the research of quantum computing at its Re:Invent conference on Monday.

The cloud giant announced three new services for testing, researching and experimenting with the technology.

The first of these was Amazon Braket, a service that enables scientists, researchers, and developers to begin experimenting with computers from quantum hardware providers in a single place.

To go with Braket is the AWS Centre for Quantum Computing, which brings together quantum computing experts from Amazon, the California Institute of Technology (Caltech) and other academic research institutions to collaborate on the research and development of new quantum computing technologies.

And finally, there is the Amazon Quantum Solutions Lab, a programme that connects customers with quantum computing experts and consulting partners to develop internal expertise aimed at identifying practical uses of quantum computing. The aim is to accelerate the development of quantum applications with meaningful impact.
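As a flavour of the kind of experimentation Braket is aimed at, the sketch below builds a two-qubit Bell-pair circuit and runs it on a local simulator, assuming the Braket Python SDK’s Circuit and LocalSimulator interfaces; for runs on actual quantum hardware, a device ARN from one of the hardware providers would be used instead.

```python
# A minimal sketch, assuming the Amazon Braket Python SDK (braket.circuits,
# braket.devices): entangle two qubits and sample the result locally.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)         # Hadamard then CNOT creates a Bell pair
task = LocalSimulator().run(bell, shots=1000)
print(task.result().measurement_counts)  # expect roughly equal counts of '00' and '11'
```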

There has been significant progress in quantum computing this year, particularly from IBM and Google, with both announcing large investments in the technology. The two made headlines in October after IBM disputed claims made by Google that its 53-qubit Sycamore processor had achieved “quantum supremacy”.

Quantum computing refers to extremely powerful machines capable of processing massive swathes of data thanks to the way they exploit quantum mechanics. For example, Google suggested its processor was able to perform a complex mathematical problem in 200 seconds, while the world’s most powerful supercomputer would need 10,000 years to complete the same task.

Despite the work of IBM, Google and now AWS, quantum computing is not quite a mainstream technology just yet, but according to AWS evangelist Jeff Barr, that time is coming.

“I suspect that within 40 or 50 years, many applications will be powered in part using services that run on quantum computers,” he wrote in a blog post. “As such, it is best to think of them like a GPU or a math coprocessor. They will not be used in isolation, but will be an important part of a hybrid classical/quantum solution.”

Facebook lets users port photos and videos to Google


Nicole Kobie

2 Dec, 2019

Facebook is letting users move uploaded photos and videos to Google Photos as part of a project enabling data portability. 

The new tool lets Facebook users bulk export all of their photos and videos to Google’s photo hosting service. So far, the tool is only available in Ireland, but is set to be rolled out more widely in the first half of next year. 

“At Facebook, we believe that if you share data with one service, you should be able to move it to another,” said Steve Satterfield, Director of Privacy and Public Policy at Facebook, in a blog post. “That’s the principle of data portability, which gives people control and choice while also encouraging innovation.”

Data portability is required under laws such as GDPR and the California Consumer Privacy Act; the data portability rules in the latter come into play next year, just as this tool arrives more widely. 

Transferring the data to Google Photos does not appear to delete it from Facebook, but users can move the images over to the rival provider and then delete their account. It’s worth noting that Facebook has long allowed users to download everything from their account, photos and videos included, which can then be uploaded to the digital host of their choice, Google Photos or otherwise.

Facebook said the photo transfer tool is just the first step, and its release is designed to be assessed by policymakers, academics and regulators, in order to help decide what data should be portable and how to keep it private and secure.

“We’ve learned from our conversations with policymakers, regulators, academics, advocates and others that real-world use cases and tools will help drive policy discussions forward,” said Satterfield.

He added: “We are currently testing this tool, so we will continue refining it based on feedback from people using it as well as from our conversations with stakeholders.”

The photo tool is based on code developed at the Data Transfer Project, an effort launched in 2018 that counts leading tech companies such as Microsoft, Twitter, Google and Apple among its members. The aim is to develop an open-source data portability platform to make it easier for individuals using their products to shift to a new provider if desired.

The tool will eventually be available via the settings section of “Your Facebook Information”. “We’ve kept privacy and security as top priorities, so all data transferred will be encrypted and people will be asked to enter their password before a transfer is initiated,” said Satterfield.

Satterfield said Facebook hoped to “advance conversations” on the privacy questions identified in its white paper, which included the need to make users aware of privacy terms at the destination service and of the types of data being transferred, and to ensure the data is encrypted so it cannot be diverted by hackers. For example, should contact list data be portable, given it is private information about other people? Satterfield called on more companies to join the Data Transfer Project to further such efforts, which will be welcomed as, after a string of security and privacy concerns, Facebook might not be the most trusted service on such issues.

How to excel at secured cloud migrations through shared responsibility: A guide

  • 60% of security and IT professionals state that security is the leading challenge with cloud migrations, despite not being clear about who is responsible for securing cloud environments
  • 71% understand that controlling privileged access to cloud service administrative accounts is a critical concern, yet only 53% cite secure access to cloud workloads as a key objective of their cloud privileged access management (PAM) strategies

These and many other fascinating insights are from the recent Centrify survey, Reducing Risk in Cloud Migrations: Controlling Privileged Access to Hybrid and Multi-Cloud Environments, downloadable here. The study is based on a survey of over 700 respondents from the United States, Canada, and the UK across more than 50 vertical markets, with technology (21%), finance (14%), education (10%), government (10%) and healthcare (9%) being the top five. For additional details on the methodology, please see page 14 of the study.

What makes this study noteworthy is its candid, honest assessment of how enterprises can make cloud migrations more secure through a better understanding of who is responsible for securing privileged access to cloud administrative accounts and workloads.

Key insights from the study include the following:

Improved speed of IT services delivery (65%) and lowered total cost of ownership (54%) are the two top factors driving cloud migrations today

Additional factors include greater flexibility in responding to market changes (40%), outsourcing IT functions that don’t create competitive differentiation (22%), and increased competitiveness (17%). Reducing time-to-market for new systems and applications is one of the primary catalysts driving cloud migrations today, making it imperative for every organisation to build security policies and systems into their cloud initiatives.


Security is the greatest challenge to cloud migration by a wide margin

60% of organisations define security as the most significant challenge they face with cloud migrations today. Roughly one in three see the cost of migration (35%) and lack of expertise (30%) as the second and third greatest impediments to cloud migration projects succeeding. Organisations face constant financial and time constraints to complete cloud migrations on schedule and support time-to-market initiatives. No organisation can afford the lost time and expense of an attempted or successful breach impeding cloud migration progress.


71% of organisations are implementing privileged access controls to manage their cloud services

However, as the privilege becomes more task-, role-, or access-specific, there is diminishing interest in securing these levels of privileged access as a goal, evidenced by only 53% of organisations securing access to the workloads and containers they have moved to the cloud.


An alarmingly high 60% of organisations incorrectly view the cloud provider as being responsible for securing privileged access to cloud workloads

It’s shocking how many customers of AWS and other public cloud providers are falling for the myth that cloud service providers can completely protect their customised, highly individualised cloud instances.

The native identity and access management (IAM) capabilities offered by AWS, Microsoft Azure, Google Cloud, and others provide enough functionality to help an organisation get up and running and control access in their respective homogeneous cloud environments. However, they often lack the scale to adequately address the more challenging, complex areas of IAM and privileged access management (PAM) in hybrid or multi-cloud environments. For an expanded discussion of the shared responsibility model, please see The Truth About Privileged Access Security On AWS and Other Public Clouds.


Implementing a common security model in the cloud, on-premises, and in hybrid environments is the most proven approach to making cloud migrations more secure

Migrating cloud instances securely needs to start with multi-factor authentication (MFA), deploying a common privileged access security model across on-premises and cloud systems, and utilising enterprise directory accounts for privileged access.

These three initial steps set the foundation for implementing least privilege access. This has been a major challenge for organisations, particularly in cloud environments: 68% are not eliminating local privileged accounts in favour of federated access controls and are still using root accounts outside of “break glass” scenarios.

Even more concerning, 57% are not implementing least privilege access to limit lateral movement and enforce just-enough, just-in-time access.
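To make “least privilege” concrete, the sketch below creates an IAM policy scoped to a single bucket’s read and write actions and gated on MFA. The policy name, bucket and account details are hypothetical placeholders, and real policies would be tailored to each role’s actual needs.

```python
# Hedged sketch: a narrowly scoped IAM policy that also requires MFA,
# created with boto3. Names, ARNs and account IDs are placeholders.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],         # only the actions this role needs
        "Resource": "arn:aws:s3:::team-reports-bucket/*",   # only the resources it needs
        "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="team-reports-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```

Just-in-time access would layer on top of a policy like this, for example by granting it only through short-lived role assumption rather than permanent attachment.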


When it comes to securing access to cloud environments, organisations don’t have to reinvent the wheel

Best practices from securing on-premises data centres and workloads can often be successful in securing privileged access in cloud and hybrid environments as well.

Conclusion

The study provides four key takeaways for anyone working to make cloud migrations more secure. First, all organisations need to understand that privileged access to cloud environments is your responsibility, not your cloud providers’. Second, adopt a modern approach to privileged access management that enforces least privilege, prioritising “just enough, just-in-time” access. Third, employ a common security model across on-premises, cloud, and hybrid environments. Fourth and most important, modernise your security approach by considering how cloud-based PAM systems can help to make cloud migrations more secure.
