Enterprise IT has long struggled to develop common standards for the security of cloud deployments. With multiple cloud vendors, fast-moving product teams, and a changing security landscape, it is perhaps no wonder that enterprises are left asking:
- What is the right cloud security standard?
- What level of security is “good enough”?
- And most importantly — how do we apply these standards in a consistent way to existing and new cloud environments?
In July 2016, the Ponemon Institute published The 2016 Global Cloud Data Security Study, an independent survey of 3,400 technology and security professionals. More than half of the respondents did not have measures for complying with privacy and security requirements in the cloud. That is clearly a problem.
We sat down with Dan Rosenbloom, the lead AWS architect at Logicworks, to talk about his own struggle with standardization and how he enforces common configurations across 5,000+ VMs on AWS.
Why do you think central IT and GRC teams have struggled to keep up with the pace of cloud adoption?
Every company is different, but this struggle usually happens when the business goal of getting to the cloud and meeting deadlines trumps long-term goals, like setting up standards for consistency and manageability. In fast-moving companies, every team builds a cloud environment on a different platform following their own definition of cloud security best practices; security teams are often pressured to complete reviews of these custom, unfamiliar environments in short time frames. Both developers and security professionals end up unhappy.
Many central IT departments are in the process of reinventing themselves. You do not become a service-oriented IT team overnight — and when you add cloud adoption into the mix, you put further strain on already stretched processes and resources.
Have you seen a company go through this process well? Or is this struggle inevitable?
To some degree, every company struggles. But the ones that struggle less do some or all of the following:
- They choose (and stick to) a limited set of cloud security guidelines for all projects on all platforms (e.g. NIST, IT consortiums like Cloud Council, and industry-specific associations like the Legal Cloud Computing Association)
- Central IT maintains a strong role in cloud purchasing and management
- Central IT commits to an automation-driven approach to cloud management and develops modular templates and scripts, not one-off environments
What standard do most organizations apply in cloud security?
From a strategic perspective, the issue boils down to who can access your data, how you control that access and how you protect that data while it is being used, stored, transmitted and shared. At Logicworks, we build every cloud to at least PCI DSS standards, which is the standard developed by credit card companies to protect consumer financial information. We choose PCI because a) it is specific and b) we believe it represents a high standard of excellence. Even clients with no PCI DSS compliance requirement meet at least these standards as a baseline.
If your company has existing annual infrastructure audit processes and standards, a supplementary standard like Cloud Council can help orient your GRC team for cloud-specific technologies.
How does central IT enforce this standard?
One of the benefits of cloud technology is that you can change your infrastructure easily — but if your environment is complex, change management and governance quickly become a nightmare. Automation is the key.
One of Logicworks’ main functions as a managed service provider is to establish a resource’s ideal state and ensure that even as the system evolves, it never strays from that state. We use four key processes to do this:
- Infrastructure automation: Infrastructure is structured and built into templates, where it can be versioned and easily replicated for future environments. Tools: AWS CloudFormation, Git
- Configuration management: Configuration management scripts and monitoring tools catch anomalies and proactively correct failed/misconfigured resources. This means that our instances replace themselves in case of failure, errors are corrected, and ideal state is always maintained. Tools: Puppet, Chef, Jenkins, AWS Auto Scaling, Docker, AWS Lambda, AWS Config Rules
- Deployment automation: Code deployment processes are integrated with cloud-native tools, improving deployment velocity and reducing manual effort (and error). Tools: AWS CodeDeploy, AWS Lambda, Jenkins
- Compliance monitoring: Systems are continuously monitored for adherence to security standards and potential vulnerabilities, and all changes are logged for auditing. Tools: Git, AWS Config Rules, AWS CloudTrail, AWS Config, Amazon Inspector, Sumo Logic, configuration management (Puppet, Chef, Ansible)
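To make the compliance-monitoring idea concrete, here is a minimal sketch of the kind of evaluation a compliance rule might run against a resource's recorded configuration. The resource shape, field names, and rule names are illustrative assumptions, not a real AWS schema or Logicworks' actual implementation.

```python
# Hypothetical compliance check: evaluate one resource description
# against a few PCI DSS-style baseline rules and report violations.

def evaluate_compliance(resource: dict) -> list:
    """Return a list of violated rule names for one resource."""
    violations = []
    if not resource.get("encrypted", False):
        violations.append("storage-must-be-encrypted")
    for rule in resource.get("ingress_rules", []):
        # Baseline rule: no unrestricted inbound access except HTTPS
        if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") != 443:
            violations.append("no-open-ingress-except-https")
    if not resource.get("logging_enabled", False):
        violations.append("audit-logging-required")
    return violations

compliant = {"encrypted": True, "logging_enabled": True,
             "ingress_rules": [{"cidr": "0.0.0.0/0", "port": 443}]}
drifted = {"encrypted": False, "logging_enabled": True,
           "ingress_rules": [{"cidr": "0.0.0.0/0", "port": 22}]}

print(evaluate_compliance(compliant))  # []
print(evaluate_compliance(drifted))
```

In practice this evaluation logic would live inside an AWS Config rule or a similar monitoring hook, running automatically whenever a resource's configuration changes.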
It is easiest to see how this works together in an example. Let’s say central IT wants to build a standard, PCI DSS compliant environment that product teams can modify and use for their own individual projects. Essentially, they want to build a baseline of security and availability standards into a repeatable template.
First, they would define core resources (compute, storage, network) in a framework like AWS CloudFormation, with some basic rules and standards at the infrastructure level. Then the OS would be configured by a configuration management tool like Puppet or Chef, which would perform tasks such as requiring multi-factor authentication and installing log-shipping and monitoring software. Finally, those resources receive the latest version of code and are deployed into dev/test/production.
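A minimal sketch of what such a baseline template might contain, using the CloudFormation JSON format: a VPC plus a security group whose only inbound rule is HTTPS. The resource names and CIDR ranges are illustrative assumptions, not a recommended production layout.

```python
import json

# Illustrative CloudFormation-style template: a baseline VPC with a
# locked-down security group. Checking the rendered JSON into Git
# gives every future environment the same reviewed starting point.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Baseline network with a locked-down security group",
    "Resources": {
        "BaseVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow inbound HTTPS only",
                "VpcId": {"Ref": "BaseVPC"},
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 443,
                     "ToPort": 443, "CidrIp": "0.0.0.0/0"}
                ],
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

A product team would extend a template like this with its own application resources while inheriting the networking and access baseline unchanged.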
Obviously I am oversimplifying this process, but we have found that this is an extremely effective way to ensure that:
- Product teams have access to an approved, “best practices” template, which ensures they are not launching completely insecure resources
- You have a central source of truth and can manage multiple systems without logging in to make the same change on hundreds of systems one by one
- Security teams have a record of all changes made to the environment, because engineers apply changes through the configuration management tool (rather than one-off)
- Standards are continually enforced, without human intervention
How do you bring engineers on board with an automation system?
I have been in IT for about 15 years, and I come from a very “traditional” systems engineering background. I always tried to follow the old school IT rule that you should never do the same thing twice — and the cloud just gives me many new opportunities to put this into practice.
I got introduced to configuration management a few years ago and was obsessed from day 1. Most engineers like scripting things if there is an annoying problem, and cloud automation takes care of some of the most annoying parts of managing infrastructure. You still get to do all the cool stuff without the repetitive, mindless work or constant firefighting. Once your engineers get a taste of this new world, they will never want to go back.
Any advice for IT teams that are trying to implement better controls?
The automation framework I just described — infrastructure automation, deployment automation, and configuration management — is complex and can take months or years to build and master. If you work with a Managed Service Provider that already has this framework in place, you can achieve this operational maturity in months; but if you are starting on your own, do not worry about getting everything done at once.
Start by selecting a cloud security standard or modifying your existing standard for the cloud. Then build these standards into a configuration management tool. Even if you do not build a fully automated framework, implementing configuration management will be a huge benefit for both your security team and your developers.
The post Advanced Cloud Security: Q&A with a Sr. DevOps Engineer appeared first on Logicworks Gathering Clouds.