Tech-savvy banks were among the first and most enthusiastic supporters of Docker containers.
Goldman Sachs participated in Docker’s $95 million funding round in 2015. Bank of America has its enormous 17,500-person development team running thousands of containers. Top fintech companies like Coinbase also run Docker containers on AWS. Nearly a quarter of enterprises are already using Docker, and an additional 35% plan to use it.
It may seem unusual that one of the most risk-averse and highly regulated industries should invest in such a new technology. But for now, it appears that the potential benefits far outweigh the risks.
Why containers?
Containers allow you to describe and deploy the template of a system in seconds, with all infrastructure-as-code, libraries, configs, and internal dependencies in a single package. This means your Dockerfile can be built and deployed on virtually any system; an application in a container running in an AWS-based testing environment will run exactly the same in a production environment on a private cloud.
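To make that concrete, here is a minimal sketch of what that single package looks like; the image name, runtime, and paths below are illustrative, not a prescription:

    # Write a minimal Dockerfile: base OS layer, dependencies, and app
    # all declared in one template. (Names and versions are placeholders.)
    cat > Dockerfile <<'EOF'
    # Base image: OS layer plus language runtime
    FROM python:2.7-slim
    # Library dependencies are baked into the image at build time
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Application code and a single, explicit entrypoint
    COPY app/ /app
    CMD ["python", "/app/main.py"]
    EOF

    # Build once; the resulting image runs identically on an AWS test
    # environment or a private-cloud production cluster.
    docker build -t myorg/myapp:1.0 .
    docker run -d myorg/myapp:1.0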
In a market that is becoming increasingly skittish about cloud vendor lock-in, containers have removed one more hurdle to moving workloads across AWS, VMware, Cisco, etc. A survey of 745 IT professionals found that the top reason IT organizations adopt Docker containers is to build a hybrid cloud.
In practice, teams are usually not moving containers from cloud to cloud or OS to OS, but rather benefiting from the fact that developers have a common operating platform across multiple infrastructure platforms. Rather than moving the same container from VMware to AWS, they benefit from being able to simplify and unify processes and procedures across multiple teams and applications. You can imagine how financial services companies that maintain bare metal infrastructure, VMware, and multiple public clouds benefit from utilizing the container as a common platform.
Containers are also easier to automate, potentially reducing maintenance overhead. Once OS and package updates are automated with a service like CoreOS, a container becomes a maintenance-free, disposable “compute box” on which developers can easily provision and run code. Financial services companies can leverage their existing hardware, gaining the agility and flexibility of disposable infrastructure without a full-scale migration to public cloud.
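As a rough sketch of how hands-off that can be, the fragment below (passed to a CoreOS host as user-data at boot) tells the host to apply OS updates on its own and coordinate reboots through an etcd lock; exact options vary by CoreOS release:

    # Sketch: boot-time user-data enabling CoreOS's automatic updates.
    # With etcd-lock, hosts take turns rebooting so the cluster stays up.
    cat > cloud-config.yml <<'EOF'
    #cloud-config
    coreos:
      update:
        reboot-strategy: etcd-lock
    EOF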
On large teams, these efficiencies, magnified across hundreds or thousands of engineers, can dramatically accelerate the overall pace of technological innovation.
The big challenges: Security and compliance
One of the first questions enterprises ask about containers is: What is the security model? How does containerization affect your existing infrastructure security tools and processes?
The truth is that many of your current tools and processes will have to change. Often your existing tools and processes are not “aware” of containers, so you must apply creative alternatives to meet your internal security standards. This is why Bank of America only runs containers in testing environments. The good news is that these challenges are by no means insurmountable for companies that are eager to containerize; the International Securities Exchange, for example, processes two billion transactions a day in containers running on CoreOS.
Here are just a few examples of the types of changes you’d have to make:
Monitoring: The most important impact of Docker containers on infrastructure security is that most of your existing security tools (monitoring, intrusion detection, etc.) are not natively aware of sub-virtual-machine components, i.e. containers. Most monitoring tools on the market are just beginning to offer visibility into transient instances in public clouds, and they are far behind in offering functionality to monitor sub-VM entities. In most cases, you can satisfy this requirement by installing your monitoring and IDS tools on the virtual instances that host your containers. This means that logs are organized by instance, not by container, task, or cluster. If IDS is required for compliance, this is currently the best way to satisfy that requirement.
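One hedged workaround, sketched below with illustrative names (and with options that vary by Docker version), is to route container output through the host’s syslog, where host-level collection and IDS tooling already looks:

    # Send a container's output to the host's syslog instead of Docker's
    # default json-file log, so existing host-based tools can see it.
    docker run -d \
      --log-driver=syslog \
      --log-opt tag="{{.Name}}/{{.ID}}" \
      myorg/myapp:1.0

    # Output now lands in the host's syslog, organized by instance; the
    # tag preserves per-container attribution within that stream.
    grep myapp /var/log/syslog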
Incident response: Traditionally, if your IDS picks up a scan with the fingerprint of a known security attack, the first step is usually to look at how traffic is flowing through the environment. Docker containers by nature push you to care less about any individual host, and that creates gaps: inter-container traffic on the same host is difficult to track with network-based tools, and because containers are ephemeral, you cannot simply leave a machine up to examine what is in memory; once a container is destroyed, its in-memory state is gone. This can make it harder to trace the source of an alert and determine what data may have been accessed.
The use of containers is not yet well understood by the broader infosec and auditor community, which is a potential audit and financial risk. Chances are that you will have to explain Docker to your QSA, and few external parties can help you build a well-tested, auditable Docker-based system. Before you implement Docker on a broad scale, talk to your GRC team about the implications of containerization for incident response and work to develop new runbooks. Alternatively, you can try Docker on a non-compliance-driven or non-production workload first.
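As one starting point for such a runbook, the sketch below collects what can still be preserved from a suspect container; filesystem changes and logs can be captured this way, but in-memory state is lost once the container is removed (container and repository names are illustrative):

    # Freeze the container's processes in place without killing it
    docker pause suspect_container

    # List filesystem changes relative to the original image
    docker diff suspect_container

    # Snapshot the container's filesystem as an image for later analysis
    docker commit suspect_container evidence/suspect:incident-001

    # Capture whatever the container wrote to stdout/stderr
    docker logs suspect_container > suspect.log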
Patching: In a traditional virtualized or public cloud environment, security patches are installed independently of application code. The patching process can be partially automated with configuration management tools, so if you are running VMs in AWS or elsewhere, you can update the Puppet manifest or Chef recipe and “force” that configuration to all your instances from a central hub. Or you can utilize a service like CoreOS to automate this process.
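For example, that traditional push might look like the following (a sketch assuming standard Chef or Puppet agent setups; the node query is illustrative):

    # Chef: trigger an immediate converge on all matching nodes from a hub
    knife ssh 'name:web*' 'sudo chef-client'

    # Puppet: on a node, pull and apply the latest catalog right away
    puppet agent --test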
A Docker image has two components: the base image and the application image. To patch a containerized system, you update the base image and then rebuild the application image. So in the case of a vulnerability like Heartbleed, if you want to ensure that the patched version of OpenSSL is in every container, you would update the base image and recreate each container in line with your typical deployment procedures. A sophisticated deployment automation process (which is likely already in place if you are containerized) makes this fairly simple.
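A hedged sketch of that flow, with illustrative image names and tags:

    # Pull the base image that now contains the security fix
    docker pull ubuntu:14.04

    # Rebuild the application image on top of the patched base;
    # --pull makes the build re-check for a newer base image
    docker build --pull -t myorg/myapp:1.0.1 .

    # Publish the rebuilt image; deployment automation then replaces
    # running containers with new ones from this image
    docker push myorg/myapp:1.0.1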
One of the most promising features of Docker is the degree to which application dependencies are coupled with the application itself, offering the potential to patch the system when the application is updated, i.e., frequently and potentially less painfully.
In short, to implement a patch, update the base image and then rebuild the application image. This requires systems and development teams to work closely together, with clearly defined responsibilities.
Almost ready for prime time
If you are eager to implement Docker and are ready to take on a certain amount of risk, then the methods described here can help you monitor and patch containerized systems. At Logicworks, we manage containerized systems for financial services clients who feel confident that their environments meet regulatory requirements.
As public cloud platforms continue to evolve their container support and more independent software vendors enter the space, expect these “canonical” Docker security methods to change rapidly. Nine months from now, or even three, a tool could emerge that automates much of what is now manual or complex in Docker security. When enterprises are this excited about a new technology, chances are that a whole new industry will follow.