Why enterprises need containers and Docker


At DockerCon 2015 last week, it was very clear that Docker is poised to transform enterprise IT.

While it traditionally takes years for a software innovation — and especially an open source one — to reach the enterprise, Docker is defying all the rules. Analysts expect Docker will be the norm in enterprises by 2016, less than two years after its 1.0 release.

Why are Yelp, Goldman Sachs, and other enterprises using Docker? Because in many ways, enterprises have been unable to take full advantage of revolutions in virtualisation and cloud computing without containerisation.

Docker, standard containers, and the hybrid cloud

If there ever was a container battle among vendors, Docker has won — and is now nearly synonymous with container technology.

Most already understand what containers do: describe and deploy the template of a system in seconds, with all infrastructure-as-code, libraries, configs, and internal dependencies in a single package, so that the resulting image can be deployed on virtually any system running Docker.
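As an illustration, a minimal, hypothetical Dockerfile for a small Python web app might look like the sketch below; the application code and dependency list are assumptions, but the point is that the runtime, the dependencies, and the start command are all declared in one file:

    # Base image with the language runtime
    FROM python:2.7
    WORKDIR /app
    # Bake the app's dependencies into the image
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Copy the application code and define how the container starts
    COPY . .
    CMD ["python", "app.py"]

Running docker build against that file produces an image that any Docker host can then start with docker run, regardless of what happens to be installed on the host itself.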

But the leaders of the open-source project wisely understand that, in order to work in enterprises, there needs to be a “standard” container that works across traditional vendors like VMware and Cisco as well as newer public cloud platforms like Amazon Web Services. At DockerCon, Docker and CoreOS announced that they were joining a Linux Foundation initiative called the Open Container Project, under which everyone agrees on a standard container image format and runtime.

This is big news for enterprises looking to adopt container technology. First, in a market that is becoming increasingly skittish about “vendor lock-in”, container vendors have removed one more hurdle to moving containers across AWS, VMware, Cisco, etc. But more importantly for many IT leaders, this container standardisation makes it that much easier to move across internal clouds operated by multiple vendors or across testing and production environments.

A survey of 745 IT professionals found that the top reason IT organisations are adopting Docker containers is to build a hybrid cloud. Despite the promised flexibility of hybrid clouds, building cloud-bursting systems (where load is balanced across multiple environments) is a difficult engineering feat, and there is no such thing as a “seamless” transition across clouds. Vendors that claim to facilitate this often do so by compromising feature sets or by building applications to the lowest common denominator, which often means not taking full advantage of the cost savings or scalability of public clouds.

By building in dependencies, Docker containers all but eliminate these interoperability concerns. Apps that run well in test environments built on AWS will run exactly the same in production environments in on-premises clouds.
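In practice, that portability is just a build, a push, and a run. The registry and image names below are hypothetical; the important part is that the identical image is pulled unchanged in every environment:

    # On the build machine (for example, a CI server in AWS):
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # On any other Docker host -- a test instance in AWS or an on-premises
    # production server -- the same image runs the same way:
    docker pull registry.example.com/myapp:1.0
    docker run -d -p 80:8080 registry.example.com/myapp:1.0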

Docker also announced major upgrades in networking that allow containers to communicate with each other across hosts. Since acquiring SocketPlane six months ago, Docker has had the SocketPlane team working to complete a set of networking APIs; Docker is clearly hard at work making networking enterprise-grade, so that developers are guaranteed application portability throughout the application lifecycle. Read all the updates from DockerCon 2015 here.
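As a rough sketch of where this is heading, Docker's multi-host (overlay) networking lets containers on different hosts share a network and reach each other by name. The network and container names below are hypothetical, and the sketch assumes the hosts are already clustered behind a shared key-value store or Swarm:

    docker network create -d overlay app-net
    docker run -d --net=app-net --name api registry.example.com/api:1.0
    docker run -d --net=app-net --name web -p 80:8080 registry.example.com/web:1.0
    # "web" can now reach "api" by name even if the two containers
    # land on different hosts in the cluster.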

Reducing complexity and managing risk

Docker does add another level of complexity when engineers are setting up the environment. On top of virtualisation software, auto scaling, and all of the moving parts of automation and orchestration now in place in most enterprises, Docker may initially seem like an unnecessary layer.

But once Docker is in place, it drastically simplifies and de-risks the deploy process. Developers can focus on building the application, knowing that once it is packaged as a Docker image, it will run on their servers. They can build their app on a laptop, package it as a Docker image, and push it to production with a single command. On AWS, running Docker on the EC2 Container Service (ECS) takes away some of the configuration work you would otherwise need to do yourself. You can achieve workflows where Jenkins or another continuous integration tool runs tests and AWS CloudFormation scales up an environment, all in minutes.
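A hedged sketch of that kind of pipeline is below. The registry, cluster, service, and task-definition names are assumptions; a Jenkins job would simply run these commands after the tests pass, and the new task-definition revision (registered separately with the updated image tag) is what ECS rolls out:

    # Build and publish the image produced by this CI run
    docker build -t registry.example.com/myapp:$BUILD_NUMBER .
    docker push registry.example.com/myapp:$BUILD_NUMBER

    # Point the ECS service at the new task-definition revision
    aws ecs update-service --cluster production --service myapp \
        --task-definition myapp-task:42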

This simplified (and shortened) deployment cycle is even more useful in complex environments, where developers often must “remember” to account for varying system and infrastructure requirements during the deploy process. In other words, the deploy process happens faster with fewer errors, so developers can focus on doing their jobs. System engineers do not have to jump through the same hoops to make sure an application runs on infrastructure it was not configured for.

Many large start-ups, like Netflix, have developed workarounds and custom solutions to simplify and coordinate hundreds of deploys a day across multiple teams. But because most enterprises are still in the nascent stages of continuous delivery, Docker has arrived at the perfect time, eliminating the pain of complex deploys before they have to build workarounds of their own.

Caveat: Docker in hybrid environments is not “easy”

We mentioned it above, but it is important to note that setting up Docker is a specialised skill. It has even taken the senior automation engineers at Logicworks quite some time to get used to. No wonder it was announced at DockerCon that the number of Docker-related job listings went from 2,500 to 43,000 in 2015, an increase of 1,720 percent.

In addition, Docker works best in environments that have already developed sophisticated configuration automation practices (using tools like Puppet or Chef) and where engineers have invested time in developing templates to describe cloud resources (such as CloudFormation). Adopting Docker also requires that these scripts and templates change. Most enterprises will either have to hire several engineers to implement Docker or engage a managed service provider with expertise in container technology.
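To give a flavour of the kind of change involved, an existing provisioning script (for example, EC2 user-data rendered by a CloudFormation template) might grow a few lines like these; the image name is hypothetical and the snippet assumes an Amazon Linux host:

    # Install and start the Docker engine, then launch the application container
    yum install -y docker
    service docker start
    docker run -d --restart=always -p 80:8080 registry.example.com/myapp:1.0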

On top of this, there are lingering concerns over the security of Docker in production — and rightly so. While many enterprises, like Yelp and Goldman Sachs, already run Docker in production, applications carrying sensitive data or compliance obligations require additional measures to protect those assets.

Docker did announce the launch of Docker Trusted Registry last week, a piece of software that securely stores container images. It also comes with management features and commercial support, which serves Docker’s paid-support business objectives. This announcement is specifically targeted at the enterprise market, which has traditionally been skittish about open-source projects that ship without signed support contracts (e.g., Linux vs. Red Hat). AWS and other cloud platforms have already agreed to resell the technology.
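As a hedged example of the workflow a private registry enables, a team might log in to its own Docker Trusted Registry instance and push images there rather than to the public Docker Hub; the hostname and repository below are assumptions:

    # Authenticate against the privately hosted registry (hypothetical hostname)
    docker login dtr.example.com
    docker tag myapp:1.0 dtr.example.com/engineering/myapp:1.0
    docker push dtr.example.com/engineering/myapp:1.0

Docker's image-signing feature (Content Trust) can additionally be switched on by exporting DOCKER_CONTENT_TRUST=1 before pushing, so that consumers only pull signed images.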

Over the next 12 months, best practices and security protocols around containers will become more standardised. And as they do, enterprises and start-ups will benefit from Docker to create IT departments that function as smoothly as container terminals.
