
Move over VMs, the future of app deployment is in containers


Davey Winder

10 Apr, 2018

Containerisation is fast becoming one of the most popular methods of deploying applications in a virtual environment, and is widely seen as making the traditional virtual machine a thing of the past.

Yet what exactly are containers and why should you bother moving from a tried and trusted VM?

Containers? You mean like boxes for moving our computer gear?

Not exactly – we’re not talking about packaging up physical appliances here. But in the IT operational sense, containers are pretty much the same idea, only for applications. Docker, which is the best-known proponent of the technology, defines a container as a “lightweight, standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries and settings”.
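
To make that a little more concrete, here’s a rough sketch of a Dockerfile – the recipe Docker uses to build such a package – assuming a simple Python web app. The file names and dependencies are purely illustrative:

# Illustrative Dockerfile for a hypothetical Python web app
FROM python:3.6-slim                  # base image supplies the runtime and system libraries
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # the app's own dependencies (hypothetical file)
COPY . .                              # your application code
CMD ["python", "app.py"]              # how the container starts (hypothetical entry point)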

Is ‘container’ just a fashionable term for VM?

Containers and virtual machines do share some similarities, particularly to do with resource isolation. But they’re not the same thing. A virtual machine is primarily an abstraction of a hardware platform – an approach that makes it easy to turn one physical server into lots of independent virtual ones. In a setup like this, each VM runs its own operating system and application stack.

Containers, by contrast, focus on virtualising an operating environment. Multiple containers can run concurrently under a single OS, just like regular applications. It’s a more efficient technology, and much more portable.
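
By way of illustration, assuming Docker is installed on a Linux host, you could spin up two completely separate containerised services side by side with a couple of commands (the container names are arbitrary):

# Each container gets its own filesystem, network and process space,
# but both share the host's kernel rather than booting their own OS
docker run -d --name web nginx
docker run -d --name cache redis
docker ps    # both appear as ordinary processes on the same machine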

And we need this why, exactly?

Containers are tremendously useful when it comes to moving software between different computing environments – for example, moving an application from testing into production, or from physical hardware into the cloud.

You can be confident that things will continue to work as expected, even if the supporting software environment has a completely different network topology, security policy or hardware configuration. And since containers don’t require a complete OS installation, you can fit more containerised apps than VMs onto a single server.

So containers are only good for porting applications?

Containers are also very useful for development. They’re a great fit for the modular, microservices-based way of doing things. The key is that you don’t need to run everything within a single container: you can connect multiple containers together to build an application out of known quantities.

This is a huge help when it comes to management and development, as individual modules can be updated independently – and it’s efficient too, as each container can be started almost instantly, “just in time”, only when it’s needed.
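
Here’s roughly how that composition might look using Docker Compose – the three services and image names below are entirely hypothetical:

# docker-compose.yml - an illustrative three-container application
version: "3"
services:
  web:                        # front end, reaches the api service by name
    image: example/web:1.2
    ports:
      - "80:8080"
    depends_on:
      - api
  api:                        # business logic, updated independently of web
    image: example/api:2.0
    depends_on:
      - db
  db:                         # data store
    image: postgres:10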

That sounds good. But do we have to tie ourselves to Docker?

Not at all – containers have been built directly into Linux for years now, under the umbrella of the LXC user-space interface to the kernel’s containment features. There are other free, open source options too: Kubernetes, for example, is an open source system for orchestrating and managing containers at scale. However, if flexibility and support are priorities, Docker is probably the biggest and best-known cross-platform container technology vendor.
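
As a quick example of the non-Docker route, on an Ubuntu host with the LXC tools installed you could create and enter a basic container along these lines (the container name and release are arbitrary):

# Create, start and enter a plain LXC container
sudo lxc-create -t download -n demo -- --dist ubuntu --release xenial --arch amd64
sudo lxc-start -n demo
sudo lxc-attach -n demo    # drops you into a shell inside the container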

Will we be locking ourselves into the framework we choose?

You’re right to raise the question: app container images can be proprietary. For example, Docker and CoreOS have had differing specifications in the past. However, since 2015 the Linux Foundation’s Open Container Initiative (OCI) has been working on a standard container format. Both Docker and CoreOS are sponsors, along with the likes of AWS, Google, HP, IBM, Microsoft, Oracle, Red Hat and VMware. So things are only going to get easier.

So are containers more secure than VMs?

One aspect of container technology that seems to cause endless debate is security. The concern is that, because multiple containers share a single host platform, one compromised container could put a whole stack of containers at risk. That’s less of a concern with virtual machines, since each one is completely isolated from the other VMs running on the same hardware. What’s more, a hypervisor exposes a far smaller interface to its guests than the full Linux kernel that containers share, so the attack surface of a VM is smaller, which again reduces the risk.

But containers have security strengths too. The model lends itself to a microservices approach, which modularises an application into small, packaged services with well-defined interfaces – making it harder for anything to slip through the cracks.

Containers can also be scanned on access, and network segmentation can be used to isolate application clusters. All in all, a well-configured, properly deployed container should be just as secure as a virtual machine; the only catch is that you need to make sure your containers actually meet those standards.
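
As a rough sketch of that segmentation idea, with Docker you might give an application cluster its own isolated network – the network, container and image names here are purely illustrative:

# Put a cluster of related containers on their own bridge network
docker network create payments-net
docker run -d --network payments-net --name payments-api example/payments-api
docker run -d --network payments-net --name payments-db postgres:10
# Containers attached to other networks can't reach these two directly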

So, the big question: how do we get management to buy in?

As we’ve mentioned, containers can save money versus virtual machines, as the hardware demands are lower. There’s also the potential for quicker deployment: when you need to roll out application updates, it’s much easier to replace a few containers than to update an entire virtual machine.
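
To illustrate, rolling out a new version of a containerised service can be as simple as swapping the running container for one built from a newer image – the image name and tag here are hypothetical:

# Replace a running container with a newer version of its image
docker pull example/api:2.1
docker stop api
docker rm api
docker run -d --name api example/api:2.1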

Containers also bring flexibility to the party: your developers can write in almost any language, and deploy painlessly to both Windows and Linux, so they’re not wasting time adapting to the idiosyncrasies of your environment. And, of course, since test, staging and production environments are identical, bugs are much less likely to make it into the final production code.

Should we just ditch our VMs and switch entirely to containers?

If you need to run a big stack of apps on a modest allocation of resources then containers probably make more sense than VMs. But even the container vendors admit that virtualisation and containers work best when used together.

One option is to run your containers within VMs: this provides even better isolation and security, as well as letting you easily manage your virtual hardware infrastructure – so for many scenarios it’s the best of both worlds.
