
A CIO writes (sort of): I’m terrified of cloud lock-in – what should I do?


“Dear Deidre,

I’m stuck in a loveless marriage.

I’m in a three-year relationship which I don’t think I can get out of. It started off great, but now I’m getting bled dry and I don’t have the freedom I thought I would have. What should I do?

Regards,

C.I. O’Really.”

Some of you will be suffering from the same concerns as poor Mr. O’Really. Over the last two years it seems like every organisation has been enthusiastically walking down the aisle with whichever cloud vendor had the best deal at the time.

In this ‘heartache column’, I want to look at the inhibitors of cloud adoption and the factors driving the fear of cloud lock-in, then talk through the two crucial steps users can take to get the best of both worlds – the business velocity provided by the cloud, without the risk of locking themselves into a specific vendor.

Our own Cloud Brief research has found 82% of respondents are strategically using or evaluating the cloud today. And so they should: cloud computing is helping businesses build better services, more quickly and at lower cost. Better, quicker, cheaper. It’s what we’re all looking for in a relationship with the vendors we love the most.

The number one driver for cloud adoption is agility – the need to roll out new applications faster. This was reinforced at a recent meeting I had in London with developers from a leading global financial institution. They complained that it takes three months for the hardware supporting a new project to be procured, installed, racked, and stacked. That is clearly unacceptable in today’s hyper-competitive market governed by agile development, continuous integration, and elastic scaling.

Where did it all go wrong?

The cloud is providing serious value, but hitching your wagon to one provider could put you at a serious competitive disadvantage. Another cloud vendor could make all sorts of changes to become more appealing: new services, features or region availability could all give your competitors an advantage.

So what form does that lock-in take? It’s not about the hardware, operating systems and software of the past; instead it’s about APIs, services and data. The underlying infrastructure – compute, storage and networking – is largely a commodity and can be swapped between cloud providers. But as we move up the stack, the APIs and the data these services exchange become less portable.

There are a number of potential friction points: security, management, continuous integration pipelines and container orchestration. That’s not all, though: businesses also need to consider their content management, search, databases, data warehouses, and analytics.

It’s the data management services which cause most concern. You may have heard of the term ‘data gravity’. It was coined a few years ago, but has a real resonance today. As you’ll remember from fourth form physics: as an object’s mass increases, so does the strength of its gravitational pull. Well, the same is true for data. The more data you have in a specific location, the harder it is to move.
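To put some purely illustrative numbers on it: shifting 100TB out of a cloud region over a dedicated 1Gbps link would take roughly nine days of continuous transfer, and at a list egress price of a few cents per gigabyte you would be looking at several thousand pounds in data transfer charges alone – before you have even re-provisioned the services that sit on top of that data. The bigger the dataset, the stronger the pull.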

So you’ve got data gravity tugging on your heartstrings and you’re staring down the barrel of being locked-in to an endless stream of unfulfilling and expensive date nights. What are the options?

Open your heart

There are no easy answers, especially if your organisation needs the simplicity and convenience of an “as-a-service solution”. However, there are two things you can do to greatly improve your options.

The first step is to find a service that can be run on multiple public cloud platforms. Would you buy a car that can only be driven on 30% of roads? Don’t buy a service that only works on Amazon, Google or Microsoft.

Second, find services which have open source alternatives. That way, if you tire of your current vendor relationship, you can give them the heave-ho, download the same software, and run it yourself, anywhere. Having an open source choice means you can run your deployment on another cloud or on your own private infrastructure. Going from an as-a-service model to running it yourself does require planning, but it gives you the ultimate freedom.

To demonstrate why both of the above points matter, take a look at Comparethemarket.com. The largest price comparison site in the UK made the switch from managing its own on-premise infrastructure to Amazon Web Services (AWS). As part of that move, the IT team considered the AWS DynamoDB NoSQL database service. However, concerns about ceding too much control to AWS led Comparethemarket.com to eliminate DynamoDB as an option.

So our advice to C.I. O’Really is blunt: get a divorce. If your vendor isn’t treating you right, find one that will. Find a solution that is cloud-agnostic and has an open source version available. These two points will go a long way to giving you the joy of the cloud, along with the freedom. They may seem charming, but please: just date your cloud provider, don’t marry them.

Editor’s note: Dear Deidre is a well-known UK agony aunt column in The Sun newspaper. Other agony aunts are also available.

Conducting containers in the cloud: Evaluating orchestration options


The standard has been set for containerisation. It’s Docker. Need an identical copy of your application stack in multiple environments? Package it once for Docker and away you go. It’s simple, and it’s why we love containers. That was all you needed back in 2015 – but things are changing fast.

Now that containers have proved their viability, it’s on to the next big area for innovation: orchestration. This is where there’s even more competition and the potential for confusion for users.

As your organisation scales and your container requirements go from half a dozen to many hundreds, one DevOps engineer can’t deploy them all. The question becomes: how will you automate, deploy and manage your fleet of containers? The answer for most teams will be through orchestration.

Each orchestration platform has advantages relative to the others and so users should evaluate which are best suited to their needs. There are, of course, a lot of factors to consider. Some that I would recommend you keep in mind include:

  • Does your enterprise have an existing DevOps framework that the orchestration must sit within?
  • Will the containers be run on bare metal, private virtual machines (VMs) or in the cloud?
  • What skills do you have within your organisation?
  • Is your database designed to provide always-on availability and scalability in these distributed environments?

Now that you know what you want to do, it’s time to consider your options. There are many different types of orchestration platforms, but for the purposes of this article I wanted to look at the three most popular and where I think they can be of most use:

Docker Compose

Made by the container experts, Docker Compose is a great starting point for orchestration. Docker has put together a platform that may not have all the bells and whistles you’ll find elsewhere, but it is a simple and effective tool.

Some of the benefits of using Docker Compose include:

  • A single host can run multiple, isolated environments
  • Data is preserved when containers are shut down and restarted
  • It determines which containers for a project are already running, and which need to be started

It can also incorporate other Docker tools such as Docker Machine, which makes multiple machines look like a single machine, and Docker Swarm, a technology that claims to be able to deal with 1,000 physical servers concurrently.
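To give a sense of how lightweight it is, here is a minimal, illustrative docker-compose.yml – the service names, image and ports are placeholders rather than recommendations. It wires a hypothetical web service to a database container, with a named volume so the data survives restarts:

    version: "2"
    services:
      web:
        build: .              # build the app image from the local Dockerfile
        ports:
          - "8000:8000"       # expose the app on the host
        depends_on:
          - db                # start the database container first
      db:
        image: mongo:3.2      # placeholder database image
        volumes:
          - db-data:/data/db  # named volume keeps data across restarts
    volumes:
      db-data:

Running docker-compose up in the project directory brings both containers up together, and running the same file under a different project name gives you another isolated environment on the same host.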

When to use it: If you’re a smaller shop or you have simple requirements, Docker Compose could be right for you. It’s also often used in testing environments, with applications then put into production on more feature-rich platforms like the two I’m going to describe next.

Kubernetes

Kubernetes was created by Google and is one of the most feature-rich and widely used orchestration frameworks. In fact, it’s the platform you’ve probably already heard of. Much as Docker became synonymous with containers, Kubernetes seems to be gaining the same status in container orchestration.

According to Wikipedia, Kubernetes is Greek for “helmsman” – a clever and apt name, as Kubernetes is designed to pilot your containers in multiple environments, including bare metal, on-premise, VMs and public clouds.

Its key features include:

  • Automated deployment and replication of containers
  • Online scale-in or scale-out of container clusters
  • Load balancing over groups of containers
  • Rolling upgrades of application containers
  • Resilience, with automated rescheduling of failed containers

When to use it: Kubernetes is suitable for most use cases involving orchestration and has already proved itself within Google’s demanding infrastructure. It was designed with virtual machines in mind, so it will be particularly effective on those.
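As a flavour of what those features look like in practice, here is a minimal, illustrative Kubernetes Deployment manifest – the names and image are placeholders, and the exact apiVersion will depend on the version of Kubernetes you’re running. It asks for three replicas of a stateless web container with a rolling update strategy:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                 # Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate       # replace containers gradually on upgrade
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx          # placeholder container image
            ports:
            - containerPort: 80

Applying the manifest with kubectl apply -f creates the deployment, kubectl scale adjusts the replica count, and if a container or node fails the scheduler restarts the affected containers elsewhere – exactly the resilience and rolling-upgrade behaviour listed above.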

Mesos

Apache Mesos is designed to scale to tens of thousands of physical machines and it’s already being relied on by the likes of Twitter, Airbnb, and Apple.

It’s true that Mesos is fantastic at scale, but it doesn’t have the features or ease of use that Kubernetes does. There is likely to be extra work designing services, and some coding required too, but depending on the use case this may lead to even better performance.

There is actually a project currently in the works to run Kubernetes as a Mesos framework. Mesos provides the management of thousands of hosts, while Kubernetes adds the higher-level functions: load balancing, high availability through failover (rescheduling), rolling upgrades, and elastic scaling.

When to use it: Mesos was designed for physical machines, so that’s the area where I’d specifically recommend it over Kubernetes. It’s also worth looking at if your team has the skills, and the time to do some research and experimentation, to see how it can be fine-tuned for your use case.

Conclusion

That is, of course, not an exhaustive list, but those are the three orchestration platforms I would investigate first. The two other points I would like to make are around maturity and migration.

These are not technologies that were designed with a grand plan. Most of them came out of internal projects, so they were built for a particular set of use cases. They are also still relatively young, so functionality in some instances may be frustratingly limited – although these days kids grow up fast, and open source technologies tend to mature quickly.

The other point is that it is possible to migrate from one orchestration platform to another, though it’s much better to do so while the application is still in the testing phase. Switching orchestration platforms once a product is in production could get sticky quickly.

Going from solo performer with a few containers to conducting a whole “orchestra” is a big step, but as containers become more widely adopted it will become standard. My advice: play around now so you can get familiar with the different options and know which one will help you solve your business problems. Let me know how you get on (@matkeep) and good luck conducting!