Container systems like Docker are a popular way to build ‘cloud native’ applications, designed for cloud environments from the outset. A typical enterprise deployment can have thousands of containers, and they’re often even more ephemeral than virtual machines, appearing and disappearing in seconds. The problem with containers is that they’re difficult to manage at scale: load balancing and updating them one at a time via the command line soon becomes impractical. It’s like trying to herd a flock of sheep by dealing with each animal individually.
Enter Kubernetes. If containers are sheep, then Kubernetes is your sheepdog. You can use it to handle tasks across lots of containers and keep them in line. Google created Kubernetes in 2014 and then, with the Linux Foundation, launched the Cloud Native Computing Foundation (CNCF) in 2015 to steward it as an open source project for the community. Kubernetes can work with different container systems, but the most common is Docker.
One problem that Kubernetes solves is IP address management. Docker manages its own IP addresses when creating containers, independently of the host virtual server’s IP in a cloud environment. Containers on different nodes may even end up with the same IP address as each other, which makes it difficult for containers on different nodes to communicate. And because containers on the same host share that host’s IP address space, they can’t use the same ports: two containers on the same node can’t each expose a service over port 80, for example.
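You can see the port clash for yourself on a single Docker host. The second command below fails because host port 80 is already taken (the container names here are arbitrary):

    docker run -d -p 80:80 --name web1 nginx
    docker run -d -p 80:80 --name web2 nginx   # fails: port is already allocated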
Understanding Kubernetes pods and clusters
Kubernetes solves problems like this by grouping containers into pods. All the containers in a pod share the same IP address, and they can communicate with each other over localhost. Kubernetes exposes pods as services (an example might be a database or a web app). Collections of pods and the nodes they run on are known as clusters, and each container in a clustered pod can talk to containers in other pods using Kubernetes’ built-in name resolution.
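That name resolution works because Kubernetes runs its own DNS service inside the cluster. As a quick sketch (the service name ‘orders-db’ here is invented), any container can look up a service by name:

    # Full form: <service>.<namespace>.svc.cluster.local
    nslookup orders-db.default.svc.cluster.local
    # From a pod in the same namespace, the short name is enough:
    nslookup orders-db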
You can have multiple pods running on a node (a physical or virtual server). Each node runs its own kubelet, which ensures that the containers on that node are running in their desired state, along with a kube-proxy, which handles network communication for the pods. Nodes work together to form a cluster.
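Once you have a cluster running (more on that below), you can see its nodes and what each is running with a single kubectl command:

    # Lists every node with its status, role, internal IP and kubelet version
    kubectl get nodes -o wide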
Kubernetes manages all this using several components. The first is the overlay network, which handles networking between pods on different nodes. You can install a variety of these with a range of capabilities, and more advanced tools, such as the Istio service mesh, layer extra traffic management on top.
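Installing one is usually a single kubectl apply of the manifest your chosen project publishes. As an example, Flannel (a popular, simple overlay) is installed along these lines; check the project’s documentation for the current manifest URL:

    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml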
The second component is etcd, a database that stores the details of every object in the cluster as a series of key-value pairs. etcd runs on the master node, which is the machine used to administer all the worker nodes in the cluster. The master node also hosts an API server that acts as the interface for all components in the cluster.
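If you’re curious, you can peek at those key-value pairs with etcd’s own client. This is a sketch that assumes a kubeadm-built cluster, where etcd’s client certificates live under /etc/kubernetes/pki/etcd (paths vary by setup):

    ETCDCTL_API=3 etcdctl \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/server.crt \
        --key=/etc/kubernetes/pki/etcd/server.key \
        get /registry/pods --prefix --keys-only   # list the key for every pod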
A node controller running on the master node handles what happens when nodes go down, while a service account controller manages accounts and access tokens so that pods can authenticate and access each other’s services. A replication controller creates running copies of pods across different nodes that run the same services, sharing workloads and acting as backups.
Installing and running Kubernetes
Installing Kubernetes differs from platform to platform; it runs not just on Linux, but also on Windows and macOS. In summary, you’ll install your container system (usually Docker) on your master and worker nodes. You’ll also install Kubernetes on each of these nodes, which means installing three tools: kubeadm for cluster setup, kubectl for cluster control, and kubelet, which registers each node with the Kubernetes API server.
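As a sketch, on a Debian or Ubuntu node the tooling install looks something like this, once you’ve added the Kubernetes package repository as described in the official docs:

    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    # Stop automatic upgrades from changing cluster components underneath you
    sudo apt-mark hold kubelet kubeadm kubectl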
You’ll enable the kubelet service on each of these nodes so that it’s ready to talk to the API. Then initialise your cluster by running the kubeadm command kubeadm init on your master node. This will give you a custom kubeadm join command that you can copy and run on each worker node to join it to the cluster.
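Put together, the bootstrap looks something like this (the address, token and hash below are placeholders; kubeadm init prints the real join command for you):

    # On every node, make sure kubelet starts and keeps running:
    sudo systemctl enable --now kubelet

    # On the master node only:
    sudo kubeadm init

    # Then on each worker node, paste the join command that init printed:
    sudo kubeadm join 192.168.1.10:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>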
After this, you can create a pod. You define the pod’s characteristics using a configuration file known as a PodSpec. This is often written in YAML (‘YAML Ain’t Markup Language’), a human- and machine-readable configuration format. Your YAML file will define the namespace that your pod exists in; namespaces let you divide a single cluster into multiple virtual clusters, so that different teams or projects can share the same physical machines.
The PodSpec also defines the details for each container inside the pod, including the Docker image on which each is based. The file can also define a pod-level volume shared by the containers in the same pod, so that they can store data on disk and share it. You can create a pod using a single command – kubectl create – passing it the name of your YAML file with the -f flag.
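Here’s a minimal sketch of such a PodSpec; the pod name, namespace, images and paths are all invented for illustration (create the namespace first with kubectl create namespace marketing):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod
      namespace: marketing
    spec:
      containers:
      - name: web
        image: nginx:1.25
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html   # web serves what helper writes
      - name: helper
        image: busybox:1.36
        command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 5; done"]
        volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      volumes:
      - name: shared-data
        emptyDir: {}   # a pod-scoped volume both containers share

Save it as web-pod.yaml and bring it to life with kubectl create -f web-pod.yaml.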
Running copies of a pod for resilience and workload sharing is known as replication, and a collection of replicated pods is called a replica set. While you can manage replica sets directly, you’ll often control them using another kind of Kubernetes object known as a deployment. These are objects in the Kubernetes cluster that you use to create and update replica sets, and to clear them away when you’re done with them. Replica sets can contain many pods, and a deployment gives you a strategy to update them all (rolling out a new version, say).
A YAML-based deployment file also contains a PodSpec. After creating a deployment (and therefore its replica pods) using a simple kubectl create command, you can then update the whole deployment by changing the version of the container image it’s using. Do this using kubectl set image, passing it a new version number for the image. The deployment now updates all the pods with the new specification behind the scenes, taking care to keep a percentage of pods running at all times so that the service keeps working.
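As a sketch, a deployment manifest wraps a PodSpec in a template and adds a replica count; the names and image versions here are invented:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-deployment
    spec:
      replicas: 3              # the replica set keeps three pods running
      selector:
        matchLabels:
          app: web
      template:                # everything below here is the PodSpec
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.24

Create it with kubectl create -f web-deployment.yaml, then roll every pod forward to a new image with:

    kubectl set image deployment/web-deployment web=nginx:1.25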
This is all great, but how do we actually talk to and reference these pods? If we have, say, ten web server pods in a replica set, we don’t want to have to work out which pod’s IP address to visit. That’s where Kubernetes’ services come in. We define a service that exposes that replica set using a single IP address and a service name like ‘marketing-server’. You can connect to the service’s IP address or the service name (using Kubernetes’ DNS service) and the service routes traffic to the pods behind the scenes to deliver what you need.
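A service definition for that scenario might look like this sketch (the name and labels match the earlier deployment example; both are invented):

    apiVersion: v1
    kind: Service
    metadata:
      name: marketing-server
    spec:
      selector:
        app: web         # matches the label on the pods in the replica set
      ports:
      - port: 80         # the port the service exposes
        targetPort: 80   # the port the pods listen on

Any pod in the same namespace can now reach the web servers at http://marketing-server, and Kubernetes balances the requests across the replicas.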
That’s a short introduction to Kubernetes. As you can imagine, there’s plenty more to learn. If you’re hoping to manage cloud native services in any significant way, you’re going to bump up against Kubernetes frequently, so it pays to invest the time in grokking this innovative open source technology as much as you can. With Kubernetes now running on Amazon Web Services and Azure alongside Google’s own cloud service, it’s already behind many of the cloud services that people use today.