Many companies are moving to a new way of delivering services to customers based on microservices. Rather than building huge, monolithic apps, a microservices architecture composes applications from small, interconnected components. These modern applications tend to be easier to update and expand than traditional ones, as replacement services can be slotted in via APIs rather than requiring full rewrites.
To support this design approach, developers are making more use of cloud and containers. According to the Continuous Intelligence Report for 2019, the percentage of companies adopting containers has grown to 30 percent. Cloud services can host anywhere from tens to thousands of containers, depending on how large the application needs to be, and that number can be raised or lowered in response to demand. This elasticity makes containers complex to manage, and for companies that run their critical applications on these new services, managing all this infrastructure is a huge challenge.
To administer this new infrastructure, companies are adopting Kubernetes to orchestrate their containers. In the CI Report, the percentage of companies adopting Kubernetes ranged from 20 percent for businesses running on AWS alone, through to 59 percent for those running on a combination of AWS and Google Cloud Platform.
For companies running on AWS, GCP and Azure, adoption of Kubernetes rose to more than 80 percent. In these multi-cloud environments, Kubernetes helps to streamline operations and respond more quickly to changes in demand.
Monitoring Kubernetes
So far, Kubernetes has helped companies turn the idea of multi-cloud into a reality. By being able to run the same container images across multiple cloud platforms, IT teams should be able to retain control over their IT and keep leverage when it comes to pricing.
However, Kubernetes is still a developing technology in its own right. While it provides a strong foundation for developers to build and orchestrate their applications’ infrastructure, there are some gaps when it comes to maintaining, monitoring and managing Kubernetes itself.
Kubernetes pods, nodes and even whole clusters can all be destroyed and rebuilt quickly in response to changes in demand levels. Rather than looking at infrastructure, effectively monitoring what is running in Kubernetes involves looking at the application level and focusing on each Service and Deployment abstraction instead. Monitoring therefore has to align with the way Kubernetes is organised, as opposed to trying to fit Kubernetes into a previous model.
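To make that concrete, here is a minimal sketch, using the official Kubernetes Python client, of what monitoring at the Deployment level rather than the pod level can look like. The "shop" namespace and the app=checkout label are hypothetical placeholders, not part of any standard setup.

```python
# A minimal sketch, assuming the official `kubernetes` Python client
# (pip install kubernetes) and a working kubeconfig. The "shop" namespace
# and the app=checkout label are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()
core = client.CoreV1Api()

# Query the Deployment abstraction rather than any single pod:
# desired vs ready replicas stays meaningful even as pods come and go.
dep = apps.read_namespaced_deployment("checkout", "shop")
print(f"{dep.metadata.name}: {dep.status.ready_replicas}/{dep.spec.replicas} replicas ready")

# The pods behind it are found via the Deployment's label selector,
# not by name: names change every time a pod is rebuilt.
pods = core.list_namespaced_pod("shop", label_selector="app=checkout")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```

Because the query is anchored to the Deployment and its labels, the view it gives survives pods being destroyed and rebuilt underneath it.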
It is also important to understand the different forms of data that might be captured. Log data from an application component can provide insight into what processes are taking place, while metric data on application performance can provide insight into the overall experience that an application is delivering.
Joining up log and metric data should give a complete picture of the application, but this task is not as easy as it sounds. It can be near impossible to connect the dots between metrics on a node and logs from a pod on that node, because the metadata tagging of the data being collected is not consistent. A metric might be tagged with the pod and cluster it was collected from, while a log might be categorised using a different naming convention.
Getting a true picture of what is taking place in an application running on Kubernetes involves looking at all the data being created and correlating it. Using metadata from the application alongside the incoming logs and metrics, a consistent and coherent view of what is taking place across all the containers being used can be established. This means collecting all the metadata together and enriching it so that consistent tagging and correlation can be carried out.
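As an illustration of that enrichment step, the sketch below maps two hypothetical tagging schemes (a metric tagged with "pod" and "cluster", and a log tagged under different key names) onto one canonical schema so the two records can be correlated. Every field name here is invented for the example rather than taken from any particular collector.

```python
# A minimal sketch of metadata enrichment: map inconsistent tag keys from
# different collectors onto one canonical schema so logs and metrics can
# be correlated. All field names here are hypothetical examples.

# A metric record and a log record describing the same pod, tagged
# under different naming conventions.
metric = {"pod": "checkout-7d9f", "cluster": "prod-eu", "cpu_usage": 0.42}
log = {"kubernetes.pod_name": "checkout-7d9f", "k8s_cluster": "prod-eu",
       "message": "payment service timeout"}

# One canonical key for each piece of metadata, whatever the source calls it.
KEY_ALIASES = {
    "pod": "pod", "kubernetes.pod_name": "pod",
    "cluster": "cluster", "k8s_cluster": "cluster",
}

def enrich(record):
    """Rewrite known tag keys to the canonical schema; keep everything else."""
    return {KEY_ALIASES.get(key, key): value for key, value in record.items()}

# After enrichment both records share (pod, cluster), so they can be joined.
def correlation_key(record):
    return (record["pod"], record["cluster"])

m, l = enrich(metric), enrich(log)
if correlation_key(m) == correlation_key(l):
    print("correlated:", m["cpu_usage"], "<->", l["message"])
```

Real pipelines do this at ingestion time and at much larger scale, but the principle is the same: normalise the tags once, then every downstream query can join logs and metrics on the shared keys.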
Bringing all the data together
Looking at Kubernetes, it’s easy to see why the number of companies utilising it is on the rise. However, developers currently have to run multiple different tools to take data out of container instances and bring that information back for analysis and monitoring, which can be hard to scale. For log data collection, Fluent Bit processes and forwards data from containers; similarly, Fluentd provides log and event data collection and organisation. The open source project Prometheus provides metrics collection for container instances, while Falco provides a way to audit data from containers for security and compliance purposes.
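To give a feel for pulling data back from one of these tools, the sketch below queries container CPU metrics through Prometheus’s standard HTTP query API. It assumes a Prometheus server reachable at localhost:9090 that scrapes cAdvisor (the usual source of the container_cpu_usage_seconds_total metric); both are assumptions about the environment, not requirements of the tool.

```python
# A minimal sketch of querying container metrics from Prometheus via its
# HTTP API (/api/v1/query). Assumes a Prometheus server on localhost:9090
# scraping cAdvisor, which exposes container_cpu_usage_seconds_total.
import requests

PROM_URL = "http://localhost:9090/api/v1/query"  # assumed server address

# Per-container CPU usage rate over the last five minutes.
query = "rate(container_cpu_usage_seconds_total[5m])"
resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    labels = result["metric"]           # tags: pod, namespace, container...
    timestamp, value = result["value"]  # an instant-vector sample
    print(labels.get("pod", "?"), float(value))
```

Note that the labels returned here are exactly the metadata discussed above: they are what a correlation layer has to reconcile with the tags attached to log data by Fluent Bit or Fluentd.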
Each of these tools can provide an element of observability for container instances, but ideally they should be combined to get a fuller picture of how containers are operating over time. Similarly, automating the process for gathering and correlating data across multiple tools can help make this information easier to use and digest.
Bringing all these different sets of data together not only provides a better picture of what is taking place in a specific container or pod; the merged data can also be used alongside other sources of information. By gathering all this data in real time, as it is created, you can see how your company is performing over time. This continuous stream of intelligence can be used to see how decisions affect both IT infrastructure and business performance.
Interested in hearing industry leaders discuss subjects like this and share their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.