All posts by markpidgeon

Kubernetes and multi-cloud: How to monitor your modern applications effectively

Many companies are moving to a new way of delivering services to customers based on microservices. Rather than building huge, monolithic apps, the microservices approach uses small, interconnected application components instead. These modern applications tend to be easier to update and expand than traditional ones, as replacement services can be slotted in using APIs rather than requiring full rewrites.

To support this design approach, developers are making more use of cloud and containers. According to the Continuous Intelligence Report for 2019, the percentage of companies adopting containers has grown to 30 percent. Cloud services can host anywhere from tens to thousands of containers, depending on how large the application needs to be, and the number of containers can be raised or lowered with demand. This elasticity makes containers complex to manage: for companies that run their critical applications on these new services, managing all this infrastructure is a huge challenge.

To administer this new infrastructure, companies are adopting Kubernetes as a way to orchestrate their IT. In the CI Report, the percentage of companies adopting Kubernetes ranged from 20 percent for businesses running on AWS alone, through to 59 percent for those running on a combination of AWS and Google Cloud Platform.

For companies running on AWS, GCP and Azure combined, Kubernetes adoption exceeded 80 percent. In multi-cloud environments, Kubernetes helps companies streamline their operations and respond more quickly to changes in demand.

Monitoring Kubernetes

So far, Kubernetes has helped companies turn the idea of multi-cloud into a reality. By being able to run the same container images across multiple cloud platforms, IT teams should be able to keep control over their IT and retain leverage when it comes to pricing.

However, Kubernetes is still a developing technology in its own right. While it provides a strong foundation for developers to build and orchestrate their applications’ infrastructure, there are some gaps when it comes to maintaining, monitoring and managing Kubernetes itself.

Kubernetes pods, nodes and even whole clusters can all be destroyed and rebuilt quickly in response to changes in demand levels. Rather than looking at infrastructure, effectively monitoring what is running in Kubernetes involves looking at the application level and focusing on each Service and Deployment abstraction instead. Monitoring therefore has to align with the way Kubernetes is organised, as opposed to trying to fit Kubernetes into a previous model.
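As an illustration, here is a minimal sketch of Deployment-level monitoring using the official Kubernetes Python client – an assumed tooling choice, not a prescribed one – which reports readiness per Deployment rather than per node or pod:

```python
# Minimal sketch: report health per Deployment, matching how Kubernetes
# organises workloads. Assumes `pip install kubernetes` and kubeconfig access.
from kubernetes import client, config

def report_deployment_health():
    config.load_kube_config()  # use load_incluster_config() when run in a pod
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        state = "OK" if ready == desired else "DEGRADED"
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} replicas ready [{state}]")

if __name__ == "__main__":
    report_deployment_health()
```

Because the check is keyed to the Deployment abstraction, it keeps working even as the pods and nodes underneath it are destroyed and rebuilt.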

It is also important to understand the different forms of data that might be captured. Log data from an application component can provide insight into what processes are taking place, while metric data on application performance can provide insight into the overall experience that an application is delivering.

Joining up log and metric data should give a complete picture of the application, but this task is not as easy as it sounds. It can be near impossible to connect the dots between metrics on a node to logs from a pod in that node. This is because the metadata tagging of the data being collected is not consistent. A metric might be tagged with the pod and cluster it was collected from, while a log might be categorised using a different naming convention.

Getting a true picture of what is taking place in an application running on Kubernetes involves looking at all the data being created and correlating it. Using metadata from the application alongside the incoming logs and metrics, a consistent and coherent view of what is happening across all the containers in use can be established. This means collecting all the metadata together and enriching it so that consistent tagging and correlation can be carried out.
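To make that concrete, here is a hedged sketch of the enrichment step; the tag names and records are hypothetical stand-ins for the inconsistent conventions different collectors use:

```python
# Hypothetical example: map each collector's tag names onto one canonical
# schema so logs and metrics from the same pod can be joined.
CANONICAL_KEYS = {
    "pod": "pod_name", "k8s.pod.name": "pod_name", "pod_name": "pod_name",
    "cluster": "cluster", "k8s.cluster": "cluster", "cluster_name": "cluster",
}

def enrich(record: dict) -> dict:
    """Rewrite collector-specific tag names to the canonical schema."""
    return {CANONICAL_KEYS.get(k, k): v for k, v in record.items()}

metric = enrich({"pod": "checkout-7d4f", "cluster": "prod-eu", "cpu_pct": 91})
log = enrich({"k8s.pod.name": "checkout-7d4f", "k8s.cluster": "prod-eu",
              "msg": "timeout calling payment service"})

# After enrichment the keys match, so correlation is a simple join.
if (metric["pod_name"], metric["cluster"]) == (log["pod_name"], log["cluster"]):
    print("correlated:", metric["cpu_pct"], "->", log["msg"])
```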

Bringing all the data together

Looking at Kubernetes, it's easy to see why the number of companies using it is on the rise. However, developers can currently have multiple different tools in place to take data out of container instances and bring that information back for analysis and monitoring, which can be hard to scale. For log data collection, Fluent Bit processes and forwards data from containers; similarly, Fluentd provides log and event data collection and organisation. The open source project Prometheus provides metrics collection for container instances, while Falco provides a way to audit data from containers for security and compliance purposes.
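On the metrics side, for example, an application can expose values for Prometheus to scrape with the official prometheus_client library; the metric name and port below are illustrative choices rather than prescribed values:

```python
# Minimal sketch: expose a custom metric for Prometheus to scrape.
# Assumes `pip install prometheus-client`; name and port are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

ACTIVE_SESSIONS = Gauge("app_active_sessions",
                        "Number of active user sessions")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<pod>:8000/metrics
    while True:
        ACTIVE_SESSIONS.set(random.randint(0, 100))  # stand-in for real data
        time.sleep(15)
```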

Each of these tools can provide an element of observability for container instances, but ideally they should be combined to get a fuller picture of how containers are operating over time. Similarly, automating the process for gathering and correlating data across multiple tools can help make this information easier to use and digest.

Bringing all these different sets of data together not only provides a better picture of what is taking place in a specific container or pod; the merged data can be used alongside other sources of information too. By gathering all this data together in real time, as it is created, you can see how your company is performing over time. This continuous stream of intelligence can be used to see how decisions affect both IT infrastructure and business performance.


Putting data security at the heart of digital transformation – from culture to code

In the new digital economy, data is the most valuable asset a company possesses. However, according to a recent survey by IDC, data security receives as little as six per cent of the total security budget. Understandably, many information security professionals are feeling the pinch – and, according to Goldsmiths, University of London, increasingly burning out and leaving the industry – while companies aren't spending enough on data security to stop attackers from swiping the family silver.

At the same time, large-scale digital transformation projects continue to be high-profile news. The IDC report also found that 97 per cent of respondents were using sensitive data with new technologies as part of digital transformation, but fewer than 30 per cent were using tools, such as encryption, to keep that data secure in these environments.

This lack of security is a worrying trend: security should be included by design in digital transformation projects and implemented as early as possible in the software development lifecycle.

Securing software

Software is eating the world. Marc Andreessen's famous description of the need for every company to become a software business has been devoured by enterprises, but this rapid process of change has given many organisations indigestion – and security headaches to boot. These investments are strategic ones, but they can often move ahead far faster than security teams can get involved.

Behind these changes, there are some bigger IT adoption trends taking place too. For example, environments have changed; many enterprises have moved from private cloud to hybrid cloud and are now embarking on multi-cloud. Our own Modern App Report found that multi-cloud adoption had doubled year on year to around 10 per cent of companies.

Similarly, application architectures have shifted from the traditional three-tier, client-server approach to new microservices-based approaches. The technology stack is now shifting to containerised applications orchestrated by popular open source platforms such as Kubernetes. The responsive, flexible and scalable capabilities of these technologies have yielded significant performance and efficiency gains, but they have also added greater complexity.

The ephemeral nature of technologies such as Docker and Kubernetes means that the security tools used to collate data from these applications – such as security information and event management (SIEM) platforms – are unable to keep pace with the rate of change taking place. Without that data, it's simply not possible to gain insight into your company's applications or its security posture.

Planning out any digital transformation project should include a thorough security needs assessment too. Done correctly, this provides a complete overview of your operating conditions and how your processes operate, and it helps meet the business demands that digital transformation projects bring.

Implementing a data-driven baseline as part of this process is also a vital way of protecting your enterprise. Using machine data – all the data created by your applications, infrastructure components, cloud services and more – supplies meaningful insights from metrics, logs and thresholds that you can evaluate against the current infrastructure and assess again once the project is live and running.
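As a sketch of what such a baseline might look like in practice – the three-sigma threshold and the sample values are illustrative assumptions, not prescriptions:

```python
# Hedged sketch: learn a metric's normal range from historical samples,
# then flag readings that fall outside it. Three sigma is an assumption.
from statistics import mean, stdev

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Return (lower, upper) bounds derived from historical samples."""
    mu, sigma = mean(samples), stdev(samples)
    return mu - 3 * sigma, mu + 3 * sigma

history = [120, 115, 130, 125, 118, 122, 127]  # e.g. requests per second
lower, upper = build_baseline(history)

for reading in [124, 240, 119]:
    if not lower <= reading <= upper:
        print(f"reading {reading} falls outside [{lower:.1f}, {upper:.1f}]")
```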

The right DevSecOps tools

Getting this visibility around the cloud can help development, security and operations teams converge their approaches. This convergence – commonly called DevSecOps – involves making security into a continuous process that is part of the development lifecycle. This convergence can help maintain the speed of digital transformation while also ensuring security rules get followed from the start.

A DevSecOps approach differs from older delivery pipeline methods in that traditional software development priorities have not tended to address software vulnerabilities from the start. When software development relies on integrating third-party components or publicly available images to create services, this supply chain element becomes more important for all the teams involved.
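One concrete supply-chain check – sketched below with hypothetical registry names and image references – is to verify that container images are pinned by digest and pulled only from approved registries:

```python
# Hypothetical example: flag images that use mutable tags or come from
# registries outside an approved set. Registry names are stand-ins.
APPROVED_REGISTRIES = {"registry.internal.example.com", "gcr.io"}

def check_image(image: str) -> list[str]:
    """Return a list of supply-chain problems found for one image reference."""
    problems = []
    registry = image.split("/")[0]
    if registry not in APPROVED_REGISTRIES:
        problems.append(f"unapproved registry: {registry}")
    if "@sha256:" not in image:
        problems.append("not pinned by digest (mutable tag)")
    return problems

for img in ["gcr.io/team/app@sha256:9f86d081884c7d659a2feaa0c55ad015a3bf4f1b",
            "docker.io/someuser/tool:latest"]:
    for problem in check_image(img):
        print(f"{img}: {problem}")
```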

Alongside this, there is a common assumption that DevSecOps is only about making sure that your security teams are working with developers and IT Ops teams. However, DevSecOps should go deeper than that in order to be successful. It's an approach that treats security as code, building data protection and privacy thinking into the code itself at every stage: from design and architecture through development, QA and pre-production, and into production.

In practice, this means working with development teams so that code is delivered in small updates, and building security checks into the process so that any vulnerabilities can be spotted quickly before they reach production. It also means taking a more proactive approach in which compliance monitoring is baked in as well, effectively keeping your organisation in a constant state of audit readiness.
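A simple sketch of such a gate follows; the finding format, IDs and severity threshold are hypothetical, standing in for whatever scanner a team actually runs:

```python
# Hedged sketch of a CI security gate: fail the build when the scanner
# reports findings at or above a severity threshold. Data is hypothetical.
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"  # block the merge on high or critical findings

def gate(findings: list[dict]) -> int:
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= SEVERITY_RANK[FAIL_AT]]
    for f in blocking:
        print(f"BLOCKED: {f['id']} ({f['severity']}) in {f['component']}")
    return 1 if blocking else 0  # a non-zero exit code fails the CI job

if __name__ == "__main__":
    example = [
        {"id": "VULN-0123", "severity": "critical", "component": "libexample"},
        {"id": "LINT-42", "severity": "low", "component": "app"},
    ]
    sys.exit(gate(example))
```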

As you may have guessed, time-consuming manual security analysis and auditing will slow down the frequency and speed of software delivery. Automation is therefore integral to the success of DevSecOps: areas such as threat investigation must run continuously, picking up emerging threats and vulnerabilities as code analysis identifies them. Using automated scans and analysis of data across the application, DevSecOps teams can concentrate on where they can provide the most value rather than spending time on manual correlation of potential issues.

Empowering IT teams

The DevSecOps principles should not be seen as a silver bullet for digital projects; indeed, they are only effective with the right tools and data to power them. Implementing DevSecOps has to be based on a common approach to the applications and services involved. There will be too many interactions taking place to decipher without a unified approach for monitoring and fine-tuning operations.

Making security the responsibility of everyone across IT does mean having to manage different levels of experience around software and security. Generally, software developers don't have the same experience of sifting through alerts to discern which ones are serious and should be investigated as risks, although they do have more expertise in new application design practices and how to put services together. Providing the right level of data – and making sure it is actionable and relevant for each team – is therefore something to consider as you implement your DevSecOps processes.

In a fast-paced environment, security tools that generate too many false positives can be as serious a problem as sticking with manual security testing. If too many issues come through, it can lead to "alert fatigue", and serious issues can then be missed. By developing a baseline and monitoring alert levels, IT teams can avoid this problem. Similarly, you can automate common responses to potential conditions or threats. At the same time, data can help teams to interact in real time around real risks or potential threats in software systems as they are discovered.
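For instance, here is a minimal sketch of suppressing repeats of the same alert inside a quiet window – the window length and alert key are illustrative assumptions:

```python
# Hedged sketch: notify only for alerts not seen within a quiet window,
# so a burst of identical alerts surfaces once instead of dozens of times.
import time

QUIET_WINDOW_SECS = 300  # illustrative choice
_last_seen: dict[str, float] = {}

def should_notify(alert_key: str, now: float | None = None) -> bool:
    """Return True only if this alert has not fired within the window."""
    now = time.time() if now is None else now
    last = _last_seen.get(alert_key)
    _last_seen[alert_key] = now
    return last is None or now - last > QUIET_WINDOW_SECS

# A burst at t=0, 60 and 120 notifies once; the recurrence at t=900 fires again.
for t in [0, 60, 120, 900]:
    if should_notify("checkout/high-latency", now=float(t)):
        print(f"t={t}: notify on-call")
```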

Digital transformation is still gathering pace – more and more organisations are looking at how to improve their agility and keep up with competitors. However, this should not come at the cost of security. In the same way that DevOps is a fundamentally different approach to developing and delivering software, DevSecOps represents a completely different approach to making software secure. This approach is necessary if companies want to get all the potential value of their digital investments and avoid unnecessary risks.
