
Data centres and cloud networks: Security in the modern context

Traditionally, companies have sought to protect themselves by building a hardened IT network perimeter that kept all potential cyber threats out, enforced by network security platforms such as firewalls. In the modern context, however, this has become a restrictive and dangerous approach, and I will explain why.

What we think of as traditional firewalls can only really inspect unencrypted traffic, so attackers use encrypted communications to exploit assets and maintain control over them. Attackers have also moved to exploit changes in application design and implementation, using network paths between application components that traverse internal data centre and cloud networks.

While traditional network security appliances, such as firewalls and Intrusion Prevention Systems (IPS), are still useful for creating choke points in conventional networks, their utility declines rapidly in cloud and distributed networks. This is because the traditional model of network security assumed that the majority of traffic would pass “south” from the perimeter towards monolithic service pods, with little traffic propagating across the data centre. We also assumed that the majority of our services would be hosted in data centres that enterprises would own and operate themselves.

In contrast, modern application architecture takes advantage of highly automated cloud and hosted data centre solutions based on multiple layers of virtualisation. The rise of containerisation and the move towards micro-service architectures has also led to a proliferation of network traffic between workloads within, and across, data centre and cloud networks. Much of this traffic now moves east-west rather than north-south, vastly reducing the effectiveness of traditional security appliances. It also reduces our visibility into the traffic flows between application components. The automation and orchestration functions within these applications can make it difficult to predict how and where data will transit the network, and whether the network will be entirely under our control.

To protect these modern application architectures, we need to be able to apply security policy to east-west traffic in a way that is consistent with the automation and orchestration tools available within the enterprise. Conventional network security tools typically integrate poorly with these systems, which leads to delays in the implementation of new services and creates unwelcome blockers in the management of service infrastructures, although vendors continue to improve matters through configuration APIs (application programming interfaces) and the general move to Software Defined Networking (SDN).
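As a concrete illustration, the sketch below shows how a security rule might be pushed through such a configuration API from an automation pipeline rather than through a manual change window. The endpoint, payload schema and token here are hypothetical stand-ins, not any specific vendor's API; the point is simply that a policy change becomes one more call that orchestration tooling can make.

```python
import requests

# Hypothetical management endpoint and token; real vendor APIs
# (and their payload schemas) will differ.
API_BASE = "https://fw-manager.example.internal/api/v1"
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"

def push_rule(src_tag: str, dst_tag: str, port: int) -> None:
    """Create an allow rule between two workload tags via the
    (hypothetical) firewall manager's REST API."""
    rule = {
        "action": "allow",
        "source": {"tag": src_tag},        # tag-based, not IP-based
        "destination": {"tag": dst_tag},
        "protocol": "tcp",
        "port": port,
    }
    resp = requests.post(
        f"{API_BASE}/rules",
        json=rule,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly so the pipeline can roll back

# e.g. allow the web tier to reach the payment service:
# push_rule("role=web", "role=payments", 8443)
```

Because the rule references workload tags rather than IP addresses, it remains valid when the orchestrator reschedules a workload onto another host or another data centre.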

One approach taken by conventional network security vendors has been to create virtual appliance versions of their existing platforms, with the intention that these can be deployed in cloud and virtualised networks in a way that mirrors traditional deployments. Unfortunately, this does nothing to alleviate the key issues with the legacy model: the result is a broken approach that fails to address the crucial requirements of the modern network. A new model for delivering network security is required.

In modern environments, we need network security functions to be heavily automated and capable of integrating with the standard toolsets available to operational teams. New toolsets should devolve network security functions down to the endpoint and/or workload, whilst still providing centralised, programmatic methods for configuration. This requirement has led to the development of micro- or nano-segmentation. The approach is based on the need to apply security policies to network traffic regardless of where services are physically deployed, and allows the distribution of fine-grained security policies, usually at the workload or container level. This ensures that traffic between workloads is still subject to inspection and the application of policy, even if it never leaves the physical host on which they run.
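To make that concrete, here is a minimal sketch of a fine-grained, workload-level policy in the style of a Kubernetes NetworkPolicy, built as a plain Python dictionary so that automation can template it and serialise it with PyYAML. The namespace, labels and port are illustrative assumptions, not taken from any real deployment.

```python
import yaml  # PyYAML

# Illustrative names: the namespace, labels and port are assumptions
# for the sketch, not any real deployment's values.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "orders-from-web-only", "namespace": "shop"},
    "spec": {
        # Applies to every pod labelled app=orders in this namespace.
        "podSelector": {"matchLabels": {"app": "orders"}},
        "policyTypes": ["Ingress"],
        # Only pods labelled app=web may connect, and only on TCP/8080.
        # Enforcement happens at the workload, so the rule holds even
        # when both pods are running on the same physical host.
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "web"}}}],
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))
# The output can be applied with `kubectl apply -f -`, or pushed
# through the cluster API by the same pipeline that deploys the app.
```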

This is a vital point: traditional network security approaches cannot do this, since they require the traffic to break out of the physical host at some point. In the past, attempts to meet this requirement have led to highly complex and fragile routing configurations that often sacrifice key advantages of virtualised networks. These workaround solutions have often been exploited by attackers to persist within a compromised network and can enable lateral movement within a service, something that micro-segmentation technology is explicitly designed to prevent.
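A common expression of this protection is a default-deny baseline: no workload may receive traffic unless a policy explicitly allows it, which blocks lateral movement by default. The sketch below, again in the Kubernetes NetworkPolicy style with an assumed namespace name, denies all ingress to every pod in the namespace; fine-grained allow rules like the one above are then layered on top.

```python
import yaml  # PyYAML

# Illustrative default-deny baseline; the namespace name is an assumption.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "shop"},
    "spec": {
        "podSelector": {},           # empty selector = every pod
        "policyTypes": ["Ingress"],  # no ingress rules are listed,
    },                               # so all inbound traffic is denied
}

print(yaml.safe_dump(default_deny, sort_keys=False))
```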

A nice side-effect of the centralised management of network security policy on workloads is that, through logging and other forms of telemetry, it is possible to passively detect and map out application data flows: an invaluable feature in highly automated networks spanning multiple data centres and cloud services. This can be combined with active application performance management systems that hook into orchestration and automation platforms to dynamically adjust network configurations and optimise service delivery.
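As a toy illustration of that passive mapping, the sketch below aggregates exported flow records into a count of observed edges, the raw material for an application dependency map. The CSV layout (source, destination, destination port per line) is an assumption for illustration; real telemetry such as IPFIX or cloud flow logs would need its own parser.

```python
import csv
from collections import Counter

def map_flows(path: str) -> Counter:
    """Count observed (source, destination, port) edges from a flow log.

    Assumes a simple CSV format of src,dst,dport per line; real flow
    telemetry (IPFIX, VPC flow logs, agent exports) differs in detail.
    """
    edges: Counter = Counter()
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) < 3:
                continue  # skip malformed records
            src, dst, dport = row[0], row[1], row[2]
            edges[(src, dst, dport)] += 1
    return edges

# The busiest edges reveal the application's de facto dependencies:
# for (src, dst, dport), hits in map_flows("flows.csv").most_common(10):
#     print(f"{src} -> {dst}:{dport}  ({hits} flows)")
```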

The modern enterprise is heavily reliant on cloud and virtualised network services, and it would be foolish to shoehorn these services into a traditional network security model that is simply incapable of supporting them fully. Modern enterprises should be deploying new security architectures that support their application infrastructure and can adapt in step with the demands of consistent service performance and evolving business requirements.

A key component of this is the deployment of micro-segmentation technology. This ensures that application data flows are adequately protected in a way that is easy to integrate with highly adaptable automation and orchestration tools. The end result is that enterprises can limit their exposure to threats that exploit brittle, low-yield traditional security architectures, and gain valuable insight into their application infrastructure through passive and active network telemetry.