What’s in your cloud? Key lessons to learn after the Capital One breach

Lack of visibility into the expanded cloud attack surface is a fast-growing problem. Although we have seen misconfigurations in the cloud before, the Capital One breach is a sobering reality check for the security industry: we need to vastly improve threat detection and response in cloud environments.

The attack behaviours associated with the Capital One breach, which occurred in March 2019, are consistent with other data breaches, with one exception: it unfolded quickly, over just two days, because of the attacker’s familiarity with specific Amazon Web Services (AWS) commands.

A simple misconfiguration of a web application firewall (WAF) – a control designed to stop unauthorised access – enabled the attacker to obtain an access token from that same WAF and use it to carry out the breach.

AWS enables organisations to issue tokens – through the AWS Security Token Service (STS) – that give trusted users temporary security credentials controlling access to AWS resources. Temporary security credentials work almost identically to long-term access key credentials.

A temporary token is a good way to grant a user the right to perform specific tasks, and it reduces the need to manage long-lived credentials for certain accounts. However, if the account or service that issues the token is compromised, the credentials it hands out can be exposed and abused.
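To make the mechanism concrete, below is a minimal sketch – not taken from the breach itself – of how temporary credentials are typically issued and consumed with STS via the boto3 SDK. The role ARN, session name and duration are hypothetical placeholders.

```python
# Minimal sketch: issue temporary credentials with AWS STS and use them
# like ordinary access keys. Role ARN and session name are hypothetical.
import boto3

sts = boto3.client("sts")

# Assume a role and receive short-lived credentials (access key, secret
# key and session token) that expire after the requested duration.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleAdminRole",  # hypothetical
    RoleSessionName="example-session",
    DurationSeconds=3600,
)
creds = response["Credentials"]

# The temporary credentials work almost identically to long-term keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

Anything holding valid temporary credentials – legitimate administrator or attacker – gets the same access until they expire, which is why the WAF misconfiguration mattered so much.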

The misconfiguration of the Capital One WAF enabled a remote attacker to generate a temporary AWS token that could then be used to fetch data from AWS Simple Storage Service (S3) buckets.

It would be easy to say Capital One should not have made this kind of mistake, but when organisations transition to the cloud, these types of mistakes and misconfigurations are unfortunately common. With full access to the web servers, the attacker executed a simple script of AWS commands used for system administration. The first was the S3 list-buckets command, which displays the names of all the AWS S3 buckets in the account.

This was followed by a sync command that copied 700 folders and buckets of data containing customer information to an external destination. These are AWS commands used every day by cloud administrators who manage data stored in AWS virtual private clouds (VPCs).
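The same operations are available through any AWS SDK. A rough boto3 sketch of those two calls – listing every bucket, then copying a bucket’s contents – is below; the bucket name and local path are illustrative only, and the CLI’s sync command wraps similar per-object calls. Nothing in it is inherently malicious, which is precisely the problem.

```python
# Rough boto3 equivalent of the two everyday administrative commands
# described above. Bucket name and destination path are illustrative.
import os
import boto3

s3 = boto3.client("s3")

# Equivalent of the list-buckets command: names of every bucket in the account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Equivalent of a sync: page through a bucket's objects and download each one.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-bucket"):  # hypothetical bucket
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):  # skip "folder" placeholder keys
            continue
        dest = os.path.join("/tmp/copy", obj["Key"])
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file("example-bucket", obj["Key"], dest)
```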

The challenge in detecting this type of attack is not the threat behaviours, but the data source. The attack did not use malware, was not persistent on hosts, and did not exhibit unusual network traffic. And the attacker blended in with normal cloud administrative operations.

Data access and compromise occurred using simple AWS commands commonly issued through the management interface. Any hope of detecting attackers in this scenario requires insight into the AWS management plane – insight that, for most organisations, doesn’t exist today.
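One starting point for that insight is AWS CloudTrail, which records management-plane API calls. As a minimal sketch, assuming boto3 and an illustrative two-day lookback window, the snippet below pulls the ListBuckets calls discussed above and shows who made them, when, and from which identity.

```python
# Minimal sketch: query CloudTrail for recent ListBuckets management-plane
# calls. The time window and output handling are illustrative.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.utcnow()
start = end - timedelta(days=2)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ListBuckets"}],
    StartTime=start,
    EndTime=end,
)

# Each event records which principal called the API and when.
for event in events["Events"]:
    print(event.get("Username"), event["EventTime"], event["EventName"])
```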

With so much hanging in the balance, high-fidelity visibility into the everyday management of every cloud infrastructure is imperative. In the Capital One case, the attacker was quickly identified by a vigilant observer. The attacker was not a nation-state actor or part of a sophisticated cybercrime ring capable of covering its tracks. Otherwise, this data compromise could have easily gone unnoticed for years.

Managing access

Cloud service providers (CSPs) must ensure that their own access management and controls limit access to cloud tenant environments. And cloud tenants must assume compromise is possible and focus on learning the who, what, when and where of administrative access management.

Properly assigning user access rights helps reduce instances of shared credentials so cloud tenants can concentrate on how and when those credentials are used. Resource access policies can also reduce opportunities for movement between the CSP infrastructure and tenants.
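As one illustration of such a resource access policy, the sketch below applies an S3 bucket policy that denies any request arriving from outside a named VPC endpoint – cutting off exactly the kind of external copy described earlier. The bucket name and VPC endpoint ID are hypothetical placeholders, and real policies would be tailored to the tenant’s architecture.

```python
# Minimal sketch: restrict a sensitive bucket to a single VPC endpoint.
# Bucket name and endpoint ID are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-sensitive-bucket",
                "arn:aws:s3:::example-sensitive-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="example-sensitive-bucket", Policy=json.dumps(policy))
```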

Detection and response

It is critically important to monitor cloud-native and hybrid cloud environments, and to determine how to correlate data and context from both into actionable information for security analysts.

Monitoring resources deployed in the cloud by tenants is essential to detect lateral movement from the CSP infrastructure to tenant environments and vice versa. Visibility into this and other attacker behaviours depends on implementing the proper tools that leverage cloud-specific data, as in the sketch below.
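One example of such cloud-specific data is object-level (data event) logging in CloudTrail, which records individual S3 reads – including bulk downloads – alongside management events. A minimal sketch of switching it on for a sensitive bucket follows; the trail and bucket names are hypothetical, and an existing trail is assumed.

```python
# Minimal sketch: enable S3 object-level data events on an existing
# CloudTrail trail. Trail and bucket names are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="example-trail",  # hypothetical existing trail
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Log object-level reads and writes for this bucket's contents.
                    "Values": ["arn:aws:s3:::example-sensitive-bucket/"],
                }
            ],
        }
    ],
)
```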

Cloud tenants and CSPs that coordinate with each other can stitch together a powerful combination of information that increases the likelihood of detecting post-compromise activities before a catastrophic breach occurs.

Security operations

Knowing and managing the cloud infrastructure as part of due diligence should help to identify compromised systems and operations, as in the Capital One breach.

Changes to production systems can be difficult to detect. But with 360-degree visibility into the cloud infrastructure, it is much easier to detect attacker behaviours in compromised systems and services that are clearly operating beyond the scope of what is normally observed.

Ideally, when security operations teams have solid information about expectations for that cloud infrastructure, malicious behaviours will be much easier to identify and mitigate.
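As a simple illustration of what “expectations” can mean in practice, the sketch below counts CloudTrail API calls per principal over the past 24 hours and flags anyone well above a baseline. The baseline figures and threshold here are hypothetical; a real deployment would derive them from weeks of observed history rather than hard-coded values.

```python
# Minimal sketch: flag principals whose management-plane activity is far
# above expectation. Baselines, window and threshold are illustrative.
from collections import Counter
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.utcnow()
start = end - timedelta(hours=24)

# Count CloudTrail events per principal over the window.
counts = Counter()
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(StartTime=start, EndTime=end):
    for event in page["Events"]:
        counts[event.get("Username", "unknown")] += 1

# Hypothetical expectations, e.g. derived from prior observation.
baseline = {"ci-deployer": 500, "ops-admin": 200}

for user, count in counts.items():
    expected = baseline.get(user, 50)  # default expectation for unknown users
    if count > 3 * expected:
        print(f"Review activity for {user}: {count} API calls vs ~{expected} expected")
```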

Read more: Capital One confirms data breach, cites cloudy approach as key to swift resolution
