‘Software glitch’ to blame for global Cloudflare outage


Keumars Afifi-Sabet

3 Jul, 2019

Cloudflare has resolved an issue that caused websites served by the networking and internet security firm to show 502 ‘Bad Gateway’ errors en masse for half an hour yesterday.

From 2:42pm BST the networking giant suffered a massive spike in CPU utilisation across its network, which Cloudflare is blaming on a bad software deployment. This affected websites hosted across the entire world.

Ironically, even Downdetector was knocked offline during the outage

Once this faulty deployment was rolled back, its CTO John Graham-Cumming explained, service resumed normal operation and traffic to all domains using Cloudflare returned to normal levels.

“This was not an attack (as some have speculated) and we are incredibly sorry that this incident occurred,” Graham-Cumming said.

“Internal teams are meeting as I write performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.”

The incident affected several major industries, including cryptocurrency markets, with users unable to properly access services such as CoinMarketCap and the exchange Coinbase.

Cloudflare issued an update last night suggesting the global outage was caused by a single misconfigured rule within the Cloudflare Web Application Firewall (WAF), pushed out as part of a routine deployment. The change had been intended to improve the blocking of inline JavaScript used in cyber attacks.

One of the rules it deployed caused CPU to spike to 100% on its machines worldwide, which in turn produced the 502 errors seen across customers’ sites. At the worst point during the outage, traffic passing through Cloudflare dropped by 82%.
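For illustration only, the short Python sketch below uses a made-up pattern, not Cloudflare’s actual WAF rule, to show how a single pattern-matching rule that backtracks excessively can pin a CPU core on one request; repeated across every machine handling traffic, that is enough to produce the kind of exhaustion described above.

```python
import re
import time

# Illustrative only -- a hypothetical pattern, not Cloudflare's WAF rule.
# Nested quantifiers backtrack exponentially on input that nearly matches,
# so a single request can monopolise a CPU core for seconds or longer.
pathological_rule = re.compile(r"(a+)+$")

payload = "a" * 26 + "!"           # almost matches, then fails on the last char

start = time.perf_counter()
pathological_rule.search(payload)  # runtime grows roughly 2^n with payload length
print(f"one request burned {time.perf_counter() - start:.1f}s of pure CPU")
```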

“We were seeing an unprecedented CPU exhaustion event, which was novel for us as we had not experienced global CPU exhaustion before,” Graham-Cumming continued.

“We make software deployments constantly across the network and have automated systems to run test suites and a procedure for deploying progressively to prevent incidents.

“Unfortunately, these WAF rules were deployed globally in one go and caused today’s outage.”
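To unpack Graham-Cumming’s point about progressive deployment, the sketch below shows a purely illustrative staged rollout loop, with hypothetical function names and thresholds rather than Cloudflare’s actual tooling: a change goes to a small fraction of machines first and is only promoted further if health checks stay within limits, so a bad rule is caught before it reaches the whole fleet.

```python
import time

# Purely illustrative staged (canary) rollout -- all names and thresholds
# here are hypothetical, not Cloudflare's deployment system.
STAGES = [0.01, 0.10, 0.50, 1.00]   # fraction of the fleet per stage
CPU_LIMIT = 0.80                    # abort if average CPU exceeds 80%

def average_cpu(fraction):
    """Placeholder for real telemetry from the machines in this stage."""
    return 0.45

def deploy(rule, fraction):
    """Placeholder that pushes the rule to a fraction of the fleet."""
    print(f"deploying {rule!r} to {fraction:.0%} of machines")

def rollback(rule):
    print(f"rolling back {rule!r} everywhere")

def staged_rollout(rule):
    for fraction in STAGES:
        deploy(rule, fraction)
        time.sleep(1)                        # let the change bake briefly
        if average_cpu(fraction) > CPU_LIMIT:
            rollback(rule)                   # stop before the whole fleet is hit
            return False
    return True

staged_rollout("waf-rule-update")
```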

At 3:02pm BST the company realised what was happening and issued a global kill on the WAF Managed Rulesets, which dropped CPU back to normal levels and restored traffic. It then fixed the issue and re-enabled the rulesets approximately an hour later.

Many on social media speculated during the outage that the 502 Bad Gateway errors might be the result of a distributed denial-of-service (DDoS) attack, but the firm quickly quashed these suggestions.