Four-hour Google Cloud outage blamed on ‘network congestion’


Jane McCallion

3 Jun, 2019

Google suffered a significant outage on Sunday night that lasted nearly four hours, knocking offline services including G Suite, YouTube and Google Cloud Platform (GCP).

The issue was first noted on the company’s cloud status dashboard at 8.25pm BST on 2 June as a Google Compute Engine problem.

Shortly afterwards, however, reports of problems with Google Cloud, YouTube and more started appearing on Twitter, and by 8.59pm the dashboard acknowledged it was a “wider network issue”.

By 12.09am on 3 June the issue had been resolved, but little detail was given about what happened beyond “high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, G Suite and YouTube”.

However, someone claiming to work on Google Cloud (but currently on holiday) posted a message on Hacker News saying: “It’s disrupting everything, including unfortunately the tooling we usually use to communicate across the company about outages.”

“There are backup plans, of course, but I wanted to at least come here to say: you’re not crazy, nothing is lost … but there is serious packet loss at the least,” they added.

In a statement, Google told Cloud Pro: “We will conduct a post mortem and make appropriate improvements to our systems to prevent this from happening again. We sincerely apologise to those that were impacted by [these] issues. Customers can always find the most recent updates on our systems on our status dashboard.”

Some, however, have questioned what exactly Google meant by “high levels of network congestion in the eastern USA”.

Clive Longbottom, co-founder of analyst house Quocirca, told Cloud Pro: “If this was the case, a lot more than GCP would have been impacted: this does not seem to have been the case. As such, it would appear that what Google possibly means is that it was excessive network traffic in its own environment in the Eastern USA.”

He suggested that the excessive network traffic was potentially caused by something internal.

“This could be something like a memory leak on an app going crazy, or (like AWS some time back) human error through a script causing a looping command bringing chaos to the environment.”

This doesn’t mean that organisations should abandon cloud for business-critical workloads, however. Owen Rogers, research director at the digital economics unit of 451 Research, told Cloud Pro: “Four hours is quite a long time … but it’s a tricky issue, because outages are going to happen now and then, and all customers can do is to build resiliency such that if an outage does occur, they have a backup.

“Using multiple availability zones and regions is a must, but if applications are business critical, multi-cloud should be considered. Yes, it’s more complex to manage; yes, you’ll have to train more people. But if your company is going to go bust because of a few hours of outage, it is an investment worth making. It appears some hyperscalers are more resilient than others, but even the best are likely to slip up occasionally.”