Just about every new piece of technology is considered disruptive to the extent that it is expected to replace older technologies. Sometimes, as with the cloud, old technology is simply rebranded to make it more appealing to customers and to create the illusion of a new market. Let’s remember that cloud computing had previously existed in one shape or another: at one stage it was called on-demand computing, and then it became ‘application service provision’.
Now there is edge computing, which some people also call fog computing and which some industry commentators believe will replace the cloud altogether. Yet the question has to be asked: will it really? The same claim was made when television was invented – its arrival was meant to be the death of radio. Yet people still tune into radio stations in their thousands every single day of the year.
Of course, there are some technologies that are really disruptive in that they change people’s habits and their way of thinking. Once people enjoyed listening to Sony Walkmans, but today most folk listen to their favourite tunes using smartphones – thanks to iPods and the launch of the first iPhone by Steve Jobs in 2007, which put the internet in our pockets and more besides.
Levine’s prophecy
So why do people think edge computing will blow away the cloud? This claim is made in many online articles. Clint Boulton, for example, writes about it in his Asia Cloud Forum article, ‘Edge Computing Will Blow Away The Cloud’, published in March this year. He cites venture capitalist Andrew Levine, a general partner at Andreessen Horowitz, who believes that more computational and data processing resources will move towards “edge devices” – such as driverless cars and drones – which make up at least part of the Internet of Things. Levine prophesies that this will mean the end of the cloud, as data processing moves back towards the edge of the network.
In other words, the trend up to now has been to centralise computing within the data centre, whereas in the past it was often decentralised or localised closer to the point of use. Levine sees driverless cars as data centres on wheels: they have more than 200 CPUs working to enable them to operate without going off the road and causing an accident. The nature of autonomous vehicles means that their computing capabilities must be self-contained, and to ensure safety they minimise any reliance they might otherwise have on the cloud. Yet they don’t dispense with it.
Complementary models
The two approaches may in fact end up complementing each other. Part of the argument for bringing computation back to the edge comes down to increasing data volumes, which lead to ever more frustratingly slow networks. Latency is the culprit. There is going to be more data per transaction, more video and more sensor data, and virtual and augmented reality will play an increasing part in this growth too. As data volumes grow, latency will become more challenging than it was previously. Furthermore, while it might make sense to put data close to a device such as an autonomous vehicle to minimise latency, a remote way of storing data via the cloud remains critical.
The cloud can still be used to deliver certain services too, such as media and entertainment. It can also be used to back up data and to share data emanating from a vehicle for analysis by a number of disparate stakeholders. From a data centre perspective, and moving beyond autonomous vehicles to a general operational business scenario, creating a number of smaller data centres or disaster recovery sites may reduce economies of scale and make operations less efficient rather than more. Yes, latency might be mitigated, but the data may also be held within the same circle of disruption, with disastrous consequences when disaster strikes; so for the sake of business continuity some data may still have to be stored or processed elsewhere, away from the edge of the network. In the case of autonomous vehicles, and because they must operate whether a network connection exists or not, it makes sense for certain types of computation and analysis to be completed by the vehicle itself. However, much of this data is still backed up via a cloud connection whenever one is available. So edge and cloud computing are likely to follow more of a hybrid approach than a standalone one.
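The hybrid pattern described above – make time-critical decisions on the device, and back data up to the cloud opportunistically when a connection exists – can be sketched in a few lines. This is a minimal illustration only; the class and method names are hypothetical, not drawn from any real vehicle platform.

```python
import queue

class EdgeNode:
    """Minimal sketch of an edge-first, cloud-backup pattern.

    Readings are processed locally for immediate decisions and queued
    for upload to the cloud whenever connectivity is available.
    """

    def __init__(self):
        self.pending_backup = queue.Queue()  # data awaiting cloud sync

    def process_locally(self, reading):
        # Time-critical work happens on the device itself, so it never
        # waits on a round trip to a remote data centre.
        decision = "brake" if reading["obstacle_distance_m"] < 5 else "cruise"
        self.pending_backup.put(reading)  # back up later, not in the hot path
        return decision

    def sync_to_cloud(self, connected):
        # Non-critical backup runs only when connectivity exists;
        # the vehicle keeps working either way.
        uploaded = []
        while connected and not self.pending_backup.empty():
            uploaded.append(self.pending_backup.get())
        return uploaded

node = EdgeNode()
print(node.process_locally({"obstacle_distance_m": 3}))  # local, low-latency decision
print(len(node.sync_to_cloud(connected=True)))           # opportunistic cloud backup
```

The point of the design is that the cloud sits outside the decision path: losing connectivity delays the backup, never the braking.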
Edge to cloud
Saju Skaria, senior director at consulting firm TCS, offers several examples of where edge computing could prove advantageous in his LinkedIn Pulse article, ‘Edge Computing Vs. Cloud Computing: Where Does the Future Lie?’. He certainly doesn’t think that the cloud is going to blow away.
“Edge computing does not replace cloud computing…in reality, an analytical model or rules might be created in a cloud then pushed out to edge devices… and some [of these] are capable of doing analysis.” He then goes on to talk about fog computing, which involves data processing from the edge to a cloud. He is suggesting that people shouldn’t forget data warehousing too, because it is used for “the massive storage of data and slow analytical queries.”
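The flow Skaria describes – an analytical model or rule created centrally in the cloud, then pushed out to edge devices that analyse data locally – can be sketched as follows. All names here are illustrative; the “model” is a trivial threshold rule standing in for whatever analysis a real deployment would train.

```python
def train_rule_in_cloud(historical_temps):
    """'Cloud' side: derive a simple anomaly threshold from bulk historical data."""
    mean = sum(historical_temps) / len(historical_temps)
    return {"threshold": mean + 10}  # flag readings well above the mean

class EdgeDevice:
    """'Edge' side: receives a rule from the cloud and applies it locally."""

    def __init__(self):
        self.rule = None

    def receive_rule(self, rule):
        # Model pushed from the cloud; the device can now analyse on its own.
        self.rule = rule

    def analyse(self, reading):
        # Local analysis: no round trip to the cloud per reading.
        return "anomaly" if reading > self.rule["threshold"] else "normal"

rule = train_rule_in_cloud([20, 21, 19, 22, 18])  # centralised training on bulk data
device = EdgeDevice()
device.receive_rule(rule)                         # push the model to the edge
print(device.analyse(35))                         # prints "anomaly"
```

Heavy, slow analytical work stays in the cloud (or the data warehouse), while the cheap per-reading decision runs at the edge.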
Eating the cloud
In spite of this argument, Gartner’s Thomas Bittman seems to agree that ‘The Edge Will Eat The Cloud’. “Today, cloud computing is eating enterprise datacentres, as more and more workloads are born in the cloud, and some are transforming and moving to the cloud… but there’s another trend that will shift workloads, data, processing and business value significantly away from the cloud. The edge will eat the cloud… and this is perhaps as important as the cloud computing trend ever was.”
Later on in his blog, Bittman says: “The agility of cloud computing is great – but it simply isn’t enough. Massive centralisation, economies of scale, self-service and full automation get us most of the way there – but it doesn’t overcome physics – the weight of data, the speed of light. As people need to interact with their digitally-assisted realities in real-time, waiting on a data centre miles (or many miles) away isn’t going to work. Latency matters. I’m here right now and I’m gone in seconds. Put up the right advertising before I look away, point out the store that I’ve been looking for as I drive, let me know that a colleague is heading my way, help my self-driving car to avoid other cars through a busy intersection. And do it now.”
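Bittman’s “speed of light” point can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative rather than taken from his blog: light in optical fibre propagates at roughly two-thirds of its vacuum speed, about 200,000 km/s, so even a perfect network imposes a hard floor on round-trip time that grows with distance.

```python
# Back-of-the-envelope: the minimum round-trip latency imposed by physics.
# Signals in optical fibre travel at roughly 200,000 km/s (about 2/3 of
# the vacuum speed of light); real networks are slower still.

FIBRE_SPEED_KM_PER_S = 200_000  # approximate propagation speed in fibre

def min_round_trip_ms(distance_km):
    """Lower bound on round-trip time to a data centre distance_km away."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000  # milliseconds

for km in (10, 500, 5000):
    print(f"{km:>5} km away -> at least {min_round_trip_ms(km):.1f} ms round trip")
# A data centre 5,000 km away costs at least ~50 ms per round trip
# before any queuing, routing or processing delay is added.
```

This is why a real-time interaction budget of a few tens of milliseconds leaves little room for a distant data centre, whatever the provider’s software does.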
Data acceleration
He makes some valid points, but he falls into the familiar argument about latency and data centres: that they have to be close together. The truth, however, is twofold. First, wide area networks will always be the foundation stone of both edge and cloud computing. Second, Bittman clearly hasn’t come across data acceleration tools such as PORTrockIT and WANrockIT. While physics is certainly a limiting and challenging factor that will always be at play in networks of all kinds – including WANs – it is possible today to place your data centres at a distance from each other without suffering an increase in data and network latency. Latency can be mitigated, and its impact can be significantly reduced no matter where the data processing occurs and no matter where the data resides.
So let’s not see edge computing as a new solution that supersedes all others. It is but one solution, and so is the cloud. Together, the two technologies can support each other. One commentator, responding to a Quora question about the difference between edge computing and cloud computing, says that “edge computing is a method of accelerating and improving the performance of cloud computing for mobile users.” So the argument that edge will replace cloud computing is a very foggy one. Cloud computing may at some stage be renamed for marketing reasons – but it’s here to stay.