Analysing the rise of the distributed cloud model

The current model for public cloud computing is an optimisation of the traditional construct of Internet-facing data centres: maximise the scale of the facility to serve as broad a market as possible, using the Internet as the means of distribution.

This model is wedded to the separation of compute from the ‘dumb network’; or, to use the original terminology of some 40 years ago, we still live in a world that separates processing (computing) from inter-process communication (networking).

The convenience of this separated relationship has brought us the Internet economy: an autonomous network that can deliver anything anywhere. The rise of cloud computing ‘as a service’ exploits this Internet delivery model, removing the need to buy and build your own data centres.

If buying digital infrastructure as a service is the future of computing, then the next question is whether the current architecture of cloud computing is its final resting place, or whether evolution will take us elsewhere.

The clue to the future direction is the current level of investment and activity around improving and securing communication between clouds, and between legacy assets and the cloud. Software-defined networking and the numerous cloud connect/exchange products all seek to improve communication between clouds, either by simplifying the complex world of networks or by trying to bypass them, ‘overlaying’ relationships across the still-dumb network. Conversely, those in search of surety look to the narrow, legacy-oriented ‘meeting room’ model of neutral colocation providers with ‘direct cloud connect’ products.

The challenge is that none of these approaches moves us on fundamentally. And the elephant in the room is that many are not yet ready to jettison the concept of massive-scale data centres with their separated access.

Instead of seeing separate processing pools, connected by ubiquitous-but-murky access or limited-but-assured access, I would invite you to think about a distributed cloud, where you can distribute and run workloads anywhere. Processing and storage are wherever and whenever you need them to be, for reasons of latency, language, resilience or data sovereignty. The network is appropriately mobile, secure, assured or ubiquitous, according to location.

The key to creating such a distributed cloud is to build the compute into the network, and Interoute has already done this. We have created a distributed, global cloud that offers very low-latency private and public networking, with a global pool of computing and storage that you can place anywhere. By deploying network technologies like MPLS, we can provide logical separation and security for customers, carrying each customer’s traffic in its own virtual private network and allowing them to build a ‘single tenant’ infrastructure on our global network, as if it were their own.

The distributed cloud model supports the rise of container technologies like Docker, where the developer abstracts away from the data centre infrastructure to a distributed computational environment populated by containers. The developer does not need to ‘go under the hood’ and create static routing relationships between virtual machines; the goal is to give each application simple addressing. Add to this the possibility of creating resilient, scalable clusters that straddle multiple nodes, without the constraints of traditional routing, as the sketch below illustrates.
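To make the ‘simple addressing’ point concrete, here is a minimal, generic Docker sketch using the Docker SDK for Python. The network name, image names and environment variable are illustrative assumptions, and nothing here is Interoute-specific: containers on an overlay network find each other by name, with no static routes between hosts.

```python
# A minimal sketch using the Docker SDK for Python ("pip install docker").
# Overlay networks require swarm mode, so this assumes the daemon has
# joined (or initialised) a swarm with "docker swarm init".
import docker

client = docker.from_env()

# An attachable overlay network spans the nodes of the swarm; containers
# that join it reach one another by name, with no static routes between hosts.
client.networks.create("app-net", driver="overlay", attachable=True)

# Two containers on that network: "web" addresses the database simply as
# the hostname "db", wherever either container happens to be scheduled.
client.containers.run("redis:7", name="db", network="app-net", detach=True)
client.containers.run(
    "myorg/web:latest",                # hypothetical application image
    name="web",
    network="app-net",
    environment={"DB_HOST": "db"},     # resolved by Docker's built-in DNS
    detach=True,
)
```

The same names keep working if the containers are rescheduled onto different nodes, which is exactly the freedom the distributed cloud model wants to generalise beyond a single cluster.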

This forward-thinking approach also answers the needs of those running legacy applications, who want to consolidate or migrate workloads into the cloud without having to jump to a whole new ‘Internet only’ model. That jump delays implementation, and for the enterprise delayed implementation mostly means delayed competitiveness and knowledge.

Here in Europe we are sensitive about the idea of ‘one cloud location to rule them all’, not only because of data sovereignty but because of latency and the languages of Europe. If you are building a website in Spain, then 90% of your market is going to be in Spain, and having your ‘processing’ in Ireland or the UK is an unnecessary complication and expense for data traffic that should predominantly stay local. The distance to the processing location hinders performance and throughput: you get a slower cloud for your money, or you must upgrade to a more expensive one.
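The physics of the distance argument is easy to check. A quick back-of-envelope sketch makes the point; the distance and fibre speed below are rough assumptions, and real routes are longer and add switching and routing delays on top:

```python
# Back-of-envelope figures, not measurements: light in optical fibre
# travels at roughly 200,000 km/s (about two-thirds of c in a vacuum).
FIBRE_KM_PER_S = 200_000
MADRID_TO_DUBLIN_KM = 1_500   # rough great-circle distance

one_way_ms = MADRID_TO_DUBLIN_KM / FIBRE_KM_PER_S * 1_000
print(f"one-way propagation: {one_way_ms:.1f} ms")      # ~7.5 ms
print(f"round trip:          {2 * one_way_ms:.1f} ms")  # ~15.0 ms
# Real paths are longer than the great circle and add per-hop delays,
# so practical round-trip times are higher still; an application making
# dozens of sequential round trips turns this into a visible slowdown.
```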

The evolution of the distributed cloud that I envision builds on the technological core of ‘Cloud 1.0’ (fully elastic resources and scale) but moves forward by intelligently combining the twin elements of the digital economy: the network and the computer. Once this model takes hold, the ability to evolve applications toward higher levels of availability and resilience accelerates, and all those efforts and products trying to simplify the management of networks simply disappear, because the ‘network’ just works as part of the platform.

I feel we are harking back to the 1980s, when John Gage of Sun Microsystems declared that ‘the network is the computer’. Thirty years on, we are finally there.