Sipping a coffee outside the hubbub of Cloud Expo Europe, Matthew Finnie, chief technical officer at virtual data centre provider Interoute, has something on his mind.
The vast majority of the activity going on in the ExCeL arena, he surmises, will be vendors pitching their position. Nothing unusual with that at a trade show, you might say, but Finnie wants the conversations to go one step further.
“I think one of the things we’re hoping for is people start to put two and two together,” he tells CloudTech. Users are realising that cloud, as a flexible consumption model, is just a metaphor for how they want to buy and build their applications, so the next step is asking: what needs to happen to make that process even simpler?
According to Finnie, the answer lies in the combination of the open source container platform Docker and MPLS, the scalable networking technology. “It’s a little bit of an odd combination,” he admits, “but the key thing is that Docker does a brilliant job of abstracting the way for me to understand the VM, so you don’t really care what’s happening below.
“But then if you build it in the traditional architecture, you’ve still then got to go off and create relationships with those VMs, so you’re still fiddling around with firewalls and load balancers. [If] you have a model where you have an integrated platform, you’re essentially saying [you’re] going to add network control to [your] cloud. It means you can predefine what relationships those machines can have.”
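The integrated-platform idea Finnie describes — declaring up front which machines can talk to which, rather than wiring firewalls and load balancers together afterwards — can be sketched with a container orchestration config. The following docker-compose fragment is purely illustrative (the service names and images are hypothetical, not anything Interoute ships): containers attached to the same user-defined network can reach each other, while everything else is isolated by default, so the network relationships are predefined in the platform itself.

```yaml
# Illustrative sketch only: relationships between containers are
# declared up front via user-defined networks. Service and image
# names are hypothetical.
version: "3.8"

services:
  web:
    image: nginx:alpine
    networks: [frontend]           # web sits on the front tier only

  api:
    image: example/api:latest      # hypothetical application image
    networks: [frontend, backend]  # api bridges the two tiers

  db:
    image: postgres:15
    networks: [backend]            # db is unreachable from web directly

networks:
  frontend:
  backend:
```

With this model, the `web` container can never open a connection to `db` — the platform enforces the relationship, with no separate firewall rules to maintain.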
Despite the odd security hiccup, Docker has strategic partnerships with many big players running the service on top of their public clouds, including Microsoft, Amazon and Google. Finnie argues the key to Docker is that it’s lightweight, so it allows you to spin up as much as you want, and it can also saturate any network connection it’s given.
“You now move away from this concept of cloud computing as being isolated silos of compute that you’ve got to manage and bind together, to a model which says ‘actually, we still haven’t finished optimising what this platform is,’” says Finnie. Pointing at the main hall, he adds: “And no doubt I don’t think anyone’s going to be talking about much of that in there – because it doesn’t work for a lot of people.”
One of the primary causes of this imbroglio is the usage of the word ‘hybrid’. It’s simply everywhere. CenturyLink VP of cloud platform David Shacochis described hybrid as the most “overwrought” term in IT right now. Another source told this correspondent hybrid was simply a word certain vendors hid behind. What’s going on here?
Finnie started his career in the semiconductor game, and the word hybrid was being tossed around like a tennis ball then. It just meant different. Some time on, the lines are still blurred.
“Hybrids are a transition, but most things we do with most customers are hybrid,” he explains. “The confusion with hybrid in a cloud computing perspective is where that relationship should be formed.
“The relationship has always been formed at the network interface path. It’s the universal connector, be it an AS400, an Exadata platform, or your cloud or dedicated infrastructure. A network interface path allows you to have a common view.” But of course you can’t virtualise everything – not least an elderly IBM AS400. Is that hybrid then?
“For us…everything is implicitly hybrid, and it’s really down to you as a customer to work out how much you can put where,” Finnie adds. “The more you can stick on the network we have, in terms of compute, the more agile and flexible it’s going to be. It’s as simple as that.”
Regular readers of this publication will remember an opinion piece Finnie wrote earlier this month analysing a supposedly new and original ‘distributed cloud’ model, where processing and storage sit wherever you need them to be, avoiding issues with latency, language or data sovereignty. The truth is, it’s neither new nor original – John Gage discussed the ‘network as a computer’ in the 1980s, while broad themes were explored in Joe Weinman’s book Cloudonomics. Yet it’s one that seems to work for Interoute. Finnie’s rationale for the strategy is that networks carry around 10% of the cost model of data centres, making the approach both cheaper and more agile.
“The Internet has shown that a distributed model is the most efficient way of moving information round and presenting information,” he says. “And all we’ve said really is – let’s take the same model, and apply it to computing. Distributed cloud for us is just an optimisation of the existing model, it just brings it closer to markets.”
It’s all part of a fascinating future for Finnie. Let’s see who comes along for the ride.