If you’re a techie living in the UK, you’re almost certainly familiar with Hive.
This home-grown smart home firm was created in 2012 by parent company Centrica – which also owns British Gas – as a dedicated division to handle its burgeoning connected heating project. While it’s technically part of Centrica, it’s run as a separate company, operating as a lean startup independent of the rest of the business.
The venture has clearly proved successful; in the six years since it launched, Hive has expanded its portfolio to include smart lighting, motion sensors, surveillance cameras and more, and in May this year the company reached one million customers. However, supporting one million connected homes and counting requires a robust and scalable IT infrastructure.
Hybrid can still be costly
As you’d expect from a modern smart tech company, Hive’s infrastructure is now entirely cloud-based, running on AWS and VMware. This wasn’t always the case, however, as Hive’s infrastructure has evolved as the business and its needs changed over time.
According to Chris Livermore, Hive’s head of site reliability engineering – the man responsible for provisioning and maintaining the infrastructure on which Hive’s software engineers deploy their code – the company started out with a hybrid model. The team used cloud environments to build and deliver Hive’s mobile applications, but also maintained a physical data centre.
The main reason for this, Livermore says, is that AlertMe – a key partner that provided Hive with a platform for remote monitoring and automation services – only supported on-prem deployments, forcing Hive to run its own infrastructure.
“The data centre we had, we put a virtualisation platform on it, we used OpenStack, but we did that to allow our dev teams to interact with it in a cloud-type manner,” explains Livermore. “We wanted them to be able to spin up a virtual environment to work on without having to stop and wait for somebody in my team to do it. It’s all about moving that empowerment to the developers.”
Hive was investing a lot of time, effort and manpower in maintaining its data centres, Livermore says, and the company ultimately decided to shutter them around two years ago.
“All of those guys still work for me, they just don’t run a data centre any more – they do other stuff,” he explains. “It’s very interesting. We’ve done a lot of consolidation work, but none of it has been from a cost reduction point-of-view, it’s just been a better deployment of resources.”
IoT built on IoT
Now that it’s ditched its data centres, Hive is all-in on cloud; the company runs exclusively on AWS, with anywhere from 18,000 to 22,000 virtual machines running on VMware’s virtualisation software. It’s also a big user of Lambda, AWS’ serverless computing platform, as well as its IoT platform.
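To make the serverless pattern concrete: in a Lambda-based IoT setup, a rule forwards a device’s telemetry message to a small function, which decides what to do and returns. The sketch below is purely illustrative – it is not Hive’s code, and the event shape, device IDs and setpoint are assumptions – but it shows the kind of stateless handler the model encourages.

```python
import json

# Hypothetical example - not Hive's actual code. A minimal AWS Lambda-style
# handler that reacts to a temperature reading published by a smart
# thermostat via an IoT message broker.

TARGET_TEMP_C = 21.0  # illustrative setpoint, chosen for the example


def handler(event, context=None):
    """Decide whether the boiler should fire, given a telemetry event.

    `event` mimics the JSON payload an IoT rule might forward to Lambda:
    either a dict with a JSON string under "body", or the payload itself.
    """
    if isinstance(event.get("body"), str):
        reading = json.loads(event["body"])
    else:
        reading = event
    temp = float(reading["temperature_c"])
    action = "heat_on" if temp < TARGET_TEMP_C else "heat_off"
    return {"device_id": reading["device_id"], "action": action}
```

Because the handler holds no state of its own, the platform can run as many copies as incoming traffic demands – which is precisely the scaling burden a team offloads by buying the service rather than operating it.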
The fact that Hive uses Amazon’s IoT service may sound a little odd, given that Hive owns and operates its own IoT platform, but the arrangement allows the company to focus entirely on its own products and leave much of the management overhead to AWS.
“At the time, it was a means to an end,” Livermore explains. “Five years ago when we started, you couldn’t go out to market and find an IoT platform provider, so in order to deliver Hive we partnered with AlertMe; they had an IoT platform. We subsequently acquired AlertMe – and with it an IoT platform – but you have all the overhead of maintaining and evolving that IoT platform.”
Some products, like the relatively complicated Hive heating system, benefit from running on a custom-made platform, but for simpler devices like smart lights and motion sensors, Livermore says that it makes sense to find a platform provider “and let them do all the hard work… we will wherever possible use best-of-breed and buy-in services”.
Hive has completely embraced the concept of business agility, and is not shy about periodically reinventing its IT. For example, despite the fact that its entire infrastructure runs on AWS, the company is considering moving portions of its workloads from the cloud to the edge, having the device process more instructions locally rather than pushing them to the cloud and back.
This would mean a reduction in Hive’s usage of AWS, but as with the data centre consolidation efforts from previous years, Livermore stresses that this is about technological efficiency rather than cost-cutting. More on-device processing means lower latency for customers, and a better user experience. “There are certain things that make sense to be a lot closer to the customer,” he says.
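The latency argument for edge processing can be made concrete with a toy comparison. The sketch below is a hypothetical illustration – the rule, function names and latency figures are assumptions, not Hive measurements – showing why evaluating a simple automation rule on the device beats a cloud round trip.

```python
# Hypothetical sketch of the edge-vs-cloud trade-off - not Hive's firmware.
# A motion-triggered light rule is identical in both cases; only where it
# runs, and therefore the latency the customer experiences, differs.

CLOUD_ROUND_TRIP_MS = 120  # assumed typical device-to-cloud-and-back latency
LOCAL_EVAL_MS = 2          # assumed on-device rule evaluation time


def respond_via_cloud(motion_detected: bool) -> tuple[str, int]:
    """Device forwards the event to the cloud, which returns the action."""
    action = "light_on" if motion_detected else "no_op"
    return action, CLOUD_ROUND_TRIP_MS


def respond_on_device(motion_detected: bool) -> tuple[str, int]:
    """The same rule evaluated locally, at a fraction of the latency."""
    action = "light_on" if motion_detected else "no_op"
    return action, LOCAL_EVAL_MS
```

The action is the same either way; the point is that a rule simple enough to live on the device costs two milliseconds instead of a hundred-plus, and one fewer dependency on connectivity.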
Building for scale
This constant pace of change may seem chaotic, but according to Livermore, it’s an essential part of scaling a company. “That presents opportunities to reevaluate what we’re doing and say ‘are there any new suppliers or new services that we can leverage?’.”
“We’re part-way through a re-architecting of our platform,” he tells Cloud Pro, “and we now need to be building a platform that will scale with the business aspirations. You get to these milestones in scaling. Up to half a million customers, the system will scale, [but] then you get to bits where you realise the code isn’t quite right, or that database technology choice you’ve made doesn’t work.”
For Livermore, his role is fundamentally about giving Hive’s developers as easy and seamless an experience as possible.
“Essentially, my job is to give my dev teams a platform where they can deploy their code and do their job with the minimum of fuss,” he says. “It’s all about empowering the developers to spend as much time as possible on solving customer problems and as little time as possible worrying about where the server’s going to come from or where they’re going to put their log files or where their monitoring and telemetry goes.”