The Morlocks are here: Why computing’s new paradigm is the time machine

Opinion We had the desktop. Then came the cloud. And next, we’ll have the time machine.

The Time Machine is one of the classic 60s-era sci-fi films. Based on the H.G. Wells novel, it follows a Victorian-era inventor (Rod Taylor) who gets propelled into the year 802,701 A.D. by a barber chair with a roulette wheel grafted onto it. In the distant future, he meets the Eloi, a future breed of humans who look like they just popped over from the country club. Just underneath the surface of the Earth, though, dwell the Morlocks: hairy, unkempt albinos who are nonetheless active, ambitious and clever enough to turn the Eloi into a free-range food source. He escapes, explains everything to Mr. Ed's best friend (Alan Young) and returns for his love interest, Weena.

Gorgeous and airy above. Dirty, but crafty and industrious, below. The same plot line propels Fritz Lang’s Metropolis.

What does that mean for computing? Computing architectures shift, and often relatively quickly, because of the ongoing tension between data and technology. Data grows exponentially: the world's supply doubles every two years, and the appetite to consume it accelerates at around the same pace. Bandwidth, meanwhile, grows in a linear fashion. It doubles whenever Comcast feels like rolling out the backhoes. Computing architectures are thus in a never-ending race to close the gap between what we humans want to accomplish and what the infrastructure can deliver. The goal is not to win, but simply to mask latency.
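To see how quickly that gap opens up, here's a toy sketch in Python. The starting values and the linear increment are arbitrary, chosen only to show the shape of the two curves, not to model any real network:

```python
# Toy model: data demand doubling every two years vs. bandwidth growing linearly.
data_demand = 1.0   # arbitrary starting units
bandwidth = 1.0     # arbitrary starting units

for year in range(0, 11, 2):
    gap = data_demand / bandwidth
    print(f"year {year:>2}: demand {data_demand:6.1f}  bandwidth {bandwidth:4.1f}  gap {gap:4.1f}x")
    data_demand *= 2     # exponential growth: doubles every two years
    bandwidth += 1.0     # linear growth: a fixed bump per upgrade cycle
```

However you tune the numbers, the exponential curve eventually leaves the linear one behind, which is why the architecture keeps having to change rather than the pipes simply getting fatter.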

From 1948 to the 70s, centralised mainframes ruled because they were quicker than adding machines. In the 80s, desktops ruled because they eliminated computing queues and allowed people to get moderately complex jobs done on their own. Then came the browser and data centers the size of the Pentagon: the applications you wanted to run, and the data you needed to access, far exceeded the capabilities of a laptop.

And then came the Fail Whale. Remember how Twitter used to regularly crash? That was the first of a growing number of signs that trying to manage everything from even the most elaborate and well-run clouds and data centers wasn't going to work. Edge data center providers like vXchange suddenly emerged to take the strain of serving up viral videos, becoming the first wedge of a retreat from a High Castle future.

The Internet of Things and edge computing architectures will only exacerbate the trend. Take predictive maintenance, the gateway drug of IIoT. It will save billions a year in reduced downtime and repair costs. But how do you design a data system that accommodates the volume, variety and velocity of the incoming information while meeting the very urgent, short-term needs of its users?

A wind turbine typically has close to 650 parameters (hydraulic fluid levels, fluid temp…). Updates every ten minutes means more than 93,000 readings a day, or 34.1 million a year. Multiply that by the 44 turbines in a mid-sized field that a wind developer will want to track. Or the hundreds in your entire portfolio. Or, if you're a grid operator, the tens of thousands in your region. And then cross-check it all against current pricing, demand projections, projected repair costs and other parameters.
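A quick back-of-the-envelope in Python makes the scale concrete. The parameter count, sampling interval and 44-turbine field are the figures quoted above; the portfolio and grid-region sizes are illustrative stand-ins for "hundreds" and "tens of thousands":

```python
# Back-of-the-envelope on turbine telemetry volume, using the figures above.
PARAMETERS_PER_TURBINE = 650        # hydraulic fluid levels, fluid temps, ...
SAMPLES_PER_DAY = 24 * 60 // 10     # one update every ten minutes = 144 per day

per_turbine_day = PARAMETERS_PER_TURBINE * SAMPLES_PER_DAY   # 93,600 readings/day
per_turbine_year = per_turbine_day * 365                     # ~34.2 million/year

# One mid-sized field, an illustrative portfolio, an illustrative grid region.
for label, turbines in [("field", 44), ("portfolio", 500), ("grid region", 20_000)]:
    print(f"{label:>12}: {turbines * per_turbine_year:,} readings per year")
```

For a single 44-turbine field that already works out to roughly 1.5 billion readings a year, before any pricing or demand data has been cross-checked against it.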

Sending all of this data to the cloud would be prohibitively expensive. Worse, it could delay getting it into the hands of the people (repair technicians, field managers) who will be the primary users. What fits here is a compute-in-the-cloud, data-down-below architecture: send what's necessary up to the cloud to build a model, but keep the system of record down below.
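Here's a minimal sketch of that split, assuming a hypothetical local store and cloud uploader; the function names and sample values are illustrative, not a real turbine API:

```python
import statistics

def summarise(window: list[float]) -> dict:
    """Collapse a window of raw readings into a few features worth shipping upstream."""
    return {
        "mean": statistics.fmean(window),
        "min": min(window),
        "max": max(window),
        "count": len(window),
    }

# Hypothetical stand-ins for the two tiers.
local_store = []                        # system of record: stays at the turbine site

def upload_to_cloud(summary: dict) -> None:
    # In practice this might be an MQTT publish or HTTPS POST to the model-building service.
    print("sent to cloud:", summary)

# Ten minutes of one parameter (hydraulic fluid temperature, say).
window = [61.2, 61.4, 63.8, 62.1, 60.9]

local_store.append(window)              # keep the raw data where it was generated
upload_to_cloud(summarise(window))      # only the condensed view travels up
```

The cloud gets enough to train and refresh the fleet-wide model; the full-fidelity record never leaves the site unless someone actually needs it.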

Back to the Morlocks. We've got a situation where we will have absolutely asinine amounts of data. Huge portions of it may only be relevant to specific tasks, or consumed only by other, nearby computers. You won't want to throw it out, because that would reduce the fidelity and accuracy of any findings. But you also don't want to shuttle it between computers too much or replicate it too often, because that would lead to a Chapter 11 reorganisation.

The solution? Don't move it. Keep it where it was generated. Conduct as much work as possible locally and reserve the cloud for your most elegant applications, where scalability can really help. Compute in the clouds, keep data down below. It's not centralised, nor is it distributed. And, like the food cycle of 802,701, it's symbiotic.

Some of this earthbound reality is reflected in recent analyst reports. IDC estimates that 40% of IoT data will be captured, processed and stored where it was generated. Gartner estimates the amount of data outside the cloud or enterprise data centers will grow from 10% today to 55% by 2022.

Are there better metaphors? A work colleague refers to these new models as examples of dispersed computing. It’s catchy. Others have said we should expect to see archipelagos of computing.

Also, the Eloi weren't smart. They aren't the data scientists of the future; they're more like their attractive, college-dropout children who plan to inherit. Maybe. But I can't find the VCR, and my access to old B movies is thus compromised.
