
The Morlocks are here: Why computing’s new paradigm is the time machine

Opinion We had the desktop. Then came the cloud. And next, we’ll have the time machine.

The Time Machine is one of the classic '60s-era sci-fi films. Based on the H.G. Wells novel, it follows a Victorian-era inventor (Rod Taylor) who gets propelled into the year 802,701 A.D. by a barber chair with a roulette wheel grafted onto it. In the distant future, he meets the Eloi, a breed of humans who look like they just popped over from the country club. But just underneath the surface of the Earth dwell the Morlocks: hairy, unkempt albinos who are nonetheless active, ambitious and clever enough to turn the Eloi into a free-range food source. He escapes, explains everything to Mr. Ed's best friend (Alan Young) and returns for his love interest, Weena.

Gorgeous and airy above. Dirty, but crafty and industrious, below. The same plot line propels Fritz Lang’s Metropolis.

What does that mean for computing? Computing architectures shift, and often relatively quickly, because of the ongoing tension between data and technology. Data grows exponentially: the world's supply doubles every two years, with the desire to consume it accelerating at around the same pace. Bandwidth, meanwhile, grows in a linear fashion; it doubles whenever Comcast feels like rolling out the backhoes. Computing architectures are thus in a never-ending race to close the gap between what we humans want to accomplish and what the infrastructure can deliver. The goal is not to win, but simply to mask latency.

From 1948 to the '70s, centralised mainframes ruled because they were quicker than adding machines. In the '80s, desktops ruled because they eliminated computing queues and allowed people to get moderately complex jobs done on their own. Then came the browser and data centers the size of the Pentagon: the applications you wanted to run, and the data you needed to access, far exceeded the capabilities of a laptop.

And then came the Fail Whale. Remember how Twitter used to crash regularly? That was the first of a growing number of signs that trying to manage everything from even the most elaborate and well-managed clouds and data centers wasn't going to work. Edge data center providers like vXchange suddenly emerged to take the strain of serving up viral videos and became the first wedge of a retreat from a High Castle future.

The Internet of Things and edge computing architectures will only exacerbate the trend. Take predictive maintenance, the gateway drug of IIoT. It will save billions a year in reduced downtime and repair costs. But how do you design a data system that reconciles the volume, variety and velocity of the information with the very urgent, short-term needs of its users?

A wind turbine typically has close to 650 parameters (hydraulic fluid levels, fluid temperature and so on). Updates every ten minutes means roughly 93,600 readings a day, or about 34 million a year. Multiply that by the 44 turbines in the mid-sized field a wind developer will want to track. Or the hundreds in your entire portfolio. Or, if you're a grid operator, the tens of thousands in your region. Then cross-check all of that against current pricing, demand projections, projected repair costs and other parameters.
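As a rough sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python. The per-turbine figures come from the paragraph above; the portfolio and regional fleet sizes are purely illustrative assumptions.

```python
# Back-of-the-envelope turbine data volumes (fleet sizes beyond the 44-turbine
# field are illustrative assumptions, not figures from any real operator).
PARAMETERS_PER_TURBINE = 650
READINGS_PER_DAY = (24 * 60) // 10            # one update every ten minutes -> 144

per_turbine_per_day = PARAMETERS_PER_TURBINE * READINGS_PER_DAY    # 93,600
per_turbine_per_year = per_turbine_per_day * 365                   # ~34 million

for fleet, turbines in [("mid-sized field", 44),
                        ("developer portfolio (assumed)", 400),
                        ("grid operator region (assumed)", 20_000)]:
    readings = per_turbine_per_year * turbines
    print(f"{fleet}: {readings / 1e9:.1f} billion readings per year")
```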

Sending all of this data to the cloud would be prohibitively expensive. Worse, it could delay getting it into the hands of the people, the repair technicians and field managers, who will be the primary users. What this calls for is a compute-in-the-cloud, data-down-below architecture: send what's necessary up to the cloud to build a model, but keep the system of record down below.
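A minimal sketch of what that could look like in code, with plain lists standing in for a local on-site store and a queue up to the cloud. Every name and threshold here is hypothetical.

```python
from statistics import mean

# Hypothetical sketch: keep raw readings in a local (edge) store and ship only
# a compact summary upstream to the cloud model-building pipeline.
def process_window(readings, local_store, cloud_queue, alert_threshold=80.0):
    local_store.extend(readings)              # the system of record stays on the edge
    summary = {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
    }
    cloud_queue.append(summary)               # only the summary crosses the network
    if summary["max"] > alert_threshold:      # plus an alert when something looks off
        cloud_queue.append({"alert": "threshold exceeded", "value": summary["max"]})
    return summary

# Usage with plain lists standing in for real storage and messaging:
edge_store, to_cloud = [], []
process_window([71.2, 74.9, 83.4, 79.1], edge_store, to_cloud)
```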

Back to the Morlocks. We’ve got a situation where we will have absolutely asinine amounts of data. Huge portions of it may only be relevant for specific tasks or consumed by other, nearby computers.  You won’t want to throw it out—that would reduce the fidelity and accuracy of any findings. But you also don’t want to shuttle it too much between computers or replicate it too often—that would lead to a Chapter 11 reorganisation.

The solution: don't move it. Keep it where it was generated. Conduct as much work as possible locally and reserve the cloud for your most elegant applications, where scalability can really help. Compute in the clouds, keep data down below. It's not centralised, nor is it distributed. And, like the food cycle of 802,701, it's symbiotic.

Some of this earthbound reality is reflected in recent analyst reports. IDC estimates that 40% of IoT data will be captured, processed and stored where it was generated. Gartner estimates the amount of data outside the cloud or enterprise data centres will grow from 10% today to 55% by 2022.

Are there better metaphors? A work colleague refers to these new models as examples of dispersed computing. It’s catchy. Others have said we should expect to see archipelagos of computing.

Also, the Eloi weren't smart. They aren't the data scientists of the future; they're more like their attractive, college-dropout children who plan to inherit. Maybe. But I can't find the VCR, and my access to old B movies is thus compromised.


Edge or cloud? The five factors that determine where to put workloads

Should you send your data to computers or bring your computing assets to the data?

This is a major question in IoT. A few years ago you might have said "everything goes to the cloud," but sheer size and scope often make a smart edge all but inevitable. IDC estimates that 40% of IoT data will be captured, processed and stored pretty much where it was born, while Gartner estimates the amount of data outside the cloud or enterprise data centres will grow from 10% today to 55% by 2022.

So how do you figure out what goes where?

Who needs it?

IoT will generate asinine quantities of data across all industries. Manufacturers and utilities already track millions of data streams and generate terabytes a day. Machine data can come at blazingly fast speeds, with vibration systems churning out over 100,000 signals a second, delivered in a crazy number of formats.
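For a sense of scale, here is a quick back-of-the-envelope calculation on that vibration figure alone, assuming (purely for illustration) eight bytes per signal:

```python
# Rough scale of a single high-speed vibration stream (the bytes-per-signal
# figure is an assumption made purely for illustration).
signals_per_second = 100_000
bytes_per_signal = 8                 # assumed payload size
seconds_per_day = 86_400

gb_per_day = signals_per_second * bytes_per_signal * seconds_per_day / 1e9
print(f"~{gb_per_day:.0f} GB/day from one vibration system")   # ~69 GB/day
```

A few dozen streams like that, on top of millions of slower ones, is how a plant ends up in terabytes-a-day territory.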

Machines, however, aren’t good conversationalists. They often just provide status reports on temperature, pressure, speed, pH, etc. It’s like watching an EKG machine; companies want the data, and in many cases need to keep it by law, but only a few need to see the whole portfolio.

The best bet: look at the use case first. Chances are, every workload will require both cloud and edge technologies, but the size of the edge might be larger than anticipated.

How urgently do they need it?

We've all become accustomed to the Netflix wheel that tells you your movie is only 17% loaded. But imagine if your lights were stuck at 17% brightness when you came home. Utilities, manufacturers and other industrial companies operate in real time; any amount of network latency can constitute an urgent problem.

Peak Reliability, for instance, manages the western U.S. grid. It serves 80 million people spread over 1.8 million square miles and has to monitor over 440,000 live data streams. During the 2017 solar eclipse it was getting updates every ten seconds.
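For perspective, if every one of those streams updated on that ten-second cadence (an assumption made only to size the problem), the arithmetic looks like this:

```python
# Peak Reliability stream arithmetic (the uniform ten-second cadence is an
# assumption for the sake of a rough estimate).
streams = 440_000
update_interval_s = 10

updates_per_second = streams / update_interval_s       # 44,000
updates_per_day = updates_per_second * 86_400           # ~3.8 billion
print(f"{updates_per_second:,.0f} updates/s, ~{updates_per_day / 1e9:.1f} billion updates/day")
```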

Rule of thumb: if interruptions can’t be shrugged off, stay on the edge or a self-contained network.

Is anyone’s life on the line?

When IT managers think about security, they think firewalls and viruses. Engineers on factory floors and other "OT" employees, who will be some of the biggest consumers of IoT, think about security as fires, explosions and razor wire. The risk of a communications disruption on an offshore drilling rig, for example, far outweighs whatever you might save by not putting all of the necessary computing assets on the platform itself. Start with a risk-reward assessment.

What are the costs?

So if the data isn't urgent, won't impact safety, and more than a local group of engineers will need it, do you send it to the cloud? It depends on the cost. Too many companies have responded to the cloud like a teenager in 2003 handed their first smartphone. Everything seems okay, until the bill comes.

In the physical world, no one sends shipments from L.A. to San Francisco via New York, unless there is a good reason to go through New York. Distance means money. Sending data to the cloud that could just as effectively be stored or analyzed on the edge is the digital equivalent. Getting the right balance of edge and cloud is the key to managing the overall TCO.
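As a toy illustration of that trade-off, here is a rough monthly cost comparison; every price and percentage in it is a placeholder assumption, not a quote from any provider.

```python
# Toy TCO comparison: ship everything to the cloud vs. aggregate at the edge.
# All prices and the reduction factor are placeholder assumptions.
raw_gb_per_month = 5_000            # raw sensor data generated on site
edge_reduction = 0.95               # assume summaries cut shipped volume by 95%
egress_per_gb = 0.09                # assumed network/egress cost ($/GB)
cloud_storage_per_gb = 0.023        # assumed cloud storage cost ($/GB-month)
edge_storage_per_gb = 0.01          # assumed amortised local storage ($/GB-month)

all_cloud = raw_gb_per_month * (egress_per_gb + cloud_storage_per_gb)
hybrid = (raw_gb_per_month * edge_storage_per_gb
          + raw_gb_per_month * (1 - edge_reduction)
          * (egress_per_gb + cloud_storage_per_gb))

print(f"All-cloud:  ${all_cloud:,.0f}/month")
print(f"Edge-heavy: ${hybrid:,.0f}/month")
```

The exact numbers matter far less than the shape of the comparison: distance, like in shipping, means money.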

How complex is the problem?

This is the most important and challenging factor. Are you examining a few data streams to solve an immediate problem such as optimizing a conveyor belt, or comparing thousands of lines across multiple facilities? Are you looking at a patient’s vital signs to determine a course of treatment, or studying millions of protein folds to develop a new drug?

Companies often use the cloud to crack a problem, and then repeat it locally at the edge. Projects resulting in millions in savings aren’t being produced by a magical algorithm in the cloud – instead, people look at a few data streams and figure it out on their own.
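A minimal sketch of that crack-it-in-the-cloud, repeat-it-at-the-edge pattern, using a crude threshold rule as a stand-in for whatever the cloud analysis actually produces (all names and numbers here are hypothetical):

```python
# Hypothetical two-stage pattern: derive a simple rule from pooled history in
# the cloud, then apply that rule locally to each new reading at the edge.

def fit_rule_in_cloud(history):
    """Derive a crude alarm threshold from pooled historical readings."""
    mu = sum(history) / len(history)
    sigma = (sum((x - mu) ** 2 for x in history) / len(history)) ** 0.5
    return mu + 3 * sigma            # flag anything beyond three standard deviations

def apply_rule_at_edge(reading, threshold):
    """Cheap local check that needs no network round-trip."""
    return reading > threshold

threshold = fit_rule_in_cloud([70.1, 71.4, 69.8, 72.0, 70.6])   # done centrally, once
print(apply_rule_at_edge(74.9, threshold))                       # done on-site, constantly
```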

Another way to think about it: the cloud is R and the edge is the D in R&D.