All too often, the restrictions of engineering seem to us like the laws of physics. Think back to when horse-drawn carriages were the best available mode of transportation. Drivers could only get to their destinations so quickly, given the needs of the horses, the wheels of the carriages themselves, and the quality of the roads. A trip from one city to another could take several days. At the time, that didn’t seem like an opportunity for revolutionary technology. It was just the way things were.
The same has been true for the way we move data across networks. We've been restricted by the physical capabilities of silicon, copper, and fibre, as well as the sheer heft of our data; but much like cars transformed transportation in the early 1900s, DevOps is allowing us to stretch the laws of physics.
Even though DevOps has been able to automate the delivery of the data behind applications and databases, it has been unable to hasten the transfer of that data. It’s as if DevOps is an engine that replaced a horse, but didn’t change the length of the journey or the quality of the roads.
So the question is: is this as good as DevOps can be? Is there something we could be doing to move our data even faster? Maybe there’s a better way to leverage our new engine to maximize efficiencies. Maybe we can eliminate the constraint of the roads themselves.
If, instead of putting our DevOps engine into automobiles, we put that same engine into an airplane, all of a sudden we're no longer dealing with the potholes and traffic jams of data bottlenecks.
This is the exciting challenge now posed to today's technologists. We've moved data around in the same way for decades, constrained by Moore's Law, with little regard for the enormous expansion of data every organisation is undergoing. But with ever more packets of data trying to cross the same number of cables in the same way they always have, we've created a multiplying set of constraints, even within a process that is largely automated and should therefore be optimised.
In reality, the result is far from optimal. We're moving the same data over and over again within an organisation. And because it takes so long to move data, teams use stale data rather than going through the process of requesting fresh copies. If they do request it, they often have to wait so long that the newly delivered data is already stale. If our goal is to make real-time decisions based on real-time data, our current infrastructure falls painfully short.
We need to rethink the way that data is moving. Our current infrastructure is already swamped by today’s tidal wave of data, and every new user and new solution will only make things worse. In essence, to get the most out of our DevOps engine, we need automobiles to sprout wings.
We owe it to ourselves to emulate the Wright brothers, who simply asked if flight might be possible. Flight is possible, and just as airplanes proved in 1903 that there was more to transportation than cars, data virtualisation has proven that it's possible to move more data, faster and more cheaply than ever before.
The key to that flight is that most of the data flood consists of duplicates: up to 90% of the non-production data an organisation uses is identical. Developers, testers, and analysts all need many copies of the same data, taking up immense amounts of storage and valuable time that could be used for more innovative endeavours. By retaining those copies in a central repository, versioning their incremental differences, and intelligently sharing only the requested data with end users – in a word, by virtualisation – we can reduce the flood and speed delivery. All we need is to replace ongoing maintenance and accumulation with insight and innovation.
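To make the idea concrete, here is a minimal sketch in Python of the kind of mechanism data virtualisation relies on: a repository that stores each unique block of data once, versions datasets as lists of block references, and hands out "virtual copies" that share those blocks until someone actually reads them. The names here (VirtualDataRepository, ingest, clone, materialise) are illustrative assumptions, not any particular product's API.

```python
import hashlib


class VirtualDataRepository:
    """Toy model of block-level deduplication: each unique block of data is
    stored once, and every 'copy' is just an ordered list of block references."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}     # content hash -> block bytes, stored exactly once
        self.snapshots = {}  # snapshot name -> ordered list of block hashes

    def ingest(self, name, data):
        """Version a dataset by recording which blocks it is made of."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # store only blocks we haven't seen
            refs.append(digest)
        self.snapshots[name] = refs

    def clone(self, source, target):
        """A 'virtual copy' is near-instant: only references move, not data."""
        self.snapshots[target] = list(self.snapshots[source])

    def materialise(self, name):
        """Reassemble a dataset only when someone actually reads it."""
        return b"".join(self.blocks[digest] for digest in self.snapshots[name])


# Ten developer copies of a largely identical dataset cost almost no extra storage.
repo = VirtualDataRepository()
repo.ingest("prod-snapshot", b"customer-record " * 10_000)
for i in range(10):
    repo.clone("prod-snapshot", f"dev-copy-{i}")
print(f"{len(repo.snapshots)} copies share {len(repo.blocks)} unique blocks")
```

Because a clone copies only references, the ten developer environments above add almost no storage and can be provisioned in moments rather than days; the heavy lifting happens once, when the shared blocks are first ingested.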
Flight revolutionised transportation, and data virtualisation will revolutionise data management, allowing DevOps to power through the data deluge. Now it falls to each organisation to empower its IT groups with data virtualisation capabilities so that its DevOps practices can really soar.