All posts by mikebushong

The data centre of tomorrow: How the cloud impacts data centre architectures

As the enterprise world continues speeding towards complete digitization, technologies like cloud and multi-cloud are leading the charge. Yes, cloud offerings like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud are changing the way enterprises consume IT resources. Having cloud-grade infrastructure at an enterprise’s fingertips opens up opportunities that simply did not exist before.

But are the effects of cloud limited to a collection of somewhat ephemeral infrastructure residing in someone else’s data centre? Or does cloud carry with it the power to change owned infrastructure as well?

The cloud’s impact on data centre architectures

Perhaps the most basic impact of the rise of cloud and multi-cloud is the effect on data centre architectures. In years gone by, enterprise data centres were sprawling collections of sometimes eclectic equipment deployed in support of point applications or use cases. With each new turn that the business took, the data centre was forced to bob and weave.

It’s understandable then that devices with robust sets of capabilities dominated. When IT cannot predict the next requirement, there are only two possible paths forward: deploy devices that support as much as possible, and when that fails, deploy snowflakes purpose-built for narrow use. 

But the cloud doesn’t work this way. Amazon, Microsoft, Google, and the others cannot build bespoke infrastructure for the varying application needs of their users. Doing so would utterly destroy the economies of scale that come from shared infrastructure. Rather, they must design the data centres that power their cloud offerings in such a way that they are robust in capability but uniform in design. Without resource fungibility, there simply is no cloud.

And so data centre designs have changed, favouring commonality over uniqueness. Modern data centres are not a mix of different shapes and sizes. They are a uniform fabric of fixed-form-factor devices, deployed explicitly because they are interchangeable. Servers and storage have long been in this mode.

More recently, even the network devices that provide connectivity have moved this direction. Built on merchant silicon, these “pizza boxes” (so named because they are thin) are deployed in non-blocking architectures. When something fails, traffic is routed around it, and the device is replaced with an identical copy. 
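As a rough sketch of what "non-blocking" means here (the function name and port counts below are illustrative, not drawn from the article): a leaf switch is non-blocking when its uplink capacity matches or exceeds its server-facing capacity, i.e. an oversubscription ratio of 1:1 or better.

```python
def oversubscription_ratio(server_ports, server_speed_gbe,
                           uplinks, uplink_speed_gbe):
    """Return the downlink:uplink capacity ratio.

    A value of 1.0 or less means the switch is non-blocking:
    every server port can run at line rate simultaneously.
    """
    downlink_capacity = server_ports * server_speed_gbe
    uplink_capacity = uplinks * uplink_speed_gbe
    return downlink_capacity / uplink_capacity

# Hypothetical leaf: 48 x 25GbE server ports, 6 x 100GbE uplinks.
# 1200G down vs 600G up -> 2:1 oversubscribed, not non-blocking.
print(oversubscription_ratio(48, 25, 6, 100))    # 2.0

# Splitting a 32-port 100GbE box 16 down / 16 up gives 1:1.
print(oversubscription_ratio(16, 100, 16, 100))  # 1.0
```

Because every device in the fabric is identical, the same arithmetic applies to every rack, which is precisely what makes routing around a failed box and swapping in an identical copy so cheap operationally.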

Within the data centre, this means that racks and rows ought to begin to look identical. Where diversity served the legacy data centre well, it is the enemy of efficiency in the cloud era. Uniformity simplifies things like deployment and management, allowing for finer-grained grow-as-you-go strategies. It also makes space, power, and cooling a much more straightforward activity. When devices are the same, planning is reduced to understanding capacity requirements and physical constraints.
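That reduction can be made concrete. With identical devices, rack planning collapses to a single calculation against the two binding constraints, power and space (all figures below are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

def racks_needed(device_count, device_power_w, device_ru,
                 rack_power_budget_w, rack_usable_ru):
    """Racks required when every device is identical.

    A rack holds as many devices as its power budget and usable
    rack units allow; whichever constraint binds first wins.
    """
    per_rack_by_power = rack_power_budget_w // device_power_w
    per_rack_by_space = rack_usable_ru // device_ru
    per_rack = min(per_rack_by_power, per_rack_by_space)
    return math.ceil(device_count / per_rack)

# 200 identical 1RU, 450 W servers; 10 kW and 40 RU per rack.
# Power caps a rack at 22 devices (space would allow 40), so 10 racks.
print(racks_needed(200, 450, 1, 10_000, 40))  # 10
```

With a heterogeneous estate, no single calculation like this exists; each device class needs its own power, cooling, and sparing plan.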

Moving from device-led to operations-led

Ultimately, the cloud is probably more about operations than devices. Historically, data centres have been architected from the devices up. That is to say that things like capacity requirements drive what boxes are required, which then determine what operators must do.

The currency of cloud and multi-cloud, though, is not capacity so much as it is agility. And this means that physical devices must assume a supporting role while operations steps to the front.

As enterprises look to learn from the cloud movement, they should conclude that operations are the starting point. Enterprises that are not efficient in how they manage their infrastructure will be at a perpetual competitive disadvantage to those companies that have adopted cloud practices to drive their business. 

Operations certainly involve technology movements like automation, telemetry, and DevOps. But enterprises looking to become more efficient need to start with their physical infrastructure.  The enemy of fast is complexity, which means that enterprises need to be taking every opportunity to reduce complexity in their operating environments. One of the easiest ways to make progress here? Eliminate infrastructure sprawl. 

Because most data centres evolve organically over time, they are a collection of different devices. The more different they are, the more diverse the operational model must be. Every unique platform running every unique version of software configured for every unique feature is ultimately making the data centre more diverse. That diversity is an efficiency killer. 
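The cost of that diversity compounds multiplicatively, because every platform/version/feature combination is another variant the team must document, test, and automate. A toy model (the counts are invented for illustration):

```python
def operational_variants(platforms, versions_per_platform, feature_profiles):
    """Distinct operational variants an IT team must support.

    Each combination of hardware platform, software version, and
    configured feature profile behaves slightly differently, so each
    needs its own runbooks, test coverage, and automation paths.
    """
    return platforms * versions_per_platform * feature_profiles

# An organically grown estate: 6 platforms, 4 software versions each,
# 3 feature profiles -> 72 variants to keep straight.
print(operational_variants(6, 4, 3))  # 72

# A standardised estate: 2 platforms, 1 version, 1 profile -> 2.
print(operational_variants(2, 1, 1))  # 2
```

The point of the model is only that diversity grows as a product, not a sum, which is why eliminating sprawl pays off so quickly.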

Maintaining economic leverage

While one conclusion to draw here is that a single supplier can help drive data centre evolution, the reality is that enterprises will ultimately want to maintain economic leverage. Indeed, there are no benevolent rulers in IT, and a single-vendor approach to the data centre is likely to wreak long-term economic havoc. 

Instead, enterprises should be architecting their data centres for a common set of functionality that can be offered over two or more supplier solutions. By maintaining interchangeability across vendors, enterprises will find that their procurement teams can reap rewards even as their operation centres rejoice. 

This, too, has implications for the physical data centre. Understanding the underlying merchant silicon that drives solutions will allow architects to steer their designs towards common building blocks available across the industry. Adopting white box servers, for instance, enables things like common sparing, which helps improve repair times and maintain consistency of deployment. In the network realm, standardising on connectivity (25GbE to the server, as an example) allows enterprises to settle on common optics and cabling as well. Anything that drives uniformity will ultimately help the bottom line.

Process dominates

It is certainly true that the data centres of the future will converge on a fairly narrow set of architectural principles. But enterprises that really want to ride the wave of cloud and multi-cloud will need to evolve their overarching processes as well.

Where most enterprises today are skilled at deploying new equipment, they struggle to decommission aging gear. For example, most enterprises have network refresh cycles of seven years or more. This means that a data centre will have seven years' worth of equipment in it, built with varying components and supporting varying capabilities.

Compare that with cloud companies that refresh their hardware every two to three years. It is tempting to argue that the cloud properties have more available spend, making this practice more economically palatable. But the driver behind this practice is actually the same efficiencies that enterprises want within their IT environments. 

By reducing operational divergence, cloud companies make themselves dramatically more efficient operationally, allowing them to grow their capacity exponentially while keeping their IT teams at or near current staffing levels. This allows them to divert operational spend back into their capital expenditures, helping maintain this aggressive refresh cycle. And as they deploy newer equipment, they can take advantage of the increased scale and performance of newer platforms, frequently adding more capacity at lower per-unit prices.

Perhaps more importantly, these operational efficiencies allow teams to spend less time doing break-fix activities and more time driving value to the business. How much is it worth for an enterprise to be more automated? Or to have better documentation? Or to have robust automated testing? None of these happen when teams are maxed out merely maintaining existing infrastructure.

The bottom line

Data centres are at a point where they simply must evolve. The rise of common building blocks built on standard components has changed the way enterprises plan, build, and operate. By combining these building blocks with important shifts in both operations and refresh cycles, enterprises can apply the principles of cloud to their owned infrastructure, allowing for dramatic improvements in both utility and efficiency.