
Why a new era of computing has arrived with the software-defined data centre


The delivery of IT services on demand is becoming an increasingly common ambition for enterprises. As businesses adopt public cloud platforms such as Amazon Web Services, they come to expect the same flexibility, real-time delivery and cost savings from services across the entire IT landscape.

In turn, this is ushering in a brand new world of computing, one in which the old client-server model is being turned on its head and replaced by mobile and cloud computing. By necessity, it is also driving radical change across the entire IT infrastructure layer in the data centre.

Let’s take storage as an example. For decades, storage has been defined by closed, proprietary and monolithic hardware-centric architectures, built for single applications and local network access, with limited redundancy and manual management. In a traditional data centre, each component of the infrastructure also has an independent set of management requirements. Trying to provide dynamic workload delivery is therefore a complex and time-consuming process: manual infrastructure reconfiguration is required and new hardware is often essential. In practice, this places an onerous burden on IT staff, limits agility and leaves the data centre exposed to human error.

With these new requirements placed on IT infrastructure comes the development of the software-defined data centre, which is driving change across the entire industry. For example, all primary storage is moving to flash by necessity, with research firm IDC expecting all-flash arrays to dominate primary storage market spend by 2019.

The software-defined data centre

An essential characteristic of this new model is software that is decoupled from hardware. While its definition varies among vendors, it essentially allows data to be pooled and assigned to applications as needs dictate, and it greatly increases the ability to scale out, depending on the architecture.

The decoupling of software and hardware has also introduced automation into the data centre. The ability to abstract service offerings from the underlying infrastructure paves the way for the delivery of new enterprise cloud consumption models based on the specific needs of the business, such as infrastructure as a service, a converged or hyper-converged infrastructure, or even software-based infrastructure on commodity hardware.

Ultimately, this storage model is defining the next-generation data centre.

From the static to the dynamic

The software-defined data centre is characterised by the ability to deploy, manage, consume and release resources, and monitor infrastructure, all with fine-tuned control. This approach eliminates IT silos and underpins a move from a static to an elastic model, enabling new levels of agility, which are essential to the successful delivery of enterprise services via the cloud.

For example, with software-defined storage that virtualises performance separately from capacity, IT managers gain unprecedented control: they can dial performance up or down as required, add either resource in small increments without disruption in order to scale out easily, and use data services such as deduplication, snapshots and replication, all in real time.
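
To make this concrete, here is a minimal sketch of what decoupling performance from capacity might look like to an administrator. It is purely illustrative: the SDSController class, its volume attributes and the numbers are invented for this article and do not represent any particular vendor’s API.

```python
# Hypothetical illustration only: a stand-in control plane for
# software-defined storage, not a real vendor SDK.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Volume:
    name: str
    capacity_gib: int              # capacity is provisioned...
    iops_limit: int                # ...independently of performance
    dedupe: bool = True            # data services enabled per volume
    snapshots: bool = True
    replication_target: Optional[str] = None


class SDSController:
    """Toy control plane: volumes are records, changes are non-disruptive."""

    def __init__(self):
        self.volumes = {}

    def create_volume(self, vol: Volume) -> None:
        self.volumes[vol.name] = vol

    def scale(self, name: str, capacity_gib: int = 0, iops: int = 0) -> None:
        # Dial capacity or performance up or down in small increments,
        # without touching the other dimension.
        vol = self.volumes[name]
        vol.capacity_gib += capacity_gib
        vol.iops_limit += iops


ctrl = SDSController()
ctrl.create_volume(Volume("erp-db", capacity_gib=500, iops_limit=20_000,
                          replication_target="dr-site"))
ctrl.scale("erp-db", iops=10_000)        # month-end: more performance only
ctrl.scale("erp-db", capacity_gib=250)   # data growth: more capacity only
```

The point is not the code itself but the shape of the operations: performance and capacity become separate dials, and data services are attributes of a policy rather than features of a particular box.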

Added to this is network functions virtualisation (NFV), which offers a new way to design, deploy and manage networking services. Similar to other next-generation data centre services, it decouples network functions from proprietary hardware appliances so they can run in software. Allied with software-defined networking (SDN), the result is a powerful architecture that is dynamic, manageable, cost-effective, and adaptable – making it ideal for the high-bandwidth and dynamic nature of cloud platforms.

Benefits you can’t ignore

Taken together, software-defined storage and automation, NFV, SDN and the software-defined data centre radically improve service delivery, dramatically reduce costs and enable levels of flexibility not previously seen.

As a result, we’re now seeing the time required to manage storage fall, because data can be dynamically classified and moved between storage tiers. By transferring infrequently accessed data to lower-cost drives, organisations save money and improve performance by lightening the load on the primary tier. Further, by reducing the number of active files, we’re seeing shorter daily backup times.
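
As a rough illustration of the tiering idea, the sketch below moves files that have not been accessed for a set period from a flash tier to a capacity tier. The mount points and the 90-day threshold are assumptions for the example, not recommendations, and a real software-defined storage platform would do this inside the array rather than with a script.

```python
# Minimal sketch of access-based tiering, assuming a POSIX-style filesystem.
# Tier paths and the age threshold are illustrative, not product settings.
import os
import shutil
import time

HOT_TIER = "/mnt/flash"        # assumed mount point for the performance tier
COLD_TIER = "/mnt/capacity"    # assumed mount point for the lower-cost tier
AGE_THRESHOLD_DAYS = 90        # demote files untouched for ~3 months

def demote_cold_files(hot_root: str = HOT_TIER, cold_root: str = COLD_TIER,
                      max_age_days: int = AGE_THRESHOLD_DAYS) -> int:
    """Move infrequently accessed files to the lower-cost tier."""
    cutoff = time.time() - max_age_days * 86400
    moved = 0
    for dirpath, _dirnames, filenames in os.walk(hot_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getatime(src) < cutoff:       # last access time
                rel = os.path.relpath(src, hot_root)
                dst = os.path.join(cold_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)                # demote to capacity tier
                moved += 1
    return moved

if __name__ == "__main__":
    print(f"Demoted {demote_cold_files()} files to the capacity tier")
```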

It’s hardly surprising, then, that the software-defined data centre looks inevitable, and many organisations are already on this journey. If you haven’t yet set out, how you develop this model will depend on whether you are building a new data centre or updating an existing one.

Wherever you are on the journey, it’s an investment in the business. If you’re weighing the decision, consider whether you want an investment that gains value over time and grows with the business, or one you wrestle with as you try to force the old hardware-centric IT model into a new era of computing.

Operational benefits aside, the cost drivers for software-defined data centres are certainly compelling. Take network virtualisation as an example: instead of taking an average of 12 weeks to deploy in an enterprise, an application can be deployed in minutes, with all of the required network capabilities and security policies attached to it.

This should certainly impress business decision makers, as far less time and money is spent on implementation and the return on investment begins almost immediately.

Today, many companies are already exploring, planning or virtualising their networks as they move from the client-server era to the mobile-cloud era. They understand only too well that the software-defined data centre is more agile, secure, scalable and cost-effective to run than a traditional hardware-centric data centre.

If you’re struggling to convince senior executives to make the leap, speak to them in the language of business benefits to achieve buy-in: long-term cost savings and return on investment, greater flexibility to scale up and down as required, and IT services delivered on demand and in real time.

It is also worth spelling out that agile public cloud computing and on-demand IT are not abstract future prospects; they are already here, and a growing number of organisations are reaping the benefits.

Why OpenStack success depends on simplicity


Is it any wonder that OpenStack has become so popular? It accelerates an organisation’s ability to innovate and compete by deploying applications faster and increases IT teams’ operational efficiency, according to the latest OpenStack user survey.

OpenStack is an open-source cloud operating system that enables businesses to manage compute, storage and networking resources via a self-service portal and APIs at massive scale – significantly propelling cloud services forward. And with its growing popularity, the demand for OpenStack expertise is so high that employers are willing to pay top dollar for it and do everything they can to retain talent.
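
For readers who have not touched those APIs, here is a minimal sketch of what self-service provisioning looks like with the openstacksdk Python library. It assumes a cloud entry named "mycloud" exists in clouds.yaml, and the image, flavor and network names are placeholders for whatever your deployment actually offers.

```python
# A minimal sketch using the openstacksdk library. It assumes a cloud named
# "mycloud" is defined in clouds.yaml; the image, flavor and network names
# are placeholders for whatever your deployment provides.
import openstack

conn = openstack.connect(cloud="mycloud")

# Look up building blocks by name, as a self-service portal would.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Compute: request a server through the same API the dashboard uses.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)

# Storage is driven the same way, e.g. a 10 GiB block volume on demand.
volume = conn.block_storage.create_volume(name="demo-vol", size=10)
```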

While this is great news for those with the expertise, it also hints at a significant roadblock. OpenStack offers tremendous benefits, but it is not simple; implementations are notoriously complex, which is why demand for skilled specialists has skyrocketed.

Removing complexity and focusing on foundations

Despite its complexity, the secret to OpenStack’s success is – perhaps ironically – in taking a simpler approach. For a successful implementation, removing unnecessary complexities is an essential first step.

For instance, start with the basics by focusing on storage, compute power and the underlying infrastructure before adding further features. A solid and simple storage backend translates into a strong foundation.

This approach is the opposite of the bolt-on method, in which OpenStack features are added to existing architectures. Bolting on essentially treats OpenStack as an afterthought, ultimately creates added challenges further down the line and prevents the platform from reaching its full potential.

Focusing on fundamental infrastructure considerations may be more resource-intensive and demanding in the beginning than the ‘bolt-on’ approach, but it makes all the difference between an OpenStack platform that organisations could use to its full potential and one that is bloated by unnecessary complexity.

Businesses need to take several considerations and processes into account when implementing OpenStack. After all, it isn’t just a lower-cost alternative to a traditional virtualised environment; it’s a fundamental shift in how applications are deployed and how infrastructure is used.

For instance, you need to consider how the hardware components come together to drive cloud-focused goals and strategy, which is precisely why the initial focus should be on the underlying infrastructure foundation.

Orchestration

Keep in mind that a virtualised infrastructure is about data centre automation, whereas an OpenStack-powered cloud is about orchestrating the entire data centre. Orchestration builds on data centre automation, so understanding where you currently are in the orchestration process should also inform your approach.
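
As a simplified illustration of that difference: automation might script the creation of one server at a time, whereas orchestration declares a set of related resources and lets the cloud bring them up, wire them together and keep them consistent. The sketch below is a toy Heat (OpenStack orchestration) template held in a Python string; the image, flavor and network names are placeholders, and a real stack would describe an entire application topology.

```python
# Illustrative only: a tiny Heat (OpenStack orchestration) template held in a
# Python string. Image, flavor and network names are placeholders; in practice
# you would save the YAML to a file and launch it, for example with
# "openstack stack create -t web.yaml demo-stack".
WEB_TIER_TEMPLATE = """
heat_template_version: 2018-08-31

description: >
  One web server, a data volume and the attachment between them,
  declared as a single stack rather than provisioned step by step.

resources:
  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                       # GiB

  web_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04            # placeholder image
      flavor: m1.small               # placeholder flavor
      networks:
        - network: private           # placeholder network

  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: web_server }
      volume_id: { get_resource: data_volume }
"""

if __name__ == "__main__":
    # Automation would script each resource individually; orchestration
    # declares the whole set and lets the cloud converge on it.
    print(WEB_TIER_TEMPLATE)
```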

When considering infrastructure components, you also need to think about which specific applications or workloads are driving your project and what their individual requirements are. This due diligence will serve you well; it will ultimately lead to a simpler, easier-to-use cloud.

Once the foundation is in place, this simplicity will translate into easier, more cost-effective operations and maintenance, and the ability to quickly customise your environment through APIs. It will also provide a clear path to scale and grow, while avoiding disruptive migrations or the rebalancing of resources.

Transforming infrastructure services

An OpenStack-powered cloud creates a rich platform for building and deploying applications, and it goes hand-in-hand with an automated and software-defined data centre. It manages the infrastructure below and orchestrates applications above, creating fully automated workflows that enable infrastructure to be deployed on demand, and for processes to become consistent and repeatable.

It not only lowers the operational cost of deployments and upgrades, but also provides a flexible platform for offering innovative new services to customers. And when it is combined with other technologies such as big data and virtualised network functions, it not only drives data centre transformation but also changes the very definition of infrastructure services.

That said, an OpenStack deployment can be a significant expenditure, which is why deployment and management must be simplified to ensure a successful project, and one that truly leverages the full potential of this exciting new technology.


Sifting through an explosion of choice to build the right cloud infrastructure for you


When it comes to deploying IT infrastructure, selecting the right model is crucial. Fundamental to this decision is the need for IT departments to make sense of the options available to them and to understand their strengths and weaknesses.

But this isn’t as simple as deciding which model is better. It’s about understanding which model is right for the business — right now and into the future. With such a variety of models (and vendors) to choose from, though, this can be a challenge for companies of all sizes.

An explosion of choice

There are roughly five infrastructure consumption models that organisations can choose from. These range from ‘off the peg’ options such as as-a-service and hyper-converged, through the middle ground of converged infrastructure, to the more complex and tailored best-of-breed solutions and commodity hardware running management software. You might think it relatively easy for a company to choose the one that suits it best – there are only five, after all – but when combined with trends around public, private and hybrid clouds, the choice can be rather difficult. All models offer a balance of benefits and challenges.

With this explosion of options, there simply isn’t time to compare each vendor in every consumption model. Vendor hype doesn’t help either. Add this to the fact that the industry is constantly evolving, with new choices emerging every week, and it becomes even more complicated. Beyond that, independent research is hard to come by and hands-on testing is typically required.

Narrowing the search

To help businesses avoid becoming overwhelmed by choice and to identify the best solution, it helps to have a methodology for narrowing down the available options. This ensures they’re able to look objectively at which infrastructure model best suits the needs of their business:

1.  Rank the priorities of the business

An organisation needs to start by ranking five key considerations: data volume, ease of implementation, degree of vendor lock-in, flexibility and cost efficiency. To do this, it needs to ask: which of these is most important to making the business successful? By understanding the relative importance of these factors, the organisation can quickly narrow the list of viable consumption models.

2.  Understand the infrastructure models

Next, a company needs to understand the pros and cons of each model. If an organisation wants easy implementation and is happy to sacrifice a little on flexibility and vendor lock-in, then as-a-service would be the best choice. When firms such as Uber or Airbnb first launched, for example, they may not have had the budget or expertise to build their own infrastructure, but could use as-a-service to scale up quickly – ‘on demand’ – to grow and meet customer needs. As an aside, this fits their business models perfectly: Uber does not own any cars and Airbnb does not own any property, so why should these online businesses own any IT infrastructure either?

If, on the other hand, scalability and less vendor lock-in matter more to an organisation, and it doesn’t mind a more extensive implementation process, then an infrastructure model that runs management software on commodity hardware may be of interest. A global e-commerce company such as eBay, for example, has to deliver its service consistently to millions of customers worldwide. With so much data to manage, it needs an extensive and reliable infrastructure underpinning it, but also one that is cost-effective and tailored to its large-scale needs. As such, software on commodity hardware is probably the natural choice.

Between these two extremes, the extent to which each model serves a business’s priorities will vary, so an organisation will need to identify carefully which one actually suits it best.

3.  Start the elimination process

Once a business understands how each model works, and its pros and cons, it needs to weigh them against its priorities. First, it should eliminate the two infrastructure models that least serve its top priority. For example, if a company’s most important consideration is ease of implementation, it would cross best-of-breed appliances and software on commodity hardware off its list, because these have a more involved set-up process and represent a bigger investment. The simple scoring sketch below shows one way to turn these three steps into a repeatable exercise.
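
As a rough illustration, the sketch below turns the three steps into a small weighted-scoring exercise. The weights and scores are invented placeholders, not benchmark data; the model names simply follow the five consumption models described earlier, and you would substitute your own rankings from step 1 and your own assessments from step 2.

```python
# Toy scoring sketch of the three-step methodology above. Weights and scores
# are illustrative placeholders only.

# Step 1: rank the business priorities (higher weight = more important).
weights = {
    "data_volume": 2,
    "ease_of_implementation": 5,
    "vendor_lock_in": 3,       # higher score below = less lock-in
    "flexibility": 1,
    "cost_efficiency": 4,
}

# Step 2: score how well each consumption model serves each priority (1-5).
models = {
    "as-a-service":          {"data_volume": 3, "ease_of_implementation": 5,
                              "vendor_lock_in": 2, "flexibility": 2,
                              "cost_efficiency": 3},
    "hyper-converged":       {"data_volume": 3, "ease_of_implementation": 4,
                              "vendor_lock_in": 2, "flexibility": 3,
                              "cost_efficiency": 3},
    "converged":             {"data_volume": 4, "ease_of_implementation": 3,
                              "vendor_lock_in": 3, "flexibility": 3,
                              "cost_efficiency": 3},
    "best-of-breed":         {"data_volume": 4, "ease_of_implementation": 2,
                              "vendor_lock_in": 4, "flexibility": 4,
                              "cost_efficiency": 3},
    "software-on-commodity": {"data_volume": 5, "ease_of_implementation": 1,
                              "vendor_lock_in": 5, "flexibility": 5,
                              "cost_efficiency": 4},
}

# Step 3: eliminate the two models that serve the top priority worst, then
# rank the remainder by weighted score.
top_priority = max(weights, key=weights.get)
eliminated = sorted(models, key=lambda m: models[m][top_priority])[:2]
shortlist = {m: s for m, s in models.items() if m not in eliminated}

def weighted_score(scores: dict) -> int:
    return sum(weights[k] * v for k, v in scores.items())

ranking = sorted(shortlist, key=lambda m: weighted_score(shortlist[m]),
                 reverse=True)
print("Eliminated:", eliminated)
print("Ranked shortlist:", ranking)
```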

Other considerations

Even if a single consumption model can accommodate all of an enterprise’s IT needs today, CIOs should expect to accommodate change over time. As new consumption models evolve, businesses should look for opportunities to move to a new consumption model that better aligns to existing or changing priorities.

For complex IT environments, or environments with widely varying priorities and needs (such as branch-office IT versus core infrastructure), an organisation may require multiple consumption models. Of course, this will always add complexity compared with adopting a single model, but it may be the only way to meet some business needs, or may simply be more cost-effective.

Architecting the future

As we’ve seen over the last decade, the evolution of available infrastructure options has made the task of architecting the next generation data centre more challenging than ever. Indeed, the wide variety of infrastructure consumption models today is a tremendous opportunity for enterprises of all sizes. These options enable more rapid and more cost-efficient deployment and management of computing infrastructure at scales both large and small. That said, the additional complexity of selecting the right consumption model, vendors and products often clouds these decisions with uncertainty.

If one thing is clear, it’s that no one option fits every need. If CIOs balance the advantages and disadvantages of each infrastructure option, they’ll be able to identify the consumption model that suits them best. Don’t believe all the hype, but recognise that there are more options to choose from than there were a few years ago. Make sure you have a methodology in place to work out which are right for you as you make the transition to a next-generation approach to IT.

Do you have further tips on deciding the right cloud infrastructure? Let us know in the comments.