All posts by joekinsella

Three key predictions for the cloud industry in 2019: Multi-cloud, governance and blurred lines

In 2019, we can expect the cloud industry to continue to thrive – with impressive cloud adoption across all industries, accompanied by an improvement in solutions and integrated data tools to best meet user needs. These advancements will substantially enhance operations in the cloud, leading it to ultimately become the preferred platform for all enterprise applications.

Looking ahead, we will see companies move beyond standard adoption, and instead begin to redefine how they use cloud as a key part of their business strategy. The cloud space has a rich history of continual improvement and is evidently getting more competitive, with three trends likely to emerge in this new year.

Multi-cloud dominance across enterprise 

Cloud environments today come in many shapes and sizes, from many different providers. As such, multi-cloud deployment will likely continue to be a key strategy, giving organisations the ability to pick and choose solutions as leaders seek to avoid dependence on a single cloud provider. This flexibility means that companies can structure workloads into separate environments sorted by their different requirements.

However, as organisations look to deploy diverse clouds and operations within a single heterogeneous infrastructure, leaders will need a clear strategy and visibility into how these pieces will work together in order to avoid creating more silos.

There are multiple challenges when expanding a multi-cloud strategy, including security, governance, service integration and financial costs. Businesses can address these by outlining best practices for efficient management suited to their specific company culture and cloud environment. In 2019, we will see multi-cloud reach a tipping point, and providers that embrace this shift will be best placed to build custom approaches with the strongest offerings.

Blurring of the line between public and private cloud

2019 could be the turning point for organisations migrating critical workloads to the public cloud, as business leaders strive to stay ahead of the demands of digital transformation, including faster access to emerging technologies, on-demand capacity and unlimited scalability. And with the public cloud entering the data center through solutions like Amazon Relational Database Service, the rigid line between public and private cloud is slowly eroding.

So with the cloud world no longer simple black or white, how will we adjust to this new ‘grey scale’ commoditised cloud world? Fundamentally, APIs and control planes will become increasingly important, and strategies will need to focus on when and where workloads are being run and who is managing them. IT management will need a clear plan for determining what should be outsourced, and a contingency plan for when adjustments need to be made.

Governance and agility in opposition

IT operations is structured by a set of corporate guidelines which must be adapted and monitored within a cloud landscape, ensuring compliance with operational standards of high efficiency and security. And in the cloud and IT industry, governance is intertwined with business goals and policies, as companies strive to move forward and evolve.

However, cloud-based services are advancing at an accelerated rate and have exceeded the capabilities of traditional management solutions, creating roadblocks on the journey to innovation. New cloud technologies are continuing to change the nature of services, as today’s landscape no longer considers a single infrastructure to be sufficient. As organisations use multiple clouds simultaneously and deploy on-premises solutions, data needs to be maintained and shared across multiple different infrastructures, making IT governance increasingly difficult to accomplish.

2019 will see governance and agility come head to head, and automation will be key in implementing a sound governance strategy that both simplifies operations and speeds up decision making. Managing large-scale cloud environments is no easy feat, and automating policies will be essential in ensuring optimal operation of cloud infrastructure.

The six keys to competing and succeeding in a cloud-first environment

Cloud computing has come a long way over the last several years. It has gone from an emerging technology used in tech startups, to a catalyst for driving enterprise business transformations. According to IDC’s CloudView 2017 report, 70 percent of CIOs say they embrace a “cloud-first IT strategy.” This shows that cloud has moved well beyond the early-adopter phase, where born-in-the-cloud startups were the only ones putting data and applications in the public cloud. Today, global enterprises have embraced the cloud as a means to achieve agility and innovation, and are rapidly driving cloud adoption to new heights.

Still, looking at it another way, just how far has cloud come? Have cloud users configured their organisations to truly get the most out of the technology? The same IDC report suggests the answer is no. According to CloudView 2017, only 16 percent of worldwide organisations have in place the skills and processes they need to manage the evolving cloud environment efficiently.

Here at CloudHealth Technologies, we work with customers who are grappling with these same challenges every day. Many customers expanded into the cloud quickly, and they’re struggling not only to get visibility into the systems and teams using it, but to develop a holistic strategy to fully harness its power.

The good news is, the enterprise has been making strong progress in transitioning from early adoption to mature usage of the cloud. They’re beginning to develop the organisational skills they need to gain productivity and efficiency in the cloud. And they’re developing new roles that take advantage of the new skillsets required to compete in cloud-first environments.

Here are a few ways we're seeing innovative organisations getting it done:

Adopting a disruptive attitude

The cloud can be a threatening technology within the enterprise. For all the promised benefits of innovation and agility, it requires a fundamental shift in the skills, people, processes and technologies within an organisation. All enterprises will inevitably encounter some level of resistance to adopting the cloud, and thus it's critical to approach it as a disruptive innovation.

The first step is to make the commitment. Companies need to have change agents, and employees need to buy into the change. Tech companies embrace this counter-revolution mentality when it comes to cloud. Enterprises need to as well.

Aligning their business

Once companies have committed to a cloud-first approach, they need to ensure that everyone is aligned on how the move to cloud will help the organisation accomplish its goals. The chief goal must be driving business transformation that enhances value to customers, increases top-line revenue, and improves the competitiveness of the business. At the end of the day, cloud is the vehicle fuelling that transformation. It's critical to develop and communicate a clear vision, and to get buy-in from your key stakeholders.

Hiring and locating great cloud talent

To succeed in the cloud, companies need to create a culture that will attract the people with cloud skills that are going to make a difference. The culture needs to be open, collaborative, and fast moving. People who have great skills in the cloud want to work with other people who are great at cloud. As Steve Jobs used to say: “A players” want to work with “A players.” It's critical to cultivate this culture in a small team before rolling it out more broadly. Often an organisational tipping point must be achieved to drive cultural change.

Creating a ‘learning organisation’

It’s not enough to create a solid organisational plan for the cloud; the organisation has to learn and keep learning. The Japanese call it a “Kaizen” mentality – a system that continuously improves. When it comes to cloud, you need to always be analysing, measuring, and figuring out how to change things to make them better. Everything should be subject to continuous improvement.

Creating new leadership roles

One of the most important moves a modern cloud-first organisation can make is to create a function for cloud governance. A typical enterprise may have hundreds of teams using the cloud, each of which has a great deal of ownership over the applications and infrastructure they manage. It is critical to empower these teams with the ability to harness the innovation and agility enabled by the cloud, but it is equally important to ensure your teams are adopting best practices, driving standards, gaining efficiencies, and complying with critical policies and frameworks. Increasingly we are seeing enterprises create governance teams that help resolve the impedance mismatch between agility and control, increase cloud IQ across the organisation, and enable business success.

Building technical knowledge

At the end of the day, while building out the team is critical, companies still need deep technical knowledge in areas where cloud matters. It’s essential to have people with a deep understanding of cloud architectures, programming, cloud services, DevOps, and key technologies (e.g. containers, microservices) that are required in the cloud. But the list of new areas of knowledge to learn is constantly growing and changing, and it is imperative to build a learning organisation that can continue to grow and learn together. In addition to enabling greater cloud success, it is also a critical tool for retaining organisational talent.

While the technology may be more than 10 years old, we're still very early in harnessing the true power of the cloud. We have come so far, but still have so many opportunities to grow and learn. The cloud has great potential to fundamentally transform our businesses, but achieving success requires that we adopt new roles, processes and technologies to support this change. Your cloud transformation won’t happen overnight. But if you dedicate yourself to the task and learn from others’ successes and mistakes, you’ll be able to compete in a cloud-first environment – and win.

2016 cloud computing forecast: Private, hybrid, and automation


As we look forward to 2016, there is a lot to reflect on and forecast in cloud. Below you will find my top predictions for the year ahead.

RIP private cloud

The biggest missed story of 2015 has been the profound failure of the private cloud.

Just a few years ago, the private cloud was IT’s solution to remaining relevant to their business partners. I remember attending OpenStack Boston 2011 at the height of the private cloud movement, where everyone seemed convinced of the inevitability of the self-managed private cloud. But after years of incredible innovation in the public cloud and disarray in private cloud, 2016 will be the year that the private cloud as a primary strategy will finally go to its grave.

I expect some big shake ups in the private cloud, especially in the OpenStack community. Unless the private cloud substantially changes its pace of innovation, it will become a speed bump for enterprises on their way into the public cloud.

Google still doesn’t figure it out

If there was ever a company that should own the public cloud, it would be Google. They were building cutting-edge cloud infrastructure while the rest of us were still talking about our type 1 hypervisors. They even introduced the term “cloud computing” into our lexicon. I don’t know about you, but with the exception of a handful of mobile companies that built their businesses on AppEngine, I rarely run into a Google cloud customer.

While I believe Google has the vision, financial resources, and technical capacity to be a top cloud provider, I predict they will continue to lose more ground in 2016. To take liberty with the famous Wayne Gretzky quote: while Google is skating to where it thinks the puck will be, companies like Amazon and Microsoft are busy putting the puck in the net.

Hybrid cloud hype unleashed

Recent industry news would make it hard not to predict 2016 to be the year of the hybrid cloud. But before doing this, let’s remind ourselves of the last year of the hybrid cloud: 2012. Back then industry experts were predicting the hybrid cloud to be the future of enterprise cloud computing, with the private cloud as the foundation and the public cloud used for bursting and new/unproven workloads. So with this historical reminder, I predict 2016 to be the year of the hybrid cloud hype, where products rebrand themselves as powering and/or enabling the hybrid cloud, and container hype reaches new heights. Expect legacy data centre and virtualisation products to attempt to breathe new life into their ageing product lines with the term “hybrid cloud”, and existing public cloud vendors to reach into the data center to get a little of the hybrid shine. Ultimately, hybrid cloud is essentially the toll booth on the way to the public cloud.

End of shadow IT

Public cloud adoption in the enterprise was fuelled, to a large extent, by shadow IT – effectively “rogue” lines of business that worked around their slow-moving IT departments. With shadow IT, enterprises indirectly embraced an incredible amount of innovation in SaaS, cloud computing, mobile, and open source. This innovation has come at a risk to the business, since it typically worked around the IT business policies put in place to mitigate risks.

I predict 2016 to be the end of shadow IT. Two things have changed. IT has embraced and provided leadership in many of the disruptive changes they ignored over the last several years, and enterprises desperately need a lean and agile IT again to help propel their businesses through the current state of technology turmoil. The resurgence of enterprise IT will drive changes in product functionality and how vendors market, sell, price and deliver their services.

Cloud automation goes mainstream

We saw the emergence of business-level automation of the cloud with products from companies like VMTurbo and ourselves. This emerging market is being driven by what I call the “complexity gap”, where the complexity of building and managing cloud infrastructure is outpacing the ability of management software and services to contain this complexity.

The future of the cloud will not be DevOps engineers writing low-level scripts to automate parts of our infrastructure. Instead it will be business-level automation, with enterprises inputting the policies by which they want their business systems managed and smart software executing these policies in support of the business. I predict 2016 to be the year business-level automation of the cloud goes mainstream.

Telecom makes big moves in the cloud

I think telecom companies will finally make big moves in the cloud…no, just kidding. Remember everyone predicting telecom providers would be the real winners in the cloud? Glad we can finally put this one to bed.

The cloud complexity gap: Making software more intelligent to address complex infrastructure


Over the past 15 years, we have seen a unique trend emerge in managing infrastructure – increasing complexity. Oddly enough, it began with the mainstream adoption of virtualisation and has rapidly accelerated since the introduction of cloud computing. To further exacerbate the issue, the software that manages that complexity – what analysts such as Gartner call IT Operations Management (ITOM) – has been unable to keep pace with its growth.

When an organisation struggles with cloud computing – whether it is due to stability, cost, performance, security or the many other reasons that account for failed cloud initiatives – it is often due to the inability of the organisation to manage the complexity of its newfound infrastructure. Picture a chart of the complexity challenge in its historical context: a red line plotting the growth of infrastructure complexity since 2000, and a blue line plotting the ability of software to manage it. The gap between the red and blue lines is known as the Complexity Gap, where chaos can reign and cloud initiatives fail.

What are the driving reasons behind this growing complexity gap?

Dynamic infrastructure

A critical source of the increased complexity comes from the primary benefits of cloud computing: on-demand infrastructure and pricing. The ease with which we can provision and deprovision infrastructure has fundamentally changed the way we develop applications.

While early virtualisation provided us a faster way to provision and deprovision virtual machines, the infrastructure often had lifecycles not so dissimilar from those of its physical ancestors. But when on-demand infrastructure was coupled with consumption-based pricing in the public cloud, it socially engineered new behaviour for the design and operation of cloud infrastructure. The long-lived virtual machines of the early cloud were replaced with autoscaling, service-oriented architectures, auction-based compute, and innovative new platform services. These new architectures have provided us the ability to compose more fault-tolerant, cost-effective, high-performance and feature-rich solutions than we have in the past. But they brought with them a downside: complexity.

The pace of innovation

It took four years for the industry to standardise on a de facto functional specification for Infrastructure as a Service (IaaS). Just as enterprises were getting their arms around managing an IaaS cloud, vendors such as Amazon unleashed a torrent of new infrastructure and platform innovations. These new services provide innovations in all aspects of infrastructure: compute, storage, databases, deployment, networking, mobile, analytics, and application development. They also include mind-bending new services (e.g. AWS Lambda) whose existence has the potential to create new types of applications.

When disruptive innovations occur, it is common for users to want to use them in a similar way to the technology they are supplanting. The early digital cameras, for all their innovations, were used much like the film-based cameras they replaced. But as a disruptive technology matures, its use tends to expand into uses very different from its predecessor (e.g. cameras in mobile phones, on headsets, used as an interface to the physical world). As cloud computing matures as a disruptive technology, it is revealing new ways in which we can develop, deploy, and operate applications that were never before possible. But with this incredible innovation comes one obvious consequence: complexity.

Lack of integrated management

The explosion of growth in data centres in the 1990s brought with it an increase in complexity of infrastructure. This complexity gap fostered enormous innovation in the software industry that eventually resulted in the $20B+ IT Operations Management (ITOM) market we have today.

This market was for over a decade dominated by five providers – IBM, HP, CA, BMC, and Microsoft – and their broad management suites. For years, these companies, along with a large assortment of SMB players, managed to contain much of the complexity of our rapidly-growing infrastructure. Unfortunately, these products were designed for a different generation of infrastructure, and no longer provide the ability to contain the complexity.

This has given rise to a new generation of cloud management solutions – e.g. Chef, New Relic, Ansible, Docker, Stackdriver – each focused on managing the complexity of a single vertical slice of the overall ITOM stack. While the vertical focus of these products allows customers to assemble best-of-breed suites for their needs, the resulting solutions require the use of multiple products and consoles to manage the infrastructure. Using multiple disconnected products can often feel like looking through “keyholes” to manage your infrastructure, with each product providing only partial insight into the overall infrastructure. To compensate for this lack of integration, many companies are building their own integrations using custom software and spreadsheets.

New distribution of ownership

Gone are the days in which IT had full control over the provisioning, deprovisioning and operations of infrastructure in support of lines of business. This centralised control started to erode in the mid-2000s and has accelerated over the last decade, with the cloud adding fuel to the fire. It is increasingly common for lines of business to “go rogue” to achieve their business goals, leveraging external cloud services and even managing their own infrastructure. Their experiences have shown them the pace of innovation and agility that can come from outside of IT, and now there is no going back.

This change in ownership has increased the complexity for IT to provide the governance, compliance and risk management required to protect their businesses. IT needs to find new ways to exert soft controls to protect the business, while not inhibiting the agility their internal customers expect now from the cloud. Unfortunately all the ITOM tools available today are built to take advantage only of a centralised model.

Specialised knowledge

The cloud has created a technology rift that requires the adoption of new technologies and approaches to managing infrastructure. Traditional operations engineers, for example, are being challenged with concepts such as DevOps/infrastructure as code that require they acquire new skills and adapt to different mindsets; software engineers are being challenged with new IaaS/PaaS services that fundamentally change the approach to software architectures.

Unfortunately, only a portion of our existing talent pool has proven able and willing to make this shift, resulting in a talent crunch for the remaining resources. Even for those willing to make the transition, becoming an expert in the emerging technologies takes time and hands-on experience, which can be hard to find in many environments. Managing talent acquisition, retention and training is essential to a successful cloud strategy, but is also more complex and resource intensive than it was in pre-cloud days.

Managing TCO / ROI

Managing Total Cost of Ownership (TCO) and Return on Investment (ROI) pre-cloud was complex. Managing it in the cloud is turning out to be incredibly complex. With dozens of different services being used, per-minute billing by some providers, and a constant flow of changes occurring within your infrastructure, being able to quantify TCO and ROI requires smart software instead of analysts with spreadsheets.

In a few commands and minutes, a DevOps engineer can fundamentally alter the TCO of a project or application, or shift the profitability of a business initiative. Managing TCO and ROI requires both smart software and constant vigilance, as “cloud drift” puts your business at constant risk.
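To make that concrete, here is a minimal sketch of the underlying arithmetic, using hypothetical per-minute rates (not any provider's actual pricing) to show how a single resize shifts the monthly TCO of a workload:

```python
# Hypothetical per-minute rates in USD; real provider pricing differs.
RATES_PER_MINUTE = {
    "small": 0.0004,
    "large": 0.0016,
}

MINUTES_PER_MONTH = 60 * 24 * 30  # 43,200 minutes in a 30-day month

def monthly_cost(instance_type: str, minutes_running: int = MINUTES_PER_MONTH) -> float:
    """Cost of one always-on instance under per-minute billing."""
    return RATES_PER_MINUTE[instance_type] * minutes_running

# One resize command quadruples the run rate of this workload:
before = monthly_cost("small")
after = monthly_cost("large")
print(f"drift: ${after - before:.2f}/month")
```

Multiply this by hundreds of teams making similar changes every day, and the case for continuous, software-driven cost monitoring becomes clear.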


We all know the incredible benefits of cloud computing: agility, flexibility, elasticity, consumption-based pricing, cost, quality of service, and resilience. These benefits have been sufficiently powerful that cloud computing is in the early phases of reshaping the landscape of computing, forever changing how we engage with infrastructure. But these benefits have come at a cost: complexity. The success of your cloud strategy will be directly affected by your willingness and ability to confront and manage this complexity.

Why the future is cloud autonomics


Many great innovations have come out of cloud computing, such as on-demand infrastructure, consumption-based pricing and access to global computing resources. However, these powerful innovations have come at a high cost: complexity.

Managing cloud infrastructure today is substantially more complex than managing traditional data center infrastructure. While some of the complexity is a direct consequence of operating in a highly transient and shared computing environment, most has to do with the unintended side effects of cloud computing. For example, consumption-based pricing allows us to pay for only what we use, but requires careful monitoring and continuous optimization to avoid resource waste and cost overruns.

API-driven resources allow us to launch compute and storage with a few lines of code, but require that the resources be highly configurable to support the varied needs of their different users. Purpose-built services (e.g. Amazon S3) substantially reduce the barrier to building new types of applications, but require that we obtain the necessary expertise to manage and operate the new services.

The rise of cloud management

These early experiences with the growing complexity of managing cloud environments spawned a new generation of open source and commercial products intended to help us contain this complexity. However, the continued pace of innovation in public, private and hybrid clouds, combined with the increasing importance of multi-cloud environments, has continued to widen the gap between the complexity of our infrastructure and our ability to manage that infrastructure. It has become increasingly clear that something needs to change to support the cloud environments of tomorrow.

The genesis of autonomic computing

In 2001, IBM released a manifesto predicting a looming software complexity crisis caused by our inability to manage the rapid growth in computation, communication and information technology. The solution proposed was autonomic computing, a term which took its name from the autonomic nervous system, essentially an automatic control system for the human body. In its manifesto, IBM defined autonomic computing as self-managing systems that can configure, optimize, heal and protect themselves without human intervention.

While the paper launched a field of research that remains active today, its impact has not been widely felt in the industry, in large part because the gap IBM forecasted did not become a substantial impediment to businesses until the early mainstream adoption of cloud computing, almost a decade later. But now, with the gap between the complexity of infrastructure and our ability for software to manage this complexity continuing to widen, the IBM manifesto seems suddenly prescient.

The impact of cloud autonomics

Cloud autonomics is the use of autonomic computing to enable organisations to more effectively harness the power of cloud computing by automating management through business policies. Instead of relying on manual processes to optimise cost, usage, security and performance of cloud infrastructure, a CIO/CTO can set business policies that define how they want their infrastructure managed and allow an autonomic system to execute these policies.
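As an illustration, such a business policy can be pictured as declarative data that an autonomic system evaluates; the fields below are hypothetical, not a schema from any actual product:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BusinessPolicy:
    """A declarative governance rule (illustrative fields only)."""
    name: str
    scope: str               # which resources the rule applies to
    rule: str                # the condition the environment must satisfy
    remediation: str         # the action taken when out of compliance
    requires_approval: bool = True   # gate changes behind human sign-off

# A hypothetical data-residency policy: storage must stay in EU regions.
residency = BusinessPolicy(
    name="eu-data-residency",
    scope="storage.*",
    rule="resource.region.startswith('eu-')",
    remediation="migrate_to_region('eu-west-1')",
)
print(residency.name)
```

Expressing policies as data rather than scripts is what lets the system reason about them: approve them, audit them, and apply them consistently across clouds.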

Cloud autonomics envisions a future in which businesses manage their clouds like brokerages manage equity trading – with policy-aware automation. A business will configure an autonomic system with governance rules. The system will then continuously monitor the business’s cloud environment, and when the environment goes out of compliance, will make the necessary changes to bring it back in line. Some sample policies cloud autonomics anticipates include:


  • The automated purchase of reserved capacity in support of organizational or functional needs (e.g. AWS reservations).
  • The automated movement of an idempotent workload from one cloud provider to another to obtain cost efficiencies.
  • The automated migration of data to another region in support of business service level agreements (SLAs).
  • The automated migration and/or backup of storage from one medium to another (e.g. migrating data in AWS EBS to S3 or Glacier).
  • The automated increase of the machine type for a workload to support more efficient operation of non-horizontally scaling workloads.
  • The automated change of network and/or endpoint security to conform to business policies.
  • The automated shutdown of idle or long-running instances in support of business policies (e.g. shutdown idle development infrastructure running more than a week).
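The last policy above can be sketched as a simple predicate over instance metadata; the field names are hypothetical and stand in for whatever inventory data a real system would collect:

```python
from datetime import datetime, timedelta, timezone

IDLE_LIMIT = timedelta(weeks=1)      # business policy: dev instances idle > 1 week
CPU_IDLE_THRESHOLD = 0.05            # below 5% average CPU counts as idle

def should_shut_down(instance: dict, now: datetime) -> bool:
    """Flag idle development instances that have run for more than a week."""
    return (
        instance["env"] == "dev"
        and instance["avg_cpu"] < CPU_IDLE_THRESHOLD
        and now - instance["launched_at"] > IDLE_LIMIT
    )

now = datetime(2019, 1, 15, tzinfo=timezone.utc)
stale = {"env": "dev", "avg_cpu": 0.01,
         "launched_at": datetime(2019, 1, 1, tzinfo=timezone.utc)}
print(should_shut_down(stale, now))  # True
```

The thresholds here encode the business policy itself; tuning them, rather than rewriting scripts, is how the organisation adjusts its governance posture.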

Making cloud autonomics work for you

Cloud autonomics requires a system capable of executing a collect, analyze, decide and act loop defined in autonomic computing. Unlike autonomic computing, a cloud autonomic system relies on policy-driven optimizations instead of artificial intelligence. The system monitors one or more cloud providers and the customer infrastructure running within these cloud providers, evaluates user-defined policies, and using optimization algorithms, identifies recommendations to alter the infrastructure to be consistent with the user-defined policies. These recommendations may optionally require external approval before execution, and can seek privileges from an external system to execute approved changes.
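In outline, one iteration of that loop might look like the sketch below; every name is illustrative, and a real system would collect state via cloud-provider APIs and route approvals through an external workflow:

```python
def autonomic_iteration(collect, policies, approve, act):
    """One pass of the collect -> analyze -> decide -> act cycle."""
    state = collect()                          # collect: inventory the environment
    recommendations = []
    for policy in policies:                    # analyze: evaluate user-defined policies
        recommendations.extend(policy(state))
    executed = []
    for rec in recommendations:                # decide: optionally gate on approval
        if approve(rec):
            act(rec)                           # act: bring infrastructure into compliance
            executed.append(rec)
    return executed

# Toy run: a policy that flags untagged resources, with approvals rubber-stamped.
inventory = lambda: [{"id": "i-1", "tags": {}}, {"id": "i-2", "tags": {"owner": "dev"}}]
untagged = lambda state: [f"tag {r['id']}" for r in state if not r["tags"]]
executed = autonomic_iteration(inventory, [untagged], lambda rec: True, lambda rec: None)
print(executed)  # ['tag i-1']
```

Separating the approval step from the action step is what allows the same loop to run fully automated for low-risk policies and human-gated for high-risk ones.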

A cloud autonomic system is capable of continual and automated optimization of cloud infrastructure based on business policies, both within a single cloud environment and across multiple clouds. It promises a future in which organizations can define policies, and have these policies securely managed down to microsecond decisions, resulting in consistent implementation and optimal resource utilization.