All posts by jaychapel

What will drive 2020 in cloud governance? In a hybrid world, a solid strategy is key

Now that we are a few weeks into 2020, we should consider what lies ahead in the ever-evolving world of cloud governance. What seems certain is that when it comes to IT governance, there is still the same need to balance the benefits of agility and speed that come from decentralisation against key business risks, whether in security or cost management.

In fact, what is meant by cloud governance really depends on where you sit within an organisation. Microsoft has produced some interesting content on these different perspectives, which they have boiled down into “five disciplines.” 

From my perspective, I think mostly around cost management and cost optimisation. Obviously, if you sit within a security related function within a company or are a vendor of security tools, cloud governance means something quite different. The other factor impacting perspective is where you stand on the so-called ‘cloud journey’. If you are still working on migrating your first workloads to the cloud you will have a completely different outlook than if you have been in the cloud for the last 10 years and built your entire business model from the ground up in the public cloud.

So, now that we are in 2020, what does this all mean? The cloud world is full of predictions, but one often cited that caught my eye is that in 2020, 83% of enterprise workloads will be in the cloud with approximately half of these being in the public cloud (AWS, Azure and GCP for example).

The growth in public cloud over the last decade has been enormous, and with it has come a management task that has grown beyond a scale humans can handle on their own. Automation has been part of the cloud since its inception, but the move to automated governance has begun and will without a doubt continue to accelerate in the coming years.

This automation takes many forms: cloud guardrails that prevent the misconfigurations which let malicious attackers penetrate supposedly well-protected systems, and automated cloud cost control that schedules resources to be available when required (and off when not), or adjusts them to the right size for the needs of the workload. It's also not just the infrastructure layer that's going to be automated: new tools are emerging, including application resource management, which enables the entire application stack to be automated using software.
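To make the scheduling idea concrete, here is a minimal sketch in Python of what "off when not required" automation can look like, assuming boto3 and a hypothetical `parking-schedule` tag; the business hours and tag names are illustrative, not any particular tool's behaviour.

```python
from datetime import datetime

# Illustrative business hours (24h clock); real tools make these policies.
WORK_START, WORK_END = 8, 20

def should_be_running(now: datetime) -> bool:
    """Pure policy check: run on weekdays between WORK_START and WORK_END."""
    return now.weekday() < 5 and WORK_START <= now.hour < WORK_END

def park_instances(ec2, tag_key="parking-schedule", tag_value="office-hours"):
    """Stop or start EC2 instances carrying the (hypothetical) schedule tag.

    `ec2` is a boto3 EC2 client; the tag key and value are made up here.
    """
    filters = [{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    reservations = ec2.describe_instances(Filters=filters)["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not ids:
        return
    if should_be_running(datetime.now()):
        ec2.start_instances(InstanceIds=ids)
    else:
        ec2.stop_instances(InstanceIds=ids)
```

Run hourly from cron or a scheduled Lambda, and the policy check quietly keeps tagged non-production machines off outside working hours.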

In reality, most of what is termed ‘automation’ in the world of cloud governance in 2020 is really just recommendations, which are then manually implemented. These often still require sophisticated workflows, approval processes and signoffs from operations and business owners. Few organisations have moved to fully automated governance actions where, in essence, machines are being used to manage machines.

Just as with the move towards autonomous vehicles, where driver augmentation via adaptive cruise control, lane-centering et al is now considered almost standard on new cars, at least some level of automation in governance is becoming a standard requirement. In the last decade, being delivered a list of hundreds of recommendations was considered a vast improvement on the status quo. In the next decade, these recommendations will likely become invisible as infrastructure optimisation is managed in an ongoing and continuous manner, requiring little or no human input.

The range of governance tasks to be automated is also likely to grow. I can already observe the way cost management is increasingly being automated and our own customers are getting comfortable with more ‘set it and forget it’ automation processes based on policies they define. Teams anxious about cloud security are turning to a growing market of automation tools that cover monitoring, compliance, and threat management and remediate these issues in real-time.

There is certainly a lot of headroom when it comes to automating governance. It makes me wonder where we will be by 2030.

Interested in hearing industry leaders discuss subjects like this and share their experiences and use cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

Getting past cloud cost confusion: How to avoid the vendors’ traps and win

Cloud service providers like AWS, Azure, and Google were created to provide compute resources to save enterprises money on their infrastructure. But cloud services pricing is complicated and difficult to understand, which can often drive up bills and prevent the promised cost savings. Here are just five ways that cloud providers obscure pricing on your monthly bill.

Terminology varies

For the purpose of this article, I’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three cloud providers alone, different terms are used for just about every component of services offered.

For example, when you think of a virtual machine (VM), that’s what AWS calls an “instance,” Azure calls a “virtual machine,” and Google calls a “virtual machine instance.” If you have a scale group of these different machines, or instances, in Amazon and Google they’re called “auto-scaling” groups, whereas in Azure they’re called “scale sets.”

There’s also different terminology for their pricing models. AWS offers on-demand instances, Azure calls it “pay as you go,” and Google has “on-demand” resources that are frequently discounted through “sustained use.” You’ve also got “reserved instances” in AWS, “reserved VM instances” in Azure, and “committed use” in Google. And you have “spot instances” in AWS, which are the same as “low-priority VMs” in Azure, and “preemptible instances” in Google.

It’s hard to see what you’re spending

If you aren’t familiar with AWS, Azure, or Google Cloud’s consoles or dashboards, it can be hard to find what you’re looking for. To find specific features, you really need to dig in, but even just trying to figure out the basics of how much you’re currently spending and predicting how much you will be spending – all can be very hard to understand.

You can build your own dashboard by pulling data from their APIs, but that takes a lot of upfront effort; alternatively, you can purchase an external tool to manage overall cost and spending.
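For illustration, a roll-your-own view might start with the AWS Cost Explorer API; this Python sketch (assuming a boto3 `ce` client and valid credentials) groups a month's unblended cost by service. Note it covers one provider only, which is exactly why this approach takes effort.

```python
def totals_by_service(response: dict) -> dict:
    """Aggregate a Cost Explorer response into {service: dollars}."""
    totals = {}
    for period in response.get("ResultsByTime", []):
        for group in period.get("Groups", []):
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return totals

def fetch_month(ce, start: str, end: str) -> dict:
    """One Cost Explorer call, grouped by service (boto3 'ce' client).

    Dates are ISO strings, e.g. "2020-01-01" to "2020-02-01".
    """
    return ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
```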

They change the pricing frequently

Cloud services pricing changes quite often. So far prices have been trending downward, so things have been getting cheaper over time due to factors like competition and the increased utilisation of the providers' data centres. However, don't jump to the conclusion that prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon alone has changed its prices more than 60 times since launch, making it hard for users to plan a long-term approach. And for instances that have been deployed for a long time, prices aren't displayed in a way that is easy to track, so you may not even realise there's been a price change if you've been running the same instances on a consistent basis.

Multitude of variables

Operating systems, compute, network, memory, and disk space are all factors that go into the pricing and sizing of these instances. Each virtual machine instance also falls into a category: general purpose, compute optimised, memory optimised, storage optimised, and various other types.

Then, within each of these different instance types, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family, in Azure they’re called the “A” family. On top of that, there are different generations within each of those families, so in AWS there’s t2, t3, m2, m3, m4, and within each of those processor families, different sizes (small, medium, large, and extra-large). So, there are lots of different options available – and lots of confusion, too. 

It’s based on what you provision – not what you use

Cloud providers can charge on a per-hour, per-minute, or per-second basis. If you're used to the on-prem model, where you just deploy things and leave them running 24/7, this kind of pricing model may be unfamiliar. When you move to the cloud's on-demand pricing models, everything is based on the amount of time you provision it for.

When you’re charged per hour, it might seem like 6 cents per hour is not that much. But after running instances for 730 hours in a month, it turns out to be a lot of money. This leads to another sub-point: the bill you get at the end of the month doesn’t typically come until 5 days after the month ends, and it’s not until that point that you get to see what you’ve used.

As you’re using instances (or VMs) during the time you need them, you don’t really think about turning them off or even losing servers. I’ve had customers who have servers in different regions, or on different accounts that don’t get checked regularly, and they didn’t even realise they’ve been running all this time, charging up bill after bill.

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructures. And they are great options if – and I emphasise, if – you know how to use them. To optimise your cloud environment and save money on costs, here are a few suggestions:

  • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool
  • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed
  • Make sure you're not running anything you shouldn't be. Shut things down when you don't need them, like dev and test instances on nights and weekends (my company, ParkMyCloud, focuses on this type of optimisation along with right-sizing)
  • Review regularly to plan out usage and schedules as much as you can in advance
  • Put governance measures in place so that users can only access certain features, regions, and limits within the environment

Cloud services pricing is tricky, complicated, and hard to understand. Don't let this confusion affect your monthly cloud bill.

Tighten your belts: The four cloud resources most likely to eat up your budget

For the past several years, I have been warning companies to watch out for idle cloud resources. This often means instances purchased “on demand” that companies use for non-production purposes like development, testing, QA, staging, etc. These resources can be “parked” when they’re not being used (such as on nights and weekends). Of course, this results in great savings. But this doesn’t address the issue of how idle cloud resources extend beyond your typical virtual machine.

Why idle cloud resources are a problem

If you think about it, the problem is not very complicated. When a resource is idle, you’re paying your cloud provider for something you’re not actually using. And there’s no reason to pay for something you are not actually using.

Most non-production resources can be parked about 65% of the time, that is, parked 12 hours per day and all day on weekends. Many of the companies I talk to are paying their cloud providers an average list price of $220 per month for their instances. If you’re currently paying $220 per month for an instance and leaving it running all the time, that means you’re wasting $143 per instance per month.

Maybe that doesn’t sound like much. But if that’s the case for 10 instances, you’re wasting $1,430 per month. One hundred instances? You’re up to a bill of $14,300 for time you’re not using. And that’s just a simple micro example. At a macro level that’s literally billions of dollars in wasted cloud spend.

So what kinds of resources are typically left idle, consuming your budget? Let’s dig into that, looking at the big three cloud providers — Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Four types of idle cloud resources

  • On-demand Instances/VMs: This is the core of the conversation, and what I have addressed above. On demand resources – and their associated scale groups – are frequently left running when they’re not being used, especially those used for non-production purposes.
  • Relational databases: There's no doubt that databases are frequently left running when not needed as well, in similar circumstances to the on-demand resources. The problem is whether you can park them to cut back on wasted spend. AWS allows you to park certain types of its RDS resources; however, you cannot park the comparable database services in Azure (SQL Database) or GCP (Cloud SQL). In this case, you should review your database infrastructure regularly and terminate anything unnecessary – or change to a smaller size if possible.
  • Load balancers: AWS Elastic Load Balancers (ELB) cannot be stopped (or parked), so to avoid being billed you need to remove them. The same can be said for Azure Load Balancer and GCP load balancers. Alerts can be set up in CloudWatch/Azure Monitor/Google Stackdriver for load balancers with no instances behind them, so be sure to make use of those alerts.
  • Containers: Optimizing container use is a project of its own, but there’s no doubt that container services can be a source of waste. In fact, we are evaluating the ability for my company, ParkMyCloud, to park container services including ECS and EKS from AWS, ACS and AKS from Azure, and GKE from GCP, and the ability to prune and park the underlying hosts. In the meantime, you’ll want to regularly review the usage of your containers and the utilization of the infrastructure, especially in non-production environments.
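As one example of hunting for the load-balancer waste described above, this Python sketch (assuming a boto3 classic `elb` client) flags load balancers with no registered instances; the names follow the AWS API, but treat it as a starting point rather than a finished audit tool.

```python
def idle_load_balancers(descriptions: list) -> list:
    """Return names of load balancers with no registered instances."""
    return [
        lb["LoadBalancerName"]
        for lb in descriptions
        if not lb.get("Instances")
    ]

def find_idle_elbs(elb) -> list:
    """List classic ELBs with nothing behind them (boto3 'elb' client)."""
    pages = elb.get_paginator("describe_load_balancers").paginate()
    descriptions = [
        lb for page in pages for lb in page["LoadBalancerDescriptions"]
    ]
    return idle_load_balancers(descriptions)
```

Anything this returns is billing you by the hour while doing nothing; review and remove.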


Cloud waste is a billion-dollar problem facing most businesses today. But the solution is quite simple. Make sure you’re turning off idle cloud resources in your environment. Do this by parking those resources that can be stopped and eliminating those that can’t.

Assessing the key reasons behind a multi-cloud strategy

Everyone who follows cloud computing agrees that we are starting to see more businesses utilise a multi-cloud strategy. The question this raises is: why is a multi-cloud strategy important from a functional standpoint, and why are enterprises deploying this strategy?

To answer this, let’s define “multi-cloud” since it means different things to different people. I personally like this one, as seen on TechTarget:

“the concomitant use of two or more cloud services to minimise the risk of widespread data loss or downtime due to a localised component failure in a cloud computing environment… a multi-cloud strategy can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructures to meet the needs of diverse partners and customers”

From my conversations with some cloud gurus and our customers, a multi-cloud strategy boils down to:

  • Risk mitigation – low priority
  • Managing vendor lock-in (price protection) – medium priority
  • Optimising where you place your workloads – high priority

Let’s look at each one.

Risk mitigation 

Looking at our own infrastructure at ParkMyCloud, we use AWS, including services such as RDS, Route 53, SNS and SES. In a risk mitigation exercise, would we look for the equivalent services in Azure and go through the technical work of mapping a 1:1 fit and building a hot failover there? Or would we simply use a different AWS region – which takes fewer resources and less time?

You don’t actually need multi-cloud to do hot failovers, as you can instead use different regions within a single cloud provider. But that’s betting on the fact that those regions won’t go down simultaneously. In our case we would have major problems if multiple AWS regions went down simultaneously, but if that happens we certainly won’t be the only one in that boat.

Furthermore, a hot failover from one cloud provider to another (say, between AWS and Google) would require a degree of cooperation between the cloud providers, and a level of infrastructure and application integration, that is not widely available today.

Ultimately, risk mitigation just isn’t the most significant driver for multi-cloud.

Vendor lock-in

What happens when your cloud provider changes their pricing? Or your CIO says we will never be beholden to one IT infrastructure vendor, like Cisco on the network, or HP in the data centre? In that case, you lose your negotiating leverage on price and support.

On the other hand, look at Salesforce. How many enterprises use multiple CRMs?

Do you then have to design and build your applications to undertake a multi-cloud strategy from the get-go, so that transitioning everything to a different cloud provider will be a relatively simple undertaking? The complexity of moving your applications across clouds over a couple of months is nothing compared to the complexity of doing a real-time hot failover when your service is down. For enterprises this might be doable, given enough resources and time. Frankly, we don’t see much of this.

Instead, I see customers using a multi-cloud strategy to design and build applications in the clouds best suited for optimising their applications. By the way — you can then use this leverage to help prevent vendor lock-in.

Workload optimisation

Hot failovers may come to mind first when considering why you would want to go multi-cloud, but what about normal operations, when your infrastructure is running smoothly? Having access to multiple cloud providers lets your engineers pick the one that is most appropriate for the workload they want to deploy. By avoiding an "all or nothing" approach, IT leaders gain greater control over their different cloud services. They can pick and choose the product, service or platform that best fits their requirements, in terms of time-to-market or cost effectiveness – then integrate those services. This approach may also help in avoiding problems that arise when a single provider runs into trouble.

A multi-cloud strategy addresses several inter-related problems. It's not just a technical avenue for hot failover. It includes vendor relationship management and the ability to optimise your workloads based on the strengths of your teams and each CSP's infrastructure.

By the way — when you deploy your multi-cloud strategy, make sure you have a management plan in place upfront. Too often, I hear from companies who deploy on multiple clouds but don't have a way to see or compare them in one place. So, make sure you have a multi-cloud dashboard that provides visibility spanning your cloud providers, their locations and your resources, for proper governance and control. This will help you get the most benefit out of a multi-cloud infrastructure.

Why your CFO is telling you to cut Azure costs – and what you can do about it

The rapid growth of Azure is certainly exciting for customers who have bought into the Microsoft stack. This momentum means quickly evolving product lines, balanced pricing and improved cloud services. However, dominant competitor Amazon Web Services (AWS) has had more time to feel and subsequently address growing pains that Azure users are now starting to feel. This means that AWS users have more options available to them to address certain concerns that come with using public cloud. Chief among these concerns is managing costs.

Azure spend growing

Why is this a pressing issue? As more and more companies adopt Microsoft Azure as their public cloud, the need to reduce Azure costs becomes ever more important. As IT, development and operations grow their usage of Azure cloud assets, finance is catching up. Your CFO has seen the bill and is likely thinking something like, “I thought cloud was supposed to be cheaper. So why is this bill so high?”

It’s no secret that overall Azure spend is rising rapidly. Azure is the fastest-growing cloud provider, from the standpoint of both adoption by new customers and growth within accounts of existing customers. Many users of other clouds, such as AWS, are also adopting Azure as a secondary option. But as one executive recently told me: “As we started to dive into it, we found that a large part of our spend is simply on waste. We didn’t have visibility and policies in place. Our developers aren’t properly cleaning up after themselves and resources aren’t being tracked. So it’s easy for servers to be left running. It’s something we want to change, but it takes time and energy to do that.”

Wasted spend on Microsoft Azure

How much are Azure users worrying about managing their cloud costs? According to RightScale’s 2017 State of the Cloud report, managing costs is a huge, top-of-mind challenge. RightScale found that customers consistently underestimate how much they are wasting. So, when we’re looking at Microsoft Azure specifically, how much spend is wasted?

  • The public cloud IaaS market is $23 billion
  • 12% of that IaaS market is Microsoft Azure, or $2.76 billion
  • 44% of that is spent on non-production resources – about $1.21 billion
  • Non-production resources are only needed for an average of 24% of the work week, which means up to $900,000,000 of this spend is completely wasted.
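Those bullet-point figures chain together as follows:

```python
iaas_market = 23e9                        # public cloud IaaS market ($)
azure_share = 0.12 * iaas_market          # ~$2.76bn of it on Microsoft Azure
non_production = 0.44 * azure_share       # ~$1.21bn on non-production resources
needed_fraction = 0.24                    # share of the work week actually needed
wasted = non_production * (1 - needed_fraction)   # ~$0.92bn, i.e. ~$900m wasted
```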

And that’s only a portion of the waste. It doesn’t even address oversized resources, orphaned volume storage and other culprits. Many of these problems are well-addressed in AWS, but the Azure support market is still catching up.

Is it any wonder that IT, development, and operations teams are being tapped by CFOs left and right to reduce costs as the Azure bill becomes a growing line item in the budget?

Control Azure costs before your CFO makes you

The good news? There are some simple ways to get started with reducing costs. Here are a few starting points:

  • Control your view – the first step toward change is awareness, so use an Azure dashboard to view all your resources in one consolidated place. I've heard from end users who, upon getting a single view of all their resources in their dashboard, found virtual machines (VMs) they didn't even know were running.
  • Control your processes – talk with your team and set clear guidelines around provisioning appropriately sized VMs, stopping non-production VMs when they are not needed, and governing existing VMs (for example, whose responsibility is it to make sure each team is only running the resources they actually need?)
  • Turn the lights off – schedule the "lights to turn off" when you're not home. In other words, schedule non-production resources to turn off when no one is using them – turning them off nights and weekends can save 65% of the cost of the resource.
  • “Right size” your VMs – make sure you aren’t choosing larger capacity/memory/CPU than you need.
  • Set a spending limit on your Azure account – you can do a hard cutoff that will turn off your VMs once you hit the limit, or simply sign up to receive email alerts when you approach or hit the spending limit.

The growth of Azure is a tremendous development for many companies. However, the problem of cloud waste must be dealt with before it impacts the bottom line. So, automate your operations today and make your CFO happy.

Why it’s okay for MSPs to help customers stop wasting money on cloud resources


Gartner recently reported that by 2020, the “cloud shift” will affect more than $1 trillion in IT spending. The shift comes from the confluence of IT spending on enterprise software, data center systems, and IT services all moving to the cloud.

With this enormous shift and change of practices comes a financial risk that is very real: organisations are spending money on services they are not actually using. In other words, wasting money.

The size of waste

How much is actually being wasted? Let’s take a look at the cloud market as a whole. According to the Gartner study, the size of the cloud market is about $734 billion. Of that, $203.9 billion is spent on public cloud. Public cloud spend is spread across a variety of application services, management and security services, and more – all of which have their own sources of waste.

Within the $22.4 billion spent on infrastructure as a service (IaaS), about 2/3 of spending is on compute resources (rather than database or storage). But roughly half of these compute resources are used for non-production purposes: development, staging, testing, QA, and other behind the scenes work. The majority of servers used for these functions do not need to run 24 hours a day, 7 days a week. In fact, they’re generally only needed for a 40-hour work week at most (even this assumes maximum efficiency with developers accessing these servers during their entire workdays).

Since most compute infrastructure is sold by the hour, this means that for the other 128 hours of the week, companies are paying for time they’re not using.
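The 128-hour figure comes straight from the weekly arithmetic:

```python
hours_per_week = 24 * 7                        # 168
working_hours = 40                             # generous non-production need
idle_hours = hours_per_week - working_hours    # 128 hours paid for but unused
idle_fraction = idle_hours / hours_per_week    # ~76% of the week
```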

Should managed service providers (MSPs) help in reducing this waste? If so, how?

Cost reduction as an element of total value

The concept of an MSP helping customers save money on cloud services is a tricky one. If customers purchase cloud services through the MSP, won’t reducing the amount the customer spends also reduce the MSP’s revenue?  Besides, is “saving money” really an outcome customers seek from their service providers?

I’ve been grappling with some of these questions lately as I’ve considered potential partnerships with MSPs and cloud consulting firms. When I talk to them, I ask questions about their clients’ key priorities and how they seek to deliver additional value.

I pay particular attention to how they prioritise helping their customers save money. It appears that while cost reduction for clients is seen as important, it is often framed as a way for users to get more bang for the buck – not as a reduction in total spend. MSPs report that their clients typically have annual budgets that can be spent across all cloud or IT services. Therefore, staying within budget across all services is the primary goal. But any dollar saved on cloud compute services can then be put to work in other areas of the business. This keeps the end user satisfied by giving them more value per dollar. The MSPs are satisfied by providing more, and stickier, services to their customers.

In addition to cost savings, MSPs want to deliver productivity gains to clients. This can be done by directly implementing solutions on clients’ behalf. Increasingly, however, MSPs prefer to put tools in place that their clients can then use to optimise their own cloud infrastructure. Many small businesses don’t have the technical expertise necessary to migrate their technology infrastructure to the cloud, though once they are up and running, they are often able to self-manage parts of their own infrastructure.

As one MSP recently told me, “we could probably write custom scripts for our customer to turn things on and off, but that really doesn’t scale. To be honest, I think they would prefer controlling their own environment.”

The key to MSP success in the cloud

As the role of the traditional MSP continues to evolve, the most successful providers increasingly seem to understand the following:

  • Helping customers optimise their cloud spend is very important
  • Providing customers with self-service tools to better self-manage their own cloud environments is key to sticky customers.

Although there will be many goals against which MSPs and cloud consultants are measured, it seems clear that reducing and optimising cloud spend, and empowering customers with the right tools to manage the cloud, are two sides of the same coin – and a real key for MSPs to succeed.