Not everybody wants to rule the world: Why HPE isn’t worried about catching up to Dell


Adam Shepherd

10 Jul, 2018

Looking at the figures from analysts like Gartner and IDC, one could be forgiven for thinking that HPE is in a spot of trouble; according to the latest reports, the company is trailing behind its main rival Dell Technologies in revenues and market share across both servers and storage.

You would imagine HPE would be concerned about this; its market share has shrunk over the past year whilst Dell’s has expanded, and this trend doesn’t show any immediate signs of stopping. Dell has gone from strength to strength since it swallowed EMC in 2016, while the last few years have been turbulent for HPE, to say the least.

However, the company appears to be weathering the storm. New CEO Antonio Neri seems like a strong and confident leader, its recent financial results have been showing improvement, and recent announcements about its intentions to simplify its channel programme have met with approval from partners.

Now that HPE has regained some stability, surely it’s looking to retake its position at the head of the infrastructure market? Yet, according to Mark Linesch, vice president of strategy for HPE, the company isn’t remotely concerned with whether or not it holds the market crown.

“Yeah, Dell’s got a couple of points of share according to Gartner – big deal,” he tells Cloud Pro.

“We’re not worried about Dell in servers at all. They’re a tough competitor, and we take them very seriously, but no – why would we worry about Dell getting a couple of points on us in servers? Who cares?”

Instead of chasing rankings, he says, the company is focusing on delivering maximum value and satisfaction to its customers, trying to help them solve their business problems by building the best infrastructure it possibly can.

This might sound like excuses from a company hoping to save face after losing the top spot that it held for so many years, and that may well be the case. However, downplaying its traditional infrastructure to a certain extent may actually be a sound strategic move for the vendor.

“I think it’s important that at this time of its existence – a new CEO, spin outs complete, et cetera – that HPE demonstrate to the market that it can set realistic goals and achieve them, or over-achieve, even,” says 451 Research co-founder William Fellows. “I don’t think that needs to be about catching Dell.”

On the other hand, Forrester senior analyst Naveen Chhabra warns that Dell is one competitor that shouldn’t be underestimated.

“While there is no doubt that HPE is gaining customers and market share, it absolutely needs to keep an eye on the market momentum,” he says. “Dell has forged a great number of technology partnerships, has a great ecosystem internally and externally.”

“Dell has its own share of issues, but nothing notable enough that HPE can afford to stop worrying about it,” he adds. “Dell has a formidable family of technology offerings across its multitude of businesses.”

A shift to the ‘Intelligent Edge’

Both experts agree, however, that the biggest imminent threat to HPE is not Dell – or any other vendor, for that matter. Instead, it’s the industry’s growing shift towards the cloud.

As cloud infrastructure becomes more robust, more affordable and more popular, HPE needs to change up its strategy. To borrow a phrase from its sister company, it needs to reinvent itself.

HPE is doing this, counterintuitively, by embracing the cloud – or at least certain aspects of it. In particular, it’s adopting cloud-like service models for its on-premise infrastructure, offering consumption-based pricing for its hardware customers through HPE GreenLake. Using its traditional infrastructure business as a bedrock, the company is hoping that it can build long-term services and subscription-based revenue models that will sustain it going forward.

In addition to this new cloud-style go-to-market model, HPE is also putting considerable weight behind what it calls ‘the intelligent edge’ – the mish-mash of connected devices, peripherals, networking hardware and industrial equipment that comprises everything that’s not in the cloud or in the data centre. The company is ploughing $4 billion into the intelligent edge over the next four years, and has indicated that it’s a significant strategic priority.

According to Chhabra, while this is a smart play for the company, it’s not without its risks, and he cautions that the market still isn’t totally mature.

“There is no doubt that the edge business is growing, and hence almost all the large infrastructure vendors are putting their bets on ‘expected developments’ on the intelligent edge,” he says. “However, we still need that to mature to levels where their independent and collective losses to the adoption of public cloud can be offset.”

“In my humble and honest opinion, the messaging and focus on ‘the intelligent edge’ is directional and still at corporate levels. I don’t see concrete evidence of the developments – like technology and go-to-market partnerships, solution development, et cetera – that the infrastructure vendors are making. These developments are important and critical to ensure they are either ahead of the market, or take the leading position and create a niche for themselves.”

It’s true that HPE is no longer the market leader in server shipments, and that isn’t set to change any time soon – but that might not matter. Market trends suggest that as the traditional on-prem infrastructure business is increasingly eaten by the cloud, pivoting to emerging technologies is going to be the only way that companies like HPE are going to remain relevant.

CEO Antonio Neri says he’s playing the long game with his strategy, and that makes sense. Duking it out with Dell over market share may have been the way things worked with the old HPE, but that’s not the game any more. The two companies may well end up competing on the battlefield of edge computing – Dell has made significant investments in the area itself – but when it comes to old-school infrastructure, HPE may have to lose the battle in order to win the war.

Image courtesy of HPE

How to Turn Off Notifications on a Mac

With macOS®, app notifications became an integral part of our lives. We get notified about upcoming events, scheduled meetings, emails, Facebook messages, birthdays, and websites we accidentally subscribed to. Don’t get me wrong, notifications are extremely useful and help optimize workflow. But what if sometimes we need quiet time to focus on important tasks, avoid […]


Nutanix Named Platinum Sponsor of CloudEXPO New York

DXWorldEXPO LLC announced today that Nutanix has been named “Platinum Sponsor” of CloudEXPO | DevOpsSUMMIT | DXWorldEXPO New York, which will take place November 12-13, 2018 in New York City. Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that power their business. The Nutanix Enterprise Cloud Platform blends web-scale engineering and consumer-grade design to natively converge server, storage, virtualization and networking into a resilient, software-defined solution with rich machine intelligence.


Container usage among developers reaching tipping point, says DigitalOcean

If your organisation is not allowing its developers to use container technologies, then you will very soon be in the minority, according to the latest analysis from DigitalOcean.

The company, in its latest quarterly Currents report assessing developer trends in cloud computing, found 49% of devs are now using containers in some form. Of that number, JavaScript (57%) was the most popular language used with them, ahead of Python (46%), PHP (36%), and Go (28%). Three in five (60%) said they were using containers both in development and testing and in production, while more than three quarters (78%) of those who aren’t using containers today still plan to adopt them.

Scalability is the name of the game for container adoption according to the report, with 39% of respondents citing it as the key aspect. Simpler software testing (24%), quicker software testing (23%) and avoiding vendor lock-in (10%) were also cited.

Not altogether surprisingly, Kubernetes is the biggest game in town: 42% of those polled say they use it, compared with 35% for its nearest rival, Docker Swarm. Red Hat’s OpenShift (5%), Apache Mesos (3%), and CoreOS Tectonic (1%) barely registered. Yet smaller companies are more likely to be Docker houses – among those with five employees or fewer, Docker Swarm (41%) won out over Kubernetes (31%).

Compared with the relatively strong consensus on where container technologies sit, developers’ knowledge of and enthusiasm for serverless computing were more divided. Only half of those polled said they had a strong understanding of it, with four in five of those who don’t (81%) saying they plan to do further research this year. Around one in three – 35% in the US, 32% in the UK – say they have deployed applications in a serverless environment over the past year, with AWS Lambda (58%) the most popular platform, ahead of Google Cloud Functions (23%).

With this in mind, what should prospective employees and their employers be looking for? 39% of devs polled said their top considerations for new jobs were a competitive salary and opportunity for internal growth, while the company product (17%) and freedom to use particular technologies (23%) were lower down on the list. The research also found that many developers are still going down the traditional college route – more than half (51%) said they went to college, compared with only 6% who attended a coding bootcamp.

The report polled more than 4,800 respondents; more than half (55%) said they were developers, 13% worked in DevOps, and students and managers accounted for 10% each.


Box CEO Aaron Levie says Facebook data scandals could undermine trust in Silicon Valley


Adam Shepherd

9 Jul, 2018

Box CEO Aaron Levie has warned that the actions of Google and Facebook are a “contagion” which could result in major organisations losing trust in Silicon Valley as a whole.

Speaking to Recode‘s Kara Swisher, he said that Box – and, by extension, other enterprise-focused companies – could find themselves suffering if the actions of better-known tech firms cast doubt over the motivations of Silicon Valley at large.

“The worst-case scenario for us is that Silicon Valley gets so far behind on these issues that we just can’t be trusted as an industry. And then you start to have either companies from other countries,” he said, “or you have just completely different approaches and architectures to technology.”

Even though enterprise-focused tech companies might think they are insulated from the current wave of data-harvesting and privacy scandals, by virtue of the fact that they don’t handle public data in the same way, the blow-back from those scandals could result in a loss of confidence throughout the market.

“We rely on the Fortune 500 trusting Silicon Valley’s technology, to some extent, for our success,” Levie said. “When you see that these tools can be manipulated or they’re being used in more harmful ways, or regulators are stamping them down, then that impacts anybody, whether you’re consumer or enterprise.”

As a company, Box itself isn’t worried by the looming threat of increased regulation – something that has been mooted as a potential way to curb the excesses of Facebook and Google. By virtue of the fact that many of Box’s customers are in heavily regulated industries like banking and life sciences, the company is “almost by proxy regulated”, Levie says.

The biggest barrier to regulating the largest tech companies, he argued, is that they’re so broad and diffuse that it’s difficult to apply single regulations to them. Instead, what’s more likely according to Levie is the application of separate pieces of legislation regarding individual issues, such as campaign financing, self-driving vehicles and AI use within healthcare.

In order to successfully achieve this, he said, government and regulatory bodies should be staffed with “super-savvy” individuals who understand the industry and the tech which they will be dealing with.

“We have an extremely strong vested interest in ensuring that Silicon Valley and DC are operating effectively,” he said. “We care that we get through this mess, and that Google resolves their issues, and Facebook resolves their issues, and so on.”

Image credit: Stephen Brashear

Digital dexterity: Exploring the new ways of work

As more progressive business leaders choose to support mobile, team-oriented and non-routine ways of working, an increasing number of them are looking for assistance in adopting digital workplace technology. But why are they searching for actionable information and qualified guidance?

According to the findings from the latest Gartner survey, only 7 to 18 percent of organizations possess the 'digital dexterity' to adopt New Ways of Work (NWOW) solutions — such as virtual collaboration and a mobile workplace. Furthermore, it's already apparent that forcing employees to accept rigid and inflexible workplace mandates is a recipe for poor performance.

Ongoing change reshapes the workforce

According to the Gartner assessment, an organization with high digital dexterity has employees who have the cognitive ability and social practice to leverage and manipulate media, information and technology in unique and highly innovative ways.

By country, organizations exhibiting the highest digital dexterity were those in the U.S. (18.2 percent of respondents), followed by those in Germany (17.6 percent) and then the UK (17.1 percent).

"Solutions targeting new ways of work are tapping into a high-growth area, but finding the right organizations ready to exploit these technologies is challenging," said Craig Roth, research vice president at Gartner.

In parallel, the survey found that workers in the United States, Germany and UK have, on average, higher digital dexterity than those in France, Singapore and Japan.

Workers in the top three countries were much more open to working from anywhere, outside a traditional office setting, and were keener to use consumer software and websites at work. Some of the difference in workers' digital dexterity is driven by cultural factors, as the large gaps between countries show.

For example, population density affects the ability to work outside the office, and countries with stronger adherence to organizational hierarchy showed less affinity for social media tools that drive social engagement.

The youngest workers are the most inclined to adopt digital workplace products and services. They have a positive view of tech in the workplace and a strong affinity for working in non-office environments. Nevertheless, they reported the lowest levels of agreement with the statement that 'work is best accomplished in teams'.

The survey also showed that the oldest workers are the second most likely adopters of NWOW. Those aged 55 to 74 have the highest opinion of teamwork, have progressed to a position where there is little routine work, and have the most favorable view of all age groups of internal social networking technology.

Embracing vs. resisting workplace change

In contrast, workers aged 35 to 44 were at the low point of the adoption dip, potentially feeling fatigued with the routines of life as middle age approaches. They were most likely to report that their jobs are routine, have the dimmest view of how technology can help their work, and are the least interested in mobile work. Moreover, larger organizations on average had higher digital dexterity than smaller ones.

"Embracing dynamic work styles, devices, work locations and team structures can transform a business and its relationship to its staff. But digital dexterity doesn't come cheap," said Mr. Roth. "It takes investment in workplace design, mobile devices and software, and larger organizations find it easier to make this investment."

Leaders that insist on a 'one-size-fits-all' approach to NWOW are doomed to fail.

Why you need to work through the growing pains to make the most out of multi-cloud

It’s no surprise that the cloud is heavily used in the enterprise for everything from workloads to failover to DevOps. According to the 2018 State of the Cloud Survey, multi-cloud adoption has arrived: 81 percent of respondents currently use a multi-cloud strategy, and organizations leverage almost five clouds on average. To stay ahead of the curve, CIOs need to implement a multi-cloud strategy sooner rather than later.

We can and should learn from our past mistakes. When public cloud first came into existence, it was seen as suitable only for small-scale enterprises, plagued by a multitude of issues ranging from cumbersome IT workflows and budgeting nightmares to widespread security concerns. Nowadays, it is safe to say that if you are not using public cloud in some form, you are years behind your competitors.

But as the name implies, a multi-cloud strategy involves dealing with multiple cloud vendors. As with every digital solution, cloud providers want to completely immerse you in their ecosystem, so that it’s easier to upsell you their value-added services. This introduces the IT headache of learning and managing cloud vendor-specific tools and techniques. Solutions like Red Hat OpenShift and Pivotal Cloud Foundry help you avoid these traps by providing a cloud management layer, so that your IT teams are responsible only for building and running applications.

Problems to address

Even though adopting a multi-cloud strategy is a smart move for most organizations, there are still problems and growing pains that need to be addressed. And fast.

  • Security: This is one of the biggest concerns when using any public cloud solution, and there is no magic bullet for it. Security provided by public cloud vendors is generally robust for cloud-only workloads, but a significant amount of planning is necessary to secure data that flows from on-premise infrastructure to public clouds over the highly vulnerable public network.
  • Cost: Controlling costs has been an issue since public cloud came into existence, and it’s easy to bleed budget if cloud consumption is not monitored (a minimal monitoring sketch follows this list). Multi-cloud adoption gives you an opportunity to commoditize cloud resources and transition between public cloud vendors based on your needs. This demands a significant shift in the way enterprise IT works, because it requires moving from vendor-enforced workflows to vendor-agnostic infrastructure.
  • Lack of true cross-cloud solutions: Most solutions providing multi-cloud support use makeshift approaches that enable data and application movement from one cloud to another. There is a dearth of products that can provide true cross-cloud fabric so that end-users can visualize and utilize multiple clouds as a single platform instead of numerous isolated entities.
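The cost point above is concrete enough to sketch. The snippet below is a minimal, single-provider illustration of consumption monitoring, assuming AWS and boto3’s Cost Explorer client; the per-service budget threshold, the dates, and the idea of printing an alert are invented for the example rather than drawn from this article.

```python
# A minimal sketch of per-service cloud spend monitoring on one provider (AWS),
# using boto3's Cost Explorer client. The $500 threshold is a made-up example.
import boto3

BUDGET_PER_SERVICE_USD = 500.0  # hypothetical alert threshold

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-06-01", "End": "2018-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > BUDGET_PER_SERVICE_USD:
        print(f"OVER BUDGET: {service} spent ${cost:,.2f} this month")
```

In a genuinely multi-cloud setup this loop would be repeated per provider, which is exactly the vendor-agnostic tooling burden the list above describes.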

Multi-cloud deployments are still at a relatively nascent stage, limited to applications and workloads that are not business-critical or mission-critical. Some enterprises are forced to implement a multi-cloud strategy because they depend on a line-up of solutions that is not entirely supported by a single vendor. For instance, VMware support only exists for AWS, and cloud support for Microsoft Office applications is most cost-effective through Azure.

Making the most of multi-cloud

Looking ahead, there are various strategic justifications for a multi-cloud deployment. The business environment is global and highly dynamic, with requirements and relationships changing constantly. Not to mention that the challenges of privacy and security threats plus related regulations are relentless. Data is multiplying exponentially, IoT is growing rapidly, and artificial intelligence and machine learning projects are sure to test the limits of cloud compute capabilities. It’s already true that traditional data storage systems cannot meet the dynamic demands of distributed applications and cloud deployments, and storage designed with distributed systems in mind makes it possible to quickly provision application-specific, policy-based data services.

To respond with agility to emerging technology standards and challenges, multi-cloud adoption strategies must focus first on the standardization of cloud orchestration and commoditization of resources. As digital business models and enterprise systems grow in scale and complexity, automated orchestration capabilities will be essential to maintaining control and getting the most out of any cloud investments. Commoditization of hardware, managed services, cloud platforms, and even security solutions is making infrastructure components more turnkey and interoperable — but only if you have an overarching layer of control and visibility.

It’s especially imperative that enterprise-level multi-cloud management include automated orchestration tools for replicating data across multiple sites (data center, private cloud, and public cloud). Use cases like analytics, test and development, unstructured data and secondary storage demonstrate the necessity of finding ways to reliably and easily move workloads from cloud to cloud.
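As a hedged illustration of what moving data from cloud to cloud looks like at its most basic, here is a sketch that copies a single object from AWS S3 to Google Cloud Storage using boto3 and the google-cloud-storage library. The bucket and object names are placeholders, and real replication tooling would add retries, checksums, large-object streaming, and scheduling around this core.

```python
# A minimal sketch of moving one object from AWS S3 to Google Cloud Storage.
# Bucket and object names are placeholders; production replication tooling
# would wrap this core in retries, integrity checks, and scheduling.
import boto3
from google.cloud import storage

SRC_BUCKET = "analytics-src"          # hypothetical S3 bucket
DEST_BUCKET = "analytics-replica"     # hypothetical GCS bucket
KEY = "events/2018-07-10.json"

# Download from S3 to a local temp file...
s3 = boto3.client("s3")
s3.download_file(SRC_BUCKET, KEY, "/tmp/object.tmp")

# ...then upload the same bytes to GCS.
gcs = storage.Client()
gcs.bucket(DEST_BUCKET).blob(KEY).upload_from_filename("/tmp/object.tmp")
print(f"Replicated {KEY} from s3://{SRC_BUCKET} to gs://{DEST_BUCKET}")
```

An orchestration layer of the kind described above is essentially this operation repeated at scale, under policy, across data center, private cloud, and public cloud targets.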

The promise of a cloud-agnostic infrastructure is to make data easier to access and more affordable to store long-term by putting different types of data into different clouds for their various benefits and cost structures. Multi-cloud deployments strengthen business continuity and resilience, empower DevOps development and cloud-native applications, and optimize regulatory compliance and service delivery for global organizations.

The trajectory and dynamism of cloud technology reflects the nature of the modern world — explosive growth and sudden contractions, the surge and retreat of markets, the fluidity of consumer trends, and the nonstop inventiveness of a young, diverse, and international citizenry. To match this energy and catch its currents, you’ll need the flexibility, control, and wide-open potential of multi-cloud. Up, up, and away we go.

If you’re adopting the cloud in your organisation, you don’t have to start from scratch

Your organization recently launched its cloud adoption journey. Executives are supportive, and grassroots enthusiasts add energy to your effort. The whole initiative launched with proper planning and focus, and your task force is working well together.

Despite the strong start, many days feel like a slog. Progress is slow, and teams regularly surface new hurdles that they must overcome before the organization can bring its first critical workloads online in the public cloud. Pressure is mounting for a quick win.

Sound familiar?

The transition to the cloud can be difficult, and one major hindrance is that organizations try to bite off too much complexity as they begin their cloud adoption journey. 'Right by default' cloud solutions can accelerate your progress.

Reinventing the wheel

Organizations understand their unique needs and vulnerabilities, but at times they forget their commonalities with others. As a result, many embrace a notion that their cloud adoption journey must start from scratch procedurally and technically. There is some truth in this narrative — cloud adoption involves significant change, and organizations may feel at times like they are rebuilding core IT operations from scratch. However, they should not actually start from scratch on the cloud adoption journey — teams will lose the time and energy to transform and innovate as they focus on rebuilding basic IT processes and foundational technology components. In fact, starting from scratch works against the most common objectives of cloud adoption — it delays operational excellence, return on investment, and innovation.

Instead of starting from scratch, organizations should focus on their commonalities with others and accelerate their cloud adoption journey with proven processes and tools. Today, many cloud adoption tool kits contain repeatable patterns, templates, and processes that can be leveraged to accelerate cloud adoption. These tool kits reduce the burden and complexity of cloud adoption and start organizations down a path toward deploying solutions that are right by default.

Right by default cloud solutions are tool kits that allow companies to adopt proven processes, technology components, and security best practices. These solutions cover the basics — they establish a cloud footprint with connectivity, keep the lights on with monitoring and availability, secure the perimeter, and provide run books for standard cloud operations. Organizations can customize these “default” solutions as teams become familiar with operating in a cloud landscape and are aware of what differentiates their needs from other cloud adopters.
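The article names no specific tool kit, but a "keep the lights on" default is easy to picture. The sketch below is an assumption-laden illustration, not any vendor's actual template: it uses boto3 to stamp a standard high-CPU CloudWatch alarm onto a new instance, with the alarm name, thresholds, and SNS topic all invented.

```python
# A sketch of one "right by default" building block: a standard CPU alarm that
# a tool kit could stamp onto every new instance. All names and thresholds are
# invented for illustration, not taken from any specific vendor kit.
import boto3

def apply_default_cpu_alarm(instance_id: str, sns_topic_arn: str) -> None:
    """Attach a default high-CPU CloudWatch alarm to an EC2 instance."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"default-high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,               # five-minute windows
        EvaluationPeriods=2,      # sustained for ten minutes
        Threshold=90.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],  # notify the on-call topic
    )

# Hypothetical instance and topic, purely for the example:
apply_default_cpu_alarm("i-0123456789abcdef0",
                        "arn:aws:sns:us-east-1:111122223333:ops-alerts")
```

The point of a default like this is precisely that teams do not write it themselves for every workload; the tool kit applies it automatically and teams customize later.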

By deploying right by default solutions, organizations are free to rethink team structures and processes, retrain teams, explore new capabilities, and embrace new norms. Right by default solutions reduce complexity and frustration by focusing team energy on differentiators and allowing organizations to customize full-featured offerings for their needs. They reduce risk and focus teams on adopting new cloud services and delivering innovative business solutions.

So why not start from scratch, deploy a few workloads, and evolve incrementally? Because deferring security, governance, and operational run books leaves organizations exposed to delayed value and increased risk. Instead, available cloud tool kits can provide tools and processes that install a footprint, establish effective cloud operations, safeguard data, and accelerate employee learning.

Getting started with right by default

Right by default solutions take many forms. How can you evaluate your options? Here are three tactics to evaluate available solutions against your organizational needs:

  • Be skeptically inquisitive about your cloud adoption journey: Keep differentiated needs in mind, but also be open to seeking and evaluating standard cloud patterns and practices. Challenge each internal assumption that a unique solution is required. Find peer organizations that are nearer the middle or end of their cloud adoption journey and ask whether they believe their cloud adoption experience and needs are unique or common to others. Most organizations discover that they face similar challenges and can apply common (right by default) solutions.
  • Inquire about the gaps in any cloud solution: Be clear about the capabilities and gaps of any advertised cloud tool kit. Ask for proof of value, expect some hype, and listen for those who share shortcomings readily. Understand what help can be offered and be clear about how to customize or extend these right by default solutions at a future date.
  • Experiment: Experimentation is critical when adopting emerging technology. Look beyond traditional cloud proofs-of-concept (e.g., tactical workloads, service demonstrations) to experiment with the tool kits that will accelerate cloud adoption. Dive into code templates, process flows, and run books. Explore the resulting cloud footprints. How is the network configured? How are security policies codified? How would IT team members deploy production workloads into this environment? If the answers aren't satisfactory, challenge the tool kit community or find a better solution. Be hands-on to learn what could accelerate progress.

You can find right by default solutions in the open source community, organizations specializing in making and selling software (known as independent software vendors or ISVs), and the cloud partner ecosystem. Open source and ISV solutions are low friction to evaluate — they often jump-start adoption with examples and community support, though outcomes vary significantly by vendor and solution. Partner offerings are typically proprietary tool kits that follow cloud vendor best practices and include run books and operational guidance. Ask questions and experiment to find the best fit for your organization.

As you return to the office to lead your organization’s cloud adoption journey, ask your team: Are we achieving our cloud goals for operational excellence, ROI, and innovation?

Are we reinventing the wheel, or are we adopting right by default cloud solutions?

Deliver the promise of your cloud adoption journey with right by default.

Alibaba Cloud eyes further EMEA expansion with launch of partner program

Alibaba Cloud’s expansion outside of China and Asia Pacific has been well documented – and now the provider is firming up its commitments with the launch of a new EMEA partner program.

The partner program will focus on four key areas: developing digital transformation in targeted vertical industries; supporting the development of talent; advancing technology innovation; and enhancing marketplaces.

The overall effect, in the company’s words, is ‘to create an inclusive ecosystem that can benefit all those involved’. In practice, that means support from companies such as Intel and Accenture, as well as Station F, the world’s largest startup campus.

“Our goal in EMEA is to bring powerful and elastic cloud services to our customers and create a well-connected, comprehensive ecosystem with our partners to accelerate cloud technology development in the regional cloud industry,” said Yeming Wang, Alibaba Cloud EMEA general manager in a statement.

Speaking to this publication back in May, Wang noted the changing landscape as key to Alibaba Cloud’s proposed expansion. Not only is it a global strategy from the whole Alibaba group, but there are political and technological ramifications – from China opening its doors to outside trade more on the one hand, to a rise in multi-cloud initiatives on the other.

“Today, we have a lot of clients asking to adopt Alibaba as a second or third public cloud provider,” said Wang. “Alibaba Cloud, from a user experience point of view, is quite similar to AWS. That is why we got comments from different clients – they say ‘if the guys are AWS certified or [an] AWS expert, then you’re halfway to being very familiar with Alibaba also.’”

Amazon is certainly the target for Alibaba globally. According to the latest note from analyst firm Synergy Research, AWS leads across all geographies, with Microsoft and Google rounding out the top three everywhere except Asia Pacific, where Alibaba is second.

Tighten your belts: The four cloud resources most likely to eat up your budget

For the past several years, I have been warning companies to watch out for idle cloud resources. This often means instances purchased “on demand” that companies use for non-production purposes like development, testing, QA, staging, etc. These resources can be “parked” when they’re not being used (such as on nights and weekends). Of course, this results in great savings. But this doesn’t address the issue of how idle cloud resources extend beyond your typical virtual machine.

Why idle cloud resources are a problem

If you think about it, the problem is not very complicated. When a resource is idle, you’re paying your cloud provider for something you’re not actually using. And there’s no reason to pay for something you are not actually using.

Most non-production resources can be parked about 65% of the time, that is, parked 12 hours per day on weekdays and all day on weekends. Many of the companies I talk to are paying their cloud providers an average list price of $220 per month per instance. If you’re currently paying $220 per month for an instance and leaving it running all the time, that means you’re wasting $143 per instance per month.

Maybe that doesn’t sound like much. But if that’s the case for 10 instances, you’re wasting $1,430 per month. One hundred instances? You’re up to a bill of $14,300 for time you’re not using. And that’s just a simple micro example. At a macro level that’s literally billions of dollars in wasted cloud spend.
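For readers who want to check the arithmetic, here it is as a short Python sketch; the 65% parked fraction and the $220 list price come straight from the figures above.

```python
# The article's arithmetic, made explicit. A resource parked nights and
# weekends (12 h per weekday, all day Saturday and Sunday) is off about
# 65% of each week.
HOURS_PER_WEEK = 168
parked_hours = 5 * 12 + 2 * 24                    # 108 hours
parked_fraction = parked_hours / HOURS_PER_WEEK   # ~0.643, rounded to 65%

list_price = 220.0                      # average $/month per instance
waste_per_instance = list_price * 0.65  # ≈ $143/month if never parked

for n in (1, 10, 100):
    print(f"{n:4d} instance(s): ${n * waste_per_instance:,.0f} wasted per month")
# Prints $143, $1,430, and $14,300 — matching the figures in the text.
```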

So what kinds of resources are typically left idle, consuming your budget? Let’s dig into that, looking at the big three cloud providers — Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Four types of idle cloud resources

  • On-demand instances/VMs: This is the core of the conversation, and what I have addressed above. On-demand resources – and their associated scale groups – are frequently left running when they’re not being used, especially those used for non-production purposes (a bare-bones parking sketch follows this list).
  • Relational databases: There’s no doubt that databases are frequently left running when not needed as well, in similar circumstances to the on-demand resources. The problem is whether you can park them to cut back on wasted spend. AWS allows you to park certain types of its RDS resources; however, you cannot park comparable idle database services in Azure (SQL Database) or GCP (Cloud SQL). In this case, you should review your database infrastructure regularly and terminate anything unnecessary – or change to a smaller size if possible.
  • Load balancers: AWS Elastic Load Balancers (ELBs) cannot be stopped or parked, so to avoid being billed for time you don’t need, you have to remove them. The same can be said for Azure Load Balancer and GCP load balancers. Alerts can be set up in CloudWatch, Azure Monitor, or Google Stackdriver for when you have a load balancer with no instances behind it, so be sure to make use of them.
  • Containers: Optimizing container use is a project of its own, but there’s no doubt that container services can be a source of waste. In fact, we are evaluating the ability for my company, ParkMyCloud, to park container services including ECS and EKS from AWS, ACS and AKS from Azure, and GKE from GCP, and the ability to prune and park the underlying hosts. In the meantime, you’ll want to regularly review the usage of your containers and the utilization of the infrastructure, especially in non-production environments.
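To make the first bullet concrete, here is a bare-bones parking sketch using boto3. The `env` tag values and the working-hours window are illustrative assumptions, not how ParkMyCloud or any other product actually implements parking.

```python
# A bare-bones sketch of "parking": stop any running EC2 instance tagged as
# non-production outside working hours. Tag names and the schedule are
# illustrative choices only.
from datetime import datetime
import boto3

def is_work_hours(now: datetime) -> bool:
    """Weekdays, 07:00-19:00: the 12-hours-on schedule discussed above."""
    return now.weekday() < 5 and 7 <= now.hour < 19

def park_nonprod_instances() -> None:
    ec2 = boto3.client("ec2")
    if is_work_hours(datetime.now()):
        return  # only park outside the working window
    # Find running instances tagged as non-production (pagination omitted).
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev", "test", "staging"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [i["InstanceId"]
           for r in resp["Reservations"]
           for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
        print(f"Parked {len(ids)} non-production instance(s)")

park_nonprod_instances()
```

Run on a schedule (cron, a Lambda timer, or similar), a loop like this captures most of the nights-and-weekends savings described above; the harder resources in the rest of the list need removal or right-sizing instead.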

Conclusion

Cloud waste is a billion-dollar problem facing most businesses today. But the solution is quite simple. Make sure you’re turning off idle cloud resources in your environment. Do this by parking those resources that can be stopped and eliminating those that can’t.