
What can 80 M&As tell us about the state of IT operations management software?

IT operations management (ITOM) software helps enterprises manage the health, availability, and performance of modern IT environments. Analyst firm Gartner expects the ITOM software market to grow to $37 billion in annual revenues by 2023, with legacy on-prem tools giving way to powerful SaaS solutions for hybrid performance monitoring and management.

August this year saw five significant ITOM tool exits, with Splunk acquiring SignalFx for a cool $1.05 billion, Resolve Systems buying out FixStream, Virtual Instruments purchasing Metricly, VMware splurging on Veriflow, and Park Place Technologies acquiring Entuity. In related news, application performance monitoring provider Dynatrace went public at a $6.7 billion valuation, while cloud monitoring vendor Datadog recently announced its $100 million IPO filing.

To better understand ITOM software acquisition patterns, we assembled a dataset of 80+ acquisitions and buyouts of ITOM tool vendors since January 2015. This dataset lets IT buyers answer the following questions:

  • What industry trends are responsible for a new wave of acquisitions?
  • Which ITOM categories have seen the most acquisitions and buyouts?
  • Which technology leaders have acquired innovative startups in the last few years?
  • What role has private equity played in fueling market innovation and consolidation?
  • What are the strategic reasons behind an incumbent assembling an acquisition portfolio?

Here are five things we learned from 80+ ITOM software acquisitions over the last five years:

Industry trends fuel new category creation

Research firm IDC expects public cloud spending to grow from $229 billion in 2019 to $500 billion in 2023. The runaway adoption of public cloud infrastructure has unleashed massive disruption in the ITOM software market. Traditional approaches to performance monitoring and cost optimisation are no longer relevant in a world of on-demand, ephemeral, and elastic cloud services. Enterprise cloud consumption has led to several technology acquisitions in the following categories:

  • Cloud monitoring: Cloud monitoring tools deliver visibility and control of business-critical services built on multi-cloud and cloud native architectures.

    IT operations and DevOps teams have heavily invested in cloud monitoring point tools, which explains the purchase of nine cloud monitoring startups (SignalFx, Metricly, Outlyer, Server Density, Unigma, Wavefront, Opsmatic, Boundary, and Librato) by industry incumbents like Splunk, VMware, New Relic, BMC Software, and SolarWinds
     

  • Cloud management platforms: Cloud management platforms (CMPs) help enterprises migrate on-prem workloads to cloud environments with capabilities for discovery, provisioning, orchestration, and workload balancing.

    Technology vendors like Apptio, Flexera, Nutanix, Microsoft, Cisco, ServiceNow, and IBM have made eight CMP acquisitions across startups like FittedCloud, RightScale, Botmetric, Cloudyn, CliQr, ITapp, Gravitant, and FogPanel
     

  • Cloud cost optimisation: Cloud cost optimisation tools let business and IT teams manage public cloud consumption by identifying underutilised and idle cloud instances and delivering real-time recommendations for cloud workload placement. Given the pressing need to avoid cloud sticker shock, industry leaders purchased six cloud cost optimisation tools (Cloudability, ParkMyCloud, StratCloud, CloudHealth Technologies, Cmpute.io, and Cloud Cruiser)
     
  • Network performance monitoring: How do enterprises deliver compelling customer experiences across on-prem, private cloud, and public cloud networks? Network performance monitoring and diagnostics tools offer real-time insight into network traffic utilisation and help troubleshoot problems with multi-layer visibility.

    Industry incumbents and investors capitalised on the demand for network monitoring by snapping up eight different tool providers (Entuity, Veriflow, Corvil, Netfort, Savvius, Performance Vision, Gigamon, and Danaher Communications)
     

  • AIOps: The adoption of hybrid and cloud native architectures has led to endless alert storms, where it is nearly impossible for human operators to extract the signal from the noise. Artificial intelligence for IT operations (AIOps) tools apply machine learning and data science techniques to the age-old problem of IT event correlation and analysis.

    Larger incumbents have swallowed seven AIOps startups (FixStream, SignifAI, Savision, Evanios, Perspica, Event Enrichment HQ, and Metafor), underlining the need for AI/ML approaches to isolate and pinpoint incident root cause(s).

Growth by acquisition

Since 2015, serial acquirers like SolarWinds, Cisco, ServiceNow, Splunk, Datadog, New Relic, Flexera, VMware, and Nutanix have acquired thirty-two diverse startups across performance monitoring, hybrid discovery, IT service management, cloud management platforms, cloud cost optimisation, and AIOps.

SolarWinds leads the pack with seven deals (Samanage, Loggly, Scout, TraceView, LOGICnow, Papertrail, and Librato), followed by Splunk (SignalFx, VictorOps, Rocana, and Metafor), Cisco (Cmpute.io, Perspica, AppDynamics, and CliQr), and ServiceNow (FriendlyData, Parlo, DxContinuum, and ITapp) with four acquisitions each.

ITOM software leaders have dedicated corporate strategy, business development, and investment teams that are constantly scouting for the next big thing. Acquiring the right startup can ensure competitive parity, market entry, or talent infusion, which is critical for technology incumbents with stale and aging product portfolios.    

Private equity continues to reshape the ITOM software landscape

Private equity (PE) firms like Bain Capital, Insight Partners, KKR, Thoma Bravo, and Vista Equity Partners have had an outsized influence on the ITOM tools market. Companies like Apptio, BMC Software, Cherwell, Connectwise, Continuum Managed Services, Dynatrace, Flexera, Ivanti, Kaseya, LogicMonitor, Optanix, Resolve Systems, Riverbed, and SolarWinds have all benefited from strategic PE investments.

In the managed services software segment, Thoma Bravo alone controls Connectwise, Continuum, and SolarWinds MSP, while Vista Equity Partners engineered a merger between two portfolio companies, Datto and Autotask, to create a new managed services leader. Expect PE firms to invest in, acquire, and divest portfolio companies, creating new ITOM software winners and losers in the process.

No sign of mega deals slowing down

While Splunk’s billion-dollar deal for SignalFx was astounding, it is hardly the only blockbuster acquisition or buyout in the ITOM software market. In the last five years, Broadcom acquired CA Technologies for $18.9 billion, Thoma Bravo purchased Connectwise for $1.5 billion, KKR bought out BMC Software for $8.5 billion, Elliott Management acquired Gigamon for $1.6 billion, Cisco spent $3.7 billion on AppDynamics, Micro Focus engineered a reverse merger with HPE Software for $8.8 billion, NetScout purchased Danaher Communications for $2.3 billion, and Thoma Bravo took Riverbed private for $3.5 billion.

These eight deals alone represent more than $48 billion in deal value, demonstrating sustained momentum and continued investment in ITOM software firms from leading technology vendors and VC/PE firms.

The elusive quest for a unified ITOM platform

Platform thinking is the motivation behind several recent ITOM acquisitions (Splunk’s takeover of Metafor and VictorOps for modern incident management, or SolarWinds’ TraceView and Librato acquisitions for real-time observability).

The big four ITOM vendors (BMC, CA, IBM, and HP) famously used acquisitions to build their ITOM minisuites (chasing the ever-popular “single pane of glass”). Unfortunately, inorganic product strategies never resulted in a unified platform that could combine disparate performance and capacity insights in a single place.

It is an open question whether current industry leaders like ServiceNow, Splunk, and SolarWinds have learned any lessons from the 'big four' acquisition debacles. Every technology acquisition requires significant engineering resources and product roadmap work for successful integration with an incumbent’s platform. Before writing a big check to an industry leader that touts its recent acquisitions as proof of its innovation DNA, enterprise IT buyers should carefully verify that there is continued focus on, and commitment to, making the acquisition work.

The bottom line?

Next-generation technology startups are constantly redefining customer expectations with innovative solutions for modern digital operations management. Industry incumbents will continue to rely on acquisitions to gain modern technologies, battle-tested talent, and market credibility.

IT buyers should partner with technology startups for emerging use cases as well as evaluate how incumbent vendors are modernising their technology portfolios and truly integrating the acquired technology to achieve the long-sought-after vision of a single pane of glass. Otherwise, they may instead end up with the more common scenario of a single glass of pain.


IT operations in 2020: Five things to prepare for – from AIOps to multi-cloud and more

The threat of digital disruption has forced senior executives and technology leaders to rethink business models, data assets, and distribution channels to create more innovative products and services that will delight customers and outpace nimbler competitors. Over the last decade, enterprises have completely transformed the way they build, deploy, manage, and maintain mission-critical services in response to increasing digitisation.

Developers have responded to the enterprise transformation challenge by adopting innovative technologies and practices, including the consumption of public cloud services, the embrace of agile and DevOps for rapid software delivery, the shift from monolithic development patterns to microservices, and the use of machine learning models for process innovation.

IT operations teams have historically ensured the availability and performance of enterprise workloads by minimising change and avoiding disruption. Given the demands of digital business, digital operations teams will need to take advantage of established and emerging technology trends to drive product momentum, deliver compelling customer experiences, and ensure long-term corporate survival.

In 2020, IT operations teams will need to embrace these five shifts to scale up innovation and respond effectively to digital disruption:  

How IT operations can stay relevant in a DevOps world

At the 2009 Velocity conference, a session titled "10+ Deploys Per Day: Dev and Ops Cooperation at Flickr" by John Allspaw and Paul Hammond showed how enterprises could accelerate release velocity with automated infrastructure tooling, continuous integration and deployment processes, and shared metrics. This Velocity talk ignited the DevOps movement, calling for a new model of trust, collaboration, and accountability between Dev and Operations teams.

A decade later, DevOps has broad mainstream adoption, with site reliability engineers and DevOps specialists being the top earners in Stack Overflow’s 2019 Developer Survey. DevOps is key to enabling business agility and minimising friction, with Gartner predicting that 90 percent of the top 100 global companies will slash operational inefficiencies with DevOps practices by 2020. Meanwhile, a recent McKinsey study found that few business executives believe “their IT functions make meaningful contributions in areas that promote strong business performance.”

These trends might ensure that DevOps teams are the ones calling the shots, with active participation in digital experience products leading to larger budgets and greater organisational clout. Does this mean that IT operations will have to stay content managing legacy application and infrastructure portfolios (aka ‘keeping the lights on’)?

Takeaway: IT operations will need to combine their traditional focus on reliability, resilience, security, and efficiency with greater attention to release velocity, continuous improvement, and customer-centricity. Innovations in IT operations can support digital transformation initiatives and ensure that the new speed of DevOps won’t put the business at risk.

AIOps: Not your old-school incident management workflow

A recent IDC study finds that IT operations teams are the biggest buyers of artificial intelligence tools for rapid pattern recognition, seamless incident collaboration, and faster issue resolution. In 2020, it is time to move away from siloed, reactive incident management towards proactive and preventive approaches built on machine learning and data science. A modern AIOps solution can drastically reduce the human time spent identifying, logging, categorising, prioritising, responding to, and closing incidents by:

  • Analysing and processing a wide variety of events across different monitoring tools so that duplicate and noisy alerts are automatically suppressed (see the sketch after this list)
  • Using machine data intelligence to get ahead of alert storms, speed up root cause(s) analysis, and reduce service disruptions
  • Sending real-time, contextual alerts to on-call service delivery teams with bidirectional integrations for IT service management tools  
  • Addressing routine incidents at scale using automated remediation so that human operators can focus on high-value business projects
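
To make the first point in the list above concrete, here is a minimal, hypothetical sketch of the kind of alert deduplication and suppression an AIOps tool performs before applying more sophisticated correlation models. The `Alert` structure, the fingerprinting rule, and the 15-minute window are illustrative assumptions, not any specific vendor’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    source: str        # monitoring tool that emitted the alert
    host: str          # affected host or service
    check: str         # metric or check name, e.g. "cpu_utilisation"
    severity: str
    timestamp: datetime

def fingerprint(alert: Alert) -> tuple:
    """Alerts sharing a source, host, and check are treated as duplicates."""
    return (alert.source, alert.host, alert.check)

def deduplicate(alerts: list[Alert], window: timedelta = timedelta(minutes=15)) -> list[Alert]:
    """Suppress repeated alerts that share a fingerprint within a rolling time window."""
    last_seen: dict[tuple, datetime] = {}
    unique: list[Alert] = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        key = fingerprint(alert)
        previous = last_seen.get(key)
        if previous is None or alert.timestamp - previous > window:
            unique.append(alert)          # first occurrence, or the window expired: keep it
        last_seen[key] = alert.timestamp  # refresh the window on every occurrence
    return unique
```

Production AIOps platforms go well beyond static fingerprints, clustering alerts by topology, text similarity, and time, but even this naive suppression shows how an alert storm can collapse into a handful of actionable incidents.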

Takeaway: Digital operations teams should start piloting AIOps initiatives to understand how machine learning-powered event management can reduce the human time spent on incident detection, first response, alert prioritisation, and root cause analysis.

New ways to control the chaos of multi-cloud management

Flexera’s 2019 State of the Cloud Report found that 84 percent of enterprises have a multi-cloud strategy, with respondents using roughly five different cloud providers on average. Given that AWS alone has 170+ unique services across 23 product categories, it is no easy task managing different cloud services across leading cloud platforms. So, what are the driving forces behind multi-cloud adoption?

Given the dominance of AWS, which holds roughly 35% of the cloud infrastructure services market, CIOs are looking to work with other cloud providers like Microsoft and Google to preempt fears of cloud lock-in. The other driver of multi-cloud adoption is 451 Research’s ‘best execution venue’ strategy: picking the right cloud environment for a specific type of business workload so that IT teams can optimise for both performance and cost.

Here are three factors that cloud teams will need to carefully consider while deploying a multi-cloud enterprise strategy:

  • Resource complexity: Cloud infrastructure teams will need to select the right instance type for their workload requirements from thousands of instance SKUs. Picking and optimising right-sized instances is an ongoing task and requires difficult tradeoffs based on architecture, demand, performance, resilience, and cost (a minimal example of this kind of analysis follows this list)
     
  • Multi-cloud monitoring: While there are plenty of native monitoring tools like Amazon CloudWatch, Azure Monitor, and Google Stackdriver, these solutions are best employed for cloud-provider-specific insights. Enterprises should invest either in open source tooling (Prometheus/Graphite, Grafana) or in third-party monitoring tools that can easily integrate, capture, and present insights from multi-cloud environments
     
  • Embed FinOps thinking in your cloud centre of excellence: Optimising cloud costs across instance types and pricing models (on-demand, dedicated, spot, and reserved) is a complex exercise. The emerging discipline of FinOps helps enterprises better plan and predict cloud budgets by bringing together best practices for optimising cloud spending. FinOps offers a new procurement model that emphasises shared accountability for cloud financial management across technology, finance, and business teams so that enterprises get a better return on their cloud investments.
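
To make the right-sizing and FinOps points above concrete, here is a minimal sketch that uses AWS’s boto3 SDK and CloudWatch CPU metrics to flag EC2 instances that look underutilised. The 5% CPU threshold and 14-day lookback are arbitrary assumptions chosen for illustration; a real FinOps practice would also weigh memory, network, storage, and business context (and handle API pagination) before resizing or terminating anything.

```python
from datetime import datetime, timedelta, timezone
import boto3

CPU_THRESHOLD = 5.0           # assumption: average CPU below 5% looks underutilised
LOOKBACK = timedelta(days=14)

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str) -> float:
    """Average CPU utilisation for one instance over the lookback window."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - LOOKBACK,
        EndTime=end,
        Period=3600,              # hourly datapoints
        Statistics=["Average"],
    )
    datapoints = stats["Datapoints"]
    if not datapoints:
        return 0.0
    return sum(point["Average"] for point in datapoints) / len(datapoints)

def find_underutilised_instances() -> list:
    """Running instances whose average CPU falls below the threshold."""
    candidates = []
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            cpu = average_cpu(instance["InstanceId"])
            if cpu < CPU_THRESHOLD:
                candidates.append((instance["InstanceId"], instance["InstanceType"], round(cpu, 2)))
    return candidates

if __name__ == "__main__":
    for instance_id, instance_type, cpu in find_underutilised_instances():
        print(f"{instance_id} ({instance_type}): average CPU {cpu}% over 14 days")
```

The same pattern extends to other providers through their respective SDKs, which is exactly where third-party and open source tooling earns its keep by consolidating those views in one place.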

Takeaway: Enterprise IT teams should learn from FinOps pioneers how to make the right tradeoffs between cost, performance, and resilience for cloud services. Cloud architects should experiment with both open source and commercial monitoring tools to understand how they can drive real-time visibility and ensure faster incident response for multi-cloud operations.

Cloud transforms the enterprise data centre

Corporate data centres are increasingly taking on attributes of public cloud infrastructure with on-demand consumption and pay-per-use pricing models. Here are three trends that are a clear indication of how data centres are evolving in the cloud era:

  • Hybrid cloud models: For a long while, public cloud platforms refused to acknowledge that certain workloads could only operate on-prem due to latency, security, or compliance requirements. Cloud providers have now openly embraced the hybrid cloud value proposition, with Microsoft launching Azure Stack in 2017, followed by AWS Outposts in 2018 and Google Anthos in 2019. Hybrid cloud solutions allow enterprises to run workloads within their own data centres without worrying about day-to-day management, while letting cloud providers breach the final frontier of data centre gravity
     
  • Consumption-based infrastructure models: Enterprises can leverage a host of innovative solutions (HPE GreenLake, Dell Flex on Demand, Lenovo TruScale Infrastructure Services, and Cisco Open Pay) that let them tap into flexible payment models for data centre resources. IT teams can defer capital expenditures, work with the latest hardware, track real-time usage, and outsource management to the OEM or a managed service provider, allowing them to focus purely on business outcomes
     
  • Write once, run anywhere with orchestration engines: Container orchestration engines like Kubernetes, Docker Swarm, and Apache Mesos have exploded in popularity as they allow IT teams to run cloud-native services anywhere and offer a consistent management framework for building and scaling distributed applications. Cloud-native services can be deployed across data centre and cloud environments using container orchestration engines, ensuring a high degree of portability, faster release velocity, and better operational control with abstracted infrastructure (see the sketch below)
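
To show what ‘write once, run anywhere’ looks like in practice, here is a minimal sketch that uses the official Kubernetes Python client to push the same containerised service to whichever cluster the active kubeconfig context points at, whether that cluster runs in a corporate data centre or on a public cloud. The image name, replica count, and namespace are placeholder assumptions.

```python
from kubernetes import client, config

def build_deployment(name: str = "demo-web", image: str = "nginx:1.25", replicas: int = 3) -> client.V1Deployment:
    """Describe a simple web deployment; the same object works on any conformant cluster."""
    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=80)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=pod_template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )

if __name__ == "__main__":
    # Uses whichever cluster the active kubeconfig context points at: an on-prem
    # cluster, a managed cloud cluster, or a local development distribution.
    config.load_kube_config()
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=build_deployment())
```

Switching the active context (for example with kubectl config use-context) is all it takes to run the identical workload on a different cluster, which is precisely the portability argument behind orchestration engines.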

Takeaway: Data centres are ripe for disruption, and IT teams should outsource the heavy lifting involved in designing, deploying, monitoring, and maintaining mission-critical infrastructure. Data centre managers should work with both hyperscale and OEM providers to tap into the power and flexibility of hybrid cloud and consumption-based utility models.

How to tackle the looming skills crisis

Research firm IDC expects that 30% of IT roles involving emerging technology skills will remain unfilled through 2022. A recent survey found that 94% of IT decision-makers are finding it somewhat difficult, difficult, or very difficult to hire DevOps professionals, cloud native developers, and multi-cloud operators. Disruptive technology trends have ensured that IT operations teams must constantly upgrade their skills to remain relevant:

  • The popularity of cloud native infrastructure requires a new set of skills across lifecycle automation and configuration, observability and analysis, and security and compliance for driving reliable and scalable applications
  • The adoption of AIOps solutions needs IT practitioners who are familiar with advanced statistical techniques and can combine data-driven insights and human intuition to reduce application downtime and ensure a faster recovery

Takeaway: CIOs will need to invest heavily in skills development programs to attract and retain employees. IT leaders will use a mix of internally run programs, hands-on learning, and external providers to counteract the skills gap in a competitive job market.

Conclusion

In a world where change is the only constant, IT operations will need to become increasingly proactive and dynamic to meet the needs of the business. Technology operations management will emerge as a renewed discipline, where innovation is only limited by imagination.


A guide for database as a service providers: How to stand your ground against AWS – or any other cloud

Last August, Redis Labs applied a Commons Clause license to certain modules of its popular in-memory database to prevent cloud providers like Amazon Web Services, Microsoft Azure and Google Cloud Platform (GCP) from “taking advantage of the open source community for years by selling (for hundreds of millions of dollars) cloud services based on open source code they didn’t develop.”

NoSQL database platform MongoDB followed suit in October 2018, announcing a Server Side Public License (SSPL) to protect “open source innovation” and stop “cloud vendors who have not developed the software to capture all of the value while contributing little back to the community.” Event streaming company Confluent issued its own Community License in December 2018 to make sure cloud providers could no longer “bake it into the cloud offering, and put all their own investments into differentiated proprietary offerings.”

What prompted these open source firms to introduce such restrictive licensing terms? While global database management systems revenues hit $37 billion in 2017, analyst firm Gartner projects that the database platform as a service (dbPaaS) segment alone will reach $10 billion by 2021.

While the dbPaaS segment is one of the fastest growing areas in the overall database market, much of the dbPaaS adoption is being driven and captured by cloud providers. The three leading cloud platforms (AWS, Azure, and GCP) offer a range of relational, non-relational, time series, in-memory and graph database engines to meet every conceivable enterprise need.

The big fight: Commercial open source database vendors vs. cloud platforms

These licensing changes from commercial open source vendors have ignited heated debates about the very definition of open source software, the need for a special license to block cloud providers from piggybacking on popular open source tools, and how to create sustainable (and profitable) open source organizations.

In related developments, MongoDB failed to gain approval for the SSPL from the Open Source Initiative (OSI) in January 2019, and Redis Labs introduced its Redis Source Available License in February 2019. While these licensing disputes are still ongoing, here are three strategies that open source players can use to compete and win against hyperscale cloud providers in a crowded database market:

Launch and market the heck out of your database platform as a service

Gartner predicts that global SaaS revenues will touch nearly $100 billion in 2020, representing a four-year compound annual growth rate of 14%. There’s a strong appetite among enterprise buyers for truly multitenant, highly scalable and cost-effective dbPaaS.

Instead of letting cloud providers steal market share with their managed database products, open source vendors should deliver the most compelling managed database platform experience with strong data governance, robust security, continuous backups, and automated patching. Database vendors should build their offerings in a cloud-agnostic way for both hybrid and multi-cloud scenarios so that their dbPaaS can work well across on-prem workloads and different cloud providers.

Despite all the gloom and doom over cloud providers strip-mining open source jewels, MongoDB’s fully managed cloud database, Atlas, registered 400% annual growth and generated 34% of the company’s 2018 revenues, grossing $100 million in annual recurring revenue. Other database vendors like Confluent, Elastic, InfluxData and Redis Labs have also introduced database-as-a-service offerings to help customers manage production-ready and mission-critical workloads on their cloud services.

Offer more professional and managed services

Most enterprise customers want to focus more on their core business and invest less in either dedicated IT infrastructure or expensive DBAs for provisioning and maintaining databases. Database vendors should bring in their best solution consultants and implementation architects to deliver the right advice on moving on-prem data to a cloud service.

They should also supplement consulting services with recommended blueprints, developer-friendly documentation, robust APIs and automated migration tools, and build a service provider ecosystem that can advise on which workloads to migrate, offer hand-holding during migration, and deliver ongoing services to optimise database health.

Enhance and maximise database performance

While cloud monitoring tools like Amazon CloudWatch, Azure Monitor and Google Stackdriver offer basic metrics for database monitoring, commercial database vendors have an unfair advantage when it comes to ensuring the availability and uptime of their managed database as a service. These vendors can deliver platform services that offer comprehensive monitoring and smart alerting as well as perform upgrades, backups and recovery, for higher availability, better maintenance and faster scaling.

Conclusion: It’s too early to declare winners

Veteran software industry executive and technology columnist Matt Asay has closely reported on the widening mutual distrust between open source companies and cloud providers. Asay’s diagnosis is grim: “This conflict is made worse by the fact that AWS, Microsoft, and Google are so much better at turning software into the services that companies increasingly want…Or put even more bluntly: Cloud vendors are selling what enterprises actually want.”

While cloud providers have assembled a diverse array of managed database offerings, commercial open source companies have more than a fighting chance to turn the tables on their opponents. Instead of introducing restrictive licensing terms or blocking cloud providers from contributing code, database vendors should focus on delivering a superior and differentiated cloud service that becomes the gold standard for ease of operations, seamless deployment and increased productivity.

Read more: AWS’ contribution to Elasticsearch may only further entrench the open source vendor and cloud war


The end of an era: Why it’s time to ditch the big four in ITOM – and what it means for IT leaders

In July 2018, Broadcom announced its plan to acquire CA Technologies for almost $19 billion. While analysts have furiously debated the merits of a chip manufacturer buying an enterprise software company, the CA acquisition heralds a momentous shift in the $25 billion IT operations management (ITOM) software market.

For more than two decades, four technology vendors – BMC, CA, IBM and HP – have dominated the ITOM software market. In 2012, these big four collectively accounted for 55% of the ITOM software industry. By 2017, their market share had declined to less than 30% (Gartner).

More crucially, the CA acquisition means that the big four as you’ve known them no longer exist. Here is how the big four lost their way – and why IT leaders need to start working with a new breed of insurgents that are transforming IT operations management.

A quick history lesson: How four incumbents lost their way

A decade ago, most ITOM startups expected to scale and then sell out to a big four provider at some point in their journey. Today, most startups begin with a business plan that’s all about stealing market share from a big four product suite. Here’s a synopsis of where these four companies stand today.

BMC: BMC Software started life as a mainframe management tools company in 1980. By 2013, it was clear that the company had run out of steam. BMC’s annual revenues from 2010 to 2013 showed a tepid CAGR of 0.78% – $1.91bn in 2010 versus $1.97bn in 2013. In May 2013, private equity players Bain Capital and Golden Gate Capital acquired a majority stake in BMC for $6.9bn. Five years later, another private equity firm, KKR, agreed to buy BMC from its previous investors for $8.5bn. In 2018, BMC’s annual revenues were still stuck at $2bn, despite significant product and go-to-market investments over the last five years.

CA Technologies: CA was the dominant mainframe utility software company of the 1980s. A serial acquirer, CA was well known for buying companies and milking customers for maintenance fees. In recent years, CA experienced the same problems of stagnant products and stalled growth. The company registered a negative CAGR of 0.18%, with revenues marginally declining from $4.26bn in 2015 to $4.23bn in 2018. With all options exhausted, CA sold itself to Broadcom last month.
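
For readers who want to reproduce the growth figures quoted above, here is a minimal sketch of the compound annual growth rate calculation; the cited 0.78% and negative 0.18% appear to treat the 2010-2013 and 2015-2018 revenue spans as four compounding periods.

```python
def cagr(start: float, end: float, periods: int) -> float:
    """Compound annual growth rate between a starting and an ending value."""
    return (end / start) ** (1 / periods) - 1

# Revenue figures quoted above, in $bn; four compounding periods reproduces the cited rates.
print(f"BMC 2010-2013: {cagr(1.91, 1.97, 4):+.2%}")  # approximately +0.78%
print(f"CA  2015-2018: {cagr(4.26, 4.23, 4):+.2%}")  # approximately -0.18%
```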

IBM: IBM Tivoli started in 1989 as a systems management vendor for IBM mainframe hardware. In 1996, IBM bought out Tivoli and folded over thirty acquisitions into the Tivoli division. By 2012, IBM Tivoli was the leading ITOM software player with $3.2bn in annual revenues. However, IBM made few real investments in upgrading the Tivoli architecture to meet the demands of a new generation of ITOM buyers. In 2016, IBM signed a 15-year deal with HCL Technologies to offload the development and maintenance of Tivoli products. Today, the closest you get to a mention of the venerable Tivoli brand on IBM’s website is deep inside its Cloud & Smarter Infrastructure division.

HP: HP launched its OpenView family of IT management tools in the early 1990s. After acquiring Peregrine, Mercury Interactive, and Opsware in the 2000s, HP grew its OpenView portfolio to more than $1bn in annual revenues. However, with all the troubles that HP experienced after the Autonomy deal, it sold its entire ITOM software portfolio to Micro Focus in 2016. Reacting to the HPE-Micro Focus $8.8bn merger, The Register pointed out that “Micro Focus is considered by some to be something of a retirement home for software businesses that have seen better days.”

The big four are (mostly) dead – here’s why

So why did the big four players lose their way? A distinct absence of innovation, a strong dependence on legacy portfolios and maintenance revenues, and unwieldy product suites sealed the fate of the big four. Here are our top three reasons for their decline:

Reason #1: Acquisitions are not a substitute for organic innovation

A big reason for the fall of the legacy ITOM software providers was an over-reliance on acquisitions. The big four executives spent all their time pursuing deals and acquiring the hottest technology startups – instead of keeping their product stacks modern and relevant. The playbook was simple: fold the latest acquisition into an existing division and incentivise armies of salespeople to bundle and sell the new solution. Here’s a quick timeline of some notable acquisitions made by the big four since 2000:

  • BMC Software splurged on companies like Remedy (IT service management), ProactiveNet (performance management), RealOps (runbook automation), BladeLogic (data centre automation), Coradiant (application performance monitoring), and Numara (IT service management) to bolster its ITOM software portfolio
  • CA Technologies built its monitoring portfolio with acquisitions like Wily Technology, Nimsoft, WatchMouse, and Runscope, while Arcot, Xceedium and IdMLogic helped shape its identity management solutions
  • IBM Tivoli acquired CIMS Lab (IT asset utilisation), Micromuse (event correlation), Collation (discovery), BigFix (patch management), and Intelliden (network automation) to keep its Tivoli division growing every year
  • HP bought companies like Peregrine Systems (IT service management), Trustgenix (identity management), Mercury Interactive (IT service delivery), Bristol Technology (business transaction monitoring), Opsware (data centre automation), and ArcSight (security) to extend the capabilities of its OpenView suite

Reason #2: How legacy software and maintenance fees propped up big four revenues

If there’s one technology that embodies legacy, it is mainframes. The dirty secret of the big four was their addiction to mainframe monitoring and management for revenue generation. If you look at CA’s revenues (excluding services) in 2018, mainframe solutions accounted for 55% of revenues and 64% of segment operating margins. In contrast, enterprise solutions drove only 45% of revenues and just 9% of its segment operating margins. Similarly, for BMC, mainframe tools brought in 43% of overall revenues in 2013 – the last year in which the company reported financial results before selling itself.

Another factor that prevented the big four from embracing innovation, in the form of SaaS delivery models, is maintenance fees. At BMC, maintenance revenues accounted for 52% ($1.12bn), 50% ($1.08bn), and 50% ($1.02bn) of overall revenues in 2013, 2012 and 2011. Micro Focus made 67% ($720.7m) and 66% ($754.5m) of its revenues from maintenance fees in 2017 and 2016.

Reason #3: Bloated, disjointed suites out of touch with market realities

When you analyse any big four solution, you find that suites like HP OpenView are built on legacy tools like Operations Manager i and Network Node Manager i. Even BMC’s recent Cognitive Service Management sits on age-old solutions like Remedy and Discovery. The big four resorted to buying and folding different products into their ITOM portfolios to keep flagship suites like Tivoli and TrueSight relevant. Sales teams then sold the mantra of a single pane of glass for enhanced visibility and control across the entire IT infrastructure.

Most big four suites took several quarters to implement and required expensive third-party professional services. Besides the time and cost overruns, consolidating disparate products into a single framework was a Herculean challenge. Most big four suite implementations failed to deliver the efficiency, simplicity, and scalability originally promised during the sale.

Don’t fear change – embrace it

What’s next for DevOps and IT operations teams? New players have emerged to fill the vacuum created by the exit of the big four. Cutting-edge, cloud-based technologies are taking the place of tool suites. And business consolidation, including the likes of Splunk/VictorOps and VMware/CloudHealth, is presenting new challengers to old technology. The future is agile, modular and flexible. As business blazes a new trail forward, it’s time for technology to transform along with it.