The 2019 Forbes Cloud 100 analysed: Stripe top again amid big data boom and strong exits

Forbes has put out its latest Cloud 100, and while payments provider Stripe remains top of the shop the big five has a refreshing feel to it.

Stripe remains at #1 for the third successive year in the media firm’s list of private cloud companies – as in cloud companies which are privately held – while behind it the top five consists of data warehouse Snowflake, robotic process automation (RPA) provider UiPath, infrastructure automation firm HashiCorp, and data analytics company Datadog.

Part of this turnover is down to a strong year of exits. No fewer than five of last year’s top 10 went public over the past 12 months: Slack, Zoom Video Communications, CrowdStrike, Elastic and Eventbrite. In addition, Qualtrics, which placed at #7 last time out, was acquired by SAP for $8 billion (£6.2bn) in November. Looker, which ranked just outside the top third (#34) in 2018, was acquired by Google for $2.6bn, while Cylance, ranked #18, was bought by BlackBerry.

UiPath is a particularly interesting case. Last year’s list saw the company debut at #14, with Forbes admitting its rise ‘absolutely came out of left field.’ In a statement acknowledging its bronze medal, CEO Daniel Dines said the company ‘could not have achieved this kind of growth and success without teams around the world, investors who are true partners, and customers and partners who have bet on [their] automation technology to transform their businesses.’

As before, the Cloud 100 is put together alongside Bessemer Venture Partners and Salesforce Ventures. The investors do have skin in the game in certain cases; cloud CRM software builder Vlocity, ranked at #24, received a $60 million series C funding round in March to which Bessemer and Salesforce both contributed. The hopefuls were whittled down based on growth, sales, valuation and culture, as well as consultation with 40 CEOs at publicly held cloud companies.

The list was praised by Forbes as being the ‘strongest and most diverse’ group of companies assembled yet. “Though infrastructure and development companies lead the way on the 2019 Cloud 100 list, design tools are making a move and no-office, fully remote setups are gaining traction,” the company wrote.

In terms of what the companies do, the claim holds up. Plenty of big data and backend-facing companies now populate the top end of the rankings, which could be seen as affirmation of the money flowing into the sector. Snowflake secured a mammoth $450 million funding round back in October, while Apache Kafka software provider Confluent, which made the top 10, was valued at $2.5 billion following a $125m series D round in January. Rubrik, Cloudflare and Databricks – which nabbed $250m in series E funding in February – also made the top 20.

While it means fewer fluffy SaaS and B2C cloud apps are taking the honours, the diversity charge struggles when it comes to who runs the Cloud 100. Only four CEOs in the 100 are female, with two – Nicole Eagan and Poppy Gustafsson at Darktrace – at the same company. Melanie Perkins, CEO of Canva, and Rachel Carlson, chief executive of Guild Education, complete the set.

This appears to be a recurring challenge for Forbes; the company’s list of 100 most innovative leaders earlier this month featured a grand total of one woman. Barbara Rentler, CEO of Ross Stores, placed #75 and was not even afforded the luxury of a photo. Forbes has since taken its medicine; editor Randall Lane noted the disparity of women chief executives as a contributor to the ‘flawed’ methodology behind the list.

The top 10, in descending order, are Stripe, Snowflake, UiPath, HashiCorp, Datadog, Procore, Tanium, InVision, Rubrik and Confluent. You can see the full list here.

Postscript: The 2019 Cloud 100 already has its first graduate; Cloudflare has gone for IPO, with the NYSE doing the honours earlier today.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

How is Kubernetes leading the game in enabling NFV for cloud-native?

The impact of cloud-native readiness on applications, which are mostly orchestrated using Kubernetes, can be seen in VMware’s announcements at the recent VMworld 2019. These made it clear to the IT world that the focus of IT infrastructure has shifted from virtualisation to containerisation. Going cloud-native and shifting workloads onto Kubernetes clusters is a key trend across the industry.

The Cloud Native Computing Foundation (CNCF) has pushed its projects aggressively towards enterprise IT infrastructure and telecom service providers, encouraging them to build the core of their data centres using containerised, microservices-based methods.

NFV and telecom use cases have also started shifting to a cloud-native landscape over the last two years. NFV techniques have helped CXOs move to software-defined data centres with virtual network functions (VNFs) as core elements, orchestrated by VNF managers (VNFMs). VNF orchestration can be done using commercial VNFM platforms offered by Nokia, Cisco, Ericsson, Huawei, and NEC, or an open-source platform such as OpenStack Tacker. Now, with the cloud-native movement in the IT domain, VNFs are becoming cloud-native network functions (CNFs).

Cloud-native development of network functions:

  • Makes applications or code portable and reusable – in other words, usable repeatedly, independent of the underlying infrastructure
  • Allows the application to scale up and down where there is demand
  • Can be deployed as microservices, though this is not mandatory
  • Is suitable for elastic and distributed computing environments

Cloud-native development also enables NFV to embrace DevOps, agile techniques, and more importantly allows container orchestration engines like Kubernetes to handle workloads – which also means that more dynamism comes into the picture at the core stack of NFV.

Earlier, CNFs were in an evaluation phase, with several vendors and service providers checking their readiness for NFV use cases. In 2018, I wrote about the benefits of deploying network functions in containers, architected using microservices. I also wrote on why cloud-native VNFs are important to NFV success.

The image below shows how VNFs were managed in the past, how they are now managed alongside CNFs, and how Kubernetes can become a de facto framework for handling network functions and applications packaged as CNFs and VNFs.

Kubernetes in the picture

We can now see how Kubernetes has evolved to handle every workload type in data centres of every size, and it is also becoming the orchestration choice at the edge. Several collaborations on new 5G solutions have focused specifically on handling containers with Kubernetes and legacy virtual machines with OpenStack.

There are several ways Kubernetes can be useful in NFV use cases for handling network functions and applications, starting with hosting the entire cloud-native software stack within its clusters.

If you are a software or solution provider, Kubernetes can help you orchestrate all workload types: VNFs, CNFs, VMs, containers, and functions. With Kubernetes, it has become possible for all of these workloads to co-exist in one architecture. ONAP is a leading service orchestrator and NFV MANO platform for handling services deployed in NFV, and a Kubernetes plugin developed specifically for ONAP makes it possible to orchestrate services and workloads delivered across multiple sites.

ONAP has challenges in terms of installation and maintenance, and concerns have also been raised about its heavy consumption of resources such as storage and memory. To work alongside Kubernetes, ONAP released a lightweight version, called ONAP4K8S, which should fit many NFV architectures. Requirements and package contents are published on its profile page.

There are cases where it is not possible to get away from virtual machines entirely: some existing functions must remain in VMs and cannot be containerised. For such cases, the Kubernetes community’s KubeVirt and Mirantis’s Virtlet frameworks can be integrated to manage virtual machines dynamically alongside containers. Kubernetes is also becoming a choice for enabling orchestration at the edge of the network: a Kubernetes-based control plane consumes few resources, making it suitable for edge nodes with even a single server.

Cloud-native NFV stack

The Akraino edge stack hosts a blueprint, the Integrated Cloud Native (ICN) NFV Stack, under which the work of making the NFV core cloud-native is in progress. The current progress of integrating open-source cloud-native projects into the NFV stack is shown below:

Srinivasa Rao Addepalli (senior principal engineer and chief architect at Intel) and Ravi Chunduru (associate fellow, Verizon) will present a session at the upcoming Open Networking Summit Europe 2019 on how Kubernetes can be used at the core of NFV, and on the efforts Linux Foundation communities (ONAP, OPNFV, CNCF, LFE) are making to turn the NFV core cloud-native.

Editor's note: Download Calsoft’s eBook – A Deep-Dive On Kubernetes For Edge – which focuses on current scenarios of adoption of Kubernetes for edge use cases, latest Kubernetes and edge case studies, deployment approaches, commercial solutions and efforts by open communities.

Image sources: https://events.linuxfoundation.org/wp-content/uploads/2018/07/ONS2019_Cloud_Native_NFV.pdf

The post How is Kubernetes Leading the Game in Enabling NFV for Cloud Native? appeared first on Calsoft Inc. Blog.


Lacework secures $42 million in funding round to forge ahead with ‘Snowflake for security’ plan

Lacework, a provider of end-to-end cloud security automation across the biggest public clouds, has raised $42 million (£34m) in a financing round aimed at building momentum and educating security teams.

The funding, which was put together by Sutter Hill Ventures and Liberty Ventures, will be aimed at ‘supporting product innovation and go-to-market activities to help educate security, compliance and DevOps teams that want a way to embed security continuously through build-time to run-time operations’, as the company puts it.

Lacework’s platform covers both public and private cloud, aiming to automate overall cloud security and compliance while providing comprehensive risk assessment across cloud workloads and containers. The company promises ‘unprecedented visibility, automating intrusion detection, delivering one-click investigation, and simplifying cloud compliance.’

The company appears to be in solid hands when it comes to its funders, with Sutter Hill Ventures having already bet this year on Vlocity – leading a $60 million series C round in March – as well as participating in a series D round for network monitoring and intelligence firm ThousandEyes. Yet Lacework may be the horse to bet on from this stable. Sutter Hull managing director Stefan Dyckerhoff was previously CEO of Lacework, and combined the leading roles at both companies before passing on the chief exec’s role in June.

The new CEO is Dan Hubbard, previously chief product officer, while Andy Byron, previously of Cybereason and Fuze, is joining as president to lead Lacework sales and marketing teams.

“Our new funding, new perspectives on the board of directors, and with Andy joining, are all going to be critical for how we build on our solid foundation as a cloud and container security leader,” said Hubbard. “Lacework and our growing list of customers agree that there is a need for a new generation of security companies that are purpose-fit to secure today’s modern infrastructure.”

Lacework has gained two other board members, with Mike Speiser, partner at Sutter Hill, and John McMahon, currently on the board of data warehouser Snowflake, joining. Speiser argued that the goal was for Lacework to become ‘the Snowflake for security'. “It’s clear that DevOps and security teams want a single platform for their security and compliance needs, and only Lacework provides that,” said Speiser.

It's worth noting that Snowflake trousered a whopping $450 million in its last funding round, back in November – so perhaps there is a little while to go before Lacework gets to that point.


Microsoft confirms a Teams client for Linux is on its way


Keumars Afifi-Sabet

10 Sep, 2019

Microsoft is developing an iteration of its collaboration tool, Teams, for Linux systems after high demand from users, but hasn’t provided a release date.

The company confirmed on a user feedback forum last week that it’s actively working on a Teams client, and that more information would be divulged soon. Users have previously been forced to use an in-browser version of Teams on Linux systems, which suffers from limitations in functionality and user experience (UX).

The popular collaboration tool is currently available on Windows, macOS, iOS and Android, as well as within a web browser, with Linux the only missing piece of the puzzle.

The biggest issues with the web iteration of Teams include the inability to video conference or share desktops and applications effectively, as well as difficulty organising presentations.

Linux users have been demanding a client for Teams for years, with the original post that Microsoft replied to on UserVoice, for example, dating back to November 2016.

Notably, Teams’ biggest rival in the collaboration space is Slack, which does have a functional Linux client that launched last year. The Ubuntu Snap tool was used to package the app in a sandboxed container so it could run in a Linux environment with secure isolation.

In confirming a Linux client for Teams, Microsoft is encroaching on one of Slack’s most significant differentiating factors from the industry giant.

It’s particularly significant given that Microsoft announced in July that it has more users than its key competitor, boasting more than 13 million daily active users versus Slack’s latest reported figure of 10 million.

This can partially be attributed to the fact that Teams is packaged into Microsoft’s Office 365 ecosystem of productivity apps by default. But it’s also been considered fairly staggering given that Teams was lagging behind its rival as recently as April.

The rivalry between the two platforms has indeed been heating up during 2019, with Microsoft banning its employees from using Slack in June, declaring some versions of the workplace service insecure.

Oracle earnings flat as CEO steps down citing health reasons


Keumars Afifi-Sabet

12 Sep, 2019

Oracle’s chief executive Mark Hurd is stepping down from the company on a temporary basis for health-related reasons, as the software giant released its quarterly earnings earlier than expected.

The co-CEO, who shares his role with fellow chief executive Safra Catz, requested a leave of absence to address health-related issues, the company announced yesterday, following nine years at the company.

The news was announced in conjunction with the latest financial results, and has been disclosed just a few days before the company’s annual OpenWorld conference is set to kick off in San Francisco next week.

“To all my friends and colleagues at Oracle, though we all worked hard together to close the first quarter, I’ve decided that I need to spend time focused on my health. At my request, the Board of Directors has granted me a medical leave of absence,” said Hurd.

“As you all know, Larry, Safra and I have worked together as a strong team, and I have great confidence that they and the entire executive management team will do a terrific job executing the exciting plans we will showcase at the upcoming OpenWorld.”

Hurd’s announcement overshadows the company’s roughly flat financial results for the first quarter of the 2019/20 financial year. Oracle declared revenues of $9.2 billion last quarter; the same figure year-on-year versus the first quarter for 2018/19.

Cloud services and license support revenue, meanwhile, climbed slightly to $6.8 billion versus $6.6 billion year-on-year, while cloud license and on-premise license revenue slumped to $812 million from $867 million.

“Autonomy is the defining attribute of a Generation 2 Cloud,” said Oracle’s CTO Larry Ellison. “Next week at our OpenWorld conference, we will announce more Autonomous Cloud Services to complement the Oracle Autonomous Database.”

As for the company’s leadership, Catz will continue on as sole CEO, while Ellison, who co-founded the company, will handle Hurd’s responsibilities. The company hasn’t announced how long Hurd will be away from his post.

Slack launches Euro Data Residency


Bobby Hellard

12 Sep, 2019

Slack is rolling out the ability for customers to keep their data in Europe with Data Residency, the company has announced.

The first data region will be in Frankfurt, Germany, before the capability rapidly expands across Europe.

Until now, Slack customers had their data stored within the US, but with its rapid adoption around the world, the company has recognised this is not always suitable for data regulations and companies outside the States.

So it’s introducing Data Residency, which is currently in beta, as Ilan Frank, head of enterprise product at Slack, told IT Pro.

«It went live on Wednesday, our first customer is already live in beta right now,» he said. «This will be the case for the next three months, but in December we will be making this generally available.

«It should be completely invisible to the end-user and that is what we hope to gain from the beta. The biggest question with something like data residency is performance, so we are optimising that and expect to see no visible latency change for the end-user.»

The rise of cloud computing has transformed how companies use and store data. Businesses around the world are creating their own internal policies for where data can be stored, while governments and third-party regulators are enforcing data residency requirements.

For Slack, it’s more about its own growth and user experience, particularly as the company has become one of the go-to communications platforms for startups.

«This goes along with our increased popularity in large enterprise companies,» added Frank. «We are seeing a lot of demand for Slack in large and regulated companies. And with that, the demand for finely granulated controls, an increased focus on security and enterprise

The data covered by this initiative is user-generated – messages, posts, files and searches – and will be stored at rest within the chosen data region, encrypted both in transit and at rest.

Five key tips to prioritise the security of DevOps tools and processes

The demands of today’s tech-savvy customer have placed huge emphasis on software development and user experience as a barometer for success. DevOps adoption has grown rapidly as a result, with many businesses looking at routes to either introduce or accelerate DevOps workflows within their IT organisations.

‘Tool chains’ are an integral part of any DevOps programme, helping automate the delivery, development, and management of software applications and deliver better products to both customers and business units, more efficiently and effectively. The collaborative nature of these development and production environments makes them difficult to protect, however, particularly the privileged accounts and secrets associated with them.

Navigating this risk and securing key tools and infrastructure is therefore critical if organisations are to achieve successful DevOps outcomes and progress on their digital transformation journeys. To do so, there are five key measures they must consider to prioritise protection of DevOps tools and processes:

The crucial importance of selection and configuration policies

Any security conversation should always begin with a full inventory of the DevOps tools being used by dev teams. After all, it’s impossible to defend environments you don’t know exist. This process can be cumbersome, but it is especially important for open source tools.

Once these tools are accounted for, security teams should undertake an evaluation to identify any existing security deficiencies and address them promptly. This could involve making sure tools are not being used in an unsecure default configuration for example, and that they are kept up to date.

As part of this evaluation process, security teams should also find a way to get a seat at the table. That means collaborating with the group within the business that is responsible for tool selection and configuration, or working closely with IT procurement to select the best tools for the organisation, so that enterprise security standards are established at the outset.

Keep your DevOps tools on lockdown

Attackers only need to exploit one vulnerability to carry out their mission, so it’s important to take a holistic approach to addressing security requirements and potential vulnerabilities. This starts with securing the secrets and credentials associated with DevOps and cloud management tools in an encrypted vault protected with multi-factor authentication (MFA).

Once complete, access privileges should be reviewed so that users are only granted “just in time” access. In other words, provide high-level access only when it’s needed to perform certain tasks, and ensure that this temporary usage is closely monitored.
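To make the “just in time” idea concrete, here is a minimal illustrative sketch in Python – not any specific vendor’s implementation – of access grants that carry an expiry time and are revoked automatically, with every grant and expiry written to an audit trail. The class and method names are hypothetical.

```python
import time

class JITAccessManager:
    """Toy model of just-in-time privileged access: grants expire
    automatically instead of persisting indefinitely."""

    def __init__(self):
        self._grants = {}  # user -> (privilege, expiry timestamp)
        self.audit_log = []

    def grant(self, user, privilege, ttl_seconds):
        # Record a time-boxed grant and log it for later review.
        expiry = time.time() + ttl_seconds
        self._grants[user] = (privilege, expiry)
        self.audit_log.append(("GRANT", user, privilege))

    def is_allowed(self, user, privilege):
        entry = self._grants.get(user)
        if entry is None:
            return False
        granted_priv, expiry = entry
        if time.time() > expiry:
            # Expired grants are revoked on the first check after expiry.
            del self._grants[user]
            self.audit_log.append(("EXPIRE", user, granted_priv))
            return False
        return granted_priv == privilege

mgr = JITAccessManager()
mgr.grant("alice", "deploy:prod", ttl_seconds=900)  # 15-minute window
print(mgr.is_allowed("alice", "deploy:prod"))  # True while the grant is live
print(mgr.is_allowed("bob", "deploy:prod"))    # False: no grant exists
```

A production system would of course back this with the encrypted vault and MFA described above; the point of the sketch is that high-level access is a temporary, logged event rather than a standing entitlement.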

Access to high-risk commands within DevOps tools should also be limited. For instance, Docker users often run a container with the --privileged flag, which gives the container direct access to host elements. Where possible, security teams should mandate that users are not able to run containers with this flag; if it is a “must”, they should severely limit user access and monitor and record all activity involving the --privileged flag.
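One way to monitor for this is to audit container configurations. The sketch below – an illustration, not a complete scanner – checks the HostConfig section of a parsed `docker inspect` entry (real fields of that JSON output) for privileged mode and a hypothetical shortlist of dangerous added capabilities:

```python
def audit_container(inspect_data):
    """Flag risky settings in one parsed `docker inspect` entry.

    `inspect_data` is a dict shaped like one element of the JSON list
    that `docker inspect <container>` prints; only the fields checked
    here are required."""
    findings = []
    host_config = inspect_data.get("HostConfig", {})
    if host_config.get("Privileged"):
        findings.append("container runs with --privileged")
    # Example high-risk capabilities; a real policy would be broader.
    for cap in host_config.get("CapAdd") or []:
        if cap in {"SYS_ADMIN", "NET_ADMIN", "ALL"}:
            findings.append(f"dangerous added capability: {cap}")
    return findings

risky = {"HostConfig": {"Privileged": True, "CapAdd": ["SYS_ADMIN"]}}
safe = {"HostConfig": {"Privileged": False, "CapAdd": None}}
print(audit_container(risky))
print(audit_container(safe))  # [] – nothing to report
```

Running a check like this periodically against all hosts gives security teams a record of where the flag is in use, rather than relying on developers to self-report.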

Once you have addressed access, it’s also advisable to adopt other cyber hygiene best practices, such as setting up access controls that segregate DevOps pipelines. This prevents attackers from gaining access to one and then moving to another, ensuring that credentials and secrets are not shared between DevOps tool accounts and Windows sysadmin accounts. It also removes all unnecessary accounts with access to DevOps tools, including those of developers who may have changed roles or no longer require access to these tools.

Manage the proliferation of privilege

Enforcing the principle of least privilege should be a requisite for every company. Doing so limits each user’s level of access to DevOps tools to the minimum necessary for their role. However, it will be less effective unless security teams also configure DevOps tools to require dual authorisation for certain critical functions – for example, requiring a second person to review and approve any change to a Puppet manifest file before it goes live.

Additionally, teams should ensure separation of duties for build automation tools such as Jenkins, which often retain permission to perform all duties without restriction, from building and testing to packaging. In Jenkins’s case this problem can be overcome by implementing multiple Jenkins nodes, each dedicated to a single function (build, test, or package) for each application.

This ensures each node has a unique identity and a limited set of privileges, which minimises the impact of a potential compromise.

Keep your secrets safe in code repositories

Code repositories such as GitHub have become infamous in recent years due to IT teams erroneously leaving code in publicly accessible locations. Security teams should therefore develop risk-based policies for developers that secure the use of such repositories.

It’s worth noting however that beyond credentials, code may contain details about the organisation’s internal network that could be useful to attackers. Ideally firms should therefore use an on-premises rather than a cloud-based code repository, if it’s possible to do so without adversely affecting workflow.

If this approach is applied, then the next step is to scan the environment to make sure that any on-premises code repositories are inaccessible from outside the network. If cloud-based repositories are used however, then security teams should ensure they are configured to be private.

Above all, every organisation should make it their policy that code is automatically scanned to ensure it does not contain secrets before it can be checked in to any repository.
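A pre-check-in secret scan can be as simple as pattern matching. The following is a deliberately minimal sketch – the patterns shown are illustrative examples only, and dedicated tools ship far more comprehensive rule sets – of a function that a pre-commit hook could call to block a check-in:

```python
import re

# Illustrative patterns only; a real scanner covers many more secret types.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_for_secrets(text):
    """Return a (rule name, line number) pair for every suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

snippet = 'db_user = "app"\npassword = "hunter2"\n'
print(scan_for_secrets(snippet))  # [('hardcoded password', 2)]
```

Wired into the repository’s pre-commit or server-side hooks, a non-empty result would reject the check-in, enforcing the policy automatically rather than relying on reviewer vigilance.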

Invest in the protection of your infrastructure

Cyber attackers seek the path of least resistance, and for many organisations, this remains their employees. Well-crafted phishing emails can often do the trick, so IT teams should make sure that all workstations and servers undergo regular patching, vulnerability scanning and security monitoring.

Away from hardware, it’s also important to monitor your cloud infrastructure for signs of unusual credential usage or configuration changes (such as making private data stores public). This means ensuring VM and container images used in development and production environments come from a sanctioned source and are kept up to date.

To ensure security remains “baked in” to countless rounds of automatic rebuilds, security teams should also work with their DevOps counterparts to automate the configuration of VMs and containers so that, when a new machine or container is spun up, it is automatically configured securely and given appropriate controls – without requiring human involvement.

The benefits of DevOps are plain and clear for all to see – hence the rapid adoption that we have witnessed in recent years. Adopting a DevSecOps approach, using the measures outlined above, is critical to ensuring application and infrastructure security from the outset of any software development activity.


Google Cloud launches Cloud Dataproc on Kubernetes in alpha

Google Cloud has announced the launch of Cloud Dataproc on Kubernetes, adding another string to the bow for the product which offers a managed cloud service for running Apache Spark and Hadoop clusters.

Google – which originally designed Kubernetes before handing it to the Cloud Native Computing Foundation (CNCF) – is promising ‘enterprise-grade support, management, and security to Apache Spark jobs running on Google Kubernetes Engine clusters’, in the words of a blog post confirming the launch.

Christopher Crosbie and James Malone, Google Cloud product managers, noted the need for Cloud Dataproc to utilise Kubernetes going forward. “This is the first step in a larger journey to a container-first world,” Crosbie and Malone wrote. “While Apache Spark is the first open source processing engine we will bring to Cloud Dataproc on Kubernetes, it won’t be the last.

“Kubernetes has flipped the big data and machine learning open source software world on its head, since it gives data scientists and data engineers a way to unify resource management, isolate jobs, and build resilient infrastructures across any environment,” they added. “This alpha announcement of bringing enterprise-grade support, management, and security to Apache Spark jobs on Kubernetes is the first of many as we aim to simplify infrastructure complexities for data scientists and data engineers around the world.”

To say Kubernetes is not a major priority for both vendors and customers would be something of a falsification. The recent VMworld jamboree in San Francisco two weeks ago saw the virtualisation giant make a major push around the technology, the headline launch being VMware Tanzu, a product portfolio for enterprise-class building, running and management of software on Kubernetes.

As this publication put it when KubeCon and CloudNativeCon hit Barcelona back in May, it was a ‘milestone’ for the industry. Brian Grant and Jaice Singer DuMars certainly thought so; the Google Cloud pair’s blog post at the time agreed Kubernetes had ‘become core to the creation and operation of modern software, and thereby a key part of the global economy.’

The goal now is to get the most out of it, for enterprise decision makers and developers alike. Writing for CloudTech last month, Ali Golshan, co-founder and CTO at StackRox, noted the acceleration in user deployments. “Despite the fact that container security is a significant hurdle, containerisation is not slowing down,” Golshan wrote. “The advantages of leveraging containers and Kubernetes – allowing engineers and DevOps teams to move fast, deploy software efficiently, and operate at unprecedented scale – are clearly overcoming the anxiety of security concerns.”

Golshan also noted, through StackRox research, that Google still ranked third among the hyperscalers for container deployments in the public cloud but had gained significantly in the past six months.

“Enterprises are increasingly looking for products and services that support data processing across multiple locations and platforms,” said Matt Aslett, research vice president at 451 Research. “The launch of Cloud Dataproc on Kubernetes is significant in that it provides customers with a single control plane for deploying and managing Apache Spark jobs on Google Kubernetes Engine in both public cloud and on-premises environments.”


Microsoft expands European Azure presence with Germany and Switzerland launches

Microsoft has announced the launch of new Azure availability in Germany and Switzerland, citing increased data residency and security concerns as key to European expansion.

Azure is now available from cloud data centre regions located in Zurich and Geneva, for the Switzerland release announced at the end of last month, while Germany’s newest regions are in the North and West Central zones, in Berlin and Frankfurt respectively.

The communications announcing the Germany and Switzerland releases, from Azure corporate vice president Tom Keane, were almost identical, save swapping out a customer story here and stock photo there. Among Microsoft’s German customers are Deutsche Bank, Deutsche Telekom and SAP, while Swiss companies utilising Azure include Swisscom, insurance firm Swiss Re, and wealth manager UBS Group.

Customers in Germany are promised compliance specific to the country, including C5 (Cloud Computing Compliance Controls Catalogue) attestation. Alongside Office 365, Dynamics 365 and Power Platform, customers will be able to benefit from containers, Internet of Things (IoT), and artificial intelligence (AI) solutions, Microsoft added.

“These investments help us deliver on our continued commitment to serve our customers, reach new ones, and elevate their businesses through the transformative capabilities of the Microsoft Azure cloud platform,” wrote Keane.

Microsoft is not the first hyperscaler to hoist its flag atop Switzerland, with Google opening its Zurich data centre region back in March. Both Google and AWS have sites in Frankfurt, with AWS first to launch there back in 2014.

This is not the end of the European expansion for Microsoft, with two new regions in Norway planned. The sites, in Stavanger and Oslo, are set to go live later this year.


What can 80 M&As tell us about the state of IT operations management software?

IT operations management (ITOM) software helps enterprises manage the health, availability, and performance of modern IT environments. Analyst firm Gartner expects the ITOM software market to grow to $37 billion in annual revenues by 2023, with legacy on-prem tools giving way to powerful SaaS solutions for hybrid performance monitoring and management.

August this year saw five significant ITOM tool exits, with Splunk acquiring SignalFx for a cool $1.05 billion, Resolve Systems buying out FixStream, Virtual Instruments purchasing Metricly, VMware splurging on Veriflow, and Park Place Technologies acquiring Entuity. In related news, application performance monitoring provider Dynatrace went public at a $6.7 billion valuation, while cloud monitoring tool Datadog recently filed for its $100 million IPO.

To better understand ITOM software acquisition patterns, we assembled a dataset of 80+ acquisitions and buyouts of ITOM tool vendors since January 2015. This dataset lets IT buyers answer the following questions:

  • What industry trends are responsible for a new wave of acquisitions?
  • Which ITOM categories have seen the most acquisitions and buyouts?
  • Which technology leaders have acquired innovative startups in the last few years?
  • What role has private equity played in fueling market innovation and consolidation?
  • What are the strategic reasons behind an incumbent assembling an acquisition portfolio?

Here are five things we learned from 80+ ITOM software acquisitions over the last five years:

Industry trends fuel new category creation

Research firm IDC expects public cloud spending to grow from $229 billion in 2019 to $500 billion in 2023. The runaway adoption of public cloud infrastructure has unleashed massive disruption in the ITOM software market. Traditional approaches to performance monitoring and cost optimisation are no longer relevant in a world of on-demand, ephemeral, and elastic cloud services. Enterprise cloud consumption has led to several technology acquisitions in the following categories:

  • Cloud monitoring: Cloud monitoring tools deliver visibility and control of business-critical services built on multi-cloud and cloud native architectures.

    IT operations and DevOps teams have heavily invested in cloud monitoring point tools, which explains the purchase of nine cloud monitoring startups (SignalFx, Metricly, Outlyer, Server Density, Unigma, Wavefront, Opsmatic, Boundary, and Librato) by industry incumbents like Splunk, VMware, New Relic, BMC Software, and SolarWinds.
     

  • Cloud management platforms: Cloud management platforms (CMPs) help enterprises migrate on-prem workloads to cloud environments with capabilities for discovery, provisioning, orchestration, and workload balancing.

    Technology vendors like Apptio, Flexera, Nutanix, Microsoft, Cisco, ServiceNow, and IBM have made eight CMP acquisitions across startups like FittedCloud, RightScale, Botmetric, Cloudyn, CliQr, ITapp, Gravitant, and FogPanel.
     

  • Cloud cost optimisation: Cloud cost optimisation tools let business and IT teams manage public cloud consumption by identifying underutilised and idle cloud instances and delivering real-time recommendations for cloud workload placement. Given the pressing need to avoid cloud sticker shock, industry leaders purchased six cloud cost optimisation tools (Cloudability, ParkMyCloud, StratCloud, CloudHealth Technologies, Cmpute.io, and Cloud Cruiser).
     
  • Network performance monitoring: How do enterprises deliver compelling customer experiences across on-prem, private cloud, and public cloud networks? Network performance monitoring and diagnostics tools offer real-time insight into network traffic utilisation and help troubleshoot problems with multi-layer visibility.

    Industry incumbents and investors capitalised on the demand for network monitoring by snapping up eight different tool providers (Entuity, Veriflow, Corvil, Netfort, Savvius, Performance Vision, Gigamon, and Danaher Communications).
     

  • AIOps: The adoption of hybrid and cloud native architectures has led to endless alert storms, where it is nearly impossible for human operators to extract the signal from the noise. Artificial intelligence for IT operations (AIOps) tools apply machine learning and data science techniques to the age-old problem of IT event correlation and analysis.

    Larger incumbents have swallowed seven AIOps startups (FixStream, SignifAI, Savision, Evanios, Perspica, Event Enrichment HQ, and Metafor), underlining the need for AI/ML approaches to isolate and pinpoint incident root cause(s).
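None of these vendors publish their correlation algorithms, but the core AIOps problem above — collapsing an alert storm into a handful of actionable incidents — can be illustrated with a toy sketch. Everything here (the alert format, the five-minute window, the character-overlap similarity threshold) is a hypothetical simplification for illustration, not any vendor's actual method:

```python
from difflib import SequenceMatcher


def correlate(alerts, window=300, similarity=0.6):
    """Group a stream of (timestamp, message) alerts into incidents.

    Two alerts join the same incident when they arrive within `window`
    seconds of each other and their messages are at least `similarity`
    alike (character-overlap ratio). Real AIOps products use learned
    models and topology data; this is a deliberately naive stand-in.
    """
    incidents = []  # each incident: {"last_seen": ts, "messages": [...]}
    for ts, msg in sorted(alerts):
        for inc in incidents:
            close_in_time = ts - inc["last_seen"] <= window
            alike = SequenceMatcher(None, msg, inc["messages"][0]).ratio() >= similarity
            if close_in_time and alike:
                inc["last_seen"] = ts
                inc["messages"].append(msg)
                break
        else:  # no existing incident matched: open a new one
            incidents.append({"last_seen": ts, "messages": [msg]})
    return incidents


storm = [
    (0, "CPU high on web-01"),
    (30, "CPU high on web-02"),
    (60, "CPU high on web-03"),
    (90, "disk full on db-01"),
]
print(len(correlate(storm)))  # 4 raw alerts collapse into 2 incidents
```

Even this crude grouping shows why the category attracts buyers: a human operator triages two incidents far faster than four (or four thousand) raw alerts.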

Growth by acquisition

Since 2015, serial acquirers like SolarWinds, Cisco, ServiceNow, Splunk, Datadog, New Relic, Flexera, VMware, and Nutanix have acquired thirty-two diverse startups across performance monitoring, hybrid discovery, IT service management, cloud management platforms, cloud cost optimisation, and AIOps.

SolarWinds leads the pack with seven deals (Samanage, Loggly, Scout, TraceView, LOGICnow, Papertrail, and Librato) followed by Splunk (SignalFx, VictorOps, Rocana, and Metafor), Cisco (Cmpute.io, Perspica, AppDynamics, and CliQr) and ServiceNow (FriendlyData, Parlo, DxContinuum, and ITapp) with four acquisitions each.

ITOM software leaders have dedicated corporate strategy, business development, and investment teams that are constantly scouting for the next big thing. Acquiring the right startup can ensure competitive parity, market entry, or talent infusion, which is critical for technology incumbents with stale and aging product portfolios.    

Private equity continues to reshape the ITOM software landscape

Private equity (PE) firms like Bain Capital, Insight Partners, KKR, Thoma Bravo, and Vista Equity Partners have had an outsized influence on the ITOM tools market. Companies like Apptio, BMC Software, Cherwell, Connectwise, Continuum Managed Services, Dynatrace, Flexera, Ivanti, Kaseya, LogicMonitor, Optanix, Resolve Systems, Riverbed, and SolarWinds have all benefited from strategic PE investments.

In the managed services software segment, Thoma Bravo alone controls Connectwise, Continuum, and SolarWinds MSP, while Vista Equity Partners engineered a merger between two portfolio companies, Datto and Autotask, to create a new managed services leader. Expect PE firms to invest in, acquire, and divest portfolio companies, creating new ITOM software winners and losers in the process.

No sign of mega deals slowing down

While Splunk’s billion-dollar deal for SignalFx was astounding, it is hardly an outlier among blockbuster acquisitions and buyouts in the ITOM software market. In the last five years, Broadcom acquired CA Technologies for $18.9 billion, Thoma Bravo purchased Connectwise for $1.5 billion, KKR bought out BMC Software for $8.5 billion, Elliott Management acquired Gigamon for $1.6 billion, Cisco spent $3.7 billion on AppDynamics, Micro Focus engineered a reverse merger with HPE Software for $8.8 billion, NetScout purchased Danaher Communications for $2.3 billion, and Thoma Bravo took Riverbed private for $3.5 billion.

These eight deals alone generated $47+ billion in deal value, demonstrating sustained momentum and continued investment in ITOM software firms from leading technology vendors and VC/PE firms.

The elusive quest for a unified ITOM platform

Platform thinking is the motivation behind several recent ITOM acquisitions (Splunk’s takeover of Metafor and VictorOps for modern incident management, or SolarWinds’ TraceView and Librato acquisitions for real-time observability).

The big four ITOM vendors (BMC, CA, IBM, and HP) famously used acquisitions to build their ITOM minisuites (chasing the ever-popular “single pane of glass”). Unfortunately, inorganic product strategies never resulted in a unified platform that could combine disparate performance and capacity insights in a single place.

It is an open question whether current industry leaders like ServiceNow, Splunk, and SolarWinds have learned any lessons from the 'big four' acquisition debacles. Every technology acquisition demands significant engineering resources and product roadmap work to integrate successfully with an incumbent’s platform. Before writing a big check to an industry leader that touts its recent acquisitions as proof of its innovation DNA, enterprise IT buyers should verify that the focus and commitment to making those acquisitions work is still there.

The bottom line?

Next-generation technology startups are constantly redefining customer expectations with innovative solutions for modern digital operations management. Industry incumbents will continue to use acquisitions as a means to acquire modern technologies, battle-tested talent, and market credibility.

IT buyers should partner with technology startups for emerging use cases as well as evaluate how incumbent vendors are modernising their technology portfolios and truly integrating the acquired technology to achieve the long-sought-after vision of a single pane of glass. Otherwise, they may instead end up with the more common scenario of a single glass of pain.
