Cisco buys London firm IMImobile for £550m


Bobby Hellard

7 Dec, 2020

Cisco has acquired London-based cloud communications provider IMImobile for approximately $730 million (£550m). 

The acquisition, which is the largest Cisco deal in the UK for almost three years, is expected to close in the first quarter of 2021. 

The tech giant is looking to push further into automated customer outreach, with IMImobile’s software being brought in to bolster its existing customer relationship management (CRM) offerings.

IMImobile sells ‘customer interactions management’ software that automates a constant connection between businesses and clients through enhanced social media, messaging and audio channels. The firm is based in London, with offices in the US, Canada, India, South Africa and the UAE. 

With IMImobile on board, Cisco aims to expand its customer services with an end-to-end system that drives faster and smarter interactions and orchestrates the lifecycle journey of its customers.

Cisco’s Webex Contact Center will also be able to make use of IMImobile’s artificial intelligence technology for customer journeys. 

“We are excited to join Cisco and become part of one of the world’s leading technology companies as they seek to enable great customer experiences,” said Jay Patel, IMImobile CEO.

“We believe there will be a world of dynamic, always-on connections between global businesses and their customers, and the combination of our respective technologies will enable us to make every interaction matter more for our clients.”

When the deal completes in the new year, the IMImobile team will join Cisco’s contact centre business unit, led by Cisco VP and GM Omar Tawakol.

“We look forward to working with IMImobile to help create a comprehensive CXaaS solution for the market – one that gives businesses a platform to provide delightful experiences across the entire customer lifecycle journey,” said Jeetu Patel, senior vice president and GM of Cisco’s security and applications business.

How can the cloud industry adapt to a post-COVID world?


Keumars Afifi-Sabet

3 Dec, 2020

One of the unexpected silver linings of the global coronavirus crisis has been the rapid growth the cloud industry has enjoyed. The shift to remote working during the various lockdowns that have taken place over the course of 2020 was largely, if not entirely, facilitated by cloud services. This has meant that while other sectors have struggled and there has been an overall economic downturn, cloud companies have performed relatively well financially.

Although they wouldn’t want to characterise the past few months as profiting from the pandemic, the likes of Zoom and Microsoft Teams have surged in usage and revenue, with the latter surpassing 44 million users as early as March. This period has also accelerated many digital transformation projects, with engineers proving more than capable of carrying out projects at pace and scale, including in the traditionally lethargic public sector. This success, however, has been driven entirely by the effects of the pandemic, forcing the industry to question whether, and how, it can adapt once its services are no longer as highly sought after.

Shifting sands

While we all rejoiced at the news that a potential COVID-19 vaccine may be available for distribution before the end of the year, shares in a handful of companies dropped sharply in response, with Zoom’s valuation alone falling by at least 15%.

Whether things go back to the way they were, or cloud companies continue to play a more pivotal role than ever, is yet to be determined. For independent cloud consultant Danielle Royston, the goal of going ‘back to normality’ in 2021 is misplaced. “There’s no point wasting time and energy trying to return to the halcyon days of pre-COVID,” she says. “Let’s focus instead on some of the positive ‘disruptions’ we’ve seen this year. In all the companies I’ve been at, I’ve promoted – and in some cases fully converted to – remote working. I saw this as the inevitable direction that work and society was going, as the cloud computing tools were already there. And it makes sense: A better quality of life for employees, ease of collaboration, cutting the costs of business travel.”

This is a trend that Tom Wrenn, cloud investment expert and partner at private equity firm ECI Partners, predicts will continue well into next year, telling Cloud Pro that COVID-19 forced many companies into rapidly adopting cloud-based operations. These, driven by government-enforced lockdowns, allowed them to continue operating remotely. “Now, having done a basic shift to cloud-based systems,” he adds, “2021 will be the year of full cloud adoption, with businesses starting to optimise all its benefits; for example, data analytics and AI. If rapid investment was needed in 2020, next year businesses will want to see a return on that investment and will expect to see more from their cloud computing providers.”

Remoting-in

Although the recent transition to remote working is a trend sparked by COVID-19, the consensus is that it’s the beginning of a wider cultural shift. Former IBM boss Ginni Rometty is among the latest to suggest as much, claiming mass remote working will continue in some form as part of a broader hybrid model in future. This may involve companies keeping some physical presence while establishing the infrastructure and equipment to allow workers to work remotely as and when desired.

Cisco CTO for UK and Ireland, Chintan Patel, agrees, telling Cloud Pro that remote working gained widespread acceptance during COVID-19, even in organisations where it was unthinkable before. This means cloud and software as a service (SaaS) tools will continue to remain a crucial part of many setups, even though businesses will mostly return to a form of ‘hybrid’ model. “For remote working, cloud plays a central role; think secure cloud-based collaboration, accessing cloud-based business applications, and extending the security perimeter to thousands of devices,” he explains. “It’s important to note, though, that cloud-based consumption models are not limited to remote working only. As to those returning to the offices, we see technology can help make the workplace more secure and efficient. As and when companies prepare for a return to office, they also need to optimise their space, address worker concerns about sanitation and social distancing and plan how to communicate policies and information clearly.”

Technology will play a major part in instigating the changes needed in future, with a key role to play for many of the firms that have enjoyed success during the pandemic. While demand for software such as video conferencing platforms may not be as sky-high as it was at the beginning of the pandemic, Wrenn argues the next big step is how cloud companies can eat further into the market share enjoyed by the traditional telephone industry. “More and more businesses are using Microsoft Teams or Zoom to interact,” he explains, “when previously they would have used conference lines or even called a person directly due to it being more convenient. Cloud providers need to think about how they can make the most of this opportunity as the way in which people interact changes.”

To infinity and beyond

To some extent, we should all consider ourselves lucky the global pandemic happened when it did, given that cloud computing has only recently become as advanced as it is now. Thus, rather than ‘profiting from the pandemic’, this period has been the making of the industry. After all, “cloud storage, processing, and compute facilities are already set up, and ready to expand easily and automatically, as and when enterprises need,” according to Royston, who claims this wouldn’t have been the case ten to 15 years ago. “It would’ve been an epic failure and caused even more disruption and long-term damage to global economies. This year, white-collar workers being able to quickly adapt to working from home in their millions is part of what’s helped many sectors stay afloat. And it’s because of the investment and ongoing work of hyperscalers over the past few years that’s meant businesses can support workers in doing this.”

Connectivity, too, will continue to grow as organisations’ reliance on SaaS tools increases, Patel adds, with firms expecting more from these companies beyond provision. With cloud infrastructures becoming increasingly diverse, especially as applications add more layers of complexity, businesses will be looking to strengthen their infrastructure. This will be achieved by gaining deeper visibility across their IT estates, ensuring workloads have continuous access to required resources and running systems that connect and protect at scale – from on-prem to hybrid cloud configurations. This is in addition to using technologies such as machine learning to give customers tools to manage their ever-growing data lakes. This is where providers can step in to guide customers on their migration journeys.

As such, the greatest challenge facing cloud providers, in light of the above, will largely be customer retention, according to Tom Wrenn. “If we take online meeting services as an example, historically businesses would have had to invest in a service, such as [Cisco] WebEx, which is often costly and comes with a lot of equipment,” he says. “Today, however, businesses are using Zoom and Teams for this and can just turn services on and off with little upfront investment. This means that customers aren’t locked into providers in a way they once were. As a result, cloud computing providers will need to over-deliver for their clients, retaining a high level of customer service as well as ensuring that service levels don’t decline as they undergo a huge period of growth.”

Google buys Actifio to bring backup and disaster recovery to Google Cloud


Rene Millman

3 Dec, 2020

Google has announced it will acquire disaster recovery firm Actifio in a bid to boost its Google Cloud business. Terms of the deal were undisclosed.

Actifio provides customers with the opportunity to protect virtual copies of data in their native format, manage these copies throughout their entire lifecycle, and use these copies for scenarios such as development and test.

The company’s technology can handle data stored across a range of environments, including databases such as SAP HANA, Oracle, Microsoft SQL Server, PostgreSQL and MySQL, virtual machines (VMs) running on VMware and Hyper-V, physical servers, and Google Compute Engine.

Google said the acquisition would “help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios.”

The company added that it was committed to “supporting our backup and disaster recovery technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs.”

“We know that customers have many options when it comes to cloud solutions, including backup and DR, and the acquisition of Actifio will help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios,” said Brad Calder, VP of engineering at Google, in a blog post.

Ash Ashutosh, CEO at Actifio, said that backup and recovery are essential to enterprise cloud adoption and, “together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries.”

“The market for backup and DR services is large and growing, as enterprise customers focus more attention on protecting the value of their data as they accelerate their digital transformations,” said Matt Eastwood, Senior Vice President of Infrastructure Research at IDC.

“We think it is a positive move for Google Cloud to increase their focus in this area.”

How to automate your infrastructure with Ansible


Danny Bradbury

2 Dec, 2020

Hands up if you’ve ever encountered this problem: you set up an environment on a server somewhere, and along the way, you made countless web searches to solve a myriad of small problems. By the time you’re done, you’ve already forgotten most of the problems you encountered and what you did to solve them. In six months, you have to set it all up again on another server, repeating each painstaking step and relearning everything as you go.

Traditionally, sysadmins would write bash scripts to handle this stuff. Scripts are often brittle, requiring just the right environment to run in, and it takes extra code to ensure that they account for different edge cases without breaking. Scaling that up to dozens of servers is a daunting task, prone to error.

Ansible solves that problem. It’s an IT automation tool that lets you describe what you want your environment to look like using simple files. The tool then uses those files to go out and make the necessary changes. The files, known as playbooks, support programming steps such as loops and conditionals, giving you lots of control over what happens to your environment. You can reuse these playbooks over time, building up a library of different scenarios.

Ansible is a Red Hat product, and while there are paid versions with additional support and services bolted on, you can install the open-source project for free. It’s a Python-based program that runs on the box you want to administer your infrastructure from, which must be a Unix-like system (typically Linux). It can administer Linux and Windows machines (which we call hosts) without installing anything on them, making it simpler to use at scale. To accomplish this, it connects over SSH (typically using key-based authentication), or via remote PowerShell execution on Windows.

We’re going to show you how to create a simple Linux, Apache, MySQL and PHP (LAMP) stack setup in Ansible.

To start with, you’ll need to install Ansible. That’s simple enough; on Ubuntu, put the PPA for Ansible in your sources file and then tell the OS to go and get it:

$ sudo apt update

$ sudo apt install software-properties-common

$ sudo apt-add-repository --yes --update ppa:ansible/ansible

$ sudo apt install ansible

To test it out, you’ll need a server that has Linux running on it, either locally or in the cloud. You must then create an SSH key for that server on your Ansible box and copy the public key up to the server.
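The standard OpenSSH tools handle this. A minimal sketch, reusing the example user (danny) and host address (192.168.1.88) that appear later in this guide:

# generate a key pair on the Ansible box (accept the default path)
$ ssh-keygen -t ed25519
# copy the public key to the server you want to manage
$ ssh-copy-id danny@192.168.1.88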

Now we can get to the fun part. Ansible uses an inventory file called hosts to define many of your infrastructure parameters, including the hosts that you want to administer. Ansible reads information in key-value pairs, and the inventory file uses either the INI or YAML formats. We’ll use INI for our inventory.

Make a list of the hosts that you’re going to manage by putting them in the inventory file. Modify the default hosts file in your /etc/ansible/ folder, making a backup of the default one first. This is our basic inventory file:

# Ansible hosts

[LAN]

db_server ansible_host=192.168.1.88

db_server ansible_become=yes

db_server ansible_become_user=root

The phrase in the square brackets is your label for a group of hosts that you want to control. You can put multiple hosts in a group, and a host can exist in multiple groups. We gave our host an alias of db_server. Replace the IP address here with the address of the host you want to control.

The next two lines enable Ansible to take control of this server for everything using sudo. ansible_become tells it to become a sudo user, while ansible_become_user tells it which sudoer account to use. Note that we haven’t listed a password here.
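Before going any further, it’s worth checking that Ansible can actually reach the host. A quick sanity test, assuming the inventory above and the same example user (danny) used in the commands below:

$ ansible LAN -m ping -u danny --ask-become-pass

If the SSH key and sudo details are set up correctly, the host should answer with “pong”.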

You can use Ansible to run shell commands that influence multiple hosts, but it’s better to use modules. These are native Ansible functions that replicate many Linux commands, such as copy (which replicates cp), user, and service to manage Linux services. Here, we’ll use Ansible’s apt module to install Apache on the host.

ansible db_server -m apt -a 'name=apache2 state=present update_cache=true' -u danny --ask-become-pass

The -m flag specifies the module we’re running (apt), while -a specifies its arguments. update_cache=true tells Ansible to refresh the package cache (the equivalent of apt-get update), which is good practice. -u specifies the user account we’re logging in as, while --ask-become-pass tells Ansible to ask us for the user’s password when elevating privileges.

state=present is the most interesting flag. It tells us how we want Ansible to leave things when it’s done. In this case, we want the installed package to be present. You could also use absent to ensure it isn’t there, or latest to install and then upgrade to the latest version.
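For example, the same ad hoc call can upgrade the package to the newest available version, or remove it altogether, just by changing the state (same example host and user as before):

# upgrade apache2 to the latest available version
$ ansible db_server -m apt -a 'name=apache2 state=latest' -u danny --ask-become-pass
# remove apache2 entirely
$ ansible db_server -m apt -a 'name=apache2 state=absent' -u danny --ask-become-pass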

Then, Ansible tells us the result (truncated here to avoid the reams of stdout text).

db_server | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "cache_update_time": 1606575195,
    "cache_updated": true,
    "changed": true,
    "stderr": "",
    "stderr_lines": [],

Run it again, and you’ll see that changed is now false. The module works out for itself whether the software is already installed, and only acts if it needs to. This ability to get the same result no matter how many times you run a script is known as idempotence, and it’s a key feature that makes Ansible less brittle than a bunch of bash scripts.
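On that second run, the output looks roughly like this (again heavily truncated):

db_server | SUCCESS => {
    "changed": false,
    ...
}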

Running ad hoc commands like this is fine, but what if we want to string commands together and reuse them later? This is where playbooks come in. Let’s create a playbook for Apache using the YAML format. We create the following file and save it as /etc/ansible/lampstack.yml:

- hosts: LAN
  gather_facts: yes
  tasks:
  - name: install apache
    apt: pkg=apache2 state=present update_cache=true
  - name: start apache
    service: name=apache2 state=started enabled=yes
    notify:
    - restart apache
  handlers:
    - name: restart apache
      service: name=apache2 state=restarted

hosts tells us which group we’re running this script on. gather_facts tells Ansible to interrogate the host for key facts. This is handy for more complex scripts that might take steps based on these facts.

Playbooks list individual tasks, which you can name as you wish. Here, we have two: one to install Apache, and one to start the Apache service after it’s installed.

notify calls another kind of task known as a handler. This is a task that doesn’t run automatically. Instead, it only runs when another task tells it to. A typical use for a handler is to run only when a change is made on a machine. In this case, we restart Apache if the system calls for it.

Run this using ansible-playbook lampstack.yml --ask-become-pass.

So, that’s a playbook. Let’s take this and expand it a little to install an entire LAMP stack. Update the file to look like this:

- hosts: LAN
  gather_facts: yes
  tasks:
  - name: update apt cache
    apt: update_cache=yes cache_valid_time=3600
  - name: install all of the things
    apt: name={{item}} state=present
    with_items:
      - apache2
      - mysql-server
      - php
      - php-mysql
      - php-gd
      - php-ssh2
      - libapache2-mod-php
      - python3-pip
  - name: install python mysql library
    pip:
      name: pymysql
  - name: start apache
    service: name=apache2 state=started enabled=yes
    notify:
    - restart apache
  handlers:
    - name: restart apache
      service: name=apache2 state=restarted

Note that we’ve moved our apt cache update operation into its own task because we’re going to be installing several things and we don’t need to update the cache each time. Then, we use a loop: the {{item}} variable repeats the apt installation for each of the package names listed under with_items. Finally, we use Ansible’s pip module to install a Python connector that lets the language talk to the MySQL database.
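As an aside, newer Ansible releases generally favour the loop keyword over with_items (and the apt module can also take a whole list of names directly). A rough equivalent of the install task in that style, assuming the same package list:

- name: install all of the things
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - apache2
    - mysql-server
    - php
    - php-mysql
    - php-gd
    - php-ssh2
    - libapache2-mod-php
    - python3-pip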

There are plenty of other things we can do with Ansible, including breaking more complex playbooks out into sub-files known as roles. You can then reuse these roles to support different Ansible scripts.
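As a rough sketch (the role and file names here are illustrative rather than taken from this article), the LAMP tasks and handlers above could be pulled out into a lampstack role using Ansible’s conventional directory layout, then referenced from a much shorter playbook:

roles/
  lampstack/
    tasks/
      main.yml      # the apt, pip and service tasks
    handlers/
      main.yml      # the restart apache handler

# site.yml
- hosts: LAN
  gather_facts: yes
  roles:
    - lampstack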

When you’re writing Ansible scripts, you’ll probably run into plenty of errors and speed bumps that will send you searching for answers, especially if you’re not a master at it. The same is true of general sysadmin work and bash scripting, but if you use this research while writing an Ansible script, you’ll have a clear and repeatable recipe for future infrastructure deployments that you can handle at scale.

AWS to bring 5G edge compute service to the UK in 2021


Sabina Weston

2 Dec, 2020

Amazon Web Services (AWS) has announced plans to bring its 5G edge compute service to the UK in early 2021. 

First unveiled at last year’s re:Invent event, AWS Wavelength offers optimised solutions for mobile edge computing applications, simplifying application traffic in order to fully utilise the latency and bandwidth benefits offered by modern 5G networks.

The service shortens mobile data response times from seconds to milliseconds, making it ideal for time-sensitive applications such as driverless cars or remote surgery, as well as less critical scenarios like gaming.

Speaking at this year’s re:Invent, a three-week event which commenced on 1 December, AWS CEO Andy Jassy said that AWS Wavelength will be launched in the UK in partnership with Vodafone Business. 

This will be part of the new Vodafone Business Edge Innovation Program (EIP), which opened for registration today. The programme will give startups, ISVs, businesses and freelance developers exclusive access to edge computing training to help them develop, test and deploy a proof-of-concept (PoC) 5G application on AWS Wavelength and Vodafone’s 5G network.

Vodafone and AWS will roll out Wavelength in spring 2021, starting with a commercial Multi-access Edge Computing (MEC) centre in London. The MEC centre will use Vodafone’s 5G network in order to provide an ultra-low latency zone over the extended area of the UK capital.

Commenting on the announcement, Vodafone Business CEO Vinod Kumar said that “working with AWS on edge computing means we are making it simpler for both independent software vendors and our customers to experiment with this emerging technology”. 

“We’re doing this by offering an incubation space to create and test applications that we can then industrialise and scale. And we’re already seeing some innovative applications that provide positive business outcomes from Dedrone, Digital Barriers, HERE Technologies, Groopview, and Unleash live, with so much more to come once our MEC innovation programme is running,” he added.

As well as the updates to Wavelength, AWS also used re:Invent to announce a new ML-powered operations service called Amazon DevOps Guru. The service uses machine learning to help developers detect and solve operational problems with applications.

AWS’ Machine Learning VP Swami Sivasubramanian said that the idea behind DevOps Guru was borne from customer requests to “continue adding services around areas where we can apply our own expertise on how to improve application availability and learn from the years of operational experience that we have acquired running Amazon.com”. 

“With Amazon, we have taken our experience and built specialised machine learning models that help customers detect, troubleshoot, and prevent operational issues while providing intelligent recommendations when issues do arise,” he said.

“This enables teams to immediately benefit from operational best practices Amazon has learned from running Amazon.com, saving customers the time and effort that would otherwise be spent configuring and managing multiple monitoring systems,” he added.

Salesforce escalates Microsoft rivalry with £20.7bn Slack acquisition


Bobby Hellard

2 Dec, 2020

Salesforce has agreed to buy workplace messaging platform Slack in a deal worth $27.7 billion (£20.7 billion), the largest acquisition in the cloud giant’s history. 

Under the terms of the deal, Slack will now operate as a Salesforce company, but it will still be led by CEO Stewart Butterfield.  

The acquisition was first reported earlier this week but an official announcement came late on Tuesday, ahead of Salesforce’s annual Dreamforce conference. It is one of the largest deals in recent years, falling just short of IBM’s $34 billion takeover of Red Hat in 2019.

The two companies will form a unified platform for enterprise collaboration with Slack integrated into every Salesforce cloud. The communications service will also become the new interface for Salesforce 360 customers. 

Salesforce CEO, Marc Benioff, called Slack one of the most beloved platforms in enterprise software history and said the acquisition was a “match made in heaven”. In turn, Butterfield called it the “most strategic combination in the history of software”.

Butterfield also pointed out that Salesforce “started the cloud revolution”, referencing the company’s early work selling software as a service (SaaS) on a subscription basis. That model is now standard practice and a billion-dollar industry, with companies like Microsoft dominating through its online Microsoft 365 suite.

Slack has endured a long rivalry with Microsoft and its competing Teams platform, which benefits from being bundled in with 365 subscriptions. There is a suggestion that joining Salesforce, a customer relationship management (CRM) software company with a large enterprise portfolio and customer base, will help push Slack further into that market and potentially level the playing field. 

“Together, Salesforce and Slack will shape the future of enterprise software and transform the way everyone works in the all-digital, work-from-anywhere world,” Benioff said in a statement. “I’m thrilled to welcome Slack to the Salesforce Ohana once the transaction closes.”

Microsoft has also waded into competition with Salesforce by recently making CRM software a priority. Benioff previously said that his company was the world’s fastest-growing enterprise software company while announcing plans to create 12,000 new jobs over the next year.

IT Pro 20/20: Why tech can’t close the diversity gap


Dale Walker

1 Dec, 2020

Welcome to the tenth issue of IT Pro 20/20, our digital magazine that brings all of the previous month’s most important tech issues into clear view.

Diversity has always been a challenge for the technology industry. It’s one of those few industries that struggles to maintain a varied talent pool, with white males still taking the single biggest share of the employee demographic.

This is a problem we’ve known about for a long time, and even though the figures have improved very little in recent years, awareness has. Unfortunately, there’s a very real danger that what work UK businesses have put in to make their workforce as diverse as possible could be entirely undone by the pandemic.

When faced with shrinking budgets, it’s easy to imagine companies choosing to sideline or even close some of the newer, more costly, diversity initiatives. We also know from recent research in the US that women are more likely to be furloughed than their male colleagues as, by the nature of recent diversity efforts, women are more likely to be holding those very vulnerable entry-level positions.

In this month’s issue, we aim to show why a struggling business should start thinking of diversity as less of a business luxury and more as a route to recovery and, ultimately, a competitive advantage in a fractured post-pandemic market. We appreciate you taking the time to download IT Pro 20/20, and we hope you enjoy this month’s issue.

DOWNLOAD ISSUE 11 OF IT PRO 20/20 HERE

The next IT Pro 20/20 will be available on Friday 18 December – previous issues can be found here. If you would like to receive each issue in your inbox as they release, you can subscribe to our mailing list here.

AWS is bringing Apple’s macOS to its cloud service


Bobby Hellard

1 Dec, 2020

Amazon Web Services (AWS) has announced that Apple’s macOS operating system will be available on its cloud service for developers.  

Amazon EC2 Mac instances for macOS will run on Mac mini computers and will support developers building apps for the iPhone, iPad, Mac, Apple Watch, Apple TV and Safari.

This is the first big announcement to come from AWS re:Invent, which started on Monday as a three-week virtual conference. The cloud giant previously offered EC2 instances for Windows and Linux, but has now opened the service up to macOS, which is a popular system for many developers.

What’s more, AWS has also said that cloud support for devices with Apple’s new M1 chip is planned for 2021. 

AWS will make Apple computers available in its data centres, starting with the Mac Mini, which will help developers to create and test apps remotely rather than maintain their own devices. It’s thought this could help Apple to further pivot towards building more of its own software and services. 

“You can provision new instances in minutes, giving you the ability to quickly and cost-effectively build code for multiple targets without having to own and operate your own hardware,” said Jeff Barr, AWS chief evangelist. “You pay only for what you use, and you get to benefit from the elasticity, scalability, security, and reliability provided by EC2.”

CCS Insight senior VP Nick McQuire suggested that the growing partnership between AWS and Apple will likely be the headline trend at re:Invent 2020.

“Not only will this move help to improve the cost, security and efficiency of building applications, what is most interesting is the potential for 5G capabilities down the line as both parties have been pioneering 5G solutions to the enterprise in 2020,” he told IT Pro.

“When you consider the direction mobile phones and applications are taking, with low latency and high throughput becoming the norm in support of technologies like VR, AR, edge computing and media streaming, for example, this AWS and Apple tie-up has formidable potential.”

Zoom caps breakthrough year with a 367% surge in revenue


Bobby Hellard

1 Dec, 2020

Zoom has reported $777.2 million in revenue for its third quarter, roughly four times more than it reported during the same period in 2019. 

It is the second quarter in a row that the video conferencing service has recorded quadruple growth, with paid customers increasing 485% year-on-year. 

The increase rounds off a highly successful year for Zoom. This time last year it was a relatively obscure company, hardly known outside of the US, but by March the video conferencing service had become a household name.

The need for collaboration software, combined with the fact it offered a free service alongside a paid business tier, helped put the company’s software in startups, schools and homes around the world.

“Strong demand and execution led to revenue growth of 367% year-over-year with solid growth in non-GAAP operating income and cash flow in our third fiscal quarter,” said Zoom CEO Eric Yuan.

“We expect to strengthen our market position as we finish the fiscal year with an increased total revenue outlook of approximately $2.575 billion to $2.580 billion for the fiscal year 2021, or approximately 314% increase year-over-year.”

Zoom is expecting its growth to continue into 2021, with total revenue estimated at between $806 million and $811 million for Q4. However, its margins will be squeezed by higher cloud costs due to the sheer number of free users.

The company said it had 433,700 customers with more than 10 employees in Q3, which is a 485% increase from 2019 but only a 17% increase from the second quarter. 

The video conferencing service does use its own data centres, but it also relies heavily on vendors like AWS and Oracle, which means it will bear some of the cost for its free tier users. This pushed its gross profit margin down to 66.7%, below analyst estimates of 72.1%.

While the firm has its own high expectations for Q4, its gross margins are likely to remain lower than expected as the spike in free users continues to offset its overall business revenues. 

Zoom also announced that it has selected AWS as its preferred cloud provider. This comes just months after the company shifted a portion of its cloud infrastructure to Oracle Cloud due to an unprecedented surge in new users following the announcement of lockdown restrictions.

Zoom selects AWS as preferred cloud provider over Oracle


Sabina Weston

1 Dec, 2020

Zoom has announced plans to extend its strategic partnership with Amazon Web Services (AWS), selecting it as its preferred cloud provider.

The announcement follows Zoom’s decision earlier this year to shift a portion of its cloud infrastructure to Oracle Cloud, due to an unprecedented surge in new users after lockdown restrictions were announced.

Announced in late April, the deal saw Oracle join major cloud rivals AWS and Microsoft Azure in providing support to Zoom. However, AWS managed to retain the bulk of the workload, and it seems its efforts have not gone unnoticed.

Zoom CEO Eric Yuan credited AWS with helping the platform manage “unprecedented global demand this past year”.

“We’ve been able to handle it in significant part by running the substantial majority of our cloud-based workloads on our preferred cloud provider, AWS, and relying on AWS’s performance and scalability,” he said. 

“Looking forward, we will continue to innovate alongside AWS to reinvent virtual collaboration and deliver secure and exciting experiences for our customers.”

Commenting on the announcement of the extended multi-year agreement, AWS CEO Andy Jassy said that “COVID-19 changed everything for Zoom, putting demands on the company to meet the video conferencing needs of hundreds of millions of new participants around the globe”.

“AWS was there from the beginning to ensure Zoom could scale to meet these new requirements virtually overnight,” he added.

AWS has been a long-term cloud provider for Zoom, having supplied the platform with necessary infrastructure since its launch in 2011.

“When organizations build on AWS – as Zoom has done since 2011 – they transform their business, expanding and innovating much faster. Together, Zoom and AWS have delivered great experiences for new Zoom users around the world, and we look forward to using the cloud to develop new ways to help the world communicate,” said Jassy.

The announcement comes weeks after the video-conferencing platform added a set of security features to help users combat ‘Zoom-bombing’ attacks. The new controls will help account holders remove unwanted guests and also spot if their meeting’s ID number has been shared online.