Backblaze Personal review: The simplest cloud backup we’ve seen


Darien Graham-Smith

26 Feb, 2021

Backblaze only does one thing, but it does it well

Price 
$6 a month

When it comes to cloud backup, it doesn’t get much easier than Backblaze. There are only three buttons, and you needn’t touch any of them: it comes configured to scan your computer for personal files, no matter where they’re stored, and automatically upload them to Backblaze’s servers. Updates continue automatically whenever you change a file or create a new document. For most people, that’s ample protection with zero configuration.

Of course, if you want to get your hands dirty, there are a few things you can customise. Specific file types, locations and drives can be included and excluded – you’re even able to back up external drives – and you can optionally switch from continuous operation to daily or on-demand backups. And if you don’t trust the automatic encryption, you can also set your own encryption key.

For the most part, though, you shouldn’t need to interact with Backblaze until it’s time to restore a backed-up item. Even then, the client stays in the background because your uploaded files are browsed and downloaded from the publisher’s website. Here you can also rescue lost or overwritten files from the past 30 days, and if one of your computers is stolen, you can bring up a map showing where it was when the Backblaze software last touched base.

On that note, be aware that your subscription only entitles you to back up a single PC or Mac. That’s a necessary restriction: each account comes with unlimited storage, so even your biggest files are protected. And remember that if you work with big video files or the like, they will inevitably take a while to reach Backblaze’s servers – our 2GB test folder took 49mins 35secs to upload. That’s quicker than the most sluggish options we’ve seen, but it’s still a drag if you want to back up your day’s work before leaving the office.

The final thing to be clear about is that Backblaze is very much a single-purpose tool: it doesn’t handle local backups at all, nor can it create an image of your hard disk for disaster-recovery purposes. That means it’s only one component in a backup strategy, rather than a complete solution – but as cloud components go, it’s terrifically convenient and effective. 

HPE acquires cloud intelligence platform CloudPhysics


Daniel Todd

26 Feb, 2021

HPE has doubled down on its commitment to data-driven insights with the acquisition of cloud analysis platform provider CloudPhysics, as well as the release of its new Software-Defined Opportunity Engine (SDOE).

The acquisition adds CloudPhysics’ SaaS-based, data-driven platform for analysis of on-premises and cloud setups, bringing detailed insights to customers and partners across most IT environments, the firm said. 

CloudPhysics’ solution monitors and analyses IT infrastructures, estimates the cost and viability of cloud migrations, and models a customer’s IT infrastructure as a virtual environment. 

Quick to deploy, the solution can generate insights in as little as 15 minutes, HPE says, while its data capture includes over 200 metrics for VMs, hosts, datastores and networks.

“Through increased visibility and understanding, CloudPhysics transforms the procurement process to be a win-win for both customers and partners,” commented Tom Black, Senior Vice President and General Manager of HPE Storage.  

“Our partners will benefit from shorter sales cycles, an increase in assessed to qualified opportunities and higher close rates. More importantly, our partners become trusted advisors to accelerate the transformation agendas of our customers.”

The CloudPhysics solution will be integrated into HPE’s freshly unveiled Software-Defined Opportunity Engine (SDOE), which is designed to provide customers with data-backed customised sales proposals. 

Powered by HPE InfoSight, the SDOE solution uses intelligence and deep learning to generate holistic technology recommendations for businesses to optimise their infrastructure and accelerate digital transformation.

The platform auto-generates a quote with the best storage solution for a customer in as little as 45 seconds – a dramatic reduction for a process that previously could take weeks.

HPE says SDOE will enable businesses to work as trusted partners with their customers as they build a detailed understanding of their workloads, configuration and usage patterns. On average, the streamlined tool also eliminates five meetings from the current sales process, the tech firm added. 

“By utilising software and data-driven analytics, HPE Storage is transforming the sales and customer experience with real intelligence – removing complexity and guesswork, and turning it into a simple and data-driven decision-making process, based on the preference and specific needs of the customer,” Black added. 

HPE’s new business unit aims to fuel enterprise 5G adoption


Sabina Weston

25 Feb, 2021

Hewlett Packard Enterprise (HPE) has unveiled a new organisation that aims to help telcos and enterprises take advantage of the wide range of opportunities offered by the 5G market.

The newly-formed Communications Technology Group (CTG) was created by combining HPE’s Telco Infrastructure team with its Communications & Media Solutions (CMS) software arm. The latter alone generated more than $500 million (£355m) of revenue in fiscal year 2020, with orders growing 18% and revenue up 6% sequentially in the fourth quarter, according to HPE.

CTG comprises over 5,000 professionals who aim to provide consultancy, integration, installation, and support services tailored to the telecoms market.

Commenting on the announcement, CTG SVP and GM Phil Mottram said that “HPE aims to become the transformation catalyst for the 5G economy”, building on “more than 30 years of experience designing, building and tuning telco-grade infrastructure and software”.

“CTG’s founding principle is to drive innovation from edge to cloud through secure, open solutions. We are collaborating with customers and partners to build open 5G solutions that deliver efficiency, reduce risk and complexity, and future-proof the network across the telco core, the radio access network and the telco edge,” he added.

Mottram also unveiled two solutions he described as foundational to HPE CTG. The first is the HPE Open RAN Solution Stack, which features the industry’s first Open RAN workload-optimised server – the new HPE ProLiant DL110 Gen10 Plus. The stack aims to enable the commercial deployment of Open RAN at scale in global 5G networks, and includes infrastructure with RAN-specific blueprints, as well as orchestration and automation software.

The second “foundational” solution is the HPE 5G Core Stack, first announced in March 2020, which provides telecoms firms with 5G tech at the core of their mobile networks.

CTG’s portfolio is expected to play a crucial part in advancing HPE’s edge-to-cloud platform-as-a-service strategy, serving enterprises and telcos alike: open, cloud-native, flexible solutions that aim to ease the rollout of 5G services, and modular as-a-service offerings that use cloud economics to help businesses manage demand and future-proof their operations.

“I am confident that with such a solid foundation and common purpose, we will strengthen our thought leadership in the telecoms sector and are set on a path for innovation and growth,” said Mottram.

How to build a CMS with React and Google Sheets


Jessica Cregg

24 Feb, 2021

Launching a website can feel like a landmark moment. It’s the digital age equivalent of hanging an ‘Open For Business’ sign in your window. If your proverbial sign is static and rarely requires updates – for example, if it’s simply intended to communicate basic operating practices like opening hours, contact information and a list of services – then you can likely build a simple site in HTML and CSS, and wrap the project there. But for most businesses, something far more dynamic is required.

As the term implies, a Content Management System, or CMS, is a structure that enables users to model, edit, create and update content on a website without having to hard-code it into the site itself. Many businesses use off-the-shelf CMS tools like WordPress, Joomla, Magento or Squarespace to handle this function.

When it comes to pinpointing the factors that make a good CMS stand out, the most successful out-of-the-box solutions share one common trait – they’re easy to interpret. A CMS succeeds or fails as a product on how easy it is for teams to create, manage and update logical data structures; the more pain-free this process is, the easier it becomes for teams to build robust content models. That said, as any engineer will tell you, implementing relational databases is an art form in its own right. 

The problem with these out-of-the-box solutions is that they can often include far too many features, cater to completely different levels of tech literacy, and what’s more, can make it far too easy for you to rack up an expensive bill. An alternative option is to build your own custom CMS that’s designed to meet your specific requirements.

While building the thing that you need is often viewed as the scenic route to solving your problem, when tackled the right way, it can turn out to be the quickest solution. An excellent way of streamlining a project with the potential for infinite scope-creep, while putting any information that’s lying dormant in a spreadsheet to good use, is to use Google Sheets as the foundation for your CMS. 

Despite the best efforts of Airtable, Notion and even Smartsheet, there’s a reason why spreadsheets have stood the test of time as a widely accepted format. Chances are, whatever their skill level or aversion to technology, most people within a business will know how to navigate their way around a spreadsheet. 

In the following tutorial, you’ll see how popular JavaScript tools – the Node.js runtime and the React library – let you stand up a dynamic web application built right on top of Google Sheets. We’ll create a Node project to act as a container for our API queries, then harness React to parse our data and present it to the user via a dynamic front-end. Here we’ll be taking advantage of React’s knack for abstracting away much of our application’s internal complexity, along with its reusable components. The latter feature is perfect if you’re building out a broad website with a multitude of pages that should all share a consistent look and feel. 

For this example, we’ll be using the ‘Meet the Team’ page for a fictional technology enterprise business. If you’d like to use the example data we’re using for the purposes of this demonstration, you’ll find the spreadsheet linked below. 

What you’re essentially going to do is access your spreadsheet as an API, querying it to pull out the content for your front-end React page. We’ll access the data from the sheet in JSON format, using the dotenv package to store the credentials our Node project needs to reach it. The Node project will parse the information and feed it through to the front-end, where it’s presented in a far more polished and stylised format. 

First up, let’s use the terminal to generate a new project to work from. Create a new directory for your project by running ‘mkdir’ followed by your project name, then step into that directory using the ‘cd’ command. To create our Node project we’ll first run ‘npm init -y’ from the terminal, before creating the two additional files we need to get up and running using the ‘touch’ command. 

One of these is our .env file, which will contain any sensitive keys, credentials or variable settings we need in order to access our sheet. Remember to add this file to your .gitignore if you decide to share your repository publicly, to prevent your keys from being deactivated and your credentials from being stolen. The last step, as illustrated in the code snippet below, is to install the few external packages we’ll be using in our project. We’ve covered dotenv, and google-spreadsheet is no surprise. The final one is Express, which we’ll use as our Node.js web framework thanks to its lightweight nature and its ability to quickly spin up a server.

$ mkdir project-name

$ cd project-name

$ npm init -y

$ touch index.js .env

$ npm i express google-spreadsheet dotenv

Once you’ve installed your external packages, open up your node project in your preferred text editor and initialise your server using the below code snippet: 

require("dotenv").config()
const { GoogleSpreadsheet } = require("google-spreadsheet")
// google-auth-library is pulled in here, though we'll authenticate with a simple API key later
const { OAuth2Client } = require("google-auth-library")
const express = require("express")()

// process.env is injected by Node at runtime and represents
// the state of the system environment as the app starts
const p = process.env

// Set up the GET route in Express that the front-end will query
express.get("/api/team/", async (request, response) => {
  response.send("hello world")
})

// Express listener, on the port specified in the .env file
express.listen(p.PORT, () =>
  console.log(`Express Server currently running on port ${p.PORT}`)
)

Here, we’re requiring the packages that give us access to the spreadsheet object, and setting a variable ‘p’ as a shortcut to process.env. Essentially, this represents the state of our system environment as the application starts running; the shortcut means we can reach environment values such as our port and credentials with far less effort. 

The rest of the program initialises our Express GET route (that is, the endpoint through which we’ll query our sheet) and sets up a listener to assist with our build. This gives us a handy prompt as to which port the Express server is running on while we work on connecting our React front-end to the application.    

Lastly, head into your .env file, assign your port number in plain text – “PORT=8000” – and hit save. 

Run node index.js from the root of your project in the terminal, and you should see the listener’s console.log message appear, stating which port your server is currently running on – in this case, port 8000. 

If you head to your browser and access the port that your application is running on, you’ll notice that your GET method is failing. Let’s fix that. 

At the moment, we’ve got our method for querying our data established. We now need to get the data flowing through our server. The next step here is to assemble your Google Spreadsheet with both the headings and the content you’d like your node project and React app to pull through. If you’d like to follow along, feel free to make a copy of the ‘Meet the Team’ spreadsheet we’ve created. 
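If you’re building your own sheet instead, the exact columns are up to you. Purely for illustration, the snippets later in this tutorial assume a header row along these lines – hypothetical entries, not the contents of the real linked sheet: 

Name         | Role            | Bio
Jane Example | Head of Product | Jane looks after the product roadmap...
Sam Example  | Lead Engineer   | Sam keeps the codebase healthy...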

The first thing you’re going to need to lift is your SPREADSHEET_ID. You can find this by copying the long string (or slug) found in the URL following the /d/. For example, in this case, our slug is “1f7o11-W_NSygxXjex8lU0WZYs8HlN3b0y4Qgg3PX7Yk”. Once you’ve grabbed this string, include it in your .env file under SPREADSHEET_ID. At this stage, your .env file should look a little something like this: 
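PORT=8000
SPREADSHEET_ID=1f7o11-W_NSygxXjex8lU0WZYs8HlN3b0y4Qgg3PX7Yk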

Next, head to the Google Sheets Node.js Quickstart page to enable the Google Sheets API. You’ll need to click the blue ‘Enable’ button before naming your project, selecting ‘Web Server’ and entering your localhost URL when configuring the OAuth values. 

If executed correctly, this step will have enabled the Google Sheets API from your Google Cloud Platform (GCP) account. To confirm that’s happened, simply head over to your GCP console and you’ll spot your project in your profile page. 

To authenticate this exchange of data, we’ll need to generate an API key. This will authorise your app to access your Google Drive, identify your Spreadsheet via the SPREADSHEET_ID, and then perform a GET request to retrieve the data to be parsed and displayed on your React front-end. 

To get hold of your API key, navigate to the credentials section of the GCP console, click the blue “+ CREATE CREDENTIALS” button and select the “API key” option. 

Once your key’s been generated, copy it and add it into your .env file under API_KEY. 

Perfect! Now let’s initialise and use the key within our code to authenticate our Google Sheets query. In the snippet below, you’ll notice we’re using the await operator, which works inside the async route handler we set up at the beginning of the index.js program shown earlier in this tutorial. To view the complete code as a reference, you can head here to review and even clone the repository. 
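With the google-spreadsheet package, that step looks broadly like this inside the async route we defined earlier – a sketch under those assumptions, rather than a line-for-line copy of the linked repository: 

express.get("/api/team/", async (request, response) => {
  // Identify the sheet using the ID stored in .env
  const doc = new GoogleSpreadsheet(p.SPREADSHEET_ID)

  // Authenticate with the API key generated in the GCP console
  doc.useApiKey(p.API_KEY)

  // Load the document's properties and worksheets
  await doc.loadInfo()

  // Grab the first worksheet – our 'Meet the Team' data
  const sheet = doc.sheetsByIndex[0]

  // ...rows are fetched and trimmed down in the next step
})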

Now we’ve authorised our sheets object, it’s time to trim the data down. This is a process by which we strip any superfluous fields from the JSON object, so that we’re left with only the most vital sections of the object, which we’ll model within our React front-end. 

Head to the index.js file of the linked repository and you’ll see how we’ve done this. We start with an empty array, then iterate through the rows in the sheet, assigning each value a key that matches its column header and pairing it with the cell data for that particular row.
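As a sketch of that loop – again assuming the hypothetical Name, Role and Bio columns from earlier, where the repository’s real field names may differ – the body of the route finishes along these lines: 

  // Fetch every populated row beneath the header row
  const rows = await sheet.getRows()

  // Build a minimal array of objects, keyed by column header
  const team = []
  rows.forEach((row) => {
    team.push({
      name: row.Name,
      role: row.Role,
      bio: row.Bio,
    })
  })

  // Replace the 'hello world' placeholder with the trimmed-down JSON
  response.send(team)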

If you’re familiar with React, then you’ve more than likely used the create-react-app package, one of Facebook’s greatest gifts to the world of application development. With one command, you’re able to spin up an instance giving you the file structure and most of what you need to get going out of the box. 

The ‘create-react-app’ command generates a locally hosted, single-page React application that requires no configuration to get going. When you run it, create-react-app runs local checks on your target directory, builds your package.json, creates your dependency list and forms the structure of your bundled JS files. 

Let’s kick off this process by running the following at the root of our application:

$ npx create-react-app client

$ cd client

$ npm start

If you run into any problems with the version of npm/npx that you’re running, then the below modified command with an added bit of cache clearance should steer you right: 

$ npm uninstall -g create-react-app && npm i -g npm@latest && sudo npm cache clean -f && npx create-react-app client

If your command has run successfully, you’ll see create-react-app install its required packages and build out the file structure below. 
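For reference, the generated client directory usually looks roughly like this, though the exact contents vary between create-react-app versions: 

client/
  node_modules/
  public/
    favicon.ico
    index.html
    manifest.json
  src/
    App.css
    App.js
    index.css
    index.js
  .gitignore
  package.json
  README.md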

Lastly, you’ll notice that your application now has two package.json files. We’ll need to edit the scripts section in the root file, then add one line below the ‘private’ section of the one in your client directory. This is so you can fire everything up with one simple command – the eternal saviour that is npm start. 

In your root:

"start": "node index.js -ignore './client'"

And in your client: 

"proxy": "http://localhost:8000"

Now with one run of npm start you’ll be met with your React front-end pulling through information from your Google Sheets CMS. 

If you check out the back-end server (running on port 8000) in your browser, you’ll be met with a JSON object displaying your CMS data in its raw, unformatted form. 
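With the hypothetical columns from earlier, that raw output would look something like this: 

[
  { "name": "Jane Example", "role": "Head of Product", "bio": "Jane looks after the product roadmap..." },
  { "name": "Sam Example", "role": "Lead Engineer", "bio": "Sam keeps the codebase healthy..." }
]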

But, more importantly for the many stakeholders eager to access your newly-built site, here’s what you’ll see displayed on the front-end. 

What was once entries on a spreadsheet is now a fully-formatted, somewhat shiny React web app, ready to be deployed through your favourite hosting service. The great thing about this solution is that it can be incorporated into your wider React-based application and styled using one of the many free themes and templates out there, meaning your final output can actually scale with you. There you have it – yet another reason to love spreadsheets.

Intel joins forces with Google Cloud for 5G edge services


Bobby Hellard

24 Feb, 2021

Intel and Google Cloud have announced a partnership that will see the two firms build integrated services that help network providers develop 5G innovations across various platforms.

The collaboration is another step towards Intel’s goal to develop 5G networks with software-defined infrastructures and further evidence of Google Cloud’s ambitions in the 5G arena.

Telecom-based cloud architectures and integrated services from both Intel and Google Cloud will help to accelerate scalable network and edge deployments, particularly with multi-cloud architectures. These are thought to be critical in achieving the full potential of 5G, edge computing and AI across many industries, such as manufacturing, retail and healthcare.

The partnership will focus on three areas, the first of which will be to aid the acceleration of Virtualised RAN (vRAN) and Open Radio Access Networks (ORAN) with infrastructure and hardware support. There will also be a Network Functions Validation lab to support vendors in testing, optimising, and validating their core network functions that run on Google Cloud’s Anthos for Telecom platform. The lab environment will also expand to help customers conceive, plan, and validate their 5G and edge application strategies.

The partnership also includes edge services developed with Intel’s compute-optimisation technology and blueprints to accelerate edge transformation in certain industries.

“The next wave of network transformation is fueled by 5G and is driving a rapid transition to cloud-native technologies,” said Dan Rodriguez, Intel corporate vice president and general manager of the network platforms group.

“As communications service providers build out their 5G network infrastructure, our efforts with Google Cloud and the broader ecosystem will help them deliver agile, scalable solutions for emerging 5G and edge use cases.”

Google Cloud has been very active in this space. The firm recently announced an initiative to deliver more than 200 partner applications to the edge via its network and 5G service.

VMware patches critical ESXi and vSphere Client vulnerabilities


Keumars Afifi-Sabet

24 Feb, 2021

VMware has fixed three flaws across its virtualisation products, including one rated critical, that could be exploited by hackers to conduct remote code execution attacks against enterprise systems.

The firm has issued updates for three flaws present across its VMware ESXi bare-metal hypervisor and vSphere Client virtual infrastructure management platform, including a severe bug rated 9.8 out of ten on the CVSS scale.

This vulnerability, tracked as CVE-2021-21972, is embedded in a vCenter Server plugin in the vSphere Client. Attackers with network access to port 443 may exploit this to execute commands with unrestricted privileges on the underlying operating system that hosts vCenter Server.

Also patched is CVE-2021-21974, a heap buffer overflow vulnerability in the OpenSLP component of ESXi, rated a severe 8.8. Cyber criminals lurking within the same network segment as ESXi, with access to port 427, may trigger the issue in OpenSLP, which could also result in remote code execution. 

Finally, CVE-2021-21973 is a server-side request forgery (SSRF) flaw in vSphere Client which has arisen due to improper validation of URLs in a vCenter Server plugin. This is not as severe as the other two bugs, having only been rated 5.3, but can also be exploited by those with access to port 443 to leak information. 

Workarounds for both CVE-2021-21972 and CVE-2021-21973 are detailed here, and can be used until a fix is deployed by the system administrator. 

Users can patch these flaws, however, by updating the products to the most recent versions. These include 7.0 U1c, 6.7U3I and 6.5 U3n of vCenter Server, 4.2 and 3.10.1.2 of Cloud Foundation, as well as ESXi70U1c-17325551, ESXi670-202102401-SG and ESXi650-202102101-SG of ESXi.

These vulnerabilities were privately brought to the attention of VMware and customers are urged to patch their systems immediately.

Red Hat closes purchase of multi-cloud container security firm StackRox


Rene Millman

24 Feb, 2021

Red Hat has finalised its acquisition of container security company StackRox. 

StackRox’s Kubernetes-native security technology will enable Red Hat customers to build, deploy, and secure applications across multiple hybrid clouds.

In a blog post, Ashesh Badani, senior vice president of cloud platforms at Red Hat, said over the past several years, the company has “paid close attention to how our customers are securing their workloads, as well as the growing importance of GitOps to organisations.”

“Both of these have reinforced how critically important it is for security to “shift left” – integrated within every part of the development and deployment lifecycle and not treated as an afterthought,” Badani said.

Badani said the acquisition would allow Red Hat to add security into container build and CI/CD processes. 

“This helps to more efficiently identify and address issues earlier in the development cycle while providing more cohesive security up and down the entire IT stack and throughout the application lifecycle.”

He added the company’s software provides visibility and consistency across all Kubernetes clusters, helping reduce the time and effort needed to implement security while streamlining security analysis, investigation, and remediation.

“StackRox helps to simplify DevSecOps, and by integrating this technology into Red Hat OpenShift, we hope to enable users to enhance cloud-native application security across every IT footprint,” added Badani. Red Hat initially announced the acquisition in January. The terms of the deal were not disclosed.

In the previous announcement, Red Hat CEO Paul Cormier said securing Kubernetes workloads and infrastructure “cannot be done in a piecemeal manner; security must be an integrated part of every deployment, not an afterthought.”

Red Hat said it would open source StackRox’s technology post-acquisition and continue supporting the KubeLinter community and new communities as Red Hat works to open source StackRox’s offerings. 

KubeLinter is an open-source project StackRox started in October 2020 that analyses Kubernetes YAML files and Helm charts for correct configurations, focusing on enabling production readiness and security earlier in the development process.

StackRox will continue supporting multiple Kubernetes platforms, including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).

Kamal Shah, CEO of StackRox, said the deal was “a tremendous validation of our innovative approach to container and Kubernetes security.”

Mindtree achieves Google Cloud partner status


Bobby Hellard

23 Feb, 2021

Mindtree has announced it has achieved the Application Development Partner Specialisation in Google Cloud’s Partner Advantage Programme. 

The ‘specialisation’ is an acknowledgement of the cloud firm’s expertise and success in building apps and services using the Google Cloud Platform.

Mindtree is an India-based tech firm with offices in Bangalore and the US. The firm provides services for outsourcing, data analytics, e-commerce, mobile applications and cloud migrations. 

Google Cloud’s Partner Advantage Programme is designed to give its customers access to qualified companies that have demonstrated technical expertise and particular success in specialised service areas. For Mindtree, that specialism is driving digital transformations with cloud migrations and application adoption. 

“Mindtree is committed to helping enterprises grow and scale their business leveraging Google Cloud’s world-class infrastructure and robust set of cloud solutions,” said Radhakrishnan Rajagopalan, the global head of customer success, data and intelligence at Mindtree.

“This recognition instils further confidence in enterprises seeking to migrate their legacy applications and workloads onto Google Cloud that Mindtree can effectively help an organisation drive their cloud adoption initiatives forward.”

Through its collaboration with Google Cloud, Mindtree has boosted its status as a cloud migration and digital transformation specialist. For Google, it is further evidence of its cloud strategy to prioritise migration and analytical services as it looks to cement its place as the third-biggest cloud provider. 

While they cannot match the sheer breadth of services offered by AWS, other providers like Google and IBM have taken different approaches and sought to specialise in particular areas of cloud computing.

In 2020, Google Cloud acquired a number of analytics firms and migration specialists, such as Cornerstone Technologies and Looker. In May 2020, Google also announced that Splunk services were available in beta, with a full rollout later in the year. That partnership was also aimed at providing fast and reliable analytical services to Google Cloud customers. 

IBM reportedly mulls sale of Watson Health business


Bobby Hellard

22 Feb, 2021

IBM is reportedly considering a sale of Watson Health, its artificial intelligence-based medical data service, in order to further streamline its business. 

Talks are at an early stage, according to sources cited in The Wall Street Journal, and the tech giant is thought to be exploring all options.

The sale of Watson Health would be another significant change to IBM’s business under CEO Arvind Krishna. He has yet to complete his first full year in the role but has already overseen multiple acquisitions, including cloud monitoring service Instana and Salesforce consultancy 7Summits.

The company is also in the process of splitting its business in two, which will see its managed infrastructure services unit spun off into a separate entity with the IBM brand focusing on cloud computing.

It’s part of a wider strategy to gain market share in the hybrid cloud market, following the company’s $34bn acquisition of Red Hat in 2019. However, going forward, it appears that Watson Health isn’t part of those plans. The WSJ suggests the tech giant may still continue running the service, but it is in the early stages of pursuing a deal. The company is thought to be exploring a range of options from a sale to a merger with a ‘blank-check’ company. 

IT Pro has approached IBM for confirmation but had not received a response at the time of publication.

Watson Health helps medical providers manage and process data and it generates around $1 billion annually, but it isn’t thought to be profitable as yet. The service was launched with high expectations for revolutionising healthcare with artificial intelligence, with IBM aiming to speed up diagnoses for cancer and other serious conditions. 

However, Watson hasn’t seen much success in the healthcare sector, the WSJ notes, “in part because physicians were hesitant to adopt artificial intelligence”. This led to IBM laying off a percentage of its Watson Health workforce back in 2018. 

Dropbox takes $400m hit after move to sublease office space


Bobby Hellard

19 Feb, 2021

Cloud storage firm Dropbox has reported a ‘one-off’ loss of $398.2 million in its fourth-quarter report, due to its decision to sublease most of its office space as part of a remote work strategy.

The company announced in October that it would be shifting all of its employees to permanent remote work, with the company only using a small portion of its office space for occasional in-person collaboration. As a result, Dropbox then moved to sublease the rest of its space to the market.

The cloud firm, which has leases in San Francisco, Seattle, Austin, and Ireland, noted in its Q4 results that it had incurred impairment charges of $398.2 million after reassessing the value of its current real estate.

“We reassessed our real estate asset groups and estimated the fair value of the office space to be subleased using current market conditions,” the firm noted in its report. “Where the carrying value of the individual asset groups exceeded their fair value, an impairment charge was recognised for the difference. As a result, we recorded total impairment charges of $398.2 million in the fourth quarter of 2020 for right-of-use and other lease-related assets.” 

Although this is regarded as a ‘one-off’ loss, it stands as a significant slump for a company that would otherwise have recorded a strong 2020 performance. In Q1 the firm recorded a profit for the first time since it debuted on the stock market in 2018. With the spread of COVID and the swift change to remote working, the company reported further gains in Q2 and Q3, the latter of which saw net income of $32.7 million, almost double the figure reported in the same period a year before ($17.2m).

Dropbox notably decided not to implement the sort of ‘hybrid‘ work strategy that has become popular throughout the industry, where employees can choose to work from home or in the office. The company has maintained that hybrid work arrangements could create two different employee experiences that could ultimately create “barriers to inclusion” and career inequalities.

Instead, it’s ushering in a ‘Virtual First‘ strategy where remote is now the “primary” setup for all of its employees and the day-to-day default for individual work, with the remaining office space being used for occasional collaborative work. 

The firm is also planning to spread further afield with on-demand, collaborative spaces touted for other regions, although these are yet to be announced.