IT Pro 20/20: Keeping the lights on


Dale Walker

2 Mar, 2021

Welcome to the 14th issue of IT Pro 20/20, our sister title’s digital magazine.

Now that we have a better idea about when the lockdown will finally end, many of us will naturally be thinking about our return to the office. Having grown accustomed to remote working, most of us will likely find that return phased and, depending on your role, you may be able to negotiate how often you make the commute in. Some will be desperate to get moving again, while others will have taken cues from the past year to take advantage of new-found flexibility.

However, before the conversation shifts towards life after lockdown, we’ve taken the opportunity to highlight areas of our industry that have played crucial, yet often overlooked roles in this great remote working experiment.

In this issue, we look at how data centres have coped with immense pressure from customers, the benefits and pitfalls of onboarding new staff remotely, how smart cities will underpin life post-pandemic, and much more.

DOWNLOAD THE 14TH ISSUE OF IT PRO 20/20 HERE

The next IT Pro 20/20 will be available on 31 March – previous issues can be found here. If you would like to receive each issue in your inbox as they release, you can subscribe to our mailing list here.

IBM brings its hybrid cloud to the edge


Rene Millman

1 Mar, 2021

IBM has announced that its hybrid cloud services are now available on any cloud, on premises, or at the edge via IBM Cloud Satellite.

Big Blue said it’s worked with Lumen Technologies to integrate its Cloud Satellite service with the Lumen edge platform to enable customers to use hybrid cloud services in edge computing environments. The firm also said it will collaborate with 65 ecosystem partners, including Cisco, Dell Technologies, and Intel, to build hybrid cloud services.

It said that IBM Cloud Satellite is now generally available to customers and can bring a secured, unifying layer of cloud services to clients across environments, regardless of where their data resides. IBM added that this technology would address critical data privacy and data sovereignty requirements. 

IBM said customers using the Lumen platform and IBM Cloud Satellite would be able to deploy data-intensive applications, such as video analytics, across highly distributed environments and take advantage of infrastructure designed for single-digit millisecond latency.

The collaboration will enable customers to deploy applications across more than 180,000 connected enterprise locations on the Lumen network to provide a low latency experience. They can also create cloud-enabled solutions at the edge that leverage application management and orchestration via IBM Cloud Satellite, and build open, interoperable platforms that give customers greater deployment flexibility and more seamless access to cloud-native services like artificial intelligence (AI), the internet of things (IoT), and edge computing.

One example given of how this would benefit customers is using cameras to detect the last time surfaces were cleaned or flag potential worker safety concerns. Using an application hosted on Red Hat OpenShift via IBM Cloud Satellite from the proximity of a Lumen edge location, such cameras and sensors can function in near real-time to help improve quality and safety, IBM claimed.

IBM added that customers across geographies can better address data sovereignty by deploying this processing power closer to where the data is created.

“With the Lumen Platform’s broad reach, we are giving our enterprise customers access to IBM Cloud Satellite to help them drive innovation more rapidly at the edge,” said Paul Savill, SVP enterprise product management and services at Lumen. 

“Our enterprise customers can now extend IBM Cloud services across Lumen’s robust global network, enabling them to deploy data-heavy edge applications that demand high security and ultra-low latency. By bringing secure and open hybrid cloud capabilities to the edge, our customers can propel their businesses forward and take advantage of the emerging applications of the 4th Industrial Revolution.”

IBM is also extending its Watson Anywhere strategy with the availability of IBM Cloud Pak for Data as a Service with IBM Cloud Satellite. IBM said this would give customers a “flexible, secure way to run their AI and analytics workloads as services across any environment – without having to manage it themselves.”

Service partners also plan to offer migration and deployment services to help customers manage solutions as-a-service anywhere. IBM Cloud Satellite customers can also access certified software offerings on Red Hat Marketplace, which they can deploy to run on Red Hat OpenShift via IBM Cloud Satellite.

Ransomware operators are exploiting VMware ESXi flaws


Keumars Afifi-Sabet

1 Mar, 2021

Two ransomware strains have retooled to exploit vulnerabilities in the VMware ESXi hypervisor system publicised last week and encrypt virtual machines (VMs).

The company patched three critical flaws across its virtualisation products last week. These included a heap buffer overflow bug in the ESXi bare-metal hypervisor, as well as a flaw that could have allowed hackers to execute commands on the underlying operating system that hosts the vCenter Server.

Researchers with CrowdStrike have since learned that two groups, known as ‘Carbon Spider’ and ‘Sprite Spider’, have updated their weapons to target the ESXi hypervisor specifically in the wake of these revelations. These groups have historically targeted Windows systems, as opposed to Linux installations, in large-scale ransomware campaigns also known as big game hunting (BGH).

The attacks have been successful, with affected victims including organisations that have used virtualisation to host many of their corporate systems on just a few ESXi servers. The nature of ESXi means these served as a “virtual jackpot” for hackers, as they were able to compromise a wide variety of enterprise systems with relatively little effort.

This follows news that cyber criminals last week were actively scanning for vulnerable businesses with unpatched VMware vCenter servers, only days after VMware issued fixes for the three flaws.

“By deploying ransomware on ESXi, Sprite Spider and Carbon Spider likely intend to impose greater harm on victims than could be achieved by their respective Windows ransomware families alone,” said CrowdStrike researchers Eric Loui and Sergei Frankoff. 

“Encrypting one ESXi server inflicts the same amount of damage as individually deploying ransomware on each VM hosted on a given server. Consequently, targeting ESXi hosts can also improve the speed of BGH operations.

“If these ransomware attacks on ESXi servers continue to be successful, it is likely that more adversaries will begin to target virtualization infrastructure in the medium term.”

Sprite Spider has conventionally launched low-volume BGH campaigns using the Defray777 strain, first attempting to compromise domain controllers before exfiltrating victim data and encrypting files. 

Carbon Spider, meanwhile, has traditionally targeted companies operating point-of-sale (POS) devices, with initial access granted through phishing campaigns. The group abruptly shifted its operational model in April last year, however, to instead undertake broad and opportunistic attacks against large numbers of victims. It launched its own strain, dubbed Darkside, in August 2020.

Both strains have compromised ESXi systems by harvesting credentials that can be used to authenticate to the vCenter web interface, a centralised server admin tool that can control multiple ESXi devices. 

After connecting to vCenter, Sprite Spider enables SSH to allow persistent access to ESXi devices, and in some cases changes the root password or the host’s SSH keys. Carbon Spider, meanwhile, accesses vCenter using legitimate credentials and has also logged in over SSH using the Plink tool to drop its Darkside ransomware.

Backblaze Personal review: The simplest cloud backup we’ve seen


Darien Graham-Smith

26 Feb, 2021

Backblaze only does one thing, but it does it well

Price: $6

When it comes to cloud backup, it doesn’t get much easier than Backblaze. There are only three buttons, and you needn’t touch any of them: it comes configured to scan your system for personal files, no matter where they’re located on your network, and automatically upload them to Backblaze’s servers. Continual updates occur whenever you make a change or create a new document. For most people, that’s ample protection with zero configuration.

Of course, if you want to get your hands dirty, there are a few things you can customise. Specific file types, locations and drives can be included and excluded – you’re even able to back up external drives – and you can optionally switch from continuous operation to daily or on-demand backups. And if you don’t trust the automatic encryption, you can also set your own encryption key.

For the most part, though, you shouldn’t need to interact with Backblaze until it’s time to restore a backed-up item. Even then, the client stays in the background because your uploaded files are browsed and downloaded from the publisher’s website. Here you can also rescue lost or overwritten files from the past 30 days, and if one of your computers is stolen, you can bring up a map showing where it was when the Backblaze software last touched base.

On that note, be aware that your subscription only entitles you to back up a single PC or Mac. That’s a necessary restriction: each account comes with unlimited storage to ensure that even the biggest files get protected. And remember that if you work with big video files or the like, they will inevitably take a while to reach Backblaze’s servers. Our 2GB folder took 49mins 35secs to upload. That’s a step up from some of the most sluggish options we’ve seen, but it’s still a drag if you want to back up your day’s work before leaving the office.

The final thing to be clear about is that Backblaze is very much a single-purpose tool: it doesn’t handle local backups at all, nor can it create an image of your hard disk for disaster-recovery purposes. That means it’s only one component in a backup strategy, rather than a complete solution – but as cloud components go, it’s terrifically convenient and effective. 

HPE acquires cloud intelligence platform CloudPhysics


Daniel Todd

26 Feb, 2021

HPE has doubled down on its commitment to data-driven insights with the acquisition of cloud analysis platform provider CloudPhysics, as well as the release of its new Software-Defined Opportunity Engine (SDOE).

The acquisition adds CloudPhysics’ SaaS-based, data-driven platform for analysis of on-premises and cloud setups, bringing detailed insights to customers and partners across most IT environments, the firm said. 

CloudPhysics’ solution monitors and analyses IT infrastructures, estimates the costs and viability of cloud migrations, and models a customer’s IT infrastructure as a virtual environment. 

Quick to deploy, the solution can generate insights in as little as 15 minutes, HPE says, while its data capture includes over 200 metrics for VMs, hosts, datastores and networks.

“Through increased visibility and understanding, CloudPhysics transforms the procurement process to be a win-win for both customers and partners,” commented Tom Black, Senior Vice President and General Manager of HPE Storage.  

“Our partners will benefit from shorter sales cycles, an increase in assessed to qualified opportunities and higher close rates. More importantly, our partners become trusted advisors to accelerate the transformation agendas of our customers.”

The CloudPhysics solution will be integrated into HPE’s freshly unveiled Software-Defined Opportunity Engine (SDOE), which is designed to provide customers with data-backed customised sales proposals. 

Powered by HPE InfoSight, the SDOE solution uses intelligence and deep learning to generate holistic technology recommendations for businesses to optimise their infrastructure and accelerate digital transformation.

The platform auto-generates a quote with the best storage solution for a customer in as little as 45 seconds – a dramatic reduction on a process that could take weeks previously.

HPE says SDOE will enable businesses to work as trusted partners with their customers as they build a detailed understanding of their workloads, configuration and usage patterns. On average, the streamlined tool also eliminates five meetings from the current sales process, the tech firm added. 

“By utilising software and data-driven analytics, HPE Storage is transforming the sales and customer experience with real intelligence – removing complexity and guesswork, and turning it into a simple and data-driven decision-making process, based on the preference and specific needs of the customer,” Black added. 

HPE’s new business unit aims to fuel enterprise 5G adoption


Sabina Weston

25 Feb, 2021

Hewlett Packard Enterprise (HPE) has unveiled a new organisation that aims to help telcos and enterprises take advantage of the wide range of opportunities offered by the 5G market.

The newly-formed Communications Technology Group (CTG) was created by combining HPE’s Telco Infrastructure team with its Communications & Media Solutions (CMS) software arm. The latter alone generated more than $500 million (£355m) of revenue in fiscal year 2020, with orders growing by 18% and revenue increasing by 6% sequentially in the final quarter, according to HPE.

CTG comprises over 5,000 professionals who aim to provide consultancy, integration, installation, and support services tailored to the telecoms market.

Commenting on the announcement, CTG SVP and GM Phil Mottram said that “HPE aims to become the transformation catalyst for the 5G economy”, building on “more than 30 years of experience designing, building and tuning telco-grade infrastructure and software”.

“CTG’s founding principle is to drive innovation from edge to cloud through secure, open solutions. We are collaborating with customers and partners to build open 5G solutions that deliver efficiency, reduce risk and complexity, and future-proof the network across the telco core, the radio access network and the telco edge,” he added.

Mottram also unveiled two solutions which he described as foundational to HPE CTG, the first one being the HPE Open RAN Solution Stack, which features the industry’s first Open RAN workload-optimised server – the new HPE ProLiant DL110 Gen10 Plus. The Open RAN Solution Stack aims to enable the commercial deployment of Open RAN at scale in global 5G networks and includes infrastructure with RAN-specific blueprints, as well as orchestration and automation software.

The second “foundational” solution is the HPE 5G Core Stack, first announced in March 2020, which provides telecoms firms with 5G tech at the core of their mobile networks.

CTG’s portfolio is expected to play a crucial part in advancing HPE’s edge-to-cloud platform-as-a-service strategy, providing enterprise and telco solutions alike. These include open, cloud-native, flexible solutions that aim to facilitate the rollout of 5G services, as well as modular aaS solutions that use cloud economics to help businesses manage demand and future-proof their enterprise.

“I am confident that with such a solid foundation and common purpose, we will strengthen our thought leadership in the telecoms sector and are set on a path for innovation and growth,” said Mottram.

How to build a CMS with React and Google Sheets


Jessica Cregg

24 Feb, 2021

Launching a website can feel like a landmark moment. It’s the digital age equivalent of hanging an ‘Open For Business’ sign in your window. If your proverbial sign is static and rarely requires updates – for example, if it’s simply intended to communicate basic operating practices like opening hours, contact information and a list of services – then you can likely build a simple site in HTML and CSS, and wrap the project there. But for most businesses, something far more dynamic is required.

As the term implies, a Content Management System, or CMS, is a structure that enables users to model, edit, create and update content on a website without having to hard-code it into the site itself. Many businesses use off-the-shelf CMS tools like WordPress, Joomla, Magento or Squarespace to handle this function.

When it comes to pinpointing the factors that make a really good CMS stand out, the most successful out-of-the-box solutions share one common trait – they’re easy to interpret. For any CMS, its success as a product relies entirely on how easy it is for teams to create, manage and update logical data structures. The more pain-free this process is, the easier it becomes for teams to create robust content models. That being said, as any engineer will tell you, the implementation of relational databases is an art form in itself. 

The problem with these out-of-the-box solutions is that they can often include far too many features, cater to completely different levels of tech literacy, and what’s more, can make it far too easy for you to rack up an expensive bill. An alternative option is to build your own custom CMS that’s designed to meet your specific requirements.

While building the thing that you need is often viewed as the scenic route to solving your problem, when tackled the right way, it can turn out to be the quickest solution. An excellent way of streamlining a project with the potential for infinite scope-creep, while putting any information that’s lying dormant in a spreadsheet to good use, is to use Google Sheets as the foundation for your CMS. 

Despite the best efforts of AirTable, Notion and even Smartsheets, there’s a reason why spreadsheets have stood the test of time as a widely accepted file format. Chances are, no matter the skill level or aversion to technology, most people within a business are going to know how to navigate their way around a spreadsheet. 

In the following tutorial, you’ll see how, using popular JavaScript tools including the Node.js runtime and the React library, you can stand up a dynamic web application built right on top of Google Sheets. We’ll create a Node project to act as a container for our API queries, and then harness React to parse our data, which will then be presented and served to the user via a dynamic front-end. Here we’ll be taking advantage of React’s tendency to abstract away a lot of our application’s inner workings, along with its reusable components. The latter feature is perfect if you’re building out a broad website with a multitude of pages that you’d like all to have a consistent look and feel. 

For this example, we’ll be using the ‘Meet the Team’ page for a fictional technology enterprise business. If you’d like to use the example data we’re using for the purposes of this demonstration, you’ll find the spreadsheet linked below. 

What you’re essentially going to do is access your spreadsheet as an API, querying it in order to pull out the content for your front-end React page. We’ll be accessing the data from the sheet in JSON format, and then using the dotenv package in order to store the credentials needed to access the information from our Node project. This will parse the information and feed it through to the front-end, presenting it in a far more polished and stylised format. 

First up, let’s use the terminal to generate a new project to work from. Create a new directory for your project by running ‘mkdir’ followed by your project name, and then step into that directory using the ‘cd’ command. To create our Node project we’ll first run ‘$ npm init -y’ from the terminal, before creating the two additional files we need to get up and running, using the ‘touch’ command. 

One of these is our .env file, which will contain any sensitive keys, credentials or variable settings we’ll need in order to access our Sheet. Remember to add this file to your .gitignore if you decide to share your repository publicly, to prevent your keys from being deactivated and your credentials from being stolen. The last step, as illustrated in the code snippet below, is to install a few external packages we’ll be using in our project. We’ve covered dotenv, and google-spreadsheet is no surprise. The final one is Express, which we’ll be using as our Node.js framework due to its lightweight nature and its ability to quickly spin up servers.

$ mkdir project-name

$ cd project-name

$ npm init -y

$ touch index.js .env

$ npm i express google-spreadsheet dotenv

Once you’ve installed your external packages, open up your node project in your preferred text editor and initialise your server using the below code snippet: 

require("dotenv").config()
const { GoogleSpreadsheet } = require("google-spreadsheet")
const { OAuth2Client } = require("google-auth-library")
const express = require("express")()

// process.env global var is injected by Node at runtime
// Represents the state of the system environment
const p = process.env

// Set up GET route in Express
express.get("/api/team/", async (request, response) => {
  response.send("hello world")
})

// Express listener, on port specified in .env file
express.listen(p.PORT, () =>
  console.log(`Express Server currently running on port ${p.PORT}`)
)

Here, we’re calling the packages that allow us to access the spreadsheets object, and setting a variable ‘p’ that acts as a shortcut to process.env. Essentially, this represents the state of our system environment as the application starts running. With this shortcut in place, we’ll be able to access the spreadsheets object with far less effort. 

The rest of the program is initialising our express GET route (a.k.a. how we’ll be querying our sheet) and setting up a listen function in place to assist with our build. This will give us a handy prompt as to which port the express server is running on while we’re working to connect our React front-end to the application.    

Lastly, head into your .env file, assign your port number in plain text ("PORT=8000") and hit save. 

Run node index.js from the root in your terminal, and you should see the listen console.log message appear stating which port your server’s currently running on – in this case, it’s port 8000. 

If you head to your browser and access the port that your application is running on, you’ll notice that your GET method is failing. Let’s fix that. 

At the moment, we’ve got our method for querying our data established. We now need to get the data flowing through our server. The next step here is to assemble your Google Spreadsheet with both the headings and the content you’d like your node project and React app to pull through. If you’d like to follow along, feel free to make a copy of the ‘Meet the Team’ spreadsheet we’ve created. 

The first thing you’re going to need to lift is your SPREADSHEET_ID. You can find this by copying the long string (or slug) found in the URL following the /d/. For example, in this case, our slug is “1f7o11-W_NSygxXjex8lU0WZYs8HlN3b0y4Qgg3PX7Yk”. Once you’ve grabbed this string, include it in your .env file under SPREADSHEET_ID. At this stage, your .env file should look a little something like this: 
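If you’re following along with the example sheet, it might contain something like the following (a rough sketch – substitute your own ID and port as needed):

PORT=8000
SPREADSHEET_ID=1f7o11-W_NSygxXjex8lU0WZYs8HlN3b0y4Qgg3PX7Yk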

Next, head to the Google Sheets Node.js Quickstart page to enable the Google Sheets API. You’ll need to click the blue ‘Enable’ button before naming your project, selecting ‘Web Server’ and entering your localhost URL when configuring the OAuth values. 

If executed correctly, this step will have enabled the Google Sheets API from your Google Cloud Platform (GCP) account. To confirm that’s happened, simply head over to your GCP console and you’ll spot your project in your profile page. 

To authenticate this exchange of data, we’ll need to generate an API key. This will authorise your app to access your Google Drive, identify your Spreadsheet via the SPREADSHEET_ID, and then perform a GET request to retrieve the data to be parsed and displayed on your React front-end. 

To get hold of your API key, you’ll want to navigate to the credentials section of the GCP console, click the blue "+ CREATE CREDENTIALS" button and select the "API key" option. 

Once your key’s been generated, copy it and add it into your .env file under API_KEY. 

Perfect! Now let’s initialise and use the key within our code to authenticate our Google Sheets query. In the snippet below, you’ll notice that we’re using the await operator to coincide with the async function initiated at the beginning of the index.js program shown earlier in this tutorial. To view the complete code as a reference, you can head here to review and even clone the repository. 
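As a minimal sketch of that step (the definitive version lives in the linked repository; the p.SPREADSHEET_ID and p.API_KEY names assume the .env shortcut set up earlier), the body of the GET route might begin:

// Inside the async GET route handler in index.js
const doc = new GoogleSpreadsheet(p.SPREADSHEET_ID)
doc.useApiKey(p.API_KEY)            // authenticate with the key stored in .env
await doc.loadInfo()                // load document properties and worksheets
const sheet = doc.sheetsByIndex[0]  // the 'Meet the Team' data sits in the first sheet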

Now we’ve authorised our sheets object, it’s time to minify the data. This is a process by which we remove any unnecessary, superfluous data from the JSON object, so that we’re left with only the most vital sections of the object which we’ll model within our React front-end. 

Head to the index.js file of the linked repository and you’ll see how we’ve been able to do this. We start with an empty array, then iterate through the rows in the sheet, assigning each key from the corresponding column header and matching its value to the cell data for that particular row.
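As a rough illustration of that loop (variable names here are ours, not necessarily the repository’s), it might look like this:

const rows = await sheet.getRows()   // one object per spreadsheet row
const team = []
rows.forEach((row) => {
  const entry = {}
  // key = column header, value = that row's cell under the header
  sheet.headerValues.forEach((header) => {
    entry[header] = row[header]
  })
  team.push(entry)
})
response.send(team)                  // serve the minified JSON to the front-end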

If you’re familiar with React, then you’ve more than likely used the package create-react-app which is by far one of Facebook’s greatest gifts to the world of application development. With one command, you’re able to spin up an instance giving you the file structure and most of what you need to get going out of the box. 

The ‘create-react-app’ command generates a locally hosted, single-page React application that requires no configuration to get going. When you run it, create-react-app runs local checks on your target directory, builds your package.json, creates your dependency list and forms the structure of your bundled JS files. 

Let’s kick off this process by running the following at the root of our application:

$ npx create-react-app client

$ cd client

$ npm start

If you run into any problems with the version of npm/npx that you’re running, then the below modified command with an added bit of cache clearance should steer you right: 

$ npm uninstall -g create-react-app && npm i -g npm@latest && sudo npm cache clean -f && npx create-react-app client

If your command has run successfully, you’ll start to see create-react-app install its required packages and build out the file structure below. 

Lastly, you’ll notice that your application now has two package.json files. We’ll need to make an edit to the scripts section in the root, and then add one line below the ‘private’ entry in your client’s package.json. This is so that you can fire everything up with one simple command – the eternal saviour that is npm start. 

In your root:

"start": "node index.js -ignore './client'"

And in your client: 

"proxy": "http://localhost:8000"

Now with one run of npm start you’ll be met with your React front-end pulling through information from your Google Sheets CMS. 
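For illustration, a bare-bones client-side component fetching that data might look like the sketch below. The component name Team and the column headers Name and Role are placeholders – swap in whatever headers your sheet actually uses.

import React, { useState, useEffect } from "react"

function Team() {
  const [team, setTeam] = useState([])

  // Pull the minified rows from the Express route (proxied to port 8000)
  useEffect(() => {
    fetch("/api/team/")
      .then((res) => res.json())
      .then(setTeam)
  }, [])

  return (
    <ul>
      {team.map((member, index) => (
        <li key={index}>{member.Name}: {member.Role}</li>
      ))}
    </ul>
  )
}

export default Team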

If you check out the back-end server (running on port 8000) in your browser, you’ll be met with a JSON object displaying your CMS data in its raw, unformatted form. 

But, more importantly for the many stakeholders eager to access your newly-built site, here’s what you’ll see displayed on the front-end. 

What was once a set of entries on a spreadsheet is now a fully formatted, somewhat shiny React web app, ready to be deployed through your favourite hosting service. The great thing about this solution is that it can be incorporated into your wider React-based application and styled using one of the many free themes and templates out there, meaning your final output can actually scale with you. There you have it – yet another reason to love spreadsheets.

Intel joins forces with Google Cloud for 5G edge services


Bobby Hellard

24 Feb, 2021

Intel and Google Cloud have announced a partnership that will see the two firms develop integrated services to help network providers deliver 5G innovations across various platforms.

The collaboration is another step towards Intel’s goal to develop 5G networks with software-defined infrastructures and further evidence of Google Cloud’s ambitions in the 5G arena.

Telecom-based cloud architectures and integrated services from both Intel and Google Cloud will help to accelerate scalable network and edge deployments, particularly with multi-cloud architectures. These are thought to be critical in achieving the full potential of 5G, edge computing and AI across many industries, such as manufacturing, retail and healthcare.

The partnership will focus on three areas, the first of which will be to aid the acceleration of Virtualised RAN (vRAN) and Open Radio Access Networks (ORAN) with infrastructure and hardware support. There will also be a Network Functions Validation lab to support vendors in testing, optimising, and validating their core network functions that run on Google Cloud’s Anthos for Telecom platform. The lab environment will also expand to help customers conceive, plan, and validate their 5G and edge application strategies.

The partnership also includes edge services developed with Intel’s compute-optimisation technology and blueprints to accelerate edge transformation in certain industries.

“The next wave of network transformation is fueled by 5G and is driving a rapid transition to cloud-native technologies,” said Dan Rodriguez, Intel corporate vice president and general manager of the network platforms group.

“As communications service providers build out their 5G network infrastructure, our efforts with Google Cloud and the broader ecosystem will help them deliver agile, scalable solutions for emerging 5G and edge use cases.”

Google Cloud has been very active in this space. The firm recently announced an initiative to deliver more than 200 partner applications to the edge via its network and 5G service.

VMware patches critical ESXi and vSphere Client vulnerabilities


Keumars Afifi-Sabet

24 Feb, 2021

VMware has fixed three critically-rated flaws across its virtualisation products that could be exploited by hackers to conduct remote code execution attacks against enterprise systems.

The firm has issued updates for three flaws present across its VMware ESXi bare-metal hypervisor and vSphere Client virtual infrastructure management platform, including a severe bug rated 9.8 out of ten on the CVSS scale.

This vulnerability, tracked as CVE-2021-21972, is embedded in a vCenter Server plugin in the vSphere Client. Attackers with network access to port 443 may exploit this to execute commands with unrestricted privileges on the underlying operating system that hosts vCenter Server.

Also patched is CVE-2021-21974, which is a heap buffer overflow vulnerability in the OpenSLP component of ESXi and is also rated a severe 8.8. Cyber criminals lying dormant within the same network segment as ESXi, also with access to port 427, may trigger the issue in OpenSLP which could also result in remote code execution. 

Finally, CVE-2021-21973 is a server-side request forgery (SSRF) flaw in vSphere Client which has arisen due to improper validation of URLs in a vCenter Server plugin. This is not as severe as the other two bugs, having only been rated 5.3, but can also be exploited by those with access to port 443 to leak information. 

Workarounds for both CVE-2021-21972 and CVE-2021-21973 are detailed here, and can be used until a fix is deployed by the system administrator. 

Users can patch these flaws, however, by updating the products to the most recent versions. These include 7.0 U1c, 6.7U3I and 6.5 U3n of vCenter Server, 4.2 and 3.10.1.2 of Cloud Foundation, as well as ESXi70U1c-17325551, ESXi670-202102401-SG and ESXi650-202102101-SG of ESXi.

These vulnerabilities were privately brought to the attention of VMware and customers are urged to patch their systems immediately.

Red Hat closes purchase of multi-cloud container security firm StackRox


Rene Millman

24 Feb, 2021

Red Hat has finalised its acquisition of container security company StackRox. 

StackRox’s Kubernetes-native security technology will enable Red Hat customers to build, deploy, and secure applications across multiple hybrid clouds.

In a blog post, Ashesh Badani, senior vice president of cloud platforms at Red Hat, said over the past several years, the company has “paid close attention to how our customers are securing their workloads, as well as the growing importance of GitOps to organisations.”

“Both of these have reinforced how critically important it is for security to ‘shift left’ – integrated within every part of the development and deployment lifecycle and not treated as an afterthought,” Badani said.

Badani said the acquisition would allow Red Hat to add security into container build and CI/CD processes. 

“This helps to more efficiently identify and address issues earlier in the development cycle while providing more cohesive security up and down the entire IT stack and throughout the application lifecycle.”

He added the company’s software provides visibility and consistency across all Kubernetes clusters, helping reduce the time and effort needed to implement security while streamlining security analysis, investigation, and remediation.

“StackRox helps to simplify DevSecOps, and by integrating this technology into Red Hat OpenShift, we hope to enable users to enhance cloud-native application security across every IT footprint,” added Badani. Red Hat initially announced the acquisition in January. The terms of the deal were not disclosed.

In the previous announcement, Red Hat CEO Paul Cormier said securing Kubernetes workloads and infrastructure “cannot be done in a piecemeal manner; security must be an integrated part of every deployment, not an afterthought.”

Red Hat said it would open source StackRox’s technology post-acquisition and continue supporting the KubeLinter community and new communities as Red Hat works to open source StackRox’s offerings. 

KubeLinter is an open-source project StackRox started in October 2020 that analyses Kubernetes YAML files and Helm charts for correct configurations, focusing on enabling production readiness and security earlier in the development process.

StackRox will continue supporting multiple Kubernetes platforms, including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).

Kamal Shah, CEO of StackRox, said the deal was “a tremendous validation of our innovative approach to container and Kubernetes security.”