Red Hat launches OpenShift Platform Plus alongside new managed cloud services


Keumars Afifi-Sabet

28 Apr, 2021

Red Hat has launched an advanced tier of its OpenShift container application platform, with added tools designed to offer a complete Kubernetes stack out-of-the-box. This is in addition to launching three new managed cloud services. 

Red Hat’s OpenShift Kubernetes Engine is the foundational layer of OpenShift, allowing customers to run containers across hybrid cloud deployments on the Red Hat Enterprise Linux (RHEL) OS. The OpenShift Container Platform adds developer and operations services, as well as advanced features for app development and modernisation. 

The third and highest tier, OpenShift Platform Plus, builds on the OpenShift Container Platform to provide advanced security features, ‘day two’ management capabilities and a global container registry. It brings together all the aspects needed to build, deploy and run any application where OpenShift software runs, Red Hat claims.

Its launch has come alongside a set of managed cloud services tightly integrated with the Red Hat OpenShift platform to help organisations build, deploy and manage cloud-native apps across hybrid configurations. 

Red Hat OpenShift Streams for Apache Kafka, Red Hat OpenShift Data Science and OpenShift API Management are being launched to ease the complexities of modern IT environments, while not compromising on productivity. 

OpenShift Streams for Apache Kafka is designed to make it easier for customers to create, discover and connect to real-time data streams regardless of where they’re based.

OpenShift Data Science also offers organisations a way to develop, train and test machine learning models and export in a container-ready format.

OpenShift API Management, meanwhile, reduces the operational cost of delivering API-first, microservices-based apps.

“To take full advantage of the open hybrid cloud, IT leaders need to be able to use the technologies that they need in whatever IT footprint makes sense for them,” said Red Hat’s executive vice president for products and technologies, Matt Hicks, at Red Hat Summit 2021. 

“Red Hat managed cloud services effectively drops many barriers that have kept organisations from harnessing the full potential of the hybrid cloud. We believe eliminating the traditional overhead of managing cloud-scale infrastructure will spark a genesis moment for customers and open up a future of possibility where those barriers once stood.”

Red Hat OpenShift Platform Plus adds Advanced Cluster Security for Kubernetes, a standalone product developed from the firm’s recent acquisition of StackRox. This offers built-in Kubernetes-native security tools to safeguard infrastructure and management workloads through an app’s development cycle. This is in addition to Advanced Cluster Management for Kubernetes and Red Hat Quay. The former brings end-to-end visibility and control of clusters, while the latter provides a secure registry for a consistent build pipeline.

“We believe this version addresses the need for a hybrid cloud solution that we hear from our customers, and we’ll continue to lead with customer-managed OpenShift across data centre, public and private cloud,” said senior vice president for cloud platforms at Red Hat, Ashesh Badani.

“This version also becomes a landing point for additional capabilities, and we have worked hard to reduce costs compared to purchasing any of these capabilities a la carte, and we will continue to offer all three versions so customers can best decide what’s appropriate for their use case, and subscribe to the best available version.”

One of the key appeals is it grants businesses system-level data collection and analysis, as well as more than 60 security policies out-of-the-box that can be enforced from the time apps are built to when they’re deployed. 

Red Hat OpenShift Platform Plus also lets organisations take a DevSecOps approach to security by integrating declarative security into developer tooling and workflows.

The three managed services, being launched in the coming months, build on Red Hat’s existing suite of OpenShift apps, allowing customers and partners to build an open Kubernetes-based hybrid cloud strategy.

Based on the open source Apache Kafka project, OpenShift Streams for Apache Kafka allows dev teams to more easily incorporate streaming data into their apps. Real-time data is critical to these apps, helping them provide more immediate digital experiences wherever a service is delivered.
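To make the streaming model concrete, here is a toy Python sketch of the publish/subscribe pattern that Kafka-based streaming builds on. An in-memory queue stands in for a real broker; the `Topic` class and event shapes are illustrative only, not part of OpenShift Streams or any Kafka client API.

```python
from queue import Queue

class Topic:
    """Toy stand-in for a Kafka topic. A real deployment would talk to a
    broker through a client library rather than an in-memory queue."""
    def __init__(self, name):
        self.name = name
        self._events = Queue()

    def produce(self, event):
        # A producer appends an event to the end of the stream.
        self._events.put(event)

    def consume(self):
        # A consumer reads events in the order they were produced.
        return self._events.get()

clicks = Topic("page-clicks")
clicks.produce({"user": "alice", "page": "/home"})
clicks.produce({"user": "bob", "page": "/pricing"})
print(clicks.consume())  # {'user': 'alice', 'page': '/home'}
```

The key property the sketch demonstrates is that consumers see events in production order, which is what lets downstream apps react to data as it arrives.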

OpenShift Data Science builds on Red Hat’s Open Data Hub project and provides faster development, training and testing of machine learning models without the expected infrastructure demands. 

Finally, the OpenShift API Management managed cloud service brings full API management to Red Hat OpenShift Dedicated, as well as OpenShift on AWS. This combines managed operations with native OpenShift integration to let organisations focus on innovation rather than infrastructure. 

Red Hat OpenShift API Management also enables customers to build their own API management program, with the capabilities to control access, monitor usage, share common APIs and evolve their overall application landscape through a single DevOps pipeline.

Red Hat bolsters Edge strategy with major RHEL platform update


Keumars Afifi-Sabet

28 Apr, 2021

Red Hat has added a host of features to its flagship Red Hat Enterprise Linux (RHEL) platform to refine the product as a lightweight, production-grade operating system for edge computing.

With RHEL 8.4, which will launch in the next few weeks, the company is adding Linux container deployment and management capabilities scaled for the intensive demands of edge computing. The latest version of the flagship operating system adds container images, enhanced management of edge deployments at scale, and automatic container updates.

Together, these improvements comprise the foundational layer for the ‘Red Hat Edge’ initiative, which aims to extend the capabilities of the firm’s hybrid cloud portfolio to edge computing across various industries. The segments being targeted initially include telecoms, transport and smart vehicles. 

“The vision of open hybrid cloud – that sort of build once and deploy anywhere – that gets extended to the edge as we have seen this incredible thirst for the capabilities that edge can bring,” said senior vice president and general manager for Red Hat Enterprise Linux, Stefanie Chiras, speaking on a panel at Red Hat Summit 2021. 

“As we look at what we have done with open hybrid cloud, edge becomes that next extension for us to broaden out that hybrid cloud, bringing that choice that our platform delivers out into edge use cases.”

“To me when we look at our value in edge, it fundamentally comes to our capabilities in Linux. What we’ve done in RHEL, how we’ve built that out to be the heart of an ecosystem, and a stable, secure platform, how we’ve brought that into the OpenShift platform to deliver Kubernetes and management around containerisation – all of that consistency is critically important when we get out to the edge.” 

With updates to Podman, RHEL’s open standards-based container engine, the platform will help to maintain standardisation and control across several Linux containers, which is critical to edge deployments. 

There’s also new functionality in Image Builder, a tool that creates customised, deployable operating system images for a variety of users. This tool supports the creation of installation media tailored for bare metal, which helps IT teams maintain a common foundation across disconnected edge environments.

Finally, the Red Hat Universal Base Image (UBI), which allows containers to retain RHEL’s traits such as security at the application level, is now available in a lightweight micro image. This makes it ideal for standardising redistributable, cloud-native apps on an enterprise-grade Linux foundation, without the overhead of an entire operating system deployment. 

The latest version of RHEL also adds a few non-edge-oriented features, including greater flexibility for cloud-based applications, more simplified and automated system configuration and management, as well as greater security. 

IBM launches suite of hybrid cloud storage services


Keumars Afifi-Sabet

27 Apr, 2021

IBM has unveiled a set of improvements to its storage portfolio designed to give its customers greater access to and management of their data across their complex hybrid cloud environments.

The technology giant, which has pivoted its operations towards hybrid cloud in recent months, will launch IBM Spectrum Fusion later this year, in addition to updating its IBM Elastic Storage System (ESS). 

IBM Spectrum Fusion is described as a container-native hyperconverged infrastructure (HCI) system that integrates compute, storage and networking functions into a single platform. It’s been designed to come equipped with Red Hat’s OpenShift to allow customers to support environments for both virtual machines (VMs) and containers, and provide software-defined storage for cloud, edge and containerised data centres. A software-defined storage (SDS) version will follow in 2022.

Updates to IBM’s ESS suite, meanwhile, include a revamped ESS 5000 model that delivers 10% greater capacity, as well as a new ESS 3200 which offers double the read performance of its predecessor. They’re designed to provide scalability at double the performance of previous models for faster access to enterprise data.

“It’s clear that to build, deploy and manage applications requires advanced capabilities that help provide rapid availability to data across the entire enterprise – from the edge to the data centre to the cloud,” said Denis Kennelly, general manager for IBM Storage Systems. 

“It’s not as easy as it sounds, but it starts with building a foundational data layer, a containerised information architecture and the right storage infrastructure.”

Spectrum Fusion integrates a fully containerised version of IBM’s Spectrum Scale parallel file system and its data protection software to provide businesses with a streamlined way to discover data across the organisation. Customers can also use the system to virtualise and accelerate data sets more easily by using the most relevant storage tier. 

Businesses will also only need to manage a single copy of the data, no longer needing to create duplicate data when moving workloads across the business. This eases processing functions such as data analytics and artificial intelligence (AI).

With regards to IBM’s ESS updates, the IBM ESS 3200 is designed to provide data throughput of 80GB/s per node, which represents a 100% performance boost over its predecessor, the ESS 3000. The 3200 also offers up to eight InfiniBand HDR-200 or Ethernet-100 ports for high throughput and low latency.

The IBM ESS 5000 model has been updated to support 10% more density than previously available, for a total storage capacity of 15.2PB. In addition, all ESS systems are now equipped with streamlined containerised deployment capabilities automated with the latest version of Red Hat Ansible. 

Both these models include containerised system software and support for Red Hat OpenShift and the Kubernetes Container Storage Interface (CSI), CSI snapshots and clones, Red Hat Ansible, Windows, Linux and bare metal environments. IBM Spectrum Scale is also built into them. The 3200 and 5000 units also work with IBM Cloud Pak for Data, its fully containerised platform of integrated data and AI services.

Intel’s data centre business shrinks 20% year-on-year


Keumars Afifi-Sabet

23 Apr, 2021

Intel’s data centre division sustained a 20.5% year-on-year decline, according to the firm’s first-quarter financial results for 2021, with overall revenues falling by 1%.

The embattled US chipmaker reported revenues of $19.7 billion for the first three months of 2021, with its data centre business accruing $5.56 billion. This, however, represents a $1.43 billion drop in revenue for its data centre business compared with the first quarter of 2020.

Intel’s overall revenues remained roughly flat, falling 0.6% over this period, with a 54% surge in revenue for notebook CPUs, driven largely by COVID-19, offsetting declines elsewhere.

“We are in a slowdown of cloud consumption – the cloud market typically goes through two to three quarters of purchasing activity and then one to two quarters of digestion,” Gartner VP analyst Alan Priestley, who covers Intel, told IT Pro. “Throughout 2020 cloud was a buying period and now we are entering a digestion period.

“The overall enterprise market is still impacted by the pandemic, 1Q20 was relatively strong (pre-pandemic) but it has been in decline since. Both Intel’s average selling price and units sold were down in this report while in 1Q20 they were relatively strong.”

The company described its data centre revenue as “better than expected” in a call with analysts, with Intel’s chief financial officer, George Davis, suggesting he expects revenues to increase through the rest of the year. This is because he expects momentum to pick up in the firm’s dealings with enterprise and government customers.

Intel is also pinning its hopes of a data centre recovery on the launch of its third-gen Intel Xeon Scalable processor, codenamed Ice Lake, which the company claims delivers 62% greater performance than the previous generation of data centre chips.

The company also claims that this chip is the only data centre CPU with built-in artificial intelligence (AI) and advanced security capabilities.

Intel faces competition, however, in the form of Nvidia’s newly-announced Arm-based data centre CPU, which combines Arm CPU cores with a low-power memory subsystem to help it analyse gigantic datasets.

Dubbed Nvidia Grace, the launch of this chip is an attempt to topple Intel in the data centre market, after its chief executive Jensen Huang outlined plans to go after the chipmaker in August 2020.

In an interview with the Financial Times (FT), he spoke of the firm’s intentions to supply “the full technology stack needed to run data centres”. This launch also comes a year after Nvidia’s $7 billion acquisition of Mellanox Technologies, which makes critical data centre components.

Intel recently announced plans to revitalise its business after a rocky year, with the firm investing $20 billion into building two Arizona-based factories, alongside the pursuit of its integrated device manufacturing (IDM) 2.0 strategy.

This strategy will also see the company launch a fully-fledged foundry service, which will involve making custom CPUs for tech firms and national governments.

Hackers exploit Pulse Secure VPN flaws in sophisticated global campaign


Keumars Afifi-Sabet

21 Apr, 2021

At least two major hacking groups have deployed a dozen malware families to exploit vulnerabilities in Pulse Connect Secure’s suite of virtual private network (VPN) devices to spy on the US defence sector.

Hackers infiltrated the Pulse Connect Secure (PCS) platform by exploiting CVE-2021-22893, a critical remote code execution flaw rated a maximum of ten on the threat severity scale, in combination with a number of previously discovered vulnerabilities.

Ivanti, Pulse Secure’s parent company, has released mitigations for the flaw, as well as a tool to determine whether customers’ systems have been compromised, although a patch won’t be available until May 2021.

The purpose of the hack, and the scale of the infiltration, isn’t yet clear, but researchers with FireEye have linked the attack to Chinese state-backed groups. Although the predominant focus of their investigation was infiltration against US defence companies, researchers detected samples across the US and Europe. 

They were first alerted to several intrusions at defence, government and financial organisations around the world earlier this year, based on the exploitation of Pulse Secure VPN devices. They weren’t able to determine how hackers obtained administrative rights to the appliances, although they now suspect Pulse Secure vulnerabilities from 2019 and 2020 were to blame, while other intrusions were due to CVE-2021-22893.

They identified two groups, referred to as UNC2630 and UNC2717, each conducting attacks during this period against US defence agencies and global government agencies respectively. They suspect that at least the former operates on behalf of the Chinese government, although there isn’t enough evidence to make a determination on the second.

FireEye has recommended that all Pulse Connect Secure customers assess the impact of the available mitigations and apply them if possible. They should also use the most recent version of the Pulse Secure tool to detect whether their systems have been infiltrated. 

Scott Caveza, research engineering manager with Tenable, said that alongside the new flaw, attackers also seem to be leveraging three previously fixed flaws: CVE-2019-11510, CVE-2020-8243 and CVE-2020-8260. The first of the three, which has been routinely exploited in the wild since it was first disclosed in August 2019, was among Tenable’s top five most commonly exploited flaws last year. 

“Because it is a zero-day and the timetable for the release of a patch is not yet known, CVE-2021-22893 gives attackers a valuable tool to gain entry into a key resource used by many organizations, especially in the wake of the shift to the remote workforce over the last year,” said Caveza. 

“Attackers can utilise this flaw to further compromise the PCS device, implant backdoors and compromise credentials. While Pulse Secure has noted that the zero-day has seen limited use in targeted attacks, it’s just a matter of time before a proof-of-concept becomes publicly available, which we anticipate will lead to widespread exploitation, as we observed with CVE-2019-11510.”

Trend Micro research previously found that attackers were heavily targeting VPNs, including exploiting flaws present in Fortinet’s VPN and Pulse Connect Secure.

Google’s Project Zero trials 120-day disclosure window for new software flaws


Keumars Afifi-Sabet

16 Apr, 2021

Google’s Project Zero team has updated its vulnerability disclosure policies to introduce a 30-day cushion for businesses to apply patches to the flaws it discloses before revealing any precise exploit mechanisms.

Currently, the security research team adheres to a 90-day disclosure window, which runs from the point a vulnerability is reported to a vendor to when it’s made public, in order to give software vendors enough time to develop a patch behind the scenes.

Project Zero’s new trial, however, will see the team tack on an additional 30 days to the original window before publishing any technical details, including details behind zero-day vulnerabilities. This will be cut to a period of seven days for bugs that hackers are actively exploiting.
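As a quick illustration of the arithmetic, here is a hypothetical helper (not Project Zero’s actual tooling) that computes the two dates implied by the trial policy described above:

```python
from datetime import date, timedelta

def disclosure_timeline(reported, actively_exploited=False):
    # 90 days until public disclosure (cut to 7 if the bug is already
    # being exploited), then a further 30 days before technical details.
    window = 7 if actively_exploited else 90
    disclosed = reported + timedelta(days=window)
    details_out = disclosed + timedelta(days=30)
    return disclosed, details_out

# A bug reported on 16 April 2021 under the standard window:
print(disclosure_timeline(date(2021, 4, 16)))
# (datetime.date(2021, 7, 15), datetime.date(2021, 8, 14))
```

Under the in-the-wild path the same bug would be disclosed on 23 April 2021, with technical details following 30 days later.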

Project Zero is making these changes to encourage faster patch development, to ensure that each fix is correct and comprehensive, and to shorten the time between a patch being released and users installing it.

The team also wants to reduce the risk of opportunistic attacks immediately after technical details are revealed. Flaws in F5 Networks’ BIG-IP software suite serve as a recent example of this phenomenon, where hackers began scanning for vulnerable deployments shortly after technical details behind a handful of critically-rated flaws were published.

The trial is significant as many security research teams across the industry seek to mould their own disclosure policies around those adopted by Project Zero. The success of this trial, therefore, could pave the way for industry-wide changes.

For example, when Project Zero first introduced an automatic 90-day disclosure window in January 2020, a host of other teams shortly followed suit, including Facebook’s internal researchers in September that year.

“Much of the debate around vulnerability disclosure is caught up on the issue of whether rapidly releasing technical details benefits attackers or defenders more,” said Project Zero’s senior security engineering manager, Tim Willis.

“From our time in the defensive community, we’ve seen firsthand how the open and timely sharing of technical details helps protect users across the Internet. But we also have listened to the concerns from others around the much more visible “opportunistic” attacks that may come from quickly releasing technical details.”

He added that despite continuing to believe that quick disclosure outweighs the risks, Project Zero was willing to incorporate feedback into its policies. “Heated discussions” about the risk and benefits of releasing technical details, or proof-of-concept exploits, have also been a significant roadblock to cooperation between researchers and vendors.

Project Zero will, in future, explore reducing the initial 90-day disclosure window in order to encourage vendors to develop patches far quicker than they currently do, with the aim of one day adopting something closer to a 60+30 policy. Based on its data, the team is likely to reduce the disclosure window in 2022 from 90+30 to 84+28.

Although vendors often do release patches in a timely manner, one of the biggest challenges in cyber security is encouraging customers to actually apply these updates to protect themselves against potential exploitation.

There are countless examples of patched vulnerabilities that are still being actively exploited because organisations have failed to apply the relevant updates.

The Cybersecurity and Infrastructure Security Agency (CISA), for instance, revealed in 2020 that many of the top-ten most commonly exploited flaws were those for which patches have existed for years. As of December 2019, hackers were even exploiting a vulnerability in Windows common controls that Microsoft fixed in April 2012.

As the trial unfolds in the coming months, Project Zero has encouraged businesses keen to understand more about the vulnerabilities being disclosed to approach their vendors or suppliers for technical details.

The team won’t reveal any proofs-of-concept or technical details prior to the 30-day window elapsing unless there’s a mutual agreement between Project Zero and the vendor.

Android spyware disguised as ‘system update’ app discovered


Keumars Afifi-Sabet

29 Mar, 2021

A sophisticated strain of malware capable of stealing user data from infected Android devices is masquerading as the System Update application.

The malicious mobile app, which functions as a Remote Access Trojan (RAT), is part of a sophisticated spyware campaign that has the ability to record audio from devices, take photos, and access WhatsApp messages, according to Zimperium researchers.

Once installed, it registers with its own Firebase command and control (C&C) server, normally used by legitimate Android developers, as well as a second independent C&C server, to send across an initial cache of information. This includes whether WhatsApp is installed, battery percentage, storage stats and other details. The app can only be installed from a third-party store, not the Google Play store.

The malware then receives commands to initiate various actions such as the recording of audio from the microphone or data exfiltration. Researchers have also discovered the malware is capable of inspecting web browsing data, stealing images and videos, monitoring GPS locations, stealing phone contacts and call logs, and exfiltrating device information.

The app also asks for permission to enable accessibility services, and abuses this to collect conversations and message details from WhatsApp by scraping the content on the screen after detecting whether the user is accessing the messaging service.

It hides by concealing the icon from the device’s main menu or app drawer, while also posing as the legitimate System Update app to avoid suspicion. When the device’s screen is turned off, the spyware creates a ‘searching for updates’ notification using the Firebase messaging service which allows it to generate push notifications.

The spyware’s functionality is triggered under various conditions, including when a new contact is added, a new text message is received or a new application is installed. It does so by exploiting Android components such as ‘contentObserver’ and broadcast receivers, which allow communication between the device and the server.

The Firebase messaging service is only used to initiate malicious functions, such as audio recording or data exfiltration, by sending commands to infected devices. The data itself is then collected by the second dedicated C&C server.

The spyware also only collects up-to-date information, with a refresh rate of roughly five minutes for location and networking data. The same applies to photos taken using the device’s camera, but the value is instead set to 40 minutes.
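The interval gating described above can be pictured with a small Python sketch. This is illustrative only: the class is not taken from the spyware itself, and the intervals are assumptions drawn from the reported refresh rates.

```python
class ThrottledCollector:
    """Allows a new collection only once the configured interval has
    elapsed since the previous one."""
    def __init__(self, interval_seconds):
        self.interval = interval_seconds
        self.last = None

    def should_collect(self, now):
        # First call always collects; later calls wait out the interval.
        if self.last is None or now - self.last >= self.interval:
            self.last = now
            return True
        return False

location = ThrottledCollector(5 * 60)   # location/network data: ~5 minutes
camera = ThrottledCollector(40 * 60)    # photos: ~40 minutes

print(location.should_collect(0))    # True  - first collection always runs
print(location.should_collect(60))   # False - only a minute has passed
print(location.should_collect(300))  # True  - five minutes have elapsed
```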

Researchers have so far been unable to determine who is behind the campaign, or whether the hackers are trying to target specific users. Given this spyware can only be downloaded outside of the Google Play store, users are strongly advised not to download applications to their phones from unsafe third-party sources.

Home Office migrates key HR workloads to Oracle Cloud


Keumars Afifi-Sabet

23 Mar, 2021

The UK Home Office has successfully transferred a handful of critical functions to Oracle Cloud in order to modernise central back-office processes. 

The central government department, which employs more than 35,000 people, has migrated HR, payroll, finance, customer support and employee analytics services to Oracle Cloud to automate, standardise and integrate these processes. 

The adoption of Oracle’s Fusion Cloud Applications suite of business services will also see the Home Office modernise and improve its finance, HR, procurement, customer support and expense systems. 

Specifically, the department has adopted Oracle Fusion Cloud Human Capital Management for HR functions, including payroll, and Oracle Fusion Cloud Customer Experience (CX) for service and support. This builds on the department’s previous implementation of Oracle Fusion Cloud Enterprise Resource Planning (ERP) for finance.

These cloud migrations aim to boost productivity and reduce long-term costs at a time when there’s growing pressure on public finances due to the government’s coronavirus response. 

“The Home Office is one of the largest and most complex government departments in the UK to have successfully migrated all of its finance, commercial, HR and payroll footprint to the cloud,” said the Home Office’s chief people officer, Jill Hatcher. 

“This programme has charted the path for other departments to build on our collective experience. This go-live is a critical step in delivering business technology that is more user-centric and allows the Home Office to continually evolve.”

The Home Office had previously worked with the Government Shared Service (GSS) to develop a blueprint that other government departments could use to move their own key business processes to the cloud.

Developed with help from Fujitsu, SSCL and Accenture, the project, dubbed Metis, began by moving the Home Office’s finance, procurement and expense systems to Oracle Cloud ERP. 

“Recent disruptions and challenging economic forecasts have put pressure on many government departments,” said Oracle’s executive vice president for Applications Development, Steve Miranda. 

“We’re proud to help the Home Office of the UK standardise and modernise the way it works. Moving finance, HR, and customer support to the cloud will help the department to deliver more value to UK citizens.”

Last year, the government signed a string of deals with major cloud providers, including AWS, UKCloud and Google Cloud, in order to offer public sector organisations a plethora of options for easy cloud migration.

In October last year, Oracle launched a next-gen dual-region government cloud for use by UK public sector organisations and their partners, including access to a host of cloud-based services such as Oracle Cloud VMware and Kubernetes.

Nokia agrees 5G cloud deals with AWS, Azure and Google


Keumars Afifi-Sabet

16 Mar, 2021

Nokia has struck partnerships with the three biggest public cloud providers to combine their respective technologies to develop packages suited to addressing 5G use cases for its customers. 

Partnering with Microsoft, Nokia hopes to integrate its mobile network technologies with Microsoft Azure cloud-based services and its developer ecosystem to build 4G and 5G private wireless use cases for enterprises. 

The firm will offer its cloud radio access network (Cloud RAN), Open RAN, RAN Intelligent Controller (RIC) and multi-access edge cloud (MEC) with the Azure Private Edge Zone, which allows for data processing close to the user. Nokia will also integrate its 5G RAN with Azure 4G/5G core to showcase how blending these technologies can support Microsoft’s enterprise customers.

An agreement with AWS will see Nokia research and enable its Cloud RAN and Open RAN to support the development of 5G products, with this partnership centred on developing proofs of concept to explore how the networking technologies can be deployed. 

The programme will see engineering teams from both firms delve into how Nokia’s RAN (Radio Access Network), Open RAN, Cloud RAN and edge computing systems can work seamlessly with AWS Outposts. This partnership will allow service providers and 5G-ready businesses to use AWS across the entirety of their mobile network. 

In working with Google Cloud, meanwhile, Nokia hopes to develop new, cloud-based 5G radio systems. The two companies will collaborate on a joint product combining Nokia’s networking tech with Google’s edge computing platform and appliances ecosystem. The aim, ultimately, is to develop use cases that solve 5G scenarios for businesses across the world. 

Work is already underway to focus on Cloud RAN, integrating Nokia’s 5G technologies with Google’s edge computing platform, which runs on Anthos. The 5G standalone network will also be tested on the Anthos platform as a cloud-native deployment.

With these partnerships, Nokia is aiming to move away from traditional infrastructure and towards the cloud, with network operators able to launch new services much more quickly by taking advantage of virtualisation and edge computing. 

The deals were announced shortly before Nokia revealed it would be cutting up to 10,000 jobs over the course of the next two years as part of wider efforts to restructure its business groups and make cost savings. The firm intends to invest heavily in R&D and future capabilities, including 5G, cloud and digital infrastructure as part of a wider package of long-term investments.  

Chinese hackers target Linux systems with RedXOR backdoor


Keumars Afifi-Sabet

11 Mar, 2021

Hackers are targeting legacy Linux systems with sophisticated malware believed to have been developed by cyber criminals backed by the Chinese state.

The malware, branded RedXOR, encodes its network data with a scheme based on the XOR Boolean logic operation used in cryptography, and is compiled with a legacy compiler on an older release of Red Hat Enterprise Linux (RHEL).
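For illustration, repeating-key XOR obfuscation in general looks like the sketch below. RedXOR’s exact scheme hasn’t been published, so this is a generic example of the technique, not the malware’s actual code; the key and payload are invented.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each payload byte against a repeating key. Because XOR is its
    # own inverse (b ^ k ^ k == b), one routine both encodes and decodes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"example-key"          # illustrative key, not the malware's
payload = b"host=legacy-rhel"
encoded = xor_crypt(payload, key)

assert encoded != payload                  # obfuscated on the wire
assert xor_crypt(encoded, key) == payload  # round-trips back to plaintext
```

The scheme is trivially reversible once the key is known, which is one reason analysts can recover and monitor such traffic.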

This, according to Intezer researchers, suggests RedXOR is being used in targeted attacks against legacy systems.

Its operators deploy RedXOR to infiltrate Linux endpoints and systems in order to browse files, steal data, upload or download data, as well as tunnel network traffic. The backdoor is also difficult to identify, disguising itself as a polkit daemon, the background process that manages system-wide privileges.

“Based on victimology, as well as similar components and Tactics, Techniques, and Procedures (TTPs), we believe RedXOR was developed by high profile Chinese threat actors,” said Intezer researchers Avigayil Mechtinger and Joakim Kennedy.

“Linux systems are under constant attack given that Linux runs on most of the public cloud workload. Along with botnets and cryptominers, the Linux threat landscape is also home to sophisticated threats like RedXOR developed by nation-state actors.”

Upon installation, the malware moves its binaries to a hidden folder dubbed ‘po1kitd.thumb’, as part of its efforts to disguise itself as the polkit daemon. The malware then communicates with the command and control server in the guise of HTTP traffic, from where instructions are then sent.

Researchers have monitored the server issuing a total of 19 separate commands, including requesting system information and issuing updates to the malware. The command and control server’s intermittent ‘on and off’ availability also indicates the operation is still active, the researchers claim.

To build the backdoor, the hackers used the Red Hat 4.4.7 GNU Compiler Collection (GCC) compiler, which is the default GCC for RHEL 6. This was first released in 2010.

Mainstream support for RHEL 6 only ended recently, in November 2020, meaning a swathe of servers and endpoints are likely still running it. Intezer, however, hasn’t disclosed the number or nature of the victims it’s identified. According to Enlyft, roughly 50,000 companies use RHEL installations.

Although the discovery of Linux malware families has increased in recent times, backdoors attributed to advanced threat groups, such as nation state-backed attackers, are far rarer.

Researchers are confident in their attribution, however, identifying 11 distinct similarities between RedXOR and the PWNLNX backdoor, as well as parallels with the XOR.DDOS and Groundhog botnets – all associated with hackers supported by the Chinese state.

The samples discovered were also uploaded from Indonesia and Taiwan, countries known to be targeted by state-backed hackers operating from China.