All posts by Keumars Afifi-Sabet

Google Cloud and SpaceX partner on Starlink internet service

Keumars Afifi-Sabet

14 May, 2021

SpaceX and Google Cloud Platform (GCP) have struck a partnership that’ll see the two companies deliver data management, cloud services, and applications to enterprise customers across the world.

The agreement will combine SpaceX’s flagship Starlink low-orbit satellite system with Google Cloud’s data centres to provide high-speed broadband to customers on the network edge.

Starlink, a low latency broadband system comprising roughly 1,500 satellites, will base its ground stations within Google’s data centres, with GCP’s high capacity private network supporting the delivery of the global satellite internet service.

The aim is to connect businesses and consumers to the internet and to cloud computing services regardless of where they’re based, and with the highest possible levels of connectivity.

“Applications and services running in the cloud can be transformative for organisations, whether they’re operating in a highly networked or remote environment,” said senior vice president for infrastructure at Google Cloud, Urs Hölzle.

“We are delighted to partner with SpaceX to ensure that organizations with distributed footprints have seamless, secure, and fast access to the critical applications and services they need to keep their teams up and running.”

Combining Starlink’s broadband system with Google’s infrastructure will offer organisations across the world networking availability and speeds that they should expect in the modern age, SpaceX president and COO Gwynne Shotwell added.

SpaceX began developing Starlink in 2015, and the system has undergone deployment tests over the last few years. The objective has been to deploy roughly 1,500 satellites by 2021 in order to launch the networking service for enterprise customers, which SpaceX has almost achieved.

SpaceX also submitted filings to the US Federal Communications Commission (FCC) in 2019 seeking approval for up to 30,000 additional satellites to complement the 12,000 Starlink satellites the FCC had already approved, according to Space News.

SpaceX previously struck a partnership with Microsoft in October 2020 to allow the computing giant to launch a fleet of satellites to host its Azure Space platform. The platform serves the space industry’s mission needs while also claiming to offer high networking speeds with low latency for public and private organisations.

The networking service, powered by GCP, will be available from the second half of this year.

Can IBM buy its way to cloud success?

Keumars Afifi-Sabet

14 May, 2021

IBM has been a fixture of the computing industry almost since its inception, defining various eras with products such as the Model 5150 or Watson, the AI-powered suite of business services. One of the secrets to its longevity has been a powerful ability to reinvent itself when market shifts threaten the viability of its business model. As a result, the company is just as relevant today as it was when founded in 1911. 

While we may not readily associate IBM with cloud computing, this is where the company sees its future, alongside the twin pillars of AI and quantum computing. As such, the firm has launched itself into a radical shift in pursuit of a revenue model reliant on expanding its hybrid cloud business. This is a strategy that’s seen IBM plot to cleave off its managed services business as well as make ten acquisitions within the space of a year, comprising one of the computing giant’s most comprehensive reinventions yet. It’s a process, however, that its executives feel is essential to IBM’s long-term survival.

The ‘$1 trillion hybrid cloud opportunity’

IBM’s leadership has often referenced the “$1 trillion hybrid cloud opportunity” as a key driver for the strategy, and for good reason. The market has shown a long-term move towards cloud services, Gartner VP analyst Craig Lowery tells IT Pro, with many businesses changing their strategies to help their clients achieve their cloud objectives. “Customers have been making their requirements known for many years,” Lowery says. IBM has, like many other companies, eventually had to respond to that, he adds, saying that its leadership “has taken the appropriate actions, as they see it, to align with customer needs”.

This explosive cloud growth coincides with the continued success of businesses such as AWS, Google Cloud and Alibaba, with a wave of digital transformation projects triggering an acceleration in cloud adoption. “Overall, these trends have maintained growth in cloud spending,” says Blake Murray, research analyst at Canalys. “However, increased spending is now happening across almost all industries, with the need for digitalisation, app modernisation, content streaming, collaboration software, online learning and gaming. This is likely to continue, as an increasingly digital world becomes a ‘new normal’.”

State of decline

Just as the fortunes of major cloud giants have surged, the financial power of IBM as a wider entity has dwindled over the previous decade.

Delving into specific business units, we can see that performance declined on all fronts between 2011 and 2016, especially in the Systems and Technology segment. Like-for-like comparisons beyond this point are difficult, as IBM underwent two internal restructures, once in 2015 and again in 2018, but these moves failed to stem the long-term trend and revenues continued to decline. At the same time, IBM’s cloud operations – spread across all divisions – began to spark into life, mirroring wider industry trends.

Today, cloud computing is one of IBM’s most important revenue streams and will continue to grow in significance. The rising value of the firm’s cloud business is clear, and a key reason why its leadership sees cloud computing as a future moneymaker.

Sparking an internal revolution

In October, IBM announced it would carve away its managed services business into a separate entity by the end of 2021. This is a key part of the overall strategy, the company’s vice-president for Hybrid Cloud EMEA, Agnieszka Bruyère, tells IT Pro, with its AI, quantum computing and cloud operations being recast as the three main pillars of IBM’s operations. 

The origins of this strategy stretch back two or three years, she adds, when the company first pinpointed the key role cloud computing would play in its clients’ digital transformation journeys. At that stage, however, 80% of its customers’ workloads were still residing in the data centre. This is partially why IBM is pursuing hybrid cloud. The firm, Bruyère explains, doesn’t consider the public cloud alone to be a viable long-term solution for helping its customers modernise. “It cannot be only a purely public cloud transformation,” she says. “It does not meet the companies’ reality in terms of security, compliance, business model, whatever, and really the best way to respond to companies’ challenges is a hybrid cloud strategy.”

The foundational step on this path was IBM’s record $34 billion acquisition of Red Hat, with the open-source giant brought in to bolster the company’s technology portfolio. Playing a key role in driving this deal forward was Arvind Krishna, who at the time was VP for hybrid cloud but was named CEO in April 2020. His promotion coincided with the recruitment of Bank of America veteran Howard Boville as his replacement. Since then, Bruyère tells IT Pro, IBM has adopted much-needed “clarity” on its hybrid cloud strategy, with the business taking more aggressive steps since.

The pair have played a key role in making a set of strategic acquisitions while paving the way for the divestiture of its entire managed services business. This follows a long history of divestments, Krishna recently commented, with IBM divesting networking in the 90s, PCs back in the 2000s and semiconductors about five years ago.

“We want to make sure we are focusing our investment in this space, and we really want to do it only in this space – hybrid cloud and AI,” Bruyère says. “Another new aspect is about the industrial offerings with the new management, and this is really important because it’s not only about building technical capabilities, but also bringing the regulation layer; the specifics for every industry.” 

The key difference since the leadership reshuffle is a strategic focus on the logistics around hybrid cloud, rather than the technology. The company has made efforts to apply its technology to the needs and requirements of particular industries, taking into account unique security, data protection and regulatory requirements, among other considerations. This was signalled with the launch of IBM Cloud for Financial Services, with specific sector-based services set to follow. 

IBM’s cloud computing ‘shopping spree’ 

The changed approach has also been expressed in the nature of IBM’s ten acquisitions since the Red Hat deal closed in 2019, one of the most recent being Taos Mountain, a cloud consultancy firm. IBM is hoping the services of each business, largely small enterprises, can give its wider cloud offering an added edge. 

Reflecting Bruyère’s assessment of IBM’s new strategic direction, Lowery highlights the importance of professional services in making cloud adoption work as the reason the company has focused on acquiring consultancies. Indeed, five of the ten are consultancy businesses. “The expertise about how to build in the cloud, how to build across clouds, how to build from cloud to your on-premises data centre – which is hybrid – most of that requires skills and expertise that are not readily available for hire, except through a professional services company,” he says. 

Red Hat, meanwhile, fits into the equation perfectly thanks to its technology for containers and container orchestration, as well as its OpenShift family of software products. “That technology is well-suited to building hybrid and multi-cloud solutions where you have one standard way for building applications,” Lowery adds. “It’s not the only way to solve hybrid and multi-cloud scenarios, but it is a valid way, and Red Hat brings IBM the technology to solve that particular set of problems in that way.” 

The rocky road to cloud success

Although the opportunity for IBM is undeniable, so too is the need for urgency. While the size of the cloud market has certainly grown in recent years, the grip of the biggest cloud companies has also tightened; as time passes it becomes increasingly difficult for a challenger to make serious inroads. 

Looking at how prospective customers plan to spend in the coming year, we can also see that IBM faces more of an uphill struggle for business than any other player in this space. 

Turning the tide commercially will be IBM’s most pressing challenge, although we can start to see these efforts pay off with a turnaround in IBM’s financial results for the first quarter of 2021. As far as Murray is concerned, the company is certainly on the right track with the actions it’s taking, especially the decision to spin off its managed services business into an entity named Kyndryl.

“It allows IBM to become much more nimble and responsive,” he explains, “increasing its relevance in a multi-cloud, hybrid world, and reducing competition with the largest systems integrators that will be critical partners for its hybrid cloud and AI offerings. The most important move it has made recently is establishing a new, simplified global sales structure and go-to-market model, giving partners ownership of all but its largest enterprise customers and removing compensation for IBM sales selling into any other accounts.”

Success will very much depend on IBM’s commitment to its new ecosystem and channel model, with a need to reduce complexity and refresh its rules of engagement, he adds. “In the past, IBM has made similar promises but failed to follow through. It now has an opportunity to establish itself as a vendor of partner choice.”

For Gartner’s Craig Lowery, the first sign of green shoots will be when his clients begin showing more interest. “We know when a company is making an impact,” he explains, “when Gartner clients start asking about them and are getting the message in the market that the company has made a significant change and that the change has some substance to it.” 

Given the long-term nature of this transition, Lowery advises IBM’s executives to remain consistent in their approach, but also not to shy away from the need to make tweaks as and when required. The fact IBM is making these structural changes, he notes, shows its executives understand the shift that’s required to stay relevant in the future. “It’s clear to me that IBM knows these changes are necessary and that it is willing to do the hard work to make it happen.”

Microsoft launches open source tool Counterfit to prevent AI hacking

Keumars Afifi-Sabet

4 May, 2021

Microsoft has launched an open source tool to help developers assess the security of their machine learning systems.

The Counterfit project, now available on GitHub, comprises a command-line tool and generic automation layer to allow developers to simulate cyber attacks against AI systems.

Microsoft’s red team has used Counterfit to test the company’s own AI models, while the wider business is also exploring the tool’s use in AI development.

Anyone can download the tool and deploy it through Azure Shell, to run in-browser, or locally in an Anaconda Python environment.

It can assess AI models hosted in various cloud environments, on premises, or at the edge. Microsoft also highlighted the tool’s flexibility: it’s agnostic to AI models and supports a variety of data types, including text, images, and generic input.
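Counterfit’s own commands aren’t detailed in the announcement, but the class of evasion attacks it automates can be sketched with a toy example. Everything here – the linear model, its weights, and the perturbation size – is invented for illustration and is not Counterfit’s actual API:

```python
# Toy illustration of an evasion attack of the kind Counterfit automates:
# nudge an input just enough that a simple classifier flips its decision.
# The model and numbers are invented; real attacks use published algorithms
# (e.g. FGSM) against real models.

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def predict(weights, x):
    """Minimal linear classifier: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def evade(weights, x, eps):
    """Perturb x against the score gradient (for a linear model, the weights)."""
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

weights = [2.0, -1.0]           # invented model parameters
x = [1.0, 1.0]                  # original input, classified as 1
x_adv = evade(weights, x, eps=0.6)

print(predict(weights, x))      # 1
print(predict(weights, x_adv))  # 0: a small perturbation flips the prediction
```

A tool like Counterfit packages many such published attack algorithms behind one interface, so testers don’t hand-craft the perturbation logic per model.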

“Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft said.

“This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”

The three key ways security professionals can deploy Counterfit are pen testing and red teaming AI systems, scanning AI systems for vulnerabilities, and logging attacks against AI models.

The tool comes preloaded with attack algorithms, while security professionals can also use the built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools for testing purposes.

Optionally, businesses can scan AI systems with relevant attacks any number of times to create baselines, with continuous runs as vulnerabilities are addressed, helping to measure ongoing progress.

Microsoft developed the tool out of a need to assess its own systems for vulnerabilities. Counterfit began life as a handful of attack scripts written to target individual AI models, and gradually evolved into an automation tool to attack multiple systems at scale.

The company says it has engaged a variety of its partners, customers, and government entities in testing the tool against machine learning models in their own environments.

Red Hat launches OpenShift Platform Plus alongside new managed cloud services

Keumars Afifi-Sabet

28 Apr, 2021

Red Hat has launched an advanced tier of its OpenShift container application platform, with added tools designed to offer a complete Kubernetes stack out-of-the-box. This is in addition to launching three new managed cloud services. 

Red Hat’s OpenShift Kubernetes Engine is the foundational layer of OpenShift, allowing customers to run containers across hybrid cloud deployments on the Red Hat Enterprise Linux (RHEL) OS. The OpenShift Container Platform adds developer and operations services, as well as advanced features for app development and modernisation. 

The third tier, OpenShift Platform Plus, builds on the OpenShift Container Platform to provide advanced security features, ‘day two’ management capabilities and a global container registry. It brings together all the aspects needed to build, deploy and run any application where OpenShift software runs, Red Hat claims.

Its launch has come alongside a set of managed cloud services tightly integrated with the Red Hat OpenShift platform to help organisations build, deploy and manage cloud-native apps across hybrid configurations. 

Red Hat OpenShift Streams for Apache Kafka, Red Hat OpenShift Data Science and OpenShift API Management are being launched to ease the complexities of modern IT environments, while not compromising on productivity. 

OpenShift Streams for Apache Kafka is designed to make it easier for customers to create, discover and connect to real-time data streams regardless of where they’re based.

OpenShift Data Science also offers organisations a way to develop, train and test machine learning models and export them in a container-ready format.

OpenShift API Management, meanwhile, reduces the operational cost of delivering API-first, microservices-based apps.

“To take full advantage of the open hybrid cloud, IT leaders need to be able to use the technologies that they need in whatever IT footprint makes sense for them,” said Red Hat’s executive vice president for products and technologies, Matt Hicks, at Red Hat Summit 2021. 

“Red Hat managed cloud services effectively drops many barriers that have kept organisations from harnessing the full potential of the hybrid cloud. We believe eliminating the traditional overhead of managing cloud-scale infrastructure will spark a genesis moment for customers and open up a future of possibility where those barriers once stood.”

Red Hat OpenShift Platform Plus adds Advanced Cluster Security for Kubernetes, a standalone product developed from the firm’s recent acquisition of StackRox. This offers built-in Kubernetes-native security tools to safeguard infrastructure and management workloads through an app’s development cycle. This is in addition to Advanced Cluster Management for Kubernetes and Red Hat Quay. The former brings end-to-end visibility and control of clusters, while the latter provides a secure registry for a consistent build pipeline.

“We believe this version addresses the need for a hybrid cloud solution that we hear from our customers, and we’ll be working to lead with customer-managed OpenShift across data centre, public and private cloud,” said senior vice president for cloud platforms at Red Hat, Ashesh Badani.

“This version also becomes a landing point for additional capabilities, and we have worked hard to reduce costs compared to purchasing any of these capabilities a la carte, and we will continue to offer all three versions so customers can best decide what’s appropriate for their use case, and subscribe to the best available version.”

One of the key appeals is it grants businesses system-level data collection and analysis, as well as more than 60 security policies out-of-the-box that can be enforced from the time apps are built to when they’re deployed. 

Red Hat OpenShift Platform Plus also lets organisations take a DevSecOps approach to security by integrating declarative security into developer tooling and workflows.

The three managed services, being launched in the coming months, build on Red Hat’s existing suite of OpenShift apps, allowing customers and partners to build an open Kubernetes-based hybrid cloud strategy.

Based on the open source Apache Kafka project, OpenShift Streams for Apache Kafka allows dev teams to more easily incorporate streaming data into their apps. Real-time data is critical to these apps, helping them provide more immediate digital experiences wherever a service is delivered.

OpenShift Data Science builds on Red Hat’s Open Data Hub project and provides faster development, training and testing of machine learning models without the expected infrastructure demands. 

Finally, the OpenShift API Management managed cloud service offers full API management for Red Hat OpenShift Dedicated, as well as OpenShift on AWS. This combines managed operations with native OpenShift integration to let organisations focus on innovation rather than infrastructure. 

Red Hat OpenShift API Management also enables customers to build their own API management program, with the capabilities to control access, monitor usage, share common APIs and evolve their overall application landscape through a single DevOps pipeline.

Red Hat bolsters Edge strategy with major RHEL platform update

Keumars Afifi-Sabet

28 Apr, 2021

Red Hat has added a host of features to its flagship Red Hat Enterprise Linux (RHEL) platform to refine the product as a lightweight, production-grade operating system for edge computing.

With RHEL 8.4, which will launch in the next few weeks, the company is adding Linux container deployment and management capabilities scaled for the intensive demands of edge computing. The latest version of the flagship operating system adds container images, enhanced management of edge deployments at scale, and automatic container updates.

Together, these improvements comprise the foundational layer for the ‘Red Hat Edge’ initiative, which aims to extend the capabilities of the firm’s hybrid cloud portfolio to edge computing across various industries. The segments being targeted initially include telecoms and transport, as well as smart vehicles. 

“The vision of open hybrid cloud – that sort of build once and deploy anywhere – that gets extended to the edge as we have seen this incredible thirst for the capabilities that edge can bring,” said senior vice president and general manager for Red Hat Enterprise Linux, Stefanie Chiras, speaking on a panel at Red Hat Summit 2021. 

“As we look at what we have done with open hybrid cloud, edge becomes that next extension for us to broaden out that hybrid cloud, bringing that choice that our platform delivers out into edge use cases.”

“To me when we look at our value in edge, it fundamentally comes to our capabilities in Linux. What we’ve done in RHEL, how we’ve built that out to be the heart of an ecosystem, and a stable, secure platform, how we’ve brought that into the OpenShift platform to deliver Kubernetes and management around containerisation – all of that consistency is critically important when we get out to the edge.” 

With updates to Podman, RHEL’s open standards-based container engine, the platform will help to maintain standardisation and control across several Linux containers, which is critical to edge deployments. 

There’s also new functionality in Image Builder, a tool that creates customised deployable operating system images for a variety of users. The tool now supports the creation of installation media tailored for bare metal, helping IT teams maintain a common foundation across disconnected edge environments.

Finally, the Red Hat Universal Base Image (UBI), which allows containers to retain RHEL’s traits such as security at the application level, is now available in a lightweight micro image. This makes it ideal for building redistributable, cloud-native app standardisation on an enterprise-grade Linux foundation, minus the overhead of an entire operating system deployment. 
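Because the micro image omits a package manager entirely, the pattern Red Hat documents for adding packages is a multi-stage build that installs into a chroot from a full UBI stage and copies the result in. A minimal sketch, assuming the standard registry paths and with python3 as an illustrative package choice:

```dockerfile
# Hypothetical multi-stage build onto ubi-micro (illustrative package choice).
# Stage 1: use the full UBI image, which has yum, to install into a chroot.
FROM registry.access.redhat.com/ubi8/ubi AS builder
RUN mkdir -p /mnt/rootfs && \
    yum install --installroot /mnt/rootfs python3 \
        --releasever 8 --setopt install_weak_deps=false --nodocs -y && \
    yum --installroot /mnt/rootfs clean all

# Stage 2: copy the populated root filesystem onto the minimal micro image.
FROM registry.access.redhat.com/ubi8/ubi-micro
COPY --from=builder /mnt/rootfs /
CMD ["python3", "--version"]
```

The final image carries only the installed packages on top of the micro base, keeping the footprint well below that of a full RHEL userland.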

The latest version of RHEL also adds a few non-edge-oriented features, including greater flexibility for cloud-based applications, more simplified and automated system configuration and management, as well as greater security. 

IBM launches suite of hybrid cloud storage services

Keumars Afifi-Sabet

27 Apr, 2021

IBM has unveiled a set of improvements to its storage portfolio designed to give its customers greater access to and management of their data across their complex hybrid cloud environments.

The technology giant, which has pivoted its operations towards hybrid cloud in recent months, will launch IBM Spectrum Fusion later this year, in addition to updating its IBM Elastic Storage System (ESS). 

IBM Spectrum Fusion is described as a container-native hyperconverged infrastructure (HCI) system that integrates compute, storage and networking functions into a single platform. It’s been designed to come equipped with Red Hat’s OpenShift to allow customers to support environments for both virtual machines (VMs) and containers, and provide software-defined storage for cloud, edge and containerised data centres. A software-defined storage (SDS) version will follow in 2022.

Updates to IBM’s ESS suite, meanwhile, include a revamped model ESS 5000 that delivers 10% greater capacity, as well as a new ESS 3200 which offers double the read performance of its predecessor. They’re designed to provide scalability at double the performance of previous models for faster access to enterprise data.

“It’s clear that to build, deploy and manage applications requires advanced capabilities that help provide rapid availability to data across the entire enterprise – from the edge to the data centre to the cloud,” said Denis Kennelly, general manager for IBM Storage Systems. 

“It’s not as easy as it sounds, but it starts with building a foundational data layer, a containerised information architecture and the right storage infrastructure.”

Spectrum Fusion integrates a fully containerised version of the parallel file system and data protection software to provide businesses with a streamlined way to discover data across the organisation. Customers can also use the system to virtualise and accelerate data sets more easily by using the most relevant storage tier. 

Businesses will also only need to manage a single copy of the data, no longer needing to create duplicate data when moving workloads across the business. This eases processing functions such as data analytics and artificial intelligence (AI).

With regards to IBM’s ESS updates, the IBM ESS 3200 is designed to provide data throughput of 80Gbps per node, which represents a 100% performance boost from its predecessor, the ESS 3000. The 3200 also offers up to eight InfiniBand HDR-200 or Ethernet-100 ports for high throughput and low latency.

The IBM ESS 5000 model has been updated to support 10% more density than previously available, for a total storage capacity of 15.2PB. In addition, all ESS systems are now equipped with streamlined containerised deployment capabilities, automated with the latest version of Red Hat Ansible. 

Both these models include containerised system software and support for Red Hat OpenShift and the Kubernetes Container Storage Interface (CSI), CSI snapshots and clones, Red Hat Ansible, Windows, Linux and bare metal environments. IBM Spectrum Scale is also built into them. The 3200 and 5000 units also work with IBM Cloud Pak for Data, its fully containerised platform of integrated data and AI services.

Intel’s data centre business shrinks 20% year-on-year

Keumars Afifi-Sabet

23 Apr, 2021

Intel’s data centre division sustained a 20.5% year-on-year decline in revenue, according to the firm’s first-quarter financial results for 2021, with overall revenues slipping slightly.

The embattled US chipmaker earned $19.7 billion during the first three months of 2021, with its data centre business accruing $5.56 billion. This, however, represents a $1.43 billion drop in earnings for its data centre business compared with the first quarter of 2020.

Intel’s overall revenues remained roughly flat, falling 0.6% over this period, with a 54% surge in revenue for notebook CPUs, driven largely by COVID-19, offsetting declines elsewhere.
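The data centre figures quoted above are internally consistent; the year-on-year decline can be reconstructed from them (all values taken from this article):

```python
# Sanity-check the data centre figures quoted above (in $ billions).
q1_2021 = 5.56               # data centre revenue, Q1 2021
drop = 1.43                  # year-on-year fall in data centre revenue
q1_2020 = q1_2021 + drop     # implied Q1 2020 revenue: $6.99bn

decline_pct = 100 * drop / q1_2020
print(f"Q1 2020 data centre revenue: ${q1_2020:.2f}bn")
print(f"Year-on-year decline: {decline_pct:.1f}%")  # matches the 20.5% reported
```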

“We are in a slowdown of cloud consumption – the cloud market typically goes through two to three quarters of purchasing activity and then one to two quarters of digestion,” Gartner VP analyst Alan Priestley, who covers Intel, told IT Pro. “Throughout 2020 cloud was a buying period and now we are entering a digestion period.

“The overall enterprise market is still impacted by the pandemic, 1Q20 was relatively strong (pre-pandemic) but it has been in decline since. Both Intel’s average selling price and units sold were down in this report while in 1Q20 they were relatively strong.”

The company described its data centre revenue as “better than expected” in a call with analysts, with Intel’s chief financial officer, George Davis, suggesting he expects revenues to increase through the rest of the year. This is because he expects momentum to pick up in the firm’s dealings with enterprise and government customers.

Intel is also pinning its hopes of a data centre recovery on the launch of its third-gen Intel Xeon Scalable processor, codenamed Ice Lake, which it claims delivers 62% greater performance than the previous generation of data centre chips.

The company also claims that this chip is the only data centre CPU with built-in artificial intelligence (AI) and advanced security capabilities.

Intel faces competition, however, in the form of Nvidia’s newly-announced Arm-based data centre CPU, which combines Arm CPU cores with a low-power memory subsystem to help it analyse gigantic datasets.

Dubbed Nvidia Grace, the chip’s launch is an attempt to topple Intel in the data centre market, after Nvidia chief executive Jensen Huang outlined plans to go after the chipmaker in August 2020.

In an interview with the Financial Times (FT), he spoke of the firm’s intentions to supply “the full technology stack needed to run data centres”. This launch also comes a year after Nvidia’s $7 billion acquisition of Mellanox Technologies, which makes critical data centre components.

Intel recently announced plans to revitalise its business after a rocky year, with the firm investing $20 billion into building two Arizona-based factories, alongside the pursuit of its integrated device manufacturing (IDM) 2.0 strategy.

This strategy will also see the company launch a fully-fledged foundry service, which will involve making custom CPUs for tech firms and national governments.

Hackers exploit Pulse Secure VPN flaws in sophisticated global campaign

Keumars Afifi-Sabet

21 Apr, 2021

At least two major hacking groups have deployed a dozen malware families to exploit vulnerabilities in Pulse Connect Secure’s suite of virtual private network (VPN) devices to spy on the US defence sector.

Hackers infiltrated the Pulse Connect Secure (PCS) platform by exploiting CVE-2021-22893, a critical remote code execution flaw rated a maximum of ten on the threat severity scale, in combination with a number of previously discovered vulnerabilities.

Ivanti, Pulse Secure’s parent company, has released mitigations for the flaw, as well as a tool to determine whether customers’ systems have been compromised, although a patch won’t be available until May 2021.

The purpose of the hack, and the scale of the infiltration, isn’t yet clear, but researchers with FireEye have linked the attack to Chinese state-backed groups. Although the predominant focus of their investigation was infiltration against US defence companies, researchers detected samples across the US and Europe. 

They were first alerted to several intrusions at defence, government and financial organisations around the world earlier this year, based on the exploitation of Pulse Secure VPN devices. They weren’t able to determine how hackers obtained administrative rights to the appliances, although they now suspect Pulse Secure vulnerabilities from 2019 and 2020 were to blame, while other intrusions were due to CVE-2021-22893.

They identified two groups, referred to as UNC2630 and UNC2717, each conducting attacks during this period against US defence agencies and global government agencies respectively. They suspect that at least the former operates on behalf of the Chinese government, although there isn’t enough evidence to make a determination on the second.

FireEye has recommended that all Pulse Connect Secure customers assess the impact of the available mitigations and apply them if possible. They should also use the most recent version of the Pulse Secure tool to detect whether their systems have been infiltrated. 

Scott Caveza, research engineering manager with Tenable, said that alongside the new flaw, attackers also seem to be leveraging three previously patched flaws: CVE-2019-11510, CVE-2020-8243 and CVE-2020-8260. The first of the three, which has been routinely exploited in the wild since it was first disclosed in August 2019, was among Tenable’s top five most commonly exploited flaws last year.

“Because it is a zero-day and the timetable for the release of a patch is not yet known, CVE-2021-22893 gives attackers a valuable tool to gain entry into a key resource used by many organizations, especially in the wake of the shift to the remote workforce over the last year,” said Caveza. 

“Attackers can utilise this flaw to further compromise the PCS device, implant backdoors and compromise credentials. While Pulse Secure has noted that the zero-day has seen limited use in targeted attacks, it’s just a matter of time before a proof-of-concept becomes publicly available, which we anticipate will lead to widespread exploitation, as we observed with CVE-2019-11510.”

Trend Micro research previously found that attackers were heavily targeting VPNs, including exploiting flaws present in Fortinet’s VPN and Pulse Connect Secure.

Google’s Project Zero trials 120-day disclosure window for new software flaws

Keumars Afifi-Sabet

16 Apr, 2021

Google’s Project Zero team has updated its vulnerability disclosure policies to introduce a 30-day cushion for businesses to apply patches to the flaws it discloses before revealing any precise exploit mechanisms.

Currently, the security research team adheres to a 90-day disclosure window, running from the point a vulnerability is reported to a vendor to the point it’s made public, in order to give software vendors enough time to develop a patch behind the scenes.

Project Zero’s new trial, however, will see the team tack on an additional 30 days to the original window before publishing any technical details, including details behind zero-day vulnerabilities. This will be cut to a period of seven days for bugs that hackers are actively exploiting.

Project Zero is making these changes to encourage faster patch development, to ensure that each fix is correct and comprehensive, and to shorten the time between a patch being released and users installing it.

The team also wants to reduce the risk of opportunistic attacks immediately after technical details are revealed. Flaws in F5 Networks’ BIG-IP software suite serve as a recent example of this phenomenon: hackers began scanning for vulnerable deployments shortly after technical details behind a handful of critically-rated flaws were published.

The trial is significant as many security research teams across the industry seek to mould their own disclosure policies around those adopted by Project Zero. The success of this trial, therefore, could pave the way for industry-wide changes.

For example, when Project Zero first introduced an automatic 90-day disclosure window in January 2020, a host of other teams shortly followed suit, including Facebook’s internal researchers in September that year.

“Much of the debate around vulnerability disclosure is caught up on the issue of whether rapidly releasing technical details benefits attackers or defenders more,” said Project Zero’s senior security engineering manager, Tim Willis.

“From our time in the defensive community, we’ve seen firsthand how the open and timely sharing of technical details helps protect users across the Internet. But we also have listened to the concerns from others around the much more visible ‘opportunistic’ attacks that may come from quickly releasing technical details.”

He added that despite continuing to believe that quick disclosure outweighs the risks, Project Zero was willing to incorporate feedback into its policies. “Heated discussions” about the risk and benefits of releasing technical details, or proof-of-concept exploits, have also been a significant roadblock to cooperation between researchers and vendors.

Project Zero will, in future, explore reducing the initial 90-day disclosure window in order to encourage vendors to develop patches far quicker than they currently do, with the aim of one day adopting something closer to a 60+30 policy. Based on its data, the team is likely to reduce the disclosure window in 2022 from 90+30 to 84+28.
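
The policy arithmetic is straightforward to sketch. The following is a hypothetical illustration (not a Project Zero tool), assuming the 30-day cushion also follows the shortened seven-day window for actively exploited bugs:

```python
from datetime import date, timedelta

def disclosure_dates(reported, actively_exploited=False):
    """Return (patch_deadline, details_published) under the trial policy:
    90 days to patch (7 if exploited in the wild), then a 30-day
    cushion before technical details are published."""
    patch_window = 7 if actively_exploited else 90
    patch_deadline = reported + timedelta(days=patch_window)
    details_published = patch_deadline + timedelta(days=30)
    return patch_deadline, details_published

# A bug reported on 16 April 2021 under the standard track:
deadline, details = disclosure_dates(date(2021, 4, 16))
print(deadline)  # 2021-07-15
print(details)   # 2021-08-14
```

Under the mooted 60+30 policy, only the `patch_window` value would change; the cushion before technical details stays fixed.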

Although vendors often do release patches in a timely manner, one of the biggest challenges in cyber security is encouraging customers to actually apply these updates to protect themselves against potential exploitation.

There are countless examples of patched vulnerabilities that are still being actively exploited because organisations have failed to apply the relevant updates.

The Cybersecurity and Infrastructure Security Agency (CISA), for instance, revealed in 2020 that many of the top ten most commonly exploited flaws were those for which patches had existed for years. As of December 2019, hackers were even exploiting a vulnerability in Windows common controls that Microsoft fixed in April 2012.

As the trial unfolds in the coming months, Project Zero has encouraged businesses keen to understand more about the vulnerabilities being disclosed to approach their vendors or suppliers for technical details.

The team won’t reveal any proofs-of-concept or technical details prior to the 30-day window elapsing unless there’s a mutual agreement between Project Zero and the vendor.

Android spyware disguised as ‘system update’ app discovered

Keumars Afifi-Sabet

29 Mar, 2021

A sophisticated strain of malware capable of stealing user data from infected Android devices is masquerading as a ‘System Update’ application.

The malicious mobile app, which functions as a Remote Access Trojan (RAT), is part of a sophisticated spyware campaign that has the ability to record audio from devices, take photos, and access WhatsApp messages, according to Zimperium researchers.

Once installed, it registers with its own Firebase command and control (C&C) server, a service normally used by legitimate Android developers, as well as a second independent C&C server, to send across an initial cache of information. This includes whether WhatsApp is installed, battery percentage, storage stats, and other device details. The app can only be installed from third-party stores, not the Google Play store.

The malware then receives commands to initiate various actions such as the recording of audio from the microphone or data exfiltration. Researchers have also discovered the malware is capable of inspecting web browsing data, stealing images and videos, monitoring GPS locations, stealing phone contacts and call logs, and exfiltrating device information.

The app also asks for permission to enable accessibility services, which it abuses to collect conversations and message details from WhatsApp by scraping on-screen content after detecting that the user is accessing the messaging service.

It hides by concealing the icon from the device’s main menu or app drawer, while also posing as the legitimate System Update app to avoid suspicion. When the device’s screen is turned off, the spyware creates a ‘searching for updates’ notification using the Firebase messaging service which allows it to generate push notifications.

The spyware’s functionality is triggered under various conditions, including when a new contact is added, a new text message is received, or a new application is installed. It does so by abusing Android components, including the ‘contentObserver’ and broadcast receivers, to trigger communication between the device and the attackers’ server.
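
This event-driven triggering is essentially the observer pattern: handlers are registered against named device events and run whenever that event fires. A language-neutral sketch (hypothetical names, not the malware’s actual code) might look like:

```python
from collections import defaultdict

class EventBus:
    """Minimal observer-pattern sketch: handlers register for named
    events and every registered handler runs when the event fires."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def on(self, event, handler):
        """Register a handler to run whenever `event` fires."""
        self.handlers[event].append(handler)

    def fire(self, event, payload=None):
        """Invoke all handlers for `event`, returning their results."""
        return [handler(payload) for handler in self.handlers[event]]

bus = EventBus()
# Hypothetical triggers mirroring those listed above.
bus.on("contact_added", lambda c: f"collect contact {c}")
bus.on("sms_received", lambda s: f"collect message {s}")
print(bus.fire("contact_added", "Alice"))  # ['collect contact Alice']
```

On Android, the registration step corresponds to declaring broadcast receivers and content observers; the pattern itself is the same.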

The Firebase messaging service is only used to initiate malicious functions, such as audio recording or data exfiltration, by sending commands to infected devices. The data itself is then collected by the second dedicated C&C server.

The spyware also only collects up-to-date information, with a refresh interval of roughly five minutes for location and networking data. The same applies to photos taken using the device’s camera, but the interval is instead set to 40 minutes.

Researchers have so far been unable to determine who is behind the campaign, or whether the hackers are trying to target specific users. Given this spyware can only be downloaded from outside the Google Play store, users are strongly advised not to install applications from untrusted third-party sources.