All posts by Rene Millman

Education and government most at risk from email threats


Rene Millman

26 Nov, 2021

Organizations in the education sector and local and state government are most at risk from email threats, according to a new report.

The report, published by IT security firm Cyren, also found that phishing remains the dominant form of attack against all industries.

Based on data gathered from nearly 45,000 incidents, researchers found that the education sector received more than five threats per thousand emails. State and local government bodies received just over two threats per thousand emails, nearly double the rate seen in the next most targeted industry, software.

The report also looked at the number of attacks per 100 users across a wide range of industries. It found nearly 400 attacks per 100 users in education, compared with just over 150 in the construction industry.

Researchers said there was a surprisingly low rate for manufacturing, especially when compared to the construction industry, which is closely related.

“We observed 20 confirmed threats per 100 users in the manufacturing vertical. Without solid detection and automated incident response, a manufacturer with 100 Office 365 users would spend at least 16 hours manually investigating and remediating emails,” they added.
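That estimate implies roughly 48 minutes of analyst time per confirmed threat. A quick back-of-the-envelope check in Python (the per-threat figure is derived from the report's numbers rather than stated by Cyren):

```python
# Sanity-checking the report's workload estimate (illustrative only).
threats_per_100_users = 20   # confirmed threats observed in manufacturing
manual_hours = 16            # Cyren's estimate for manual investigation and remediation

minutes_per_threat = manual_hours * 60 / threats_per_100_users
print(f"Implied manual effort: {minutes_per_threat:.0f} minutes per threat")  # ~48 minutes
```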

In a blog post, security researchers found that the data supported a widely held theory that phishing is a precursor to more damaging attacks such as business email compromise (BEC) and ransomware.

The report looked at phishing compared with malware and BEC attacks across four industries. Phishing remained the dominant threat in healthcare (76%), finance and insurance (76%), manufacturing (85%), and real estate (93%).

In healthcare, BEC attacks made up the remaining 24%. Researchers said that robust malware detection capabilities in the healthcare industry explain the high rate of BEC attempts.

“Attackers understand that they can’t easily slip malware past automated defenses, so they have shifted to social engineering tactics,” said researchers.

Researchers said that when it comes to solving the email threat problem, user education is an important component, but several organizations have “over-rotated” on the idea that users are responsible for keeping sophisticated email threats at bay.

“The predominant trend is to use an email hygiene technology such as Microsoft Defender for Office 365 to catch 80% of threats, deploy a specialized add-on to catch and contain zero-day phishing and most BEC attempts, enable employees to perform initial analysis on the small percentage of emails that are classified as suspicious (rather than malicious or clean), and automate incident response workflows to save time and reduce exposure,” added researchers.

Hackers use SquirrelWaffle malware to hack Exchange servers in new campaign


Rene Millman

23 Nov, 2021

Hackers are using ProxyShell and ProxyLogon exploits to break into Microsoft Exchange servers in a new campaign to infect systems with malware, bypassing security measures by replying to pre-existing email chains.

Security researchers at Trend Micro said investigations into several intrusions related to Squirrelwaffle led to a deeper examination into the initial access of these attacks, according to a blog post.

Researchers said that Squirrelwaffle first emerged as a new loader spreading through spam campaigns in September. The malware is known for sending its malicious emails as replies to pre-existing email chains.

The intrusions observed by researchers originated from on-premise Microsoft Exchange Servers that appeared to be vulnerable to ProxyLogon and ProxyShell. According to researchers, there was evidence of exploitation of the vulnerabilities CVE-2021-26855, CVE-2021-34473, and CVE-2021-34523 in the IIS logs on three of the Exchange servers that were compromised in different intrusions.

“The same CVEs were used in ProxyLogon (CVE-2021-26855) and ProxyShell (CVE-2021-34473 and CVE-2021-34523) intrusions. Microsoft released a patch for ProxyLogon in March; those who have applied the May or July updates are protected from ProxyShell vulnerabilities,” said researchers.
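Because the evidence Trend Micro cites lives in ordinary IIS logs, administrators can run a rough first-pass check themselves. The sketch below is illustrative only: the regular expressions are commonly reported ProxyShell-style request patterns rather than the specific strings the researchers observed, and the log path assumes a default IIS installation.

```python
import re
from pathlib import Path

# Commonly reported ProxyShell-style indicators (assumed, not Trend Micro's exact strings):
SUSPICIOUS = [
    re.compile(r"autodiscover\.json.*@", re.I),   # path-confusion requests abusing Autodiscover
    re.compile(r"/powershell", re.I),             # remote PowerShell endpoint abuse
]

def scan_iis_logs(log_dir: str) -> None:
    """Print log lines that match any of the suspicious request patterns."""
    for log_file in Path(log_dir).glob("u_ex*.log"):
        for line_no, line in enumerate(log_file.read_text(errors="ignore").splitlines(), 1):
            if any(pattern.search(line) for pattern in SUSPICIOUS):
                print(f"{log_file.name}:{line_no}: {line.strip()}")

scan_iis_logs(r"C:\inetpub\logs\LogFiles\W3SVC1")  # default IIS log location
```

Any hits would only be leads for further investigation, not proof of compromise.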

In one case, all the internal users in the affected network received spam emails sent as legitimate replies to existing email threads.

“All of the observed emails were written in English for this spam campaign in the Middle East. While other languages were used in different regions, most were written in English. More notably, true account names from the victim’s domain were used as sender and recipient, which raises the chance that a recipient will click the link and open the malicious Microsoft Excel spreadsheets,” they said.

In the same intrusion, researchers analyzed the email headers for the received malicious emails and found that the mail path was internal, indicating that the emails did not originate from an external sender, open mail relay, or any message transfer agent (MTA).
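The same kind of header check can be reproduced with nothing more than the Python standard library. A minimal sketch, assuming a hypothetical internal domain of example.com; a real investigation would examine the full Received chain, message IDs, and timestamps rather than this simplified host check:

```python
import re
from email import message_from_string

INTERNAL_DOMAIN = "example.com"   # hypothetical victim domain

def mail_path_is_internal(raw_message: str) -> bool:
    """Return True if every 'Received: from <host>' hop names an internal host,
    i.e. the message never passed through an external relay or MTA."""
    msg = message_from_string(raw_message)
    hops = msg.get_all("Received") or []
    sending_hosts = [
        match.group(1).lower()
        for hop in hops
        if (match := re.search(r"from\s+([\w.-]+)", hop))
    ]
    return bool(sending_hosts) and all(h.endswith(INTERNAL_DOMAIN) for h in sending_hosts)
```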

“Delivering the malicious spam using this technique to reach all the internal domain users will decrease the possibility of detecting or stopping the attack, as the mail gateways will not be able to filter or quarantine any of these internal emails,” they added.

Researchers said that the hackers also did not drop or use tools for lateral movement after gaining access to the vulnerable Exchange servers in order to avoid detection. Additionally, no malware was executed on the Exchange servers to avoid triggering alerts before the malicious email could be spread across the environment.

According to researchers, the recent Squirrelwaffle campaigns should make users wary of the different tactics used to mask malicious emails and files.

“Emails that come from trusted contacts may not be enough of an indicator that whatever link or file included in the email is safe,” they warned.

Optimising the management of hybrid cloud


Rene Millman

29 Nov, 2021

Many organisations have now adopted a hybrid cloud strategy to ensure their workloads run as efficiently as possible and reside where it makes the most sense. Indeed, according to Flexera research, 82% of enterprises have a hybrid cloud strategy, while businesses are increasing their spending with vendors across the board.

Having workloads and applications running in either public or private cloud presents challenges, however. One of the biggest is how to manage hybrid cloud environments efficiently, ensuring these configurations are as optimised as possible for value for money while maintaining cyber security. To optimise hybrid cloud infrastructure, the right foundations must be in place.

What causes hybrid cloud inefficiency?

If you were starting a business from scratch today, you’d probably design it to be cloud-native and compatible with the public cloud, alongside lightweight apps, scalable systems, and the security and compliance that now come as standard from hyperscalers like AWS.

Most companies, however, particularly in regulated industries, are dealing with heavy, legacy, mainframe-based systems, according to Anthony Drake, operations director for North Europe with research and advisory firm ISG. In some cases, these environments host thousands of applications that are completely bespoke to that business. They might have been designed decades ago, and simply aren’t built to be suitable for public cloud environments, he says.

“Companies want the advantages that come from public cloud environments: the ability to scale; the shift from CapEx to OpEx and the ability to manage costs; testing new apps; and the potential benefits that the hyperscalers might bring to growing areas like edge computing,” he explains. “They, therefore, opt for a hybrid environment, putting some workloads into public cloud and retaining private cloud for legacy systems.”

By its nature, a hybrid cloud environment is about as complex as they come, with public and private cloud environments talking to each other, transferring data, and demanding a level of interoperability that needs to be managed. “A big bang approach of moving everything to the cloud won’t work,” he adds. “It’s a gradual process, split into phases.”

Building suitable hybrid cloud resources 

Establishing the right foundations for hybrid cloud management is essential to optimising operations. There are two main issues that could come into play if businesses fail to do so, according to Guy Warren, CEO of ITRS Group. “You could end up paying excessively for your cloud estate, or you could underestimate your capacity needs and throttle the throughput on that application,” he says. “Securing the foundations requires knowing what size to buy, which, with the complexity of systems today, can only be achieved with effective analysis by a capacity management tool.”
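Warren's point about sizing can be illustrated with a deliberately simple calculation: pick the smallest instance that covers observed peak demand plus a headroom margin. The instance catalogue and figures below are invented for the example; real capacity-management tools model far more than a single CPU metric.

```python
# Hypothetical instance catalogue: name -> vCPUs
INSTANCE_TYPES = {"small": 2, "medium": 4, "large": 8, "xlarge": 16}

def right_size(peak_vcpus_used: float, headroom: float = 0.3) -> str:
    """Smallest instance whose capacity covers peak demand plus headroom."""
    required = peak_vcpus_used * (1 + headroom)
    for name, vcpus in sorted(INSTANCE_TYPES.items(), key=lambda kv: kv[1]):
        if vcpus >= required:
            return name
    raise ValueError("Peak demand exceeds the largest instance; consider scaling out")

print(right_size(peak_vcpus_used=5.2))  # -> 'large' (5.2 vCPUs plus 30% headroom needs ~6.8)
```

Undersize and the application is throttled; oversize and the bill grows for capacity that is never used – exactly the two failure modes Warren describes.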

Working to a cloud adoption framework gives a clear view of best practice, taking into consideration the service infrastructure the business is trying to replicate in the cloud environment. Each cloud platform has its own framework of excellence and is consumed in different ways, which demands a review of the reasons why a business is moving to the cloud in the first place.

Performing a detailed cost-benefit analysis is key to deciding whether an application would be better in a cloud environment, or situated on-premise, according to George McKenna, head of cloud sales at Ultima Business Solutions. “For example,” he asks, “do the desired benefits come down to revenue generation, internal infrastructure, employee productivity or another reason? It’s the ecosystem of the platform that differentiates where you place the workload.”

Managing director at BCG Platinion, Andreas Rindler, meanwhile, says businesses should be developing the skill, talent and IT organisations that will build “critical mass” to enable a multi-cloud journey. “A company’s ability to build a critical mass of skill for each cloud service provider may impact its multi-cloud strategy,” he continues. “Firms should initially centralise all key scarce competencies in a ‘Cloud CoE’ to maximise efficiency and efficacy. Multi-cloud requires investment in cloud engineering, containerisation, and DevOps tooling to ensure application portability and to avoid vendor lock-in. Many challenges come with this due to the digital skills shortage.”

Drake adds that migration isn’t the end of the hybrid cloud journey, suggesting you need people who can support you as you scale and can take advantage of different features that are introduced by the cloud provider. “Think about how you want to do that,” he says. “Do you want to build a 30-person team in-house with limited career progression, for example?”

Overcoming pitfalls of hybrid cloud optimisation

Managing cloud environments consistently is a complex task. Rindler says that as enterprises continue to migrate apps to multiple clouds, a growing challenge is to manage and understand how company assets are being deployed, used, or exploited. 

“This calls for creating a central portal to view and manage our multi-cloud environment – an agnostic single pane of glass into the various clouds,” he says. “Organisations can also consider implementing a hybrid and multi-cloud management platform.”

Another pain point is portability between clouds. Applications can be migrated following a value-based approach, while leveraging open source technologies as much as possible can also help to enhance the portability of applications, Rindler continues.

How would your business know, however, when its hybrid cloud management is fully optimised? For Drake, one indication is probably when your business resides exclusively in the public cloud. “The use of private cloud will decrease in the future,” he projects. “The more organisations move to the public cloud, the more they can take advantage of the benefits brought by developing technologies such as 5G, the Internet of Things (IoT), and artificial intelligence (AI),” he adds.

For Rindler, meanwhile, it’s down to the IT leadership to decide if the benefits of multi-cloud configurations outweigh the costs and risks. “The benefits are simple and very effective: access to best-in-class offerings (and faster time-to-market as a result), reduced vendor lock-in and cost optimisation of workload placement and resiliency across multiple cloud platforms,” he says. However, he adds, with that comes the very real threat of a step-change in the complexity of processes and governance related to cloud, ongoing investments, increased risks and longer roadmaps to get them right.

Saving red squirrels with AI and cloud computing


Rene Millman

18 Nov, 2021

Red squirrels are close to becoming extinct in the UK. Over the last 13 years, the species has lost roughly 60% of its habitat in England and Wales, and it’s estimated that fewer than 290,000 red squirrels remain. 

Over the last 100 years or so, red squirrel populations have been declining, largely due to the introduction of grey squirrels from North America, Dr Stephanie Wray, chair of the Mammal Society, tells IT Pro. Not only are their transatlantic cousins much larger, but they also carry a disease called the parapox virus, which is harmless to them but fatal to red squirrels.

Monitoring these animals has been a critical part of conservation efforts, as it enables scientists to better understand the habitats they live in, their behaviours, and how they interact with other species. Monitoring, however, has traditionally been very labour intensive, involving sending people into the wild to look for drays – the characteristic nests that red squirrels make – which are often in difficult-to-find places, such as high up in trees.

To that end, the Mammal Society has teamed up with the University of Bristol, the Rainforest Connection (RFCx), and networking giant Huawei, to preserve red squirrels. This landmark project combines bioacoustics with cloud computing and artificial intelligence (AI) to help conservationists assess and monitor squirrel populations, with a view to preventing their extinction and restoring their numbers.

Listening in

The project uses custom-built Guardian and Audiomoth monitoring devices in combination with Huawei software to analyse the natural noise of the environment; it’s the first time this equipment has been used this way in the UK.

“What this technology does is it allows us to put up recording devices that are picking up the whole soundscape in real-time,” Wray explains. “Then, by using the AI to recognise the calls of the different species of squirrels, we can just sort of filter out the bits that we’re interested in.”

A Guardian is essentially a minicomputer with solar panels, an antenna, and a microphone. It continuously records audio and connects through a mobile network to upload these sound files to a cloud repository. The devices also use edge processing to run AI analysis locally, then send out alerts through satellite or mobile signals. An Audiomoth, meanwhile, is a low-cost, open-source offline acoustic device used for monitoring wildlife.
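The overall control flow of such a device is straightforward, even if the on-board models are not. The sketch below is a hypothetical outline only – the recording, classification, alert and upload helpers are stand-ins, not RFCx's actual firmware APIs:

```python
import random
import time

def record_clip(seconds: int = 60) -> bytes:
    return b""  # placeholder for audio captured from the microphone

def classify_on_device(clip: bytes) -> dict:
    return {"red_squirrel": random.random()}  # placeholder edge-AI score

def send_alert(scores: dict) -> None:
    print("alert:", scores)  # would go out via satellite or mobile signal

def upload_to_cloud(clip: bytes, scores: dict) -> None:
    print("uploading clip for long-term analysis")

def guardian_loop(threshold: float = 0.8, cycles: int = 3):
    for _ in range(cycles):                  # a real device would loop indefinitely
        clip = record_clip()
        scores = classify_on_device(clip)
        if scores.get("red_squirrel", 0.0) >= threshold:
            send_alert(scores)
            upload_to_cloud(clip, scores)    # full audio kept for later analysis
        time.sleep(1)

guardian_loop()
```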

Chrissy Durkin, RFCx director of international expansions, says the University of Bristol, in collaboration with the Mammal Society, is deploying these offline devices to collect some valuable initial data on red squirrels.

Guardian devices will be installed once initial data from Audiomoth devices are analysed

“The first step is we really need to have a lot of examples of the calls red squirrels make to start to understand, in detail, where they’re present, and to be able to recognise them and understand their distribution,” she says.

By understanding the calls of the red squirrel, the project can produce an initial distribution map of where red squirrels are really based. This helps to determine where the long-term Guardian monitoring devices can be positioned. These devices can then give the project year-round data as well as an understanding of which species are present at any given time. 

The data is then fed into RFCx’s platform, Arbimon, a biodiversity monitoring system that, according to Durkin, is useful for understanding and analysing the species present. It replaces traditional sound analysis, which has involved office-based workers listening to audio files one by one, trying to identify the species they’re interested in by ear. Arbimon, instead, serves as a central hub for sound data funnelled into it from various recording devices. It can also perform several types of analysis, such as pattern matching, soundscape analysis, and random forest model analysis – a machine learning technique used to solve regression and classification problems (it doesn’t have anything to do with actual forests).

No time to die

Perhaps the most important tool is pattern matching, which drastically reduces the time taken for scientists to label species calls, from up to two years to just a few hours. The system works by running red squirrels’ calls from an audio file, for example, as a template against all the data that’s been collected. “It’s essentially visual analysis within the spectrogram,” Durkin explains.

It then finds all the matches within a certain threshold for the species call. “Using that tool alone, you can figure out where the red squirrel has been present across all of your data insights,” she adds. A scientist can then examine all matches and cross-reference, by ear, which calls correctly match a red squirrel’s and which don’t.
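Arbimon's implementation isn't public, but the general technique Durkin describes – sliding a call template across a recording's spectrogram and keeping matches above a score threshold – can be sketched with NumPy and SciPy. Parameters and thresholds here are illustrative assumptions:

```python
import numpy as np
from scipy import signal

def log_spectrogram(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Log-magnitude spectrogram of a mono recording."""
    _, _, spec = signal.spectrogram(audio, fs=sample_rate, nperseg=1024, noverlap=512)
    return np.log1p(spec)

def match_template(recording: np.ndarray, template: np.ndarray, threshold: float = 0.6):
    """Slide a call template across a recording's spectrogram and return the
    time-frame offsets where normalised correlation exceeds the threshold."""
    n_freq, n_frames = template.shape
    tpl = (template - template.mean()) / (template.std() + 1e-9)
    hits, scores = [], []
    for start in range(recording.shape[1] - n_frames + 1):
        window = recording[:n_freq, start:start + n_frames]
        win = (window - window.mean()) / (window.std() + 1e-9)
        score = float((tpl * win).mean())   # normalised cross-correlation
        scores.append(score)
        if score >= threshold:
            hits.append(start)
    return hits, scores
```

A scientist would then review only the flagged offsets by ear, which is where the reduction from years of listening to a few hours comes from.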

Engineers installing bioacoustic devices in the wild

One possible expansion of this technology is to label a dataset in order to build out an AI model for the red squirrel. In the meantime, the data collected can easily produce distribution maps to reveal where red squirrels live. With these maps, scientists can create conservation plans to preserve, and bolster, the population of red squirrels and make sure their species is sustainable. 

The red squirrel conservation project is still in its early stages, so the plans haven’t been finalised. Durkin says that in other projects she’s been involved with, however, data gathered has allowed conservationists to run a breeding programme, such as in the campaign to preserve Puerto Rican parrots. In this project, conservationists used distribution maps to find the best places to release birds bred in captivity back into the wild for the best chances of survival.

The initial analysis of audio recordings, in addition to the generation of population distribution maps, will help determine where the University of Bristol and the Mammal Society will position their permanent monitoring stations. Once this is accomplished, RFCx will physically travel to these locations to install the more advanced Guardian devices.

Looking ahead, Huawei’s technology could also be used in a host of further conservation projects. Although the initial deployment has been made in the interests of protecting red squirrel populations, Wray suggests the same systems could be used to recognise the sounds of any species, helping conservationists understand the extent of biodiversity in the UK and the threats that British wildlife faces.

IBM and Verizon expand Texas lab to test new 5G use cases


Rene Millman

12 Aug, 2021

IBM and Verizon have expanded facilities at their Industry Solution Lab in Coppell, TX to include an environment for developing and testing 5G-enabled use cases for Industry 4.0 applications.

The new capabilities will enable enterprise customers to develop and test how 5G Ultra-Wideband can combine with hybrid cloud, edge, and artificial intelligence (AI) technologies to enhance next-gen use cases like robotics, guided vehicles, manufacturing process automation, visual quality inspection, data analytics, and more.

Verizon has installed 5G ultra-wideband and multi-access edge computing (MEC) to trial use cases, alongside IBM’s hybrid cloud and AI technologies, which run on Red Hat OpenShift.

The lab will offer customers a pre-commercial, standalone 5G network and all the technical resources needed to test and optimize products. Customers can co-create business-specific use cases and work jointly with IBM Global Business Services and ecosystem partners to apply these technologies to current business challenges and bring new solutions and services to market.

The lab will focus on three priority areas that take advantage of 5G networks. 

The first area is asset monitoring and optimization. IBM and Verizon said major shipping companies with ground and package-handling facilities could use the IBM Maximo Application Suite and IBM Acoustic Insights to trial how they can use ultrasonic technology to anticipate and prevent their package handling machinery from malfunctioning.
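IBM doesn't publish the internals of Acoustic Insights, but the underlying idea – learn what a healthy machine sounds like, then flag recordings that drift away from that profile – can be shown in a few lines. The frequency banding and threshold below are invented for illustration, not IBM's method:

```python
import numpy as np

def band_energies(audio: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Average spectral energy in a handful of frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

def fit_baseline(healthy_clips):
    """Mean and spread of band energies across known-good recordings."""
    profiles = np.stack([band_energies(clip) for clip in healthy_clips])
    return profiles.mean(axis=0), profiles.std(axis=0) + 1e-9

def is_anomalous(clip: np.ndarray, mean: np.ndarray, std: np.ndarray, z_limit: float = 3.0) -> bool:
    """Flag a clip whose energy profile deviates strongly from the healthy baseline."""
    z_scores = np.abs((band_energies(clip) - mean) / std)
    return bool(z_scores.max() > z_limit)
```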

The second area is in field worker productivity and safety. By using Maximo Mobile on devices on the Verizon 5G Network, a utility company could trial scenarios where it uses AI, remote human assistance, and real-time data to “improve the on-the-job safety and enhance the quality and efficiency of fieldwork with guided workflows, reducing multiple repeat inspections and repairs of the same equipment.”

The third area is visual inspection. IBM said industrial product manufacturers could leverage IBM’s suite of visual inspection products, including IBM Maximo Visual Inspection.

“Mobile devices running the suite could be mounted on assembly lines, robotic arms, or even held or worn by the user to inspect components and finished goods for defects using near real-time insights to improve manufacturing processes,” said Steve Canepa, global GM & managing director at IBM’s Communications Sector.

He added that the joint 5G test bed with Verizon “serves as a signal of IBM’s ongoing investment in capabilities that include, among others, centers of excellence and labs around the world.”

Data breach exposes millions of seniors’ data


Rene Millman

10 Aug, 2021

Security researchers have found a major breach that exposed the details of over three million US seniors.

According to WizCase, the data breach affected SeniorAdvisor, “one of the largest consumer ratings and reviews websites for senior care and services across the US and Canada.” Among the exposed details were users’ names, surnames, phone numbers, and more.

Researchers at WizCase discovered a misconfigured Amazon S3 bucket belonging to the website containing over 1 million files and 182GB of data. Contact dates from the files suggest they are from 2002 to 2013, though the files had a 2017 timestamp.
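For teams running their own buckets, the two most basic misconfiguration checks – a wide-open ACL and a missing public-access block – can be scripted with boto3. This is a minimal sketch assuming credentials with read access to the bucket's configuration; it does not cover bucket policies, which also need review:

```python
import boto3
from botocore.exceptions import ClientError

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_looks_public(bucket_name: str) -> bool:
    """Flag a bucket with an ACL grant to everyone and no effective public-access block."""
    s3 = boto3.client("s3")
    try:
        cfg = s3.get_public_access_block(Bucket=bucket_name)["PublicAccessBlockConfiguration"]
        blocked = all(cfg.values())
    except ClientError:
        blocked = False  # no public-access block configured at all
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    open_grants = any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS for grant in acl["Grants"]
    )
    return open_grants and not blocked
```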

“The majority of data exposed was in the form of leads, a list of potential customers whose details were collected by SeniorAdvisor presumably via their email or phone call campaigns,” said researchers.

Researchers also unearthed 2,000 “scrubbed” reviews. These are reviews where the user’s sensitive information has been wiped or redacted.

“However, this scrubbing process is useless if you have the corresponding information. The scrubbed reviews had a lead id which could be used to trace the review back to who originally wrote it,” researchers said. As both lead data and these scrubbed reviews were in the same database, supposedly anonymous reviewers could have their identity revealed with a simple search operation.
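The risk is easy to picture: if the “scrubbed” reviews keep a lead ID and the leads table with personal details sits in the same store, a single join undoes the redaction. A toy recreation with invented data:

```python
import pandas as pd

# Invented example data; column names are hypothetical, not SeniorAdvisor's schema.
leads = pd.DataFrame([
    {"lead_id": 101, "name": "J. Doe", "phone": "555-0101"},
    {"lead_id": 102, "name": "A. Smith", "phone": "555-0102"},
])

scrubbed_reviews = pd.DataFrame([
    {"lead_id": 101, "review": "Staff were wonderful", "author": "[redacted]"},
])

# One merge on the shared lead ID re-identifies the "anonymous" reviewer.
reidentified = scrubbed_reviews.merge(leads, on="lead_id", how="left")
print(reidentified[["review", "name", "phone"]])
```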

WizCase researchers said that because the breach contained data on a section of the public more vulnerable to scams, the risks were higher. In a 2018-2019 report, the Federal Trade Commission (FTC) noted that people aged 60 to 69 who filed a fraud complaint lost $600 per scam on average. The amount rose in older groups, reaching an average of $1,700 per scam for people aged 80 to 89.

“In particular, the report found senior citizens were more likely to fall for digital scams such as tech support scams, prize/sweepstakes scams, online shopping scams, and especially phone scams,” said researchers. “As shown, senior citizens are at greater risk for online fraud than the rest of the population, and therefore should be even more careful in their online behavior.”

Researchers urged people using such services to input the bare minimum of information when making a purchase or setting up an online account.

“The less information hackers have to work with, the less vulnerable you are,” warned researchers. The researchers contacted the company, and the bucket has since been secured.

Microsoft adds more services to its Azure Arc multi-cloud management stack


Rene Millman

26 May, 2021

Microsoft has launched a set of new Azure services that organisations can now run on any CNCF-conformant Kubernetes cluster using its Azure Arc multi-cloud service.

At its virtual Build 2021 event, Microsoft said its cloud services, such as Azure App Service, Functions, Logic Apps, API Management, and Event Grid, would now all be Arc-enabled (in preview form). Azure Arc, launched in 2019, is Microsoft’s tool to help firms manage Kubernetes container clusters across clouds and on-premises data centres.

The firm said that these Azure application services can be deployed to any Cloud Native Computing Foundation (CNCF)-conformant Kubernetes cluster connected via Azure Arc.

The services now enabled include Azure App Service, for creating and managing web apps and APIs with a fully managed platform and features like autoscaling, deployment slots, and integrated web authentication; Azure Functions, for event-driven programming with autoscaling plus triggers and bindings to integrate with other Azure services; Azure Logic Apps, for creating automated workflows that integrate apps, data, services, and backend systems; and Azure API Management, for managing internal and external APIs.

“The app services are now Azure Arc-enabled, which means customers can deploy Web Apps, Functions, API gateways, Logic Apps and Event Grid services on pre-provisioned Kubernetes clusters,” the firm said in a statement.

“This takes advantage of features including deployment slots for A/B testing, storage queue triggers, and out-of-box connectors from the app services, regardless of run location. With these portable turnkey services, customers can save time building apps, then manage them consistently across hybrid and multi-cloud environments using Azure Arc.”

Microsoft added that with this capability now in preview, customers don’t have to choose between the productivity of platform as a service (PaaS) and the control of Kubernetes, as the same app services can run with either model.

Gabe Monroy, vice president for Azure Developer Experience at Microsoft, said in a blog post that one of the challenges he heard from customers was that, despite its enhanced control and ecosystem benefits, Kubernetes is difficult for developers to use directly. Developers must learn many advanced concepts and APIs, which can hurt their productivity.

“With today’s announcement, developers no longer have to choose between the productivity of Azure application services and the control of Kubernetes,” he added.

IBM brings its hybrid cloud to the edge


Rene Millman

1 Mar, 2021

IBM has announced it’ll make its hybrid cloud available on any cloud, on-premises, or at the edge via its IBM Cloud Satellite.

Big Blue said it’s worked with Lumen Technologies to integrate its Cloud Satellite service with the Lumen edge platform to enable customers to use hybrid cloud services in edge computing environments. The firm also said it will collaborate with 65 ecosystem partners, including Cisco, Dell Technologies, and Intel, to build hybrid cloud services.

It said that IBM Cloud Satellite is now generally available to customers and can bring a secured, unifying layer of cloud services to clients across environments, regardless of where their data resides. IBM added that this technology would address critical data privacy and data sovereignty requirements. 

IBM said customers using the Lumen platform and IBM Cloud Satellite would be able to deploy data-intensive applications, such as video analytics, across highly distributed environments and take advantage of infrastructure designed for single-digit millisecond latency.

The collaboration will enable customers to deploy applications across more than 180,000 connected enterprise locations on the Lumen network to provide a low latency experience. They can also create cloud-enabled solutions at the edge that leverage application management and orchestration via IBM Cloud Satellite, and build open, interoperable platforms that give customers greater deployment flexibility and more seamless access to cloud-native services like artificial intelligence (AI), the internet of things (IoT), and edge computing.

One example given of how this would benefit customers is using cameras to detect when surfaces were last cleaned or to flag potential worker safety concerns. Using an application hosted on Red Hat OpenShift via IBM Cloud Satellite in close proximity to a Lumen edge location, such cameras and sensors can function in near real-time to help improve quality and safety, IBM claimed.

IBM added that customers across geographies can better address data sovereignty by deploying this processing power closer to where the data is created.

“With the Lumen Platform’s broad reach, we are giving our enterprise customers access to IBM Cloud Satellite to help them drive innovation more rapidly at the edge,” said Paul Savill, SVP enterprise product management and services at Lumen. 

“Our enterprise customers can now extend IBM Cloud services across Lumen’s robust global network, enabling them to deploy data-heavy edge applications that demand high security and ultra-low latency. By bringing secure and open hybrid cloud capabilities to the edge, our customers can propel their businesses forward and take advantage of the emerging applications of the 4th Industrial Revolution.”

IBM is also extending its Watson Anywhere strategy with the availability of IBM Cloud Pak for Data as a Service with IBM Cloud Satellite. IBM said this would give customers a “flexible, secure way to run their AI and analytics workloads as services across any environment – without having to manage it themselves.”

Service partners also plan to offer migration and deployment services to help customers manage solutions as-a-service anywhere. IBM Cloud Satellite customers can also access certified software offerings on Red Hat Marketplace, which they can deploy to run on Red Hat OpenShift via IBM Cloud Satellite.

Red Hat closes purchase of multi-cloud container security firm StackRox


Rene Millman

24 Feb, 2021

Red Hat has finalised its acquisition of container security company StackRox. 

StackRox’s Kubernetes-native security technology will enable Red Hat customers to build, deploy, and secure applications across multiple hybrid clouds.

In a blog post, Ashesh Badani, senior vice president of cloud platforms at Red Hat, said over the past several years, the company has “paid close attention to how our customers are securing their workloads, as well as the growing importance of GitOps to organisations.”

“Both of these have reinforced how critically important it is for security to ‘shift left’ – integrated within every part of the development and deployment lifecycle and not treated as an afterthought,” Badani said.

Badani said the acquisition would allow Red Hat to add security into container build and CI/CD processes. 

“This helps to more efficiently identify and address issues earlier in the development cycle while providing more cohesive security up and down the entire IT stack and throughout the application lifecycle.”

He added the company’s software provides visibility and consistency across all Kubernetes clusters, helping reduce the time and effort needed to implement security while streamlining security analysis, investigation, and remediation.

“StackRox helps to simplify DevSecOps, and by integrating this technology into Red Hat OpenShift, we hope to enable users to enhance cloud-native application security across every IT footprint,” added Badani. Red Hat initially announced the acquisition in January. The terms of the deal were not disclosed.

In the previous announcement, Red Hat CEO Paul Cormier said securing Kubernetes workloads and infrastructure “cannot be done in a piecemeal manner; security must be an integrated part of every deployment, not an afterthought.”

Red Hat said it would open source StackRox’s technology post-acquisition and continue supporting the KubeLinter community and new communities as Red Hat works to open source StackRox’s offerings. 

KubeLinter is an open-source project StackRox started in October 2020 that analyses Kubernetes YAML files and Helm charts for correct configurations, focusing on enabling production readiness and security earlier in the development process.
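To see what that means in practice, the snippet below mimics two of the simplest checks a linter such as KubeLinter applies to a pod spec – missing resource limits and containers allowed to run as root. It is an illustration of the idea only, not KubeLinter's own code or rule set:

```python
def lint_pod_spec(pod_spec: dict) -> list[str]:
    """Return human-readable findings for a (parsed) Kubernetes pod spec."""
    findings = []
    for container in pod_spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        if not container.get("resources", {}).get("limits"):
            findings.append(f"{name}: no resource limits set")
        if not container.get("securityContext", {}).get("runAsNonRoot"):
            findings.append(f"{name}: container may run as root")
    return findings

spec = {"containers": [{"name": "web", "image": "nginx"}]}
print(lint_pod_spec(spec))
# ['web: no resource limits set', 'web: container may run as root']
```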

StackRox will continue supporting multiple Kubernetes platforms, including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).

Kamal Shah, CEO of StackRox, said the deal was “a tremendous validation of our innovative approach to container and Kubernetes security.”

Red Hat acquires Kubernetes security firm StackRox


Rene Millman

11 Jan, 2021

Red Hat has announced it’ll acquire container and Kubernetes-native security provider StackRox in a bid to boost the security of its OpenShift Kubernetes platform. 

StackRox offers native security solutions to Kubernetes containers by directly deploying components for enforcement and deep data collection into the Kubernetes cluster infrastructure. The StackRox policy engine includes hundreds of built-in controls to enforce security best practices; industry standards, such as CIS Benchmarks and NIST; configuration management of containers and Kubernetes; and runtime security. 

Red Hat said the purchase would help it focus on securing cloud-native workloads by expanding and refining Kubernetes’ native controls and shifting security left into the container build and CI/CD phase. This will help provide a cohesive solution for enhanced security up and down the entire IT stack and throughout the lifecycle.

“Securing Kubernetes workloads and infrastructure cannot be done in a piecemeal manner; security must be an integrated part of every deployment, not an afterthought,” said Red Hat CEO Paul Cormier.

“Red Hat adds StackRox’s Kubernetes-native capabilities to OpenShift’s layered security approach, furthering our mission to bring product-ready open innovation to every organization across the open hybrid cloud across IT footprints.”

Red Hat said it plans to open source StackRox’s technology post-acquisition. It’ll also continue to support the KubeLinter community and new communities as Red Hat works to open source StackRox’s offerings.

In addition to Red Hat OpenShift, StackRox will continue supporting multiple Kubernetes platforms, including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).

In a company blog post announcing the acquisition, StackRox CEO Kamal Shah said his company made a strategic decision to focus exclusively on Kubernetes and pivoted its entire product to be Kubernetes-native.

“Over two and a half years ago, we made a strategic decision to focus exclusively on Kubernetes and pivoted our entire product to be Kubernetes-native. While this seems obvious today, it wasn’t so then. Fast forward to 2020, and Kubernetes has emerged as the de facto operating system for cloud-native applications and hybrid cloud environments,” Shah said.