Cisco acquires container security startup Banzai Cloud


Daniel Todd

18 Nov, 2020

Cisco has announced plans to acquire Hungarian container security startup Banzai Cloud, as the networking giant looks to further expand its portfolio of cloud-native technologies. 

The deal is expected to close at the end of this quarter for an undisclosed sum and follows hot on the heels of Cisco’s takeover of cloud-native security company Portshift back in October.

Founded in 2017, Budapest-based Banzai Cloud offers a Kubernetes-based platform that is designed to help businesses develop and deploy cloud-native applications.

The firm’s assets and employees will now become part of Cisco’s Emerging Technologies and Incubation group, which focuses on incubating new projects for cloud-native networking, security and edge computing environments for modern distributed applications. 

In a blog post, Liz Centoni, SVP of Cisco’s Emerging Technologies and Incubation group, explained that the move would help the company address the challenges presented by modern cloud-native applications and their environments. 

“This team has demonstrated experience with complete end-to-end cloud-native application development, deployment, runtime and security workflows,” Centoni said. 

“They have built and deployed software tools that solve critical real-world pain points and are active participants in the open-source community as sponsors, contributors and maintainers of several open-source projects.”

The acquisition is the latest move in Cisco’s push to grow its cloud security portfolio, following its acquisition of Israeli startup Portshift last month for a reported $100 million.

The Tel Aviv-based business provides a Kubernetes-based platform to secure containers and serverless applications and will also fall under Cisco’s Emerging Technologies and Incubation umbrella. 

“These two cross-border acquisitions are a testament to the globalisation of the cloud-native ecosystem and underscore our commitment to hybrid, multi-cloud application-first infrastructure as the de facto mode of operating IT,” Centoni added. 

“The Emerging Technologies and Incubation team’s mission is to incubate impactful technologies and to attract, foster and grow the global talent needed to drive innovation and support our customers’ digital transformation initiatives.”

Red Hat pushes hybrid cloud to the edge


Rene Millman

18 Nov, 2020

Red Hat has unveiled new edge capabilities for Red Hat Enterprise Linux. The firm has also expanded the number of supported environments for Red Hat OpenShift, including leading public clouds and multiple data centre architectures, like IBM Z and Power Systems.

At this year’s KubeCon + CloudNativeCon, Red Hat launched several edge-focused updates to Red Hat Enterprise Linux, including the rapid creation of operating system images for the edge through the Image Builder capability. 

The firm said this would enable IT organisations to create purpose-built images optimised for architectural challenges inherent to edge computing but customised for the needs of a given deployment.

Red Hat also unveiled remote device update mirroring to stage and apply updates at the next device reboot or power cycle, helping limit downtime and manual intervention from IT response teams.

The edge update also includes over-the-air transfers that move less data while still pushing necessary code; Red Hat is aiming this capability at sites with limited or intermittent connectivity. 

Another feature announced is Intelligent rollback built on OSTree capabilities, enabling users to provide workload-specific health checks to detect conflicts or code issues. When it detects a problem, it automatically reverts the image to the last good update to prevent unnecessary downtime at the edge.
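Red Hat’s announcement doesn’t spell out how those health checks are written, but the usual pattern is a small executable that exits non-zero when the workload is unhealthy, giving the rollback machinery something to act on. A minimal sketch of such a check in Python, with a placeholder endpoint, might look like this:

```python
#!/usr/bin/env python3
# Sketch of a workload-specific health check of the kind the rollback feature
# can consume: exit 0 when healthy, non-zero when not. The local endpoint and
# timeout are placeholders; how checks are registered is not specified here.
import sys
import urllib.request

CHECK_URL = "http://127.0.0.1:8080/healthz"  # assumed local readiness endpoint

def main() -> int:
    try:
        with urllib.request.urlopen(CHECK_URL, timeout=5) as resp:
            return 0 if resp.status == 200 else 1
    except Exception as exc:
        print(f"health check failed: {exc}", file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(main())
```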

Red Hat also announced updates to Red Hat OpenShift 4.6 intended to help enterprises accelerate cloud-native application development. The latest update to OpenShift Serverless with Red Hat OpenShift Serverless 1.11 brings full support for Knative eventing, enabling containerized applications to consume only the resources they need at a given time, which prevents over- or under-consumption.
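Knative eventing delivers events to workloads as CloudEvents over HTTP, so a containerized consumer only has to accept a POST and parse the envelope. The sketch below uses the CNCF cloudevents Python SDK and Flask purely as an illustration; the route and printed fields are generic rather than anything specific to OpenShift Serverless 1.11:

```python
# Minimal CloudEvents consumer of the kind a Knative eventing source can target.
# Assumes the "cloudevents" and "flask" packages are installed.
from flask import Flask, request
from cloudevents.http import from_http

app = Flask(__name__)

@app.route("/", methods=["POST"])
def receive_event():
    # Parse the binary- or structured-mode CloudEvent from the HTTP request.
    event = from_http(request.headers, request.get_data())
    print(f"Received {event['type']} from {event['source']}: {event.data}")
    return "", 204  # a 2xx response tells the eventing layer the event was handled

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```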

There is also a Red Hat build of Quarkus, a Kubernetes-native Java stack fully supported by Red Hat. With a single Red Hat OpenShift subscription, customers now have full access to Quarkus, enabling developers to repurpose mission-critical Java applications on Kubernetes, backed by Red Hat’s enterprise support.

Red Hat OpenShift 4.6 now includes new edge computing features with remote worker nodes, extending processing power to space-constrained environments. This enables IT organisations to scale remotely while maintaining centralised operations and management.

OpenShift 4.6 will also extend capabilities for public-sector Kubernetes deployments, including availability on AWS GovCloud and Azure Government Cloud, extended OpenSCAP support and more. 

Further extending OpenShift’s reach into the public cloud domain is Azure Red Hat OpenShift, a jointly-managed, engineered and supported offering on Microsoft Azure backed by Microsoft and Red Hat’s expertise. A similar service is expected to launch on AWS with joint management and support from Red Hat and Amazon.

Cisco patch notes ‘left out’ details of RCE flaws


Keumars Afifi-Sabet

17 Nov, 2020

Release notes for the recently patched Cisco Security Manager (CSM) platform did not initially include details of 12 severe security vulnerabilities that could, if exploited, lead to remote code execution (RCE).

Although these 12 flaws in CSM, an enterprise-class management console that offers insight into and control of Cisco security and network devices, were recently fixed, Cisco initially failed to mention them at all, according to security researcher Florian Hauser.

Hauser claims to have reported these 12 bugs to the networking giant in July this year and was under the impression they were due to be fixed when CSM was updated to version 4.22 earlier this month.

The researcher claims, however, that despite patching the vulnerabilities last week, the company didn’t mention them at all in the release notes for CSM and did not issue security advisories for businesses that may be potentially affected.

As a result, Hauser has published proof-of-concept code for all 12 flaws via GitHub, including a host of RCE exploits that cyber criminals could use against an unpatched system. 

“120 days ago, I disclosed 12 vulnerabilities to Cisco affecting the web interface of Cisco Security Manager. All unauthenticated, almost all directly giving RCE,” Hauser posted on Twitter on 11 November, following this up overnight with: “Since Cisco PSIRT became unresponsive and the published release 4.22 still doesn’t mention any of the vulnerabilities, here are 12 PoCs in 1 gist.”

The CSM 4.22 release notes outlined several improvements to security and functionality, including support for AnyConnect Web Security WSO. The company has subsequently released advisories for three vulnerabilities that were reported in July, crediting Florian Hauser for discovery.

The first, a path traversal vulnerability tagged CVE-2020-27130 and assigned a CVSS score of 9.1, could allow an unauthenticated remote attacker to gain access to sensitive information upon successful exploitation. This is due to improper validation of traversal character sequences within requests to affected devices.

The second, a Java deserialisation flaw tagged CVE-2020-27131 and assigned a severity score of 8.1, could allow a remote attacker to execute arbitrary commands on an affected device. The final flaw, a static credential vulnerability tagged CVE-2020-27125 and assigned a severity score of 7.4, could also allow a remote attacker to access sensitive information on a targeted system.
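For context, the path traversal class of flaw described in CVE-2020-27130 comes from joining user-supplied file names to a base directory without checking where the resolved path ends up. The sketch below is purely illustrative (it is not Cisco’s code) and contrasts the vulnerable pattern with a resolved-path check:

```python
# Generic illustration of a path traversal flaw and its fix; not Cisco code.
from pathlib import Path

BASE_DIR = Path("/var/www/downloads").resolve()  # hypothetical download root

def fetch_file_unsafe(requested: str) -> bytes:
    # Vulnerable pattern: the user-supplied name is joined to the base
    # directory unvalidated, so "../../etc/passwd" escapes it.
    return (BASE_DIR / requested).read_bytes()

def fetch_file_safe(requested: str) -> bytes:
    # Safer pattern: resolve the final path and confirm it still sits
    # underneath the intended base directory before reading anything.
    target = (BASE_DIR / requested).resolve()
    if BASE_DIR not in target.parents:
        raise PermissionError("path traversal attempt blocked")
    return target.read_bytes()
```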

IT Pro approached Cisco to clarify why it had first failed to mention these flaws in the patch notes for CSM version 4.22.

Zoom tackles ‘Zoom-bombing’ with new security features


Bobby Hellard

17 Nov, 2020

Video conferencing service Zoom has added a set of security features to help users combat ‘Zoom-bombing’ attacks. 

The new controls will help account holders remove unwanted guests and also spot if their meeting’s ID number has been shared online.

Zoom-bombing has been an issue for the company throughout the year, with hackers exploiting its mass adoption. This has affected both personal and professional meetings, including legal proceedings, and many will see this fix as long overdue. 

Starting this week, hosts and co-hosts will be given an option to temporarily pause their meeting and remove unwanted guests. Users can click a new “Suspend Participant Activities” button, which stops all video, audio, chat functions, screen sharing and recording. 

Hosts and co-hosts will then be asked if they want to report a user from their meeting, with the option to share a screenshot of them. The reported user is removed once ‘Submit’ is clicked, Zoom’s security team is notified, and hosts can continue with their meeting by re-enabling each feature individually. This option will be enabled by default for all free and paid Zoom users. 

Hosts and co-hosts can already report users with the security icon in the top corner, but this can also be enabled for non-hosts by account owners and admins. The option is available via the web browser on Mac, PC, Linux and on Zoom’s mobile apps. 

Soon, users will also be able to see if their meeting has been compromised with an ‘At-Risk Meeting Notifier’ which scans public social media posts and other websites for publicly shared meeting links. When the tool spots a meeting that’s potentially at risk of disruption, it automatically alerts the account owner by email with advice. This will most likely be to delete the vulnerable meeting and create a new one with a different ID.

macOS Big Sur is bricking some older MacBooks


Sabina Weston

16 Nov, 2020

A new macOS update released last week is reportedly bricking older MacBook Pro laptops, according to a number of dissatisfied Apple customers.

Big Sur, which was first unveiled during the Worldwide Developers Conference (WWDC) last June, is rendering some devices unresponsive, causing them to display a static black screen without any way of bypassing or resolving the issue.

The problem with Apple’s latest operating system update is said to be mostly affecting 13-inch MacBook Pros released between late 2013 and mid-2014, according to MacRumors, even though these models are listed as compatible with the update.

Apple’s engineering team is reportedly aware of the issue and Big Sur has become a popular topic of discussion on the Apple Support forum, with users describing how their MacBooks are stuck on a black screen with keyboards “completely disabled”.

Apple is reportedly telling users to bring their laptops in for repair, according to a discussion on forum site Reddit. However, this might not be possible for many living in regions under government-imposed lockdowns, such as England.

This is not the only issue facing Big Sur. On 14 November, when the macOS update was released, Apple users reported server outages that took down iMessage and Apple Pay and caused performance problems for users running macOS Catalina and earlier, according to 9to5Mac. The outages also caused Big Sur downloads and installations to fail and raised security and privacy concerns.

IT Pro has contacted Apple for comment but has yet to hear back from the company.

Last week, the Cupertino-based tech giant announced a new lineup of its flagship laptops powered by its all-new M1 chip. Nearly one month after launching the iPhone 12, the company held another “One More Thing” event to show off the new hardware, which includes updates to the MacBook Air, MacBook Pro, and Mac Mini.

The Apple-built M1 chip is the first-ever personal computer chip built by the company in-house, and the announcement marks the first time since 2006 that new Macs will be powered by anything other than Intel processors.

AWS ditches Nvidia for in-house ‘Inferentia’ silicon


Bobby Hellard

13 Nov, 2020

Amazon Web Services (AWS) will ditch Nvidia chips responsible for the processing of Alexa queries and will instead use its own in-house silicon, the company confirmed on Friday.

The cloud giant will also be shifting data processing for its cloud-based facial recognition system, ‘Rekognition’, over to these in-house chips, according to Reuters.

Alexa queries, issued through Amazon’s Echo line of smart speakers, are sent through the company’s data centres where they undergo several stages of processing before coming back to users with an answer, including translating the processed text into audible speech.

The company said that the “majority” of this processing will now be handled by Amazon’s own “Inferentia” computing chips. These were first launched in 2018 as Amazon’s first custom-designed silicon for accelerating deep learning workloads.
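Developers target the chips through the AWS Neuron SDK, which compiles trained models ahead of time for Inferentia’s NeuronCores. The sketch below follows the SDK’s documented PyTorch workflow (the torch-neuron package and torch.neuron.trace); the ResNet-50 model is only an example and nothing here reflects Amazon’s internal Alexa pipeline:

```python
# Compile a PyTorch model for Inferentia with the AWS Neuron SDK (torch-neuron).
import torch
import torch_neuron  # noqa: F401  (registers the torch.neuron namespace)
from torchvision import models

model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

# Ahead-of-time compilation for Inferentia's NeuronCores.
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("resnet50_neuron.pt")

# On an Inf1 instance the artifact loads like any TorchScript model.
loaded = torch.jit.load("resnet50_neuron.pt")
print(loaded(example).shape)
```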

Amazon has said that the shift to Inferentia for Alexa processing has resulted in a 25% improvement in latency and a 30% reduction in cost. The firm hopes for similar gains with its Rekognition system, which has also started to adopt the Inferentia chip.

The cloud giant didn’t specify which company previously handled Rekognition processing, but the service has come under scrutiny from civil rights groups over its use by law enforcement; Amazon temporarily banned police from using it earlier in the year, following the Black Lives Matter protests.

Nvidia and Intel are two of the biggest providers of computing chips, often for data centres, with companies like Amazon and Microsoft among their clientele. However, a number of firms have begun to move away from vendors and are bringing the technology in-house. For example, Apple has recently moved away from Intel chips in favour of its own Arm-based M1 processor, which will power its Macs going forward.

Salesforce UK to create 100,000 new digital roles by 2024


Bobby Hellard

12 Nov, 2020

Salesforce has said it aims to add over 100,000 new skilled jobs to the UK market over the next four years through a partnership with training provider QA.

Three apprenticeship programmes and a Developer Bootcamp will be created to boost the country’s digital skills and produce a cohort of graduates trained in Salesforce certifications, the company announced on Thursday.

The initiatives could potentially add over 100,000 skilled jobs in the UK over the next four years, according to IDC, while also boosting Salesforce’s own ecosystem of customers and partners.

Demand for digital skills has been high for years, but it has become particularly acute during the pandemic amid greater use of cloud platforms. QA, an established provider of Salesforce training, is aiming to bridge the gap by expanding its work with the company.

In the UK, there is a growing demand for Salesforce technology, according to QA, which is fuelling the need for businesses to quickly find and hire new skilled Salesforce talent.

The Developer Bootcamp is a 12-week intensive course that provides the specialist skills required to design data models, user interfaces and security for custom applications, as well as the ability to customise them for mobile use. The first bootcamp is expected to start in March 2021.

The apprenticeship programmes will provide practical learning closely aligned to specific career paths, namely service desk engineering, marketing professional, and business analytics roles within the Salesforce ecosystem.

All three apprenticeships are available for immediate starts, and both initiatives will be complemented by content from Salesforce’s online learning platform, Trailhead, which allows participants to continue developing their Salesforce skills after they have completed their programmes.

“We care passionately about developing the next generation of skilled professionals for our industry,” said Adam Spearing, Salesforce’s EMEA CTO. “The Salesforce ecosystem represents a growing opportunity and urgent need for talent within our customers, marketplace, partners and developers, and we want to kick-start the careers and pathways for young adults. We are excited to be partnering with QA to launch the Developer Bootcamp and Apprenticeship programmes.”

Google slashes free Drive storage to 15GB


Keumars Afifi-Sabet

12 Nov, 2020

Google will restrict the online cloud storage capacity for high-quality photos and videos to 15GB from next year as the firm looks to capitalise on the millions of users who have come to rely on the service.

From June 2021, new high-quality content uploaded to Google Photos will count towards a free 15GB storage capacity, with the company making several pricing tiers available to those who need to store more data. The limit will also apply to files that users keep on Drive, specifically Google Docs, Sheets, Slides, Drawings, Forms, and Jamboard files.

Google is framing the change as a way to continue providing everybody with a great storage experience while keeping pace with the growing demand for its free services.

Currently, files created through Google’s productivity apps, as well as photos smaller than 2,048 x 2,048 pixels and videos shorter than 15 minutes, don’t count towards the cap. Under the new storage calculations, high-quality uploads (photos compressed to a maximum of 16MP and videos compressed down to 1080p) will count towards it.
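For a rough sense of scale, the back-of-envelope calculation below uses assumed average file sizes (they are not Google’s figures) to estimate what 15GB of high-quality uploads might hold:

```python
# Back-of-envelope only: the average sizes are assumptions, not Google figures.
FREE_QUOTA_GB = 15
AVG_PHOTO_MB = 2.0    # assumed size of a photo after 16MP high-quality compression
AVG_VIDEO_MB = 150.0  # assumed size of a few minutes of 1080p video

quota_mb = FREE_QUOTA_GB * 1024
print(f"~{quota_mb / AVG_PHOTO_MB:,.0f} photos, or")
print(f"~{quota_mb / AVG_VIDEO_MB:,.0f} short 1080p videos, fit in {FREE_QUOTA_GB}GB")
```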

“For many, this will come as a disappointment. We know. We wrestled with this decision for a long time, but we think it’s the right one to make,” said the firm’s product lead for Google Photos, David Lieb.

“Since so many of you rely on Google Photos as the home of your life’s memories, we believe it’s important that it’s not just a great product, but that it is able to serve you over the long haul. To ensure this is possible not just now, but for the long term, we’ve decided to align the primary cost of providing the service (storage of your content) with the primary value users enjoy (having a universally accessible and useful record of your life).”

More than one billion people rely on Google Photos and Google Drive, Lieb added, uploading more than 28 billion photos and videos every week on top of more than four trillion already uploaded onto the service.

The change will only apply to newly uploaded content starting on 1 June next year, with all existing high-quality content remaining exempt from the storage quota. This includes all content uploaded between now and then.

Users who wish to upgrade to a larger storage plan will have to sign up to the company’s paid-for cloud storage platform Google One, with packages beginning at 100GB, alongside other features including access to Google experts and shared family plans.

Currently, Google One is priced at $1.99 per month for 100GB of storage, $2.99 per month for 200GB, and $9.99 per month for 2TB.

Google is also rolling out a host of new tools, which the firm hopes will help justify the additional cost for those who need to pay for a higher tier.

Among these tools is software that can make it easier to identify and delete unwanted content, such as blurry photos and long videos, though the firm is set to make more announcements in the coming months. Google has in the last few years leant on AI to improve the functionality of its flagship products, including Gmail and Google Docs.

The firm is also introducing new policies for users who are inactive or over their storage limit across Google’s cloud-based services. Those who are inactive in one or more of these services for two years may see their content deleted in those specific products, while users over their storage limit for two years may see their content deleted across the board.

AWS launches visual data preparation tool DataBrew


Sabina Weston

12 Nov, 2020

Amazon Web Services (AWS) has announced the general availability of its new visual data preparation tool that lets users clean and normalise data without having to write code.

Built as part of its AWS Glue service, the new DataBrew tool aims to make visual data preparation more accessible for a greater number of users.

According to AWS, DataBrew facilitates data exploration and experimentation directly from AWS data lakes, data warehouses, and databases. Its users will be able to choose from over 250 built-in functions to combine, pivot, and transpose the data, with the tool also providing transformations that use advanced machine learning techniques such as natural language processing.

DataBrew is serverless and fully managed; AWS claims users will never need to configure, provision, or manage any compute resources directly.
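Although DataBrew is pitched at users who don’t want to write code, it can also be driven programmatically through the AWS SDKs. A minimal sketch using boto3’s databrew client follows; the bucket, key and dataset name are placeholders rather than AWS examples:

```python
# Sketch of registering and listing DataBrew datasets via boto3's "databrew"
# client; resource names below are placeholders.
import boto3

databrew = boto3.client("databrew", region_name="eu-west-1")  # Ireland

# Register an S3 object as a DataBrew dataset so it can be profiled and
# transformed through the visual interface.
databrew.create_dataset(
    Name="sales-raw",
    Input={"S3InputDefinition": {"Bucket": "my-data-lake", "Key": "sales/2020/raw.csv"}},
)

# Confirm the registration by listing existing datasets.
for dataset in databrew.list_datasets()["Datasets"]:
    print(dataset["Name"])
```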

“AWS customers are using data for analytics and machine learning at an unprecedented pace”, commented Raju Gulabani, AWS vice president of Database and Analytics. “However, these customers regularly tell us that their teams spend too much time on the undifferentiated, repetitive, and mundane tasks associated with data preparation. Customers love the scalability and flexibility of code-based data preparation services like AWS Glue, but they could also benefit from allowing business users, data analysts, and data scientists to visually explore and experiment with data independently, without writing code.

“AWS Glue DataBrew features an easy-to-use visual interface that helps data analysts and data scientists of all technical levels understand, combine, clean, and transform data,” he added.

AWS Glue DataBrew is generally available starting today in Ireland and Frankfurt, Germany, as well as select parts of the United States, including Ohio and Oregon, and the Asia Pacific Region. AWS said that it will announce the availability in additional regions “soon” but has yet to confirm when the tool will arrive in the UK.

When it comes to pricing, AWS said that the DataBrew users will not be faced with any “upfront commitments or costs” to use the tool, but will be expected to pay for the ability to create and run transformations on datasets. AWS did not immediately respond to IT Pro’s query regarding specific pricing.

The role of cloud native at the edge


Keri Allan

12 Nov, 2020

Analyst firm Gartner predicts that, by 2025, three quarters of enterprise-generated data will be created and processed at the edge – meaning outside of a traditional data centre or cloud and closer to end users.

The Linux Foundation defines edge computing as the delivery of computing capabilities to the logical extremes of a network in order to improve the performance, security, operating cost and reliability of applications and services. “By shortening the distance between devices and the cloud resources that serve them, edge computing mitigates the latency and bandwidth constraints of today’s internet, ushering in new classes of applications,” the foundation explains in its Open Glossary of Edge Computing.

Edge computing has been on a large hype cycle for several years now and many consider it “the Wild West”. This is because there’s a high volume of chaotic activity in this area, resulting in duplicated efforts, as technologists all vie to find the best solutions.

“It’s early doors,” says Brian Partridge, research director at 451 Research. “Vendors and service providers are throwing stuff at the wall to see what sticks. Enterprises are experimenting, investors are making large bets. In short, the market is thrashing, crowded and there’s a lot of confusion.” 

A synergy between cloud native and the edge 

Edge computing opens up many possibilities for organisations looking to scale their infrastructure and support more latency-sensitive applications. As cloud native infrastructures were created to improve flexibility, scalability and reliability, many developers are looking to replicate these benefits close to the data’s source, at the edge. 

“Cloud native can help organisations fully leverage edge computing by providing the same operational consistency at the edge as it does in the cloud,” notes Priyanka Sharma, general manager of the Cloud Native Computing Foundation (CNCF). 

“It offers high levels of interoperability and compatibility through the use of open standards and serves as a launchpad for innovation based on the flexible nature of its container orchestration engine. It also enables remote devops teams to work faster and more efficiently,” she points out. 

Benefits of using cloud native at the edge

Benefits of using cloud native at the edge include faster rollbacks, meaning edge deployments that break or have bugs can be rapidly returned to a working state, says William Fellows, co-founder and research director of 451 Research. 

“We’re also seeing more granular, layered container support whereby updates are portioned into smaller chunks or targeted at limited environments and thus don’t require an entire container image update. Cloud native microservices provide an immensely flexible way of developing and delivering fine-grain service and control,” he adds.

There are also financial benefits to taking the cloud native path. The reduction in bandwidth and streamlined data that cloud native provides can reduce costs, making it an incredibly efficient tool for businesses. 

“It can also allow a consumption-based pricing approach to edge computing without a large upfront CapEx spend,” notes Andrew Buss, IDC research director for European enterprise infrastructure.

However, it wouldn’t be the “Wild West” out there right now if cloud native were the perfect solution. There are still several challenges to work on, including security concerns. 

“Containers are very appealing due to them being lightweight, but they’re actually very bad at ‘containing’,” points out Ildikó Vanska, ecosystem technical lead at the Open Infrastructure Foundation (formerly the OpenStack Foundation).

“This means they don’t provide the same level of isolation as virtual machines, which can lead to every container running on the same kernel being compromised. That’s unacceptable from a security perspective. We should see this as a challenge that we still need to work on, not a downside to applying cloud native principles to edge computing,” she explains. 

There’s also the complexity of dealing with highly modular systems, so those interested in moving towards cloud native edge computing need to prepare by investing the time and resources necessary to implement it effectively. 

What should businesses be thinking about when embarking on cloud native edge computing? 

Cloud native edge solutions are still relatively rare; IDC’s European Enterprise Infrastructure and Multicloud survey from May 2020 showed that the biggest edge investments are still on-premise. 

“However, we expect this to shift in the coming years as cloud native edge solutions become more widely available and mature and we have more use cases that take advantage of cloud as part of their design,” says Gabriele Roberti, Research Manager for IDC’s European Vertical Markets, Customer Insights and Analysis team, and Lead for IDC’s European Edge Computing Launchpad.

For those businesses eager to take the leap, Partridge recommends starting with the application vision, requirements and expected outcomes. After targeting edge use cases that can support a desired business objective – such as lowering operations costs – you can then turn your attention to the system required. 

Laura Foster, programme manager for tech and innovation at techUK, reiterates that it’s important to build a use case that works for your business needs. 

“There’s an exciting ecosystem of service providers, innovators and collaboration networks that can help build the right path for you, but the journey towards cloud native edge computing also needs to go hand in hand with cultural change,” she points out. 

“Emerging technologies, including edge computing, will pioneer innovation, but only if businesses push for change. Retraining and reskilling workforces is a fundamental part of an innovation journey and can often be the key to getting it right,” she concludes.