Automate your software builds with Jenkins


Danny Bradbury

3 Mar, 2021

Software developers can work well alone, if they’re in control of all their software assets and tests. Things get trickier, however, when they have to work as part of a team on a fast-moving project with lots of releases. A group of developers can contribute code to the same source repository, managed with a tool such as Git, but they then have to run all the necessary tests to ensure things are working smoothly. Assuming the tests pass, they must build those source files into executable binaries and then deploy them. On larger software projects that’s a daunting task, taking a lot of time and organisation.

This is what Jenkins is for. It’s an open-source tool that co-ordinates those stages into a pipeline. This makes it a useful tool for DevOps, a development and deployment approach that automates the various stages of building software, creating an efficient conveyor-belt system. Teams that get DevOps right with tools like Jenkins can move from version roll-outs every few months to every few days (or even hours), confident that all their tests have passed.

Jenkins used to be called Hudson, but its development community renamed it after a trademark dispute with Oracle, which claimed rights to the original name. It’s free, runs on operating systems including Windows, macOS and Linux, and can also run as a Docker image.

You can get Jenkins as a download from the jenkins.io website, but you’ll need a Java runtime environment installed to support it. Alternatively, you can install it as a Docker container by following the instructions on the official Jenkins site, which is what we’ll do here. Docker takes a little extra work to set up, but the advantage is twofold: it sidesteps some Java dependency problems you might otherwise run into, and it lets you recreate your Jenkins install on any server by copying your Dockerfile and the docker run command used to launch it, which we put into a shell script for convenience. Jenkins’ Docker instructions also install a souped-up user interface called Blue Ocean. If you don’t follow the Docker instructions, you can install Blue Ocean separately as a plugin.
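A rough sketch of the kind of launch script we mean is below. It assumes the jenkinsci/blueocean image and a named Docker volume for persistence; the official Jenkins instructions add further options, including a dedicated Docker network and a docker:dind container so pipelines can run Docker agents, so treat this as a simplified illustration rather than a substitute for them.

#!/bin/sh
# start-jenkins.sh - simplified sketch of a Jenkins-in-Docker launcher
# (image name and options are illustrative; follow the official instructions for the full setup)
docker run -d --name jenkins-blueocean \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins-data:/var/jenkins_home \
    jenkinsci/blueocean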

First, we must create a Python program for Jenkins to work with. We created a simple file called test_myapp.py, stored on our Linux system in /home/$USER/python/myapp. It includes a basic test that uses the pytest framework:

# test_capitalization

def capitalize_word(word):
    return word.capitalize()


def test_capitalize_word():
    assert capitalize_word('python') == 'Python'

Create a Git repository for it using git init. Commit the file to your repo using git add ., and then git commit -m "first commit".
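Run from the project folder, the full sequence looks like this:

cd /home/$USER/python/myapp
git init
git add .
git commit -m "first commit"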

Now it’s time to start Jenkins using the docker run command in the Jenkins team’s Docker instructions. Once Jenkins is running, you can access it at localhost:8080. It will initially show you a screen with the path to a file containing your one-time administrator password. Copy the file’s contents to log in to the administration screen, from where Jenkins will set up the basic plugins you need to work with the software. Then you can create a new user account for yourself.
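If you started Jenkins in a named container, as in the sketch above, one convenient way to read that password file is with docker exec (substitute whatever name you gave the container):

docker exec jenkins-blueocean cat /var/jenkins_home/secrets/initialAdminPassword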

 

If you’re not already in the Blue Ocean interface, click on that option in the left sidebar. It will ask you to create a project. Call it myapp and then select Git as the project type in the Blue Ocean interface.

Blue Ocean will now ask you to create a pipeline. Click yes. We’re going to write this pipeline ourselves in a ‘Jenkinsfile’, which we’ll store in our myapp folder.

A Jenkinsfile is a text file describing your pipeline. It contains instructions for the stages of each build. It looks like this:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                <steps for this stage go here>
            }
        }
        stage('Test') {
            steps {
                <steps for this stage go here>
            }
        }
        stage('Deploy') {
            steps {
                <steps for this stage go here>
            }
        }
    }
}

Each stage reflects a step in the build pipeline and we can have as many as we like. Let’s flesh out this template.

Python programs don’t need building and deploying in the same way that, say, C++ programs do, because they’re interpreted. Nevertheless, Jenkins is useful in other ways. We can test our code automatically, and we can also check its formatting to ensure that it’s easy for other developers to read.

To do this, we need to compile the Python program into bytecode, which is an intermediate stage that happens when you run Python programs. We’ll call that our build stage. Here’s the Jenkinsfile for that step:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile test_myapp.py'
                stash(name: 'compiled-results', includes: '*.py*')
            }
        }
    }
}

The agent defines where a stage runs. We don’t define a global one, but we do define one for the individual stage. In this case, because we’re using Docker, it’s a lightweight Alpine container with a Python implementation.

For our steps, we run a shell command that compiles our Python file, then stash the compiled results so later stages can reuse them.
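If you want to sanity-check that step outside Jenkins, you can run the same compile yourself from the project folder. Note that on Python 2, as used in the container, the .pyc file lands next to the source (which is what the '*.py*' stash pattern relies on), whereas Python 3 writes it to __pycache__ instead:

# the same command the Build stage runs
python -m py_compile test_myapp.py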

Save this in your project folder as 'Jenkinsfile' and then commit it using git add . and git commit -m "add Jenkinsfile".

Back in the UI, ignore Blue Ocean’s prompt to create a pipeline; once it spots the Jenkinsfile in your repo, it will build from that automatically. Go to the main Jenkins dashboard by clicking the exit icon next to the Logout option at the top right, or by clicking the Jenkins name at the top left of the screen. Find your new project in the dashboard and, on the left, select Scan Multibranch Pipeline Now.

Wait for a few seconds and Jenkins will scan your Git repo and run the build. Go back into the Blue Ocean interface, and all being well you’ll see a sunny icon underneath the HEALTH entry that shows the build succeeded. Click on myapp, then Branch indexing, and it’ll give you a picture of your pipeline and a detailed log.

Now we will add a test stage. Update your code to look like this:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile test_myapp.py'
                stash(name: 'compiled-results', includes: '*.py*')
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-results/results.xml test_myapp.py'
            }
            post {
                always {
                    junit 'test-results/results.xml'
                }
            }
        }
    }
}

We’re using another Docker container to run a simple pytest test (which we included in the code of our test_myapp.py file). Save this file and update your repo with another git add . and git commit -m "add test stage to Jenkinsfile". Then scan the multibranch pipeline as before. When you drop into Blue Ocean, you’ll hopefully see success once again. Note that Docker stores everything it runs in its own volume, along with the results. Although you could work some command-line magic to access those files directly, you don’t need to; Jenkins shows you those assets in its UI. Click on the latest stage to open the build details, and find the entry that says py.test --verbose --junit-xml test-results/results.xml test_myapp.py. Clicking on that shows you the results of your test:

Everything passed! Now we’re going to bring it home with the final stage in our demo pipeline: checking the code formatting. There are specific rules for formatting Python code, as outlined in the language’s PEP 8 specification. We’ll update our Jenkinsfile to use a tool called PyLint to check our code. Here’s the full Jenkinsfile for all three stages of our pipeline:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile test_myapp.py'
                stash(name: 'compiled-results', includes: '*.py*')
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-results/results.xml test_myapp.py'
            }
            post {
                always {
                    junit 'test-results/results.xml'
                }
            }
        }
        stage('Lint') {
            agent {
                docker {
                    image 'eeacms/pylint'
                }
            }
            environment {
                VOLUME = '$(pwd)/test_myapp.py'
                IMAGE = 'eeacms/pylint'
            }
            steps {
                withEnv(['PYLINTHOME=.']) {
                    sh "pylint ${VOLUME}"
                }
            }
        }
    }
}

Follow the same steps as before: save the file, commit it to your Git repo so that Jenkins sees it, and then rescan the multi-branch pipeline. Then go into Blue Ocean and look at the result. Oh no!

The pipeline stage failed! That’s because our code is badly formatted, and PyLint tells us why. We’ll update our test_myapp.py file to make the code compliant:

"""
Program to capitalize input
"""

# test_capitalization

def capitalize_word(word):
    """Capitalize a word"""
    return word.capitalize()


def test_capitalize_word():
    """Test to ensure it capitalizes a word correctly"""
    assert capitalize_word('python') == 'Python'

Now, save, commit to your repo, and rescan. Blue Ocean shows that we fixed it (note that in our demo it took us a couple of runs at the Python code to get the formatting right).
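If you’d rather iterate on the formatting locally instead of waiting for a Jenkins run each time, you can install Pylint and run the same check yourself before committing – a convenience on your workstation, not part of the pipeline:

pip install pylint
pylint test_myapp.py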

You could run all these steps manually yourself, but the beauty of Jenkins is that it automates them all for faster development. That makes the tool invaluable for developers working on a fast cadence as part of a team, but even a single freelance dev, or a hobbyist working on open-source projects, can use this to refine their practice.

Microsoft doubles down on zero trust security policies


Keumars Afifi-Sabet

2 Mar, 2021

Microsoft has launched new functionality across its Azure Active Directory (AD) authentication portal and Microsoft 365 to advance its zero trust security strategy and protect its customers against insider threats. 

‘Zero trust’ is a security strategy that reflects the need for businesses to adapt to increasingly sophisticated threats, and it is based on the assumption that nothing within the corporate network can be trusted. 

Microsoft is among a handful of tech companies to adopt these policies in a meaningful way over the past few years, with features revealed at its Ignite 2021 conference in Azure AD and Microsoft 365 bolstering the firm’s zero trust capabilities. 

Passwordless authentication is now generally available in Azure AD across all cloud and hybrid environments, with users able to log in using biometrics, Windows Hello for Business, the Microsoft Authenticator app or a FIDO2 security key.

The policy engine Azure AD Conditional Access now uses authentication context to enforce more granular policies based on user interactions within an app, also taking into account the sensitivity of data they’re trying to access. 

Verifiable credentials, a feature that lets organisations confirm information about their employees, such as education or professional certificates, is also entering public preview within the next few weeks. It verifies claims without collecting any personal data. The government of Flanders and the NHS are already piloting the service.

“As defenders ourselves, we are passionate proponents of a Zero Trust mindset, encompassing all types of threats – both outside in and inside out,” said Microsoft’s corporate VP for security, compliance and identity, Vasu Jakkal.

“We believe the right approach is to address security, compliance, identity, and device management as an interdependent whole, and to extend protection to all data, devices, identities, platforms, and clouds – whether those things are from Microsoft, or not.”

Changes in Microsoft 365 are largely based on trying to eliminate the insider threat, both malicious and unwitting, with the firm investing in creating inside-out protection by extending its capabilities to third parties.

Improvements in compliance include co-authoring documents protected with Microsoft Information Protection, which allows multiple users to work simultaneously on documents while benefitting from the extensive protection for documents and emails across Microsoft 365 apps.

Microsoft 365’s Insider Risk Management Analytics will allow customers to identify potential insider risk activity within an organisation, which will then inform policy configurations. Tools include daily scans of tenant audit logs, including historical activities, with machine learning used to identify any risky activity.

Azure Purview, Microsoft’s unified data governance platform for on-premises, multi-cloud and software-as-a-service (SaaS) data, can also be used to scan and classify data residing in AWS S3 buckets, SAP ECC, SAP S/4HANA and Oracle Database.

“Adopting a Zero Trust strategy is a journey,” Jakkal continued. “Every single step you take will make you more secure. In today’s world, with disappearing corporate network perimeters, identity is your first line of defence. 

“While your Zero Trust journey will be unique, if you are wondering where to start, our recommendation is to start with a strong cloud identity foundation. The most fundamental steps like strong authentication, protecting user credentials, and protecting devices are the most essential.”

Microsoft is also launching what it calls an “assume breach” toolset, which comprises tools and features that can help customers adopt the assume breach mentality without being hampered by the complexity that it can often entail. This is a critical component of the overall zero trust umbrella. 

Among the improvements, Microsoft Defender for Endpoint and Defender for Office 365 customers can now probe threats directly from the Microsoft 365 Defender portal, which provides alerts and in-depth investigation pages. A Threat Analytics section also provides a set of reports from Microsoft security researchers that help customers understand, prevent and mitigate active threats.

Microsoft takes on Slack with new Teams Connect feature


Bobby Hellard

2 Mar, 2021

Microsoft has announced a range of new capabilities for Microsoft Teams, including a cross-organisation channel-sharing feature that’s uncannily similar to a service recently launched by Slack. 

The updates to Teams include various new modes for presenters, with Microsoft also showcasing new hardware dedicated to the video conferencing service. 

Of all the new functions announced at Ignite, ‘Microsoft Teams Connect’ is the one most likely to fan the flames of the company’s heated rivalry with Slack. The service, which lets users share channels with both internal and external collaborators, has an almost word-for-word match with the description of ‘Slack Connect’.

Slack has previously called out Microsoft for copying its work and recently filed an antitrust complaint with the European Commission over Microsoft’s “anti-competitive” conduct.

Beyond Connect, the most eye-catching update is the ability to create interactive webinars – for internal or external purposes – that can accommodate up to 1,000 attendees. This includes a number of presentation options and host controls, such as the ability to disable attendee chat and video, and post-event reporting.

What’s more, until 30 June 2021, webinars can be switched to a ‘view-only’ broadcast for up to 20,000 people in order to accommodate higher demand for virtual events. The same capabilities have been available for general meetings on Teams since August.

Microsoft has also taken steps to alleviate stress and video-call fatigue with new functions for speakers. These are aimed at creating more impactful, dynamic presentations, but also at keeping a more ‘natural’ connection with participants. Presenters will be able to use ‘Microsoft PowerPoint Live’, which lets hosts deliver more engaging presentations with notes, slides, meeting chat, and participants all in a single view. 

There is also a dedicated ‘Presenter mode’ that allows hosts to customise how their video feed and content appears to the audience. These include Standout and Reporter modes that put the host’s video feed in different positions to visual aids or content. All three will launch in the coming months along with new Teams-focused hardware, such as ‘Intelligent Speakers’ that can identify and differentiate up to 10 people talking in a Microsoft Teams Room. 

On the hardware front, there are also Microsoft Teams-certified video conferencing monitors from Dell and Poly, a new P15 video bar from Poly, and a new Cam 130 from Aver that helps users present their best selves in well-lit video meetings. 

Microsoft Azure Percept promises to make edge computing a doddle


Dale Walker

2 Mar, 2021

Microsoft has announced a new platform designed to make it easy to build and operate artificial intelligence-powered technology for use in low-power edge devices, such as cameras and audio equipment.

The Azure Percept Development Kit (DK), which is available in public preview from today, promises to provide a single, end-to-end system that enables customers without coding knowledge to develop an AI product from the ground up.

The hope is that this new platform will help create a Microsoft-powered ecosystem of edge devices designed for low-power implementations, in essence replicating its success with the Windows operating system in the PC market.

The platform, announced at Microsoft Ignite, will run alongside Azure Percept Vision and Azure Percept Audio, two bolt-on services that can connect to Azure cloud services such as Azure AI, Azure Machine Learning, Azure Live Video Analytics, and Microsoft’s various IoT services.

Early concepts suggest the platform is initially aimed at use-cases involving retail and warehousing, where customers can take advantage of services like object detection, shelf analytics, anomaly detection and keyword spotting, among others.

Microsoft explained that the DK “significantly” lowers the bar for what is required to build edge technology, particularly as most implementations require some degree of engineering and data science expertise to make them a success.

“With Azure Percept, we broke that barrier,” said Moe Tanabian, Microsoft vice president and general manager of the Azure edge and devices group. “For many use cases, we significantly lowered the technical bar needed to develop edge AI-based solutions, and citizen developers can build these without needing deep embedded engineering or data science skills.”

Customers signing up to the platform will also be provided with a range of edge-enabled hardware that allows for processes like speech and image recognition to take place without requiring a connection to the cloud. Initially, this will be built by Microsoft, however, the company also confirmed that third-party manufacturers will be able to build equipment that’s certified to run on the Azure Percept platform.

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Microsoft’s own hardware also uses the industry-standard 80/20 T-slot framing architecture, which it claims will make it easier for customers to run pilots of their ideas with existing edge housing and infrastructure.

Elevators that are able to respond to custom voice commands, cameras that notify managers when shelves have low stock, and video streams that monitor for availability in car parks are just a few examples of how the technology could be deployed, Microsoft explained.

Azure Percept Studio, another bolt-on service, will provide step by step guides taking customers through the entire lifecycle of an edge tool, from design to implementation. Perhaps most importantly, customers using Percept Studio will also have access to AI models created by the open source community.

What is green cloud?


Sandra Vogel

3 Mar, 2021

The world’s fossil fuel stocks are depleting, and the cost of using them is rising. Meanwhile countries and corporations alike are making commitments to achieve zero carbon status, and individuals are increasingly aware of – and seeking to reduce – their personal environmental footprint. One way tech firms can do their bit to help the steady march to zero carbon is to use what’s known as green cloud. 

The complexities of green cloud

As the name would suggest, green cloud is about being low-carbon, but there are a lot of components in the mix beyond just the power that is used to keep facilities like data centres going. It’s also about how the infrastructure used in cloud facilities is produced and sourced. Some building materials are produced in more environmentally damaging ways, while others are more environmentally damaging to maintain and dispose of at a later date – the whole lifecycle matters. The same lifecycle issues exist for the technology components used to provide cloud services.

Even when we look at energy usage the picture is complex. Renewables are key as an energy source, but they are not the only factor; how energy is distributed around a data centre and energy efficiency are also important considerations. 

All of this means that understanding how green a green cloud provider is can be complex. Subhankar Pal, assistant vice president of technology and innovation at Capgemini company Altran, tells Cloud Pro: “Firms can discover their green-ness by establishing a way to evaluate green KPIs during tests and in production. Green KPIs will be derived from metrics like energy efficiency, cooling efficiency, computing infrastructure performance, thermal and air management metrics.”

Prioritising sustainability matters

Consumers are increasingly adopting sustainability models in their everyday lives, and are often prepared to pay more for items with strong environmental credentials. The principles that lead people to look for green energy providers, reduced and recyclable packaging in purchased goods, and ethically sourced clothing are the same principles that drive them away from companies that lack green credentials. Firms that add green cloud into their mix stand to gain in reputation, regardless of whether their clients are consumers or other businesses.

As Emma Roscow, intelligent cloud infrastructure lead for Accenture UKI, explains: “Business strategy is increasingly focused on sustainability, with many companies making commitments to reduce their carbon footprint. In fact, 99% of CEOs from large companies now agree that sustainability issues are important to the future success of their businesses.” 

It’s not just about good PR, though – the real environmental gains matter too. Roscow adds: “Accenture recently found that public cloud migrations could reduce global carbon emissions by 59 million tons of CO2 per year, the equivalent to taking 22 million cars off the road.” 

Lead the way and keep a clear head

Given the depletion of fossil fuels and the push towards renewables, the move to green cloud is ultimately inevitable. For Pal, the time is right to make the move. “The reason to do this now is because some firms are already experiencing higher cost pressures, and there could be more cost implications in the long run. There could be stricter government regulations, penalties, and higher operational costs for managing non-green data centres,” he says.

But the move should be made with a clear head. Nick McQuire, vice president of enterprise research at CCS Insight, tells Cloud Pro: “We are seeing the cloud providers posturing over ‘my cloud is cleaner than your cloud’ as they commit, on the one hand, to massive infrastructure build-outs to sustain demand and differentiate their platforms and, on the other, to the ability for this infrastructure to be friendly to the planet.” He advises: “Customers should push their cloud providers hard on providing energy data around where they place their cloud workloads and for innovation in areas that can cut their energy emissions.”

Roscow highlights some additional factors to be taken into account including, within any firm, “the current hardware’s lifecycle, approach to application development, their sustainable business models and processes, and how they use the cloud to create circular operations”.

There is no easy, off the shelf, method of deciding when and how to make the move to green cloud. Each firm will need to do its own research on both cloud services and its wider use of IT – and that research might be much more in-depth than you think. Roscow puts this plainly – if surprisingly – when she reveals that “Accenture Labs’ research found that even the choice of coding language can impact energy consumption by as much as fifty times, depending on the programming technique”. 

Despite these twists and turns, there is no doubt the move to green cloud is coming for tech firms as for all others. Making the move sooner rather than later might be better financially and reputationally – as well as for the planet.

Google Workspace updates take aim at hybrid working


Bobby Hellard

2 Mar, 2021

Google has unveiled a raft of new updates for Google Workspace that gear it towards hybrid working strategies. 

The updates include greater support for frontline workers, time management capabilities and tools for strengthening collaboration. 

“We’re excited to announce new innovations in Google Workspace that will further empower all the ways work happens – and deepen its impact – in an ever-changing world,” the VP and GM of Google Workspace, Javier Soltero, wrote in a blog post.

For frontline workers, the update takes into account the use of personal devices, such as smartphones and tablets used by hospital workers or retail staff. Google Workspace Frontline comes with all the necessary apps – Gmail, Chat, Docs, Drive and so on – but it also includes business-level support and security features like advanced endpoint management to keep sensitive company data secure.

There are also features within Google Workspace to build custom apps directly from Google Sheets and Drive, so frontline workers can collect data in the field, report safety risks, manage customer requests and streamline their work.

This bleeds into other feature updates for managing schedules and workflows as they evolve in hybrid strategies. In the coming months, Google will be releasing new calendar-based features that will help users specify working ‘blocks’ – indications in the calendar that allow teammates to see when they’re online and available for meetings.

This includes ‘recurring out-of-office events’ which will automatically decline invites, and there will also be a setting to let co-workers know where a user will be during work hours, whether at home or in-office. 

For engagement and productivity, Google is also adding features to minimise distractions, including basic limitations on notifications and ‘Time Insights’, which will provide information to employees – but not managers – about project schedules and completion times. Google Assistant will also be available to provide calendar details and to join meetings.

Meetings themselves will be getting some new features, including a ‘second screen’ function for Google Meet where users can host a meeting across a mix of devices – hosting on one and presenting data or documents from another. This will also include a split-screen update for the mobile version of Meet, Q&A features, polls and live captions in English, German, Spanish, Portuguese and French. 

Citrix rebuffs PM’s claims that remote working will come to an end


Bobby Hellard

2 Mar, 2021

Remote working will play a “huge role” in post-pandemic life and is very much going to be the new normal, according to Citrix. 

The cloud giant said that UK employees want hybrid working models, despite a strong desire to meet in person again. The option to work remotely, it said, makes for “happier” workers that stay committed for longer. 

The comments came in response to remarks from prime minister Boris Johnson, who dismissed the notion that remote working will become the new normal for British businesses. Instead, he suggested that people will have the desire to get back into the workplace and resume in-person meetings. 

His statement echoes similar comments the government made in the summer after the first lockdown ended, when it urged people to get back to the workplace. The aim was to increase footfall for shops and restaurants in city centres and on popular commuter routes, and it came after the CBI warned that workers must return to the office or risk urban centres becoming ghost towns.

 “While the prime minister is undoubtedly right that many office workers may have a strong desire for face-to-face meetings once again, this does not mean remote working will not play a huge role in post-pandemic life,” Mark Sweeney, Citrix’s regional VP of UK and Ireland, told CloudPro.
 
Research conducted by Citrix found flexible working initiatives have improved both the professional and personal lives of many UK employees. Around 46% surveyed by Citrix said they would only accept a role that offered flexible work options if they were to change jobs, highlighting a clear desire for remote working to remain, rather than Johnson’s notion of a return to the old ways. 
 
“A key learning we should take from the past year is that work is not dictated by a particular place, and should companies use flexible technologies – such as cloud-based virtual desktops and apps – to offer employees a hybrid model of working, then they are likely to see happier and more engaged workers that stay committed for longer,” Sweeney added.

There is significant evidence to back Sweeney’s comments beyond Citrix’s research. Recent studies have shown that the majority of people that can work remotely wish to continue doing so in some capacity beyond lockdown, and there are reports highlighting how businesses are changing their office space with hybrid models in mind. 

Salesforce is perhaps the biggest promoter of remote and hybrid work. The tech giant recently announced the ‘death’ of the 9 to 5, with sweeping changes to its office space and work policies. Even Google and Microsoft, which have both offered more negative comments on remote working, have accepted that hybrid office strategies need to be looked at. 

Even the CBI, which warned of ghost towns in the summer, has tweaked its stance; it released a report in November called ‘No Turning Back’ that also suggested hybrid work was here to stay. 

IT Pro 20/20: Keeping the lights on


Dale Walker

2 Mar, 2021

Welcome to the 14th issue of IT Pro 20/20, our sister title’s digital magazine.

Now that we have a better idea about when the lockdown will finally end, many of us will naturally be thinking about our return to the office. It’s likely that, having grown accustomed to remote working, for most of us this return will be phased and, depending on your role, you may find yourself able to negotiate how often you make the commute in. Some will be desperate to get moving again, while others will have taken cues from the past year to take advantage of new-found flexibility.

However, before the conversation shifts towards life after lockdown, we’ve taken the opportunity to highlight areas of our industry that have played crucial, yet often overlooked roles in this great remote working experiment.

In this issue, we look at how data centres have coped with immense pressure from customers, the benefits and pitfalls of onboarding new staff remotely, how smart cities will underpin life post-pandemic, and much more.

DOWNLOAD THE 14TH ISSUE OF IT PRO 20/20 HERE

The next IT Pro 20/20 will be available on 31 March – previous issues can be found here. If you would like to receive each issue in your inbox as they release, you can subscribe to our mailing list here.

IBM brings its hybrid cloud to the edge


Rene Millman

1 Mar, 2021

IBM has announced it’ll make its hybrid cloud available on any cloud, on-premises, or at the edge via its IBM Cloud Satellite.

Big Blue said it’s worked with Lumen Technologies to integrate its Cloud Satellite service with the Lumen edge platform to enable customers to use hybrid cloud services in edge computing environments. The firm also said it will collaborate with 65 ecosystem partners, including Cisco, Dell Technologies, and Intel, to build hybrid cloud services.

It said that IBM Cloud Satellite is now generally available to customers and can bring a secured, unifying layer of cloud services to clients across environments, regardless of where their data resides. IBM added that this technology would address critical data privacy and data sovereignty requirements. 

IBM said customers using the Lumen platform and IBM Cloud Satellite would be able to deploy data-intensive applications, such as video analytics, across highly distributed environments and take advantage of infrastructure designed for single-digit millisecond latency.

The collaboration will enable customers to deploy applications across more than 180,000 connected enterprise locations on the Lumen network to provide a low-latency experience. They can also create cloud-enabled solutions at the edge that leverage application management and orchestration via IBM Cloud Satellite, and build open, interoperable platforms that give customers greater deployment flexibility and more seamless access to cloud-native services such as artificial intelligence (AI), the internet of things (IoT), and edge computing.

One example given of how this would benefit customers is using cameras to detect the last time surfaces were cleaned or flag potential worker safety concerns. Using an application hosted on Red Hat OpenShift via IBM Cloud Satellite from the proximity of a Lumen edge location, such cameras and sensors can function in near real-time to help improve quality and safety, IBM claimed.

IBM added that customers across geographies can better address data sovereignty by deploying this processing power closer to where the data is created.

“With the Lumen Platform’s broad reach, we are giving our enterprise customers access to IBM Cloud Satellite to help them drive innovation more rapidly at the edge,” said Paul Savill, SVP enterprise product management and services at Lumen. 

“Our enterprise customers can now extend IBM Cloud services across Lumen’s robust global network, enabling them to deploy data-heavy edge applications that demand high security and ultra-low latency. By bringing secure and open hybrid cloud capabilities to the edge, our customers can propel their businesses forward and take advantage of the emerging applications of the 4th Industrial Revolution.”

IBM is also extending its Watson Anywhere strategy with the availability of IBM Cloud Pak for Data as a Service with IBM Cloud Satellite. IBM said this would give customers a “flexible, secure way to run their AI and analytics workloads as services across any environment – without having to manage it themselves.”

Service partners also plan to offer migration and deployment services to help customers manage solutions as-a-service anywhere. IBM Cloud Satellite customers can also access certified software offerings on Red Hat Marketplace, which they can deploy to run on Red Hat OpenShift via IBM Cloud Satellite.

Ransomware operators are exploiting VMware ESXi flaws


Keumars Afifi-Sabet

1 Mar, 2021

Two ransomware strains have retooled to exploit vulnerabilities in the VMware ESXi hypervisor system publicised last week and encrypt virtual machines (VMs).

The company patched three critical flaws across its virtualisation products last week. These included a heap buffer overflow bug in the ESXi bare-metal hypervisor, as well as a flaw that could have allowed hackers to execute commands on the underlying operating system that hosts the vCenter Server.

Researchers with CrowdStrike have since learned that two groups, known as ‘Carbon Spider’ and ‘Sprite Spider’, have updated their weapons to target the ESXi hypervisor specifically in the wake of these revelations. These groups have historically targeted Windows systems, as opposed to Linux installations, in large-scale ransomware campaigns also known as big game hunting (BGH).

The attacks have been successful, with affected victims including organisations that have used virtualisation to host many of their corporate systems on just a few ESXi servers. The nature of ESXi means these served as a “virtual jackpot” for hackers, as they were able to compromise a wide variety of enterprise systems with relatively little effort.

This follows news that cyber criminals last week were actively scanning for vulnerable businesses with unpatched VMware vCenter servers, only days after VMware issued fixes for the three flaws.

“By deploying ransomware on ESXi, Sprite Spider and Carbon Spider likely intend to impose greater harm on victims than could be achieved by their respective Windows ransomware families alone,” said CrowdStrike researchers Eric Loui and Sergei Frankoff. 

“Encrypting one ESXi server inflicts the same amount of damage as individually deploying ransomware on each VM hosted on a given server. Consequently, targeting ESXi hosts can also improve the speed of BGH operations.

“If these ransomware attacks on ESXi servers continue to be successful, it is likely that more adversaries will begin to target virtualization infrastructure in the medium term.”

Sprite Spider has conventionally launched low-volume BGH campaigns using the Defray777 strain, first attempting to compromise domain controllers before exfiltrating victim data and encrypting files. 

Carbon Spider, meanwhile, has traditionally targeted companies operating point-of-sale (POS) devices, with initial access granted through phishing campaigns. The group abruptly shifted its operational model in April last year, however, to instead undertake broad and opportunistic attacks against large numbers of victims. It launched its own strain, dubbed Darkside, in August 2020.

Both strains have compromised ESXi systems by harvesting credentials that can be used to authenticate to the vCenter web interface, a centralised server administration tool that can control multiple ESXi devices. 

After connecting to vCenter, Sprite Spider enables SSH to allow persistent access to ESXi devices, and in some cases changes the root password or the host’s SSH keys. Carbon Spider, meanwhile, accesses vCenter using legitimate credentials but has also logged in over SSH, using the Plink tool to drop its Darkside ransomware.