How to secure your multi-cloud deployments


Zach Marzouk

4 Mar, 2021

Multi-cloud environments have evolved over the years and as the digital landscape has changed, so too have enterprises.

This new way of managing online services has provided a number of benefits for many organisations. Using more than one cloud provider allows businesses greater flexibility in how they set up their digital environment as well as giving them the option to select the services and capabilities that best fit their needs.

By mixing and matching services from different providers, businesses can provide a better service to their customers and ensure they stay competitive in an ever-changing market.

However, with this new form of data management and sharing, there are always new challenges to keep in mind. If an organisation does choose to have a multi-cloud deployment, it must ensure it’s safe and secure.

According to IBM, 85% of organisations operate in a multi-cloud environment, so it’s paramount that security considerations are taken into account and the right controls are in place.

The challenges

Utilising a multi-cloud deployment comes with its own set of challenges. Storing data in multiple cloud platforms means there is a large environment to secure and different security issues to tackle from one provider to the next.

It’s useful to synchronise security policies across the different vendors that host the data so the policies are consistent across the board. Businesses also need to have complete visibility across all products, which can be complicated if they all have different security features. 

In addition, if businesses can’t monitor the whole scope of their deployments, attackers have more room to infiltrate them unnoticed.

With this in mind, security tools need to be able to view and share information across all deployments to reduce complexity and increase efficiency. It’s also extremely important to maintain data compliance and to have uninterrupted protection across workloads at all times.

Identity and access management

One way of securing your multi-cloud deployment is through identity and access management (IAM), which keeps track of users and controls their access to specific data and services. It makes life easier for IT managers by giving them a single point of control over user access to information across an organisation.

It essentially enables IT managers to grant users access to specific online resources, such as networks, storage systems and devices. It’s central to any directory service and helps strengthen the security of a deployment.

Thanks to the wide variety of IT resources available, it has never been more important to have competent user management in a multi-cloud deployment to ensure the right people are accessing the right materials. Plus, by having greater control of user access, companies are able to operate more efficiently as many processes are automated instead of having to manually manage access to networks.

Regulations like GDPR also mean there is increased pressure to monitor and protect access to certain sensitive data. IAM is a great way to manage this risk at relatively low cost, meaning it’s accessible to companies of all sizes.
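
As an illustration of the principle, the sketch below shows, in Python, the kind of role-based check an IAM layer performs before a request reaches a resource. The users, roles and resources are hypothetical, and real IAM products express these policies in far richer ways:

# Hypothetical role-based access check of the kind an IAM layer performs.
# All names here are invented for illustration.
ROLE_PERMISSIONS = {
    "developer": {"source-repo": {"read", "write"}, "build-logs": {"read"}},
    "auditor":   {"build-logs": {"read"}, "billing-data": {"read"}},
}

USER_ROLES = {"alice": {"developer"}, "bob": {"auditor"}}

def is_allowed(user, action, resource):
    """Allow the request if any of the user's roles grants the action on the resource."""
    return any(
        action in ROLE_PERMISSIONS.get(role, {}).get(resource, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "write", "source-repo"))  # True
print(is_allowed("bob", "write", "source-repo"))    # False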

Identity as a service

When using IAM in the cloud, it can be complemented with identity as a service (IDaaS), usually delivered by a third-party service provider.

By opting for these kinds of third-party solutions, enterprises are able to manage security risks and meet legal requirements with a service that can be scaled fairly simply and extensively if needed.

IDaaS has a number of core features which are common across many providers:

Multi-factor authentication

Multi-factor authentication (MFA), of which two-factor authentication is the most common form, is a way of confirming a user’s identity by requiring two or more verification factors to gain access to an online resource. It’s more secure than a username and password alone, as it requires extra identifying information.

Users must combine something they know, such as a password, with something they have, like a mobile phone or hardware token, or something they are, in the form of a biometric trait, in order to gain access to an online resource.

This is a core component of IAM: it decreases the risk of successful cyber attacks and is essential for multi-cloud deployments.
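
To make the mechanics concrete, here is a minimal sketch in Python of how a time-based one-time password (TOTP, the scheme behind many authenticator apps) can be derived from a shared secret. The secret and parameters below are invented for illustration, and production systems should rely on a vetted library rather than hand-rolled code:

import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive a one-time code from a shared secret and the current time (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # both sides compute the same counter
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server and the user's phone hold the same secret; a login succeeds only
# if the code the user types matches the one the server derives itself.
shared_secret = base64.b32encode(b"demo-secret-1234").decode()
print(totp(shared_secret))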

Biometrics

Biometrics uses a person’s physical attributes for identity confirmation. The most common real-world examples are fingerprint unlocking on smartphones and laptops and facial recognition technology like Apple’s Face ID; retina scanning, hand geometry and even DNA matching fall into the same category.

This gives your deployment additional security, as a user has to be physically present to gain access to the system. And because the underlying data is unique to each person, it’s a particularly reliable form of authentication.

Single Sign-On (SSO)

Single sign-on (SSO) allows users to log into one application and then be given access to other designated applications. It helps provide a seamless experience, as users don’t have to repeatedly log into different services or applications, reducing the friction involved.

An example of this is Google’s services: by signing into your Google account, you can access Gmail, YouTube, Drive and more without having to sign in each time.

If a cloud deployment is spread across different platforms, it helps users if they don’t have to juggle a separate username and password for each service. SSO reduces this credential sprawl while improving security at the same time: with fewer sets of credentials in circulation, there are fewer opportunities for them to be compromised.
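
The heart of SSO can be sketched in a few lines of Python: a central identity service signs a token once, and each participating application verifies the signature instead of keeping its own password database. Real deployments use standards such as SAML or OpenID Connect; the HMAC scheme and names below are simplified for illustration:

import base64
import hmac
import json
import time

SIGNING_KEY = b"shared-only-with-trusted-apps"  # invented for the example

def issue_token(user, ttl=3600):
    """The identity service signs the user's claims once, at sign-in."""
    payload = json.dumps({"sub": user, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SIGNING_KEY, payload, "sha256").digest()
    return b".".join(map(base64.urlsafe_b64encode, (payload, sig))).decode()

def verify_token(token):
    """Any participating app can check the token without storing passwords."""
    payload, sig = (base64.urlsafe_b64decode(p) for p in token.split("."))
    expected = hmac.new(SIGNING_KEY, payload, "sha256").digest()
    if not hmac.compare_digest(expected, sig):
        return None                      # tampered or foreign token: reject
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None  # expired: reject

token = issue_token("alice")   # one sign-in...
print(verify_token(token))     # ...accepted by every app that trusts the key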

Making the right choice

The way businesses choose to secure their multi-cloud deployments will vary depending on how the organisation’s environment is set out and who should be able to access different parts of it.

Having the right tools in place ensures the correct security is implemented within an organisation running multiple cloud platforms. It’s also a good way to reduce complexity across a deployment: by centralising security, businesses can ensure employees and users only have access to the information they are supposed to, and that data compliance rules are respected.

WhatsApp adds voice and video calling to desktop app


Sabina Weston

4 Mar, 2021

WhatsApp has updated its desktop app with new video and voice calling features, adapting its offering to the age of remote working.

The new tools will make it easier for users to answer or make calls directly from their computers, without having to reach for their mobile phones.

In order to enable desktop calling, users will be asked to grant WhatsApp permission to access their computer’s microphone and camera. They will also need to be running at least Windows 10 64-bit version 1903 or macOS 10.13.

Desktop video calls will also work “seamlessly for both portrait and landscape orientation”, appearing in a resizable standalone window.

A company spokesperson told us that the desktop app will be linked to users’ mobile app, with conversations still being based on phone numbers and requiring an active internet connection on the computer and phone alike. Although the call won’t go through the user’s phone, it will need the phone to be online to establish the call.

WhatsApp Business app users will also be able to make desktop calls, the company’s spokesperson added. However, group calls will not be supported on WhatsApp Desktop for the time being.

The company also announced that, on New Year’s Eve, it had broken “the record for the most calls ever made in a single day with 1.4 billion voice and video calls”.

The announcement comes almost a year after reports that the company was trialling a beta version of WhatsApp Web, which would let users create a group video call using Facebook Messenger Rooms. The feature, which was spotted as one of the shortcuts in the beta’s WhatsApp menus in the 2.2019.6 Web update, would allow users to make calls with up to 50 people – a significant increase from the eight-way video calls which came with WhatsApp’s April 2020 update.

April of last year also saw the release of Facebook’s Messenger Rooms, which was launched in an attempt to capitalise on the heightened demand for video conferencing during the pandemic lockdown.

Xero to acquire Planday for £159 million


Sabina Weston

4 Mar, 2021

Cloud-based accounting software firm Xero has announced the acquisition of workforce management platform Planday for €183.5 million (£158.5 million), the company’s biggest deal yet.

The sum includes an upfront payment of €155.7 million (£134.5 million), as well as a subsequent earnout payment estimated at €27.8 million (£24 million), based on product development and revenue milestones. Approximately half of the payment is to be settled in Xero Limited shares, with the remainder being “settled in cash”.

The acquisition is expected to be finalised before the end of the month, and is subject to the satisfaction of closing conditions.

Denmark-based Planday, a cloud-based open platform, provides businesses with a real-time view of staffing needs and payroll costs, alongside key business performance metrics. As a mobile app, it lets employers and employees communicate, collaborate on scheduling, track time and attendance, and manage payroll, vacation, absence, and other labour-related compliance needs.

When integrated with accounting solutions such as Xero, it also provides additional insights which facilitate the adjustment of staffing levels to match trading conditions and control labour costs. With the acquisition, Planday is expected to expand its presence into other markets where Xero operates.

Commenting on the announcement, Xero CEO Steve Vamos said that the acquisition “aligns with [Xero’s] purpose to make life better for people in small businesses and their advisors”.

Planday CEO Christian Brøndum said that the company is “beyond excited for this next step in [its] journey”.

“Our mission is to make our customers’ day work, and make life easier for both employers and employees,” said Brøndum. “This mission fits perfectly with Xero’s passion for small businesses, for people, for growth and for communities.

“We’re looking forward to working within the Xero family to build a strong launchpad for businesses and employees to manage their time and joint potential,” he added.

The news comes less than a year after Xero announced plans to acquire Waddle, a lending platform that specialises in helping small businesses gain access to capital, for around £44 million.

Google launches Flutter 2 for cross-platform app development


Keumars Afifi-Sabet

4 Mar, 2021

Google has upgraded its Flutter toolkit to allow mobile developers to seamlessly port native apps across a breadth of operating systems and web browsers, as well as devices such as TVs, cars, and smart home appliances.

Flutter, an open source software development kit (SDK) launched by Google in December 2018, allows developers to build mobile apps across both Android and iOS devices from a single codebase using the Dart language.

The next generation, dubbed Flutter 2, is a logical extension of this principle, with developers able to program native apps not just for Android and iOS but also for Windows, macOS, and Linux systems. This is alongside web-based experiences across Chrome, Firefox, Safari, and Edge, as well as the operating systems powering IoT and smart devices.

“Our goal is to fundamentally shift how developers think about building apps, starting not with the platform you’re targeting but rather with the experience you want to create,” Google said in its Developer blog.

“In Flutter 2, released today, we’ve broadened Flutter from a mobile framework to a portable framework, unleashing your apps to run on a wide variety of different platforms with little or no change.”

Developing for Android devices with Android Studio, an integrated development environment (IDE), differs from developing with Flutter, in that Android Studio is a Java-centric development workbench for writing and debugging source code for a single platform.

Using Android Studio means developers can’t build apps native to iOS as well as Android – and must jump through hoops to convert their codebase to be compatible with iPhones, or rewrite them from scratch.

Flutter, by contrast, was launched with native cross-platform development in mind, with app creators able to build applications for both iOS and Android using a single codebase. Features such as platform APIs, third-party SDKs and reusable user interface (UI) blocks lend themselves to this aim.

Google also touts Flutter as allowing you to build aesthetically pleasing apps at pace, and to make changes in real time while your app is running, using the ‘hot reload’ feature. The ecosystem of Flutter-developed apps spans roughly 150,000 titles, including WeChat and Yandex Go.

Google Pay even switched to Flutter in September last year to achieve improvements in productivity and quality. By unifying the iOS and Android codebases, the development team removed roughly 500,000 lines of code. There has also been a reported increase in engineering efficiency, with a reduction in the work needed around releases, such as security reviews and experimentation, given two codebases have been consolidated into one.

Desktop support was added to an earlier alpha release of Flutter, but this has just been moved into the toolkit’s ‘stable’ channel, meaning it’s now generally available.

To make it happen, Google partnered with Canonical, the company that publishes Ubuntu, with the organisation’s engineers contributing code to support development and deployment on Linux installations.

Google has also expanded its partnership with Microsoft, with the Windows developer releasing contributions to the Flutter engine to support foldable Android devices, including new design patterns and functionality.

With Flutter 2, app developers will also find added support for the web, with a focus on progressive web apps (PWAs), single-page apps (SPAs), and bringing existing Flutter mobile apps to the web with shared code.

Finally, a partnership with Toyota paves the way for writing in-vehicle software using Flutter, with the car manufacturer using Flutter’s embedder API to tailor Flutter for the unique needs of its vehicles.

Okta agrees to buy rival Auth0 for $6.5 billion


Keumars Afifi-Sabet

4 Mar, 2021

Identity access management firm Okta has agreed to purchase its main industry competitor Auth0 in a deal worth $6.5 billion (roughly £5.6 billion).

This merger will eventually see the two businesses’ expertise and products unify under a single brand, with Okta’s cloud-based platform expected to combine with Auth0’s device and app-based identity management suite.

Auth0 was founded in 2013, four years after Okta was established, and recently attracted $120 million (£85.9 million) of funding in its Series F round in July last year. In doing so, it attained an overall valuation of approximately $2 billion (£1.4 billion).

Okta hopes that the merger will allow the two companies to jointly address more identity management problems and use cases than they each could alone. Both platforms will be supported, invested in, and eventually integrated with one another over time.

“Combining Auth0’s developer-centric identity solution with the Okta Identity Cloud will drive tremendous value for both current and future customers,” said Okta CEO and co-founder Todd McKinnon.

“Okta’s and Auth0’s shared vision for the identity market, rooted in customer success, will accelerate our innovation, opening up new ways for our customers to leverage identity to meet their business needs. We are thrilled to join forces with the Auth0 team, as they are ideal allies in building identity for the internet and establishing identity as a primary cloud.”

The company describes its own and Auth0’s services as being complementary, with customers able to opt for one or another depending on their particular needs. While this has traditionally been true, in recent years both companies have expanded their offerings to such an extent they’ve begun to encroach on each other’s customer base.

Okta had initially aimed to be a single sign-on (SSO) platform for web applications, while Auth0 carved out a reputation for providing backend user management. Product expansion has seen the lines blur, however, and the rivalry between the companies intensify.

“Okta and Auth0 have an incredible opportunity to build the identity platform of the future,” said CEO and co-founder of Auth0, Eugenio Pace.

“We founded Auth0 to enable product builders to innovate with a secure, easy-to-use, and extensible customer identity platform. Together, we can offer our customers workforce and customer identity solutions with exceptional speed, simplicity, security, reliability and scalability. By joining forces, we will accelerate our customers’ innovation and ability to meet the needs and demands of consumers, businesses and employees everywhere.”

The boards of both companies have approved the transaction, with the acquisition expected to finalise before the end of July 2021.

Automate your software builds with Jenkins


Danny Bradbury

3 Mar, 2021

Software developers can work well alone, if they’re in control of all their software assets and tests. Things get trickier, however, when they have to work as part of a team on a fast-moving project with lots of releases. A group of developers can contribute their code to the same source repository, managed with a tool like Git, but they then have to run all the necessary tests to ensure things are working smoothly. Assuming the tests pass, they must build those source files into executable binaries, and then deploy them. That’s a daunting task that takes a lot of time and organisation on larger software projects.

This is what Jenkins is for. It’s an open-source tool that co-ordinates those stages into a pipeline. This makes it a useful tool for DevOps, a development and deployment approach that automates the various stages of building software, creating an efficient conveyor belt system. Teams that get DevOps right with tools like Jenkins can move from version roll-outs every few months to every few days (or even hours), confident that all their tests have been passed.

Jenkins used to be called Hudson, but its development team renamed it after a trademark dispute in which Oracle claimed the original name. It’s free, runs on operating systems including Windows, macOS, and Linux, and can also run as a Docker image.

You can get Jenkins as a download from the jenkins.io website, but you’ll need the Java Runtime Environment to support it. Alternatively, you can install it as a Docker container by following the instructions on the official Jenkins site, which is what we’ll do here. Docker takes a little extra work to set up, but the advantage is twofold: it solves some dependency problems you might run into with Java, and it enables you to recreate your Jenkins install on any server by copying your Dockerfile and the docker run command used to launch it, which we put into a shell script for convenience. Jenkins’ Docker instructions also install a souped-up user interface called Blue Ocean. If you don’t use the Jenkins Docker instructions, you can install Blue Ocean separately as a plugin.

First, we must create a Python program for Jenkins to work with. We created a simple file called test_myapp.py, stored on our Linux system in /home/$USER/python/myapp. It includes a basic test using the Python PyTest utility:

#test_capitalization

def capitalize_word(word):
    return word.capitalize()

def test_capitalize_word():
    assert capitalize_word('python') == 'Python'

Create a Git repository for it using git init. Commit the file to your repo using git add . and then git commit -m "first commit".

Now it’s time to start Jenkins using the docker run command in the Jenkins team’s Docker instructions. Once Jenkins is running, you can access it at localhost:8080. It will initially show you a screen with a directory path to a file containing your secure first-time access password. Copy the contents of that file to log into the administration screen, which will then set up the basic plugins you need to work with the software. Then, you can create a new user account for yourself.


If you’re not already in the Blue Ocean interface, click on that option in the left sidebar. It will ask you to create a project. Call it myapp and then select Git as the project type in the Blue Ocean interface.

Blue Ocean will now ask you to create a pipeline. Click yes. We’re going to write this pipeline ourselves in a ‘Jenkinsfile’, which we’ll store in our myapp folder.

A Jenkinsfile is a text file describing your pipeline. It contains instructions for the stages of each build. It looks like this:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                <steps for this stage go here>
            }
        }
        stage('Test') {
            steps {
                <steps for this stage go here>
            }
        }
        stage('Deploy') {
            steps {
                <steps for this stage go here>
            }
        }
    }
}

Each stage reflects a step in the build pipeline and we can have as many as we like. Let’s flesh out this template.

Python programs don’t need building and deploying in the same way that, say, C++ programs do, because they’re interpreted. Nevertheless, Jenkins is useful in other ways. We can test our code automatically, and we can also check its formatting to ensure that it’s easy for other developers to read.

To do this, we need to compile the Python program into bytecode, which is an intermediate stage that happens when you run Python programs. We’ll call that our build stage. Here’s the Jenkinsfile for that step:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile test_myapp.py'
                stash(name: 'compiled-results', includes: '*.py*')
            }
        }
    }
}

The agent is the external program that runs this stage. We don’t define a global one, but we do define one for the individual stage. In this case, because we’re using Docker, it’s a lightweight Alpine container with a Python implementation.

For our step, we run a shell command that compiles our Python file.

Save this in your project folder as ‘Jenkinsfile’ and then commit it using git add . and git commit -m "add Jenkinsfile".

Back in the UI, ignore Blue Ocean’s prompt to create a pipeline; once it spots the Jenkinsfile in your repo, it’ll build from that automatically. Get to the main Jenkins dashboard by clicking the exit icon next to the Logout option on the top right, or by clicking the Jenkins name on the top left of the screen. Look for your new project in the dashboard and, on the left, select Scan Multibranch Pipeline Now.

Wait for a few seconds and Jenkins will scan your Git repo and run the build. Go back into the Blue Ocean interface, and all being well you’ll see a sunny icon underneath the HEALTH entry that shows the build succeeded. Click on myapp, then Branch indexing, and it’ll give you a picture of your pipeline and a detailed log.

Now we will add a test stage. Update your code to look like this:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile test_myapp.py'
                stash(name: 'compiled-results', includes: '*.py*')
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-results/results.xml test_myapp.py'
            }
            post {
                always {
                    junit 'test-results/results.xml'
                }
            }
        }
    }
}

We’re using another Docker container to run a simple PyTest (which we included in the code of our test_myapp.py file). Save this file and update your repo with another git add . and git commit -m "add test stage to Jenkinsfile". Then, scan the multibranch pipeline as before. When you drop into Blue Ocean, you’ll hopefully see success once again. Note that Docker stores everything it runs in its own volume, along with the results. Although you can work some command line magic to access those files directly, you don’t need to; Jenkins shows you those assets in its UI. Click on the latest stage to open the build details, and find the entry that says py.test --verbose --junit-xml test-results/results.xml test_myapp.py. Clicking on that shows you the results of your test:

Everything passed! Now we’re going to bring it home with the final stage in our demo pipeline: checking the code formatting. There are specific rules for formatting Python code, as outlined in the language’s PEP 8 specification. We’ll update our Jenkinsfile to use a tool called PyLint to check our code. Here’s the full Jenkinsfile for all three stages of our pipeline:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'python:2-alpine'
                }
            }
            steps {
                sh 'python -m py_compile test_myapp.py'
                stash(name: 'compiled-results', includes: '*.py*')
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'qnib/pytest'
                }
            }
            steps {
                sh 'py.test --verbose --junit-xml test-results/results.xml test_myapp.py'
            }
            post {
                always {
                    junit 'test-results/results.xml'
                }
            }
        }
        stage('Lint') {
            agent {
                docker {
                    image 'eeacms/pylint'
                }
            }
            environment {
                VOLUME = '$(pwd)/test_myapp.py'
                IMAGE = 'eeacms/pylint'
            }
            steps {
                withEnv(['PYLINTHOME=.']) {
                    sh "pylint ${VOLUME}"
                }
            }
        }
    }
}

Follow the same steps as before: save the file, commit it to your Git repo so that Jenkins sees it, and then rescan the multi-branch pipeline. Then go into Blue Ocean and look at the result. Oh no!

The pipeline stage failed! That’s because our code is badly formatted, and PyLint tells us why. We’ll update our test_myapp.py file to make the code compliant:

"""
Program to capitalize input
"""

#test_capitalization

def capitalize_word(word):
    """Capitalize a word"""
    return word.capitalize()

def test_capitalize_word():
    """Test to ensure it capitalizes a word correctly"""
    assert capitalize_word('python') == 'Python'

Now, save, commit to your repo, and rescan. Blue Ocean shows that we fixed it (note that in our demo it took us a couple of runs at the Python code to get the formatting right).

You could run all these steps manually yourself, but the beauty of Jenkins is that it automates them all for faster development. That makes the tool invaluable for developers working on a fast cadence as part of a team, but even a single freelance dev, or a hobbyist working on open-source projects, can use this to refine their practice.

Microsoft doubles down on zero trust security policies


Keumars Afifi-Sabet

2 Mar, 2021

Microsoft has launched new functionality across its Azure Active Directory (AD) authentication portal and Microsoft 365 to advance its zero trust security strategy and protect its customers against insider threats. 

‘Zero trust’ is a security strategy based on the need for businesses to adapt to increasingly sophisticated threats, and is based on the assumption that nothing within the corporate network can be trusted. 

Microsoft is among a handful of tech companies to adopt these policies in a meaningful way over the past few years, with features revealed at its Ignite 2021 conference in Azure AD and Microsoft 365 bolstering the firm’s zero trust capabilities. 

Passwordless authentication is now generally available in AD across all cloud and hybrid environments, with users able to log in using biometrics, Windows Hello for Business, the Microsoft Authenticator app or a FIDO2 security key.

The policy engine Azure AD Conditional Access now uses authentication context to enforce more granular policies based on user interactions within an app, also taking into account the sensitivity of data they’re trying to access. 

Verifiable credentials, a feature that lets organisations confirm pieces of information about their employees, such as education or professional certificates, is also entering public preview within the next few weeks. It verifies such claims without collecting any personal data. The government of Flanders and the NHS are already piloting the service.

“As defenders ourselves, we are passionate proponents of a Zero Trust mindset, encompassing all types of threats – both outside in and inside out,” said Microsoft’s corporate VP for security, compliance and identity, Vasu Jakkal.

“We believe the right approach is to address security, compliance, identity, and device management as an interdependent whole, and to extend protection to all data, devices, identities, platforms, and clouds – whether those things are from Microsoft, or not.”

Changes in Microsoft 365 are largely based on trying to eliminate the insider threat, both malicious and unwitting, with the firm investing in creating inside-out protection by extending its capabilities to third parties.

Improvements in compliance include co-authoring documents protected with Microsoft Information Protection, which allows multiple users to work simultaneously on documents while benefitting from the extensive protection for documents and emails across Microsoft 365 apps.

Microsoft 365’s Insider Risk Management Analytics will allow customers to identify potential insider risk activity within an organisation, which will then inform policy configurations. Tools include daily scans of tenant audit logs, including historical activities, with machine learning used to identify any risky activity.

Azure Purview, Microsoft’s unified data governance platform for on-premises, multi-cloud and software-as-a-service (SaaS) data, can also be used to scan and classify data residing in AWS S3 buckets, SAP ECC, SAP S/4HANA and Oracle Database.

“Adopting a Zero Trust strategy is a journey,” Jakkal continued. “Every single step you take will make you more secure. In today’s world, with disappearing corporate network perimeters, identity is your first line of defence. 

“While your Zero Trust journey will be unique, if you are wondering where to start, our recommendation is to start with a strong cloud identity foundation. The most fundamental steps like strong authentication, protecting user credentials, and protecting devices are the most essential.”

Microsoft is also launching what it calls an “assume breach” toolset, which comprises tools and features that can help customers adopt the assume breach mentality without being hampered by the complexity that it can often entail. This is a critical component of the overall zero trust umbrella. 

Among the improvements, Microsoft Defender for Endpoint and Defender for Office 365 customers can now probe threats directly from the Microsoft 365 Defender portal, which provides alerts and in-depth investigation pages. A Threat Analytics section also provides a set of reports from Microsoft security researchers that help customers understand, prevent and mitigate active threats.

Microsoft takes on Slack with new Teams Connect feature


Bobby Hellard

2 Mar, 2021

Microsoft has announced a breadth of new capabilities for Microsoft Teams, including a cross-organisation channel sharing feature that’s uncannily similar to a service recently launched by Slack. 

The updates to Teams include various new modes for presenters, with Microsoft also showcasing new hardware dedicated to the video conferencing service. 

Of all the new functions announced at Ignite, ‘Microsoft Teams Connect’ is the one most likely to fan the flames of the company’s heated rivalry with Slack. The service, which lets users share channels with both internal and external collaborators, seems to have an almost word-for-word identical description to ‘Slack Connect‘.

Slack has previously called out Microsoft for copying its work and recently filed an antitrust complaint with the European Commission over Microsoft’s “anti-competitive” conduct.

Beyond Connect, the most eye-catching update is the ability to create interactive webinars – for internal or external purposes – that can accommodate up to 1,000 attendees. This includes a number of presentation options and host controls, such as the ability to disable attendee chat and video, and post-event reporting.

What’s more, until 30 June 2021, the webinars can be switched to a ‘view-only’ broadcast for up to 20,000 people in order to accommodate higher demand for virtual events. The same capabilities have been available for general meetings on Teams since August.

Microsoft has also taken steps to alleviate stress and video call fatigue with new functions for speakers. These are aimed at creating more impactful, dynamic presentations, but also at keeping a more ‘natural’ connection with participants. Presenters will be able to use ‘Microsoft PowerPoint Live’, which will enable hosts to deliver more engaging presentations with notes, slides, meeting chat, and participants all in a single view.

There is also a dedicated ‘Presenter mode’ that allows hosts to customise how their video feed and content appears to the audience. These include Standout and Reporter modes that put the host’s video feed in different positions to visual aids or content. All three will launch in the coming months along with new Teams-focused hardware, such as ‘Intelligent Speakers’ that can identify and differentiate up to 10 people talking in a Microsoft Teams Room. 

On the hardware front, there are also Microsoft Teams-certified video conferencing monitors from Dell and Poly, a new P15 video bar from Poly, and a new CAM130 by AVer that allows users to present their best selves in well-lit video meetings.

Microsoft Azure Percept promises to make edge computing a doddle


Dale Walker

2 Mar, 2021

Microsoft has announced a new platform designed to make it easy to build and operate artificial intelligence-powered technology for use in low-power edge devices, such as cameras and audio equipment.

The Azure Percept Development Kit (DK), which is available in public preview from today, promises to provide a single, end-to-end system that enables customers without coding knowledge to develop an AI product from the ground up.

The hope is that this new platform will help create a Microsoft-powered ecosystem of edge devices designed for low-power implementations, in essence replicating its success with the Windows operating system in the PC market.

The platform, announced at Microsoft Ignite, will run alongside Azure Percept Vision and Azure Percept Audio, two bolt-on services that can connect to Azure cloud services such as Azure AI, Azure Machine Learning, Azure Live Video Analytics, and Microsoft’s various IoT services.

Early concepts suggest the platform is initially aimed at use-cases involving retail and warehousing, where customers can take advantage of services like object detection, shelf analytics, anomaly detection and keyword spotting, among others.

Microsoft explained that the DK “significantly” lowers the bar for what is required to build edge technology, particularly as most implementations require some degree of engineering and data science expertise to make them a success.

“With Azure Percept, we broke that barrier,” said Moe Tanabian, Microsoft vice president and general manager of the Azure edge and devices group. “For many use cases, we significantly lowered the technical bar needed to develop edge AI-based solutions, and citizen developers can build these without needing deep embedded engineering or data science skills.”

Customers signing up to the platform will also be provided with a range of edge-enabled hardware that allows for processes like speech and image recognition to take place without requiring a connection to the cloud. Initially, this will be built by Microsoft, however, the company also confirmed that third-party manufacturers will be able to build equipment that’s certified to run on the Azure Percept platform.

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Microsoft’s own hardware also uses the industry-standard 80/20 T-slot framing architecture, which it claims will make it easier for customers to run pilots of their ideas with existing edge housing and infrastructure.

Elevators that are able to respond to custom voice commands, cameras that notify managers when shelves have low stock, and video streams that monitor for availability in car parks are just a few examples of how the technology could be deployed, Microsoft explained.

Azure Percept Studio, another bolt-on service, will provide step-by-step guides taking customers through the entire lifecycle of an edge tool, from design to implementation. Perhaps most importantly, customers using Percept Studio will also have access to AI models created by the open source community.

What is green cloud?


Sandra Vogel

3 Mar, 2021

The world’s fossil fuel stocks are depleting, and the cost of using them is rising. Meanwhile countries and corporations alike are making commitments to achieve zero carbon status, and individuals are increasingly aware of – and seeking to reduce – their personal environmental footprint. One way tech firms can do their bit to help the steady march to zero carbon is to use what’s known as green cloud. 

The complexities of green cloud

As the name would suggest, green cloud is about being low-carbon, but there are a lot of components in the mix beyond just the power that is used to keep facilities like data centres going. It’s also about how the infrastructure used in cloud facilities is produced and sourced. Some building materials are produced in more environmentally damaging ways, while others are more environmentally damaging to maintain and dispose of at a later date – the whole lifecycle matters. The same lifecycle issues exist for the technology components used to provide cloud services.

Even when we look at energy usage the picture is complex. Renewables are key as an energy source, but they are not the only factor; how energy is distributed around a data centre and energy efficiency are also important considerations. 

All of this means that understanding how green a green cloud provider is can be complex. Subhankar Pal, assistant vice president of technology and innovation at Capgemini company Altran, tells Cloud Pro: “Firms can discover their green-ness by establishing a way to evaluate green KPIs during tests and in production. Green KPIs will be derived from metrics like energy efficiency, cooling efficiency, computing infrastructure performance, thermal and air management metrics.”
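
One widely used KPI of this kind, offered here as an illustrative example rather than one Pal names, is power usage effectiveness (PUE): the ratio of a facility’s total energy draw to the energy consumed by its IT equipment, where a value close to 1.0 means almost no overhead is lost to cooling and power distribution. A trivial Python sketch with invented figures:

def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

# Invented figures: a PUE of 1.5 means 50% extra energy on top of the IT load.
print(round(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000), 2))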

Prioritising sustainability matters

Consumers are increasingly adopting sustainable practices in their everyday lives, and are often prepared to pay more for items with strong environmental credentials. The principles that lead people to look for green energy providers, reduced and recyclable packaging in purchased goods, and ethically sourced clothing are the same principles that drive them away from companies that lack green credentials. Firms that add green cloud into their mix stand to gain in reputation, regardless of whether their clients are consumers or other businesses.

As Emma Roscow, intelligent cloud infrastructure lead for Accenture UKI, explains: “Business strategy is increasingly focused on sustainability, with many companies making commitments to reduce their carbon footprint. In fact, 99% of CEOs from large companies now agree that sustainability issues are important to the future success of their businesses.”

It’s not just about good PR, though – the real environmental gains matter too. Roscow adds: “Accenture recently found that public cloud migrations could reduce global carbon emissions by 59 million tons of CO2 per year, equivalent to taking 22 million cars off the road.”

Lead the way and keep a clear head

Given the depletion of fossil fuels and the push towards renewables, the move to green cloud is ultimately inevitable. For Pal, the time is right to make the move. “The reason to do this now is because some firms are already experiencing higher cost pressures, and there could be more cost implications in the long run. There could be stricter government regulations, penalties, and higher operational costs for managing non-green data centres,” he says.

But the move should be made with a clear head. Nick McQuire, vice president of enterprise research at CCS Insight, tells Cloud Pro: “We are seeing the cloud providers posturing over ‘my cloud is cleaner than your cloud’ as they commit on the one hand, to massive infrastructure build-outs to sustain demand and differentiate their platforms, and the ability for this infrastructure to be friendly on the planet on the other.” He advises: “Customers should push their cloud providers hard on providing energy data around where they place their cloud workloads and for innovation in areas that can cut their energy emissions.”

Roscow highlights some additional factors to be taken into account including, within any firm, “the current hardware’s lifecycle, approach to application development, their sustainable business models and processes, and how they use the cloud to create circular operations”.

There is no easy, off-the-shelf method of deciding when and how to make the move to green cloud. Each firm will need to do its own research on both cloud services and its wider use of IT, and that research might be much more in-depth than you think. Roscow puts this plainly, if surprisingly, when she reveals that “Accenture Labs’ research found that even the choice of coding language can impact energy consumption by as much as fifty times, depending on the programming technique”.

Despite these twists and turns, there is no doubt the move to green cloud is coming for tech firms as for all others. Making the move sooner rather than later might be better financially and reputationally – as well as for the planet.