Microsoft Azure Percept promises to make edge computing a doddle


Dale Walker

2 Mar, 2021

Microsoft has announced a new platform designed to make it easy to build and operate artificial intelligence-powered technology for use in low-power edge devices, such as cameras and audio equipment.

The Azure Percept Development Kit (DK), which is available in public preview from today, promises to provide a single, end-to-end system that enables customers without coding knowledge to develop an AI product from the ground up.

The hope is that this new platform will help create a Microsoft-powered ecosystem of edge devices designed for low-power implementations, in essence replicating its success with the Windows operating system in the PC market.

The platform, announced at Microsoft Ignite, will run alongside Azure Percept Vision and Azure Percept Audio, two bolt-on hardware and software components that connect to Azure cloud services such as Azure AI, Azure Machine Learning, Azure Live Video Analytics, and Microsoft’s various IoT services.

Early concepts suggest the platform is initially aimed at use cases in retail and warehousing, where customers can take advantage of capabilities such as object detection, shelf analytics, anomaly detection and keyword spotting.

Microsoft explained that the DK “significantly” lowers the bar for building edge technology, most implementations of which require a degree of engineering and data science expertise to succeed.

“With Azure Percept, we broke that barrier,” said Moe Tanabian, Microsoft vice president and general manager of the Azure edge and devices group. “For many use cases, we significantly lowered the technical bar needed to develop edge AI-based solutions, and citizen developers can build these without needing deep embedded engineering or data science skills.”

Customers signing up to the platform will also be provided with a range of edge-enabled hardware that allows processes such as speech and image recognition to run without a connection to the cloud. Initially this hardware will be built by Microsoft, but the company confirmed that third-party manufacturers will also be able to build equipment certified to run on the Azure Percept platform.
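To illustrate the pattern Microsoft describes, the sketch below shows local inference on an edge device paired with cloud telemetry, using the generally available Azure IoT Hub device SDK for Python. The connection string, model output and `detect_objects` helper are hypothetical placeholders for this example and are not part of the Azure Percept SDK itself.

```python
# Minimal sketch: run recognition locally on the edge device and send only the
# compact results to Azure IoT Hub. The detect_objects() helper and the
# connection string below are illustrative placeholders, not Percept APIs.
import json
from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

def detect_objects(frame):
    """Placeholder for an on-device vision model; returns a list of detections."""
    return [{"label": "empty_shelf", "confidence": 0.91}]

def main():
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        frame = None  # in practice, a camera frame captured on the device
        detections = detect_objects(frame)  # inference happens locally, no cloud round-trip
        # Only the small JSON result, not raw video or audio, leaves the device.
        client.send_message(Message(json.dumps({"detections": detections})))
    finally:
        client.shutdown()

if __name__ == "__main__":
    main()
```

The design choice this reflects is the one the article describes: heavy recognition work stays on the low-power device, while the cloud only receives lightweight results for analytics and alerting.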

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Microsoft’s own hardware also uses the industry-standard 80/20 T-slot framing, which the company claims will make it easier for customers to run pilots of their ideas using existing edge housing and infrastructure.

Elevators that are able to respond to custom voice commands, cameras that notify managers when shelves have low stock, and video streams that monitor for availability in car parks are just a few examples of how the technology could be deployed, Microsoft explained.

Azure Percept Studio, another bolt-on service, will provide step-by-step guides that take customers through the entire lifecycle of an edge tool, from design to implementation. Perhaps most importantly, customers using Percept Studio will also have access to AI models created by the open source community.
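As a rough illustration of what reusing a community-built model can look like outside the Studio interface, the sketch below loads a pre-trained open-source model in ONNX format with the onnxruntime package; the model file name, input name and input shape are assumptions made for the example, not details published by Microsoft.

```python
# Minimal sketch: load a pre-trained, community-published ONNX model and run a
# single inference. The model file and input shape here are assumed.
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

session = ort.InferenceSession("community_object_detector.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

# A dummy image-shaped tensor stands in for a real camera frame.
dummy_frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_frame})
print("Raw model output shapes:", [o.shape for o in outputs])
```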