Why digital transformation spending will reach $1.1 trillion – and what happens from here

Across the globe, CEOs continue to champion technology-enabled business growth. Worldwide spending on the technologies and services that enable digital transformation (DX) is forecast to be more than $1.1 trillion in 2018 – an increase of 16.8 percent over the $958 billion spent in 2017, according to the latest market study by International Data Corporation (IDC).

DX spending will be led by the discrete and process manufacturing industries, which will not only spend the most on DX solutions but also set the agenda for many DX priorities, programs, and use cases.

Discrete manufacturing and process manufacturing are expected to spend more than $333 billion combined on DX solutions in 2018. This represents nearly 30 percent of all DX spending worldwide this year.

From a technology perspective, the largest categories of spending will be applications, connectivity services, and IT services as manufacturers build out their digital platforms to compete in the digital economy.

The main objective and top spending priority of DX in both industries is smart manufacturing, which includes programs that focus on material optimization, smart asset management, and autonomic operations.

IDC expects the two industries to invest more than $115 billion in smart manufacturing initiatives this year. Both industries will also invest heavily in innovation acceleration ($33 billion) and digital supply chain optimization ($28 billion).

Driven in part by investments from the manufacturing industries, smart manufacturing ($161 billion) and digital supply chain optimization ($101 billion) are the DX strategic priorities that will see the most spending in 2018.

Other strategic priorities that will receive significant funding this year include digital grid, omni-experience engagement, omnichannel commerce, and innovation acceleration.

The strategic priorities forecast to see the fastest spending growth over the 2016-2021 forecast period are omni-experience engagement (a 38.1 percent compound annual growth rate, or CAGR), financial and clinical risk management (31.8 percent CAGR), and smart construction (25.4 percent CAGR).

"Some of the strategic priority areas with lower levels of spending this year include building cognitive capabilities, data-driven services and benefits, operationalizing data and information, and digital trust and stewardship," said Craig Simpson, research manager at IDC.

To achieve their DX strategic priorities, businesses will develop programs that represent a long-term plan of action toward these goals. The DX programs that will receive the most funding in 2018 are digital supply chain and logistics automation ($93 billion) and smart asset management ($91 billion), followed by predictive grid and manufacturing operations (each more than $40 billion).

The programs that IDC expects will see the most spending growth over the five-year forecast are construction operations (38.4 percent CAGR), connected automated vehicles (37.6 percent CAGR), and clinical outcomes management (30.7 percent CAGR).

Each strategic priority includes a number of programs, which in turn are made up of use cases. These use cases are discretely funded efforts that support a program objective and the overall strategic goals of an organization.

Outlook for new DX use cases

Use cases can be thought of as specific projects that employ line-of-business and IT resources, including hardware, software, and IT services. The use cases that will receive the most funding this year include freight management ($56 billion), robotic manufacturing ($43 billion), asset instrumentation ($43 billion), and autonomic operations ($35 billion).

The use cases that will see the fastest spending growth over the forecast period include robotic construction (38.4 percent CAGR), autonomous vehicles – mining (37.6 percent CAGR), and robotic process automation-based claims processing (35.5 percent CAGR) within the insurance industry.

In the construction industry, DX spending is expected to grow at a compound annual rate of 31.4 percent while retail, the third largest industry overall, is forecast to grow its DX spending at a faster pace (20.2 percent CAGR) than overall DX spending (18.1 percent CAGR).

Microsoft Azure Stack underpins Thales’ military Nexium Defence Cloud


Roland Moore-Colyer

14 Jun, 2018

Defence firm Thales has taken Microsoft’s Azure Stack and repurposed it for use in military field operations, to enable armies to keep their data secure while benefitting from working within a cloud environment.

Microsoft worked with the French contractor to integrate its ‘cloud-in-box’ platform with Thales’ own connectivity, encryption, and end-to-end cyber security products, allowing armed forces to keep sensitive data within their own infrastructure.

“Together with Thales, we will be able to provide a flexible cloud platform with an unequalled level of security that will help overcome challenges within the defence industry,” said Jean-Philippe Courtois, Microsoft’s executive vice president and president of global sales, marketing and operations.

Providing a form of cloud connectivity in the field is not the most complex of operations, given it’s fairly straightforward to bundle servers, network connectivity, compute and storage components into a portable and durable package – SAP has done this with onshore data harvesting and processing systems for the Extreme Sailing Series.

But doing it in a secure fashion – one that keeps data within a military infrastructure while still enabling developers to build apps and services on top of the cloud system – is more of a challenge.

Thales’ Nexium Defence Cloud works around this by creating a “highly secluded” private cloud that uses the Azure Stack as a baseline system onto which Thales adds its security and connectivity technology. This keeps the data within the system either when it’s at military headquarters or deployed to forward operating bases.

Given connectivity can be disrupted in conflict zones and operations theatres, the cloud can operate offline, giving it a degree of autonomy from the infrastructure based back at a military headquarters.

This level of flexibility is something Thales claims other secured defence clouds cannot currently offer.

In the future, Thales plans to boost Azure Stack with the Guavus Reflex analytics platform to allow real-time, in-the-field data analysis without relying on a connection back to HQ. This could make it easier for military forces to tap into data gathered by field sensors or exchange data with mobile apps used by soldiers “augmented” with the latest technology.

The military industrial complex is a vast and lucrative market with a healthy appetite for technology, one that both Thales and Microsoft could tap into further with the Nexium Defence Cloud.

Image credit: Microsoft 

Microsoft announces general availability of Azure Kubernetes Services

Those who attended or watched Cisco Live’s opening keynotes earlier this week – or indeed read our story on it – will have recognised the importance of Kubernetes to attendees, with Google Cloud’s cameo being a particular highlight.

Now Microsoft is making waves of its own – with the general availability of its Azure Kubernetes Services (AKS).

The move will see Microsoft add five new regions of availability, with two in the US, two in Europe – including one in the UK – and the other in Australia, bringing the total number up to 10. Microsoft said it hoped to double its reach in the coming months.

Earlier this month, Amazon Web Services (AWS) announced the general availability of Amazon EKS, its managed Kubernetes service, with regions operational in US East and US West and rapid expansion promised.

The major providers are therefore all racing to become the most developer- and company-friendly resource on which to build and manage Kubernetes projects. Google, of course, as the original designer of the orchestration system, is a bit further ahead, with its Google Kubernetes Engine (GKE) being a focal point of the Cisco and Google partnership elaborated on this week.

“With AKS in all these regions, users from around the world, or with applications that span the world, can deploy and manage their production Kubernetes applications with the confidence that Azure’s engineers are providing constant monitoring, operations, and support for our customers’ fully managed Kubernetes clusters,” wrote Brendan Burns, Microsoft Azure distinguished engineer in a blog post confirming the news.

“Azure was also the first cloud to offer a free managed Kubernetes service and we continue to offer it for free in GA,” Burns added. “We think you should be able to use Kubernetes without paying for our management infrastructure.”
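
For readers who want to sanity-check a freshly created cluster, here is a minimal sketch using the official Kubernetes Python client. It assumes the AKS credentials have already been merged into your local kubeconfig (for example with the Azure CLI’s ‘az aks get-credentials’ command), and the region label names shown are the standard Kubernetes topology labels, which can vary between cluster versions.

```python
from kubernetes import client, config  # pip install kubernetes

# Assumes AKS credentials are already in ~/.kube/config, e.g. via:
#   az aks get-credentials --resource-group <rg> --name <cluster>
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    # Standard topology labels; older clusters expose the beta variant.
    region = labels.get(
        "topology.kubernetes.io/region",
        labels.get("failure-domain.beta.kubernetes.io/region", "unknown"),
    )
    print(f"{node.metadata.name}: {region}")
```

This simply lists the cluster’s nodes and the region they report, which is a quick way to confirm which of the newly added regions a managed cluster is actually running in.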


Dropbox plans SMR deployment to transform its Magic Pocket infrastructure


Keumars Afifi-Sabet

14 Jun, 2018

Dropbox has announced plans to deploy shingled magnetic recording (SMR) technology on a massive scale in a bid to transform its in-house cloud infrastructure.

The file hosting platform said deploying SMR drives on its custom-built Magic Pocket infrastructure at exabyte scale will increase storage density, reduce its data centre footprint and lead to significant cost savings, without sacrificing performance.

Dropbox says it is the first company to deploy SMR hard drive technology on such a scale. 

“Creating our own storage infrastructure was a huge technological challenge, but it’s already paid dividends for our customers and our business,” said Quentin Clark, Dropbox’s senior vice president of engineering, product, and design.

“As more teams adopt Dropbox, SMR technology will help us scale our infrastructure by providing greater flexibility, efficiency, and cost savings. We’re also excited to make this technology open-source so other companies can benefit from it.”

SMR, a hard drive technology that increases density by layering tracks on the disk partially on top of one another, will be deployed across a quarter of the Magic Pocket infrastructure by 2019, according to Dropbox, which also plans to open source the test software created in the process over the coming months.

Magic Pocket is the name of Dropbox’s custom-built infrastructure project that was rolled out after the file sharing company decided to migrate away from Amazon Web Services (AWS). The company initially built a prototype as a proof of concept in 2013, before managing to serve 90% of its data from in-house infrastructure in October 2015.

In what Dropbox describes as a “significant undertaking”, SMR technology was chosen for its ability to expand disk capacity from 8TB to 14TB while maintaining performance and reliability. Drives were sourced from third parties before the company designed a bespoke hardware ecosystem around them, also creating new software to ensure compatibility with the Magic Pocket architecture in the process.

“SMR HDDs offer greater bit density and better cost structure ($/GB), decreasing the total cost of ownership on denser hardware,” the Magic Pocket and hardware engineering teams explained. “Our goal is to build the highest density Storage servers, and SMR currently provides the highest capacity, ahead of the traditional storage alternative, PMR.

“This new storage design now gives us the ability to work with future iterations of disk technologies. In the very immediate future we plan to focus on density designs and more efficient ways to handle large traffic volumes.

“With the total number of drives pushing the physical limit of this form factor our designs have to take into consideration potential failures from having that much data on a system while improving the efficacy of compute on the system.”

Towards the end of the year, the file hosting service says its infrastructure will span 29 facilities across 12 countries, with Dropbox projecting huge cost-saving and increased storage density benefits if SMR deployment is deemed a success.

Exploring cloud APIs – the unnoticed side of cloud computing

Nowadays, it is increasingly easy to lose oneself in dashboards, visualisation tools, nice graphics, and all sorts of button-driven approaches to cloud computing. Over the last decade, UX work has improved tenfold; nevertheless, there is a side that usually goes unnoticed unless it becomes strictly necessary. That side belongs to application programming interfaces (APIs).

An API is a set of objects, methods and functions that allows the user to build scripts, or even large applications, to make things happen in the cloud that usually cannot be made to happen through a dashboard. This can be for various reasons: the capability may not yet be exposed to the end user; it may be very specific to the business, such as a competitive advantage; or it may be an automated task that is not widely needed and is likewise specific to the business.

Of course, there are other reasons. The cloud may not have the maturity to let the user do certain things. In all of those cases, APIs are most welcome and, contrary to popular belief, the learning curve for building something with them is usually not steep.

Widespread use throughout the industry

Virtually every cloud has an API. Sometimes it is not offered as an API but as an SDK. Although they are not the same thing, they are often treated as equals, or at least as close siblings. SDKs come in different flavours: a Python SDK, a Java SDK, a Go SDK, or one of many others. Those seem to be the most widely used, but there are plenty more, such as JavaScript/Node.js, which we see frequently in AWS, Oracle Cloud, Google Cloud Platform, and even Azure.

From luxurious to essential

Although there is little luxury in writing tools that make use of an API, I would like to think that some things can be done more elegantly when built from scratch, or from first principles, instead of simply using what is on the market. Nonetheless, although I am an advocate of not reinventing the wheel, if off-the-shelf tools and technologies do not fit the bill, it is necessary to come up with something from scratch, be it proprietary or open source.

APIs can be used to write code that creates complementary features, such as high-availability solutions that are not provided out of the box, or automations that are intrinsic to the business case at hand. Sometimes these may not be indispensable, and in those cases they may be considered luxuries.

On the other hand, there are things you will inevitably need, such as a tool that deliberately creates havoc in your cloud architecture in order to test your incident response – Chaos Monkey, for AWS, is a key example. I believe these sit towards the essential end of the scale, and should not be taken lightly.
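
To illustrate, a minimal sketch of such a tool might look like the following, assuming AWS and the boto3 Python SDK. The ‘chaos-target’ tag, the region and the single-victim approach are illustrative choices only; a real tool would need dry runs, allow-lists and scheduling before being pointed at anything that matters.

```python
import random

import boto3  # AWS SDK for Python; assumed installed and configured

# Illustrative convention: only instances explicitly tagged chaos-target=true
# are ever candidates for termination. The region is also just an example.
ec2 = boto3.client("ec2", region_name="eu-west-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos-target", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

candidates = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if candidates:
    victim = random.choice(candidates)
    print(f"Terminating {victim} to exercise the incident response process")
    ec2.terminate_instances(InstanceIds=[victim])
else:
    print("No tagged candidates found; nothing to do")
```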

What is needed to use an API or SDK

This depends on the cloud used, of course, but it generally goes along the lines of the following:

  • An authentication key: This will be used to authenticate the tool against the endpoint the cloud exposes. Some cloud platforms allow for instance principals, in which case a key is not even needed
  • An SDK package: As an example, in Oracle Cloud Infrastructure, using the Python SDK requires a ‘pip install’ of the OCI package – something as simple as typing a one-liner (a minimal sketch follows this list)
  • An initial read of the SDK documentation: Sometimes it takes less than an hour to come up with a prototype and scale from there. It is not necessary to be an expert in the inner workings of the SDK, although it helps
  • Some familiarity with the language of the SDK: This is a given, but it would nevertheless be unfair not to add it to the list
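
To make the list concrete, here is a minimal sketch of the first steps with Oracle Cloud Infrastructure’s Python SDK (the OCI package mentioned above). It assumes an API key and a standard ~/.oci/config file are already in place; the ‘DEFAULT’ profile name is simply the usual convention.

```python
import oci  # installed with: pip install oci

# Load credentials from the standard config file (~/.oci/config).
# "DEFAULT" is just the usual profile name; yours may differ.
config = oci.config.from_file(profile_name="DEFAULT")
oci.config.validate_config(config)

# The identity client is a good first stop: list the compartments
# visible to the authenticated tenancy.
identity = oci.identity.IdentityClient(config)
compartments = identity.list_compartments(
    config["tenancy"], compartment_id_in_subtree=True
).data

for compartment in compartments:
    print(compartment.name, compartment.lifecycle_state)
```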

The recommended approach

Usually SDKs come in the form of what is known as CRUD (Create, Read, Update, Delete). This is a simple approach, known since the dawn of time and as common-sensical as humanly possible, so it should not scare any seasoned engineer in the least.

The top-down approach: Starting in a very detailed manner can be daunting, so it is my belief that a top-down approach is well suited, beginning with a list of resources and working down until familiarity with the common structures is gained. Not to mention that in some cases Pareto’s principle is in plain sight: in the first 20% of an SDK lies 80% of its power.

I found this to be true when I started building tools for Oracle Cloud: with some simple methods in Python, leveraging resource listing and drilling down into a few objects, I was able to gather all the data necessary to start devising a data science model of resource usage – a major gain for our team.
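
A sketch of that kind of top-down exploration might look like the following, again assuming the OCI Python SDK. The compartment OCID is a placeholder, and the “usage summary” here is deliberately simplistic – a count of running instances per shape – rather than anything resembling the actual model described above.

```python
from collections import Counter

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Placeholder OCID: substitute a real compartment from the identity listing.
compartment_id = "ocid1.compartment.oc1..example"

# The pagination helper walks every page of results for us.
instances = oci.pagination.list_call_get_all_results(
    compute.list_instances, compartment_id
).data

# Broad strokes first: how many running instances of each shape are there?
shapes = Counter(i.shape for i in instances if i.lifecycle_state == "RUNNING")
for shape, count in shapes.most_common():
    print(f"{shape}: {count}")

# Then drill down on a single object once the broad picture makes sense.
if instances:
    first = instances[0]
    print(first.display_name, first.availability_domain, first.time_created)
```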

From documentation to source code: Although documentation is helpful, it will never be as helpful as exploring the source code. In my experience, an hour spent in the source code is worth around five hours spent in the documentation. Watching a YouTube video may be easier than reading a book on the same subject, but the depth acquired and the mental training are far greater with the book; an SDK, or any program, works the same way. Reading the documentation is good and it helps, but nothing sheds the same light as going through the source code.

The MVP: Since The Lean Startup hit the bookstores, the concept of the MVP has gained significant popularity. I am referring to the minimum viable product: a product that contains only the basic features and is not meant to cover every case, just the standard ones. The concept is well stated in The Lean Startup and, though it is often associated with Agile, it is an older idea.

Although it would be ideal to ship a product with a perfect set of functionalities from version 1.0.0, that is often, if not always, utopian. There is no perfect set of functionalities, and waiting to launch can lead to catastrophic failure. In IT, it is often far better to have a minimal product with a small client base that desperately wants it than a large base that is relatively indifferent – if they have it they will use it; if not, they can live without it.

Conclusion

At the end of the day this can be simplified into six points:

  • APIs are a significant advantage for automation and for solutions, such as HA or DR, that are sometimes not provided by the cloud platform
  • Pareto applies everywhere – APIs are not only for big applications; a simple script can save several hours of weekly toil
  • Start with broad strokes and slowly go into the details
  • Do not be afraid to examine the source code – it is often a gold mine of knowledge
  • Start with a small minimal version, instead of several features that may or may not be used
  • Look for hidden treasures in the cloud platform that are not easily spotted through dashboards and UX

Happy cloud automation – and until next time.

Editor’s note: You can read more of Nazareno’s articles here.

Why GDPR creates a “vicious circle” for marketers


Rene Millman

14 Jun, 2018

New data protection rules will frustrate consumers who demand personalised experiences but are wary of handing over their data; however, organisations that prove trustworthy stand to benefit, according to experts.

The General Data Protection Regulation (GDPR) came into force on 25 May and gives people more control over what personal data organisations can collect, allowing them to move it to other companies or demand organisations delete it altogether.

It also requires companies to be more transparent about how they use people’s personal information and gets rid of passive opt-outs some organisations relied on to obtain customer consent: now people must actively agree to their data being collected and processed.

As a result of that, and with GDPR making people more aware of the value of their data, marketers’ jobs are about to get harder, according to The Content Advisory’s privacy lead, Tim Walters, speaking at Aprimo’s recent Sync Europe conference.

“I am convinced that GDPR will rather significantly reduce the amount of first party and third-party data that marketing teams have access to,” he said.

Walters said data is the fuel for marketing efforts, and pointed to a recent Accenture report identifying a more systemic, structural problem that is choking off that fuel supply for customer management.

But he pointed to a “vicious circle” being created by GDPR, where customers want “hyper-relevant” experiences when shopping online, but are very reluctant to hand over their personal data that would inform those relevant experiences.

“Consumers will punish brands that do not provide … relevant experiences by abandoning them for other providers,” he said. “Not because they necessarily know that another provider can provide that experience, but because [they] hope that they can.”

On the other hand, GDPR and the high-profile Cambridge Analytica scandal, in which millions of Facebook users’ profile data was harvested to allegedly influence US voters in the 2016 presidential election, have increased public awareness of the risks of sharing data.

“The fact that they don’t know what’s going on with the data that they surrender … means that they are reluctant to provide that personal data, which is precisely what is necessary to create the kinds of experiences that they demand,” said Walters, adding: “That’s the vicious circle.”

But GDPR provides an opportunity for companies that recognise people are in control of their personal data – creating that trust breaks the vicious circle.

“Every company in the world, whether they are subject to GDPR or not, should be looking for some kind of framework or template or guidebook to show them how to go about putting consumers in control of their data,” Walters concluded.

“That is the only way to make progress. That guidebook or template is exactly what the GDPR is.”

Edmund Breault, head of marketing at Aprimo, told IT Pro that GDPR is absolutely going to help marketing organisations by putting customers at the heart of their efforts.

“While the efforts in the short term on marketing organisations to become GDPR-compliant have added burden, GDPR fundamentally makes customer-centricity a ‘legal’ requirement,” he said.

“We are now in a trust economy and marketers need to provide capabilities to allow consumers to stay in control of their data.”

Picture: Bigstock

Microsoft pushes Windows 10’s Fluent Design into Office


Roland Moore-Colyer

14 Jun, 2018

Microsoft is bringing its Fluent Design language over from Windows 10 and adding it into Office, with the goal of simplifying the user interface of the suite’s productivity apps.

The likes of Word, PowerPoint, Excel and Outlook are all set to receive a simplified ribbon that is smaller and contains icons that Microsoft says will be easier to use.

New animations, icons and subtle colour changes have been added as part of the shift to Fluent Design, thereby giving Office a lick of digital paint to de-clutter and modernise its user interface.

The web version of Office is also being built on a new platform, which Microsoft noted will enable it to run faster than before.

Jared Spataro, corporate vice president for Office and Windows marketing, explained that the move to Fluent Design was centred around “three Cs”: customers, context, and control.

The idea behind these three areas was to research and analyse how Redmond’s customers use the Office suite and to drive changes focused on them, with designs that understand the context of what users want to do and are working on in Office apps, and that give them control over how they interact with those apps.

“These updates are exclusive to Office.com and Office 365 – the always up-to-date versions of our apps and services. But they won’t happen all at once. Instead, over the next several months we will deploy new designs to select customers in stages and carefully test and learn. We’ll move them into production only after they’ve made it through rigorous rounds of validation and refinement,” noted Spataro.

The changes will be rather subtle at the start; alongside the new ribbon in the apps and icon and colour changes, Microsoft will boost the search function in Office, introducing “zero query search” which will serve up smart search recommendations based on the user’s interactions with the Office apps, thanks to the use of Microsoft Graph and the company’s machine learning technology.

Over the coming months, Fluent Design will get pushed into the Office suite for both desktop and web use, starting with Office.com then coming to Outlook for Windows in July and Outlook for Mac in August.

Image credit: Bigstock

IBM expands cloud availability with 18 new zones


Joe Curtis

14 Jun, 2018

IBM has launched 18 new availability zones for its cloud, introducing more datacentres to the UK, North America, Europe and Asia-Pacific.

The expansion will mean IBM Cloud operates in 78 locations, with the new datacentres opening in Germany, the UK, Washington, DC, Dallas, Tokyo and Sydney. The zones are isolated locations within a cloud region, designed to improve the capacity, availability, redundancy, and fault tolerance of IBM Cloud as a whole.

The tech giant’s customers will also be able to use IBM Cloud Kubernetes Service to deploy multizone container clusters across different zones within a region, something that allows containerised software to offer high availability.

“The world’s biggest companies work with IBM to migrate them to the cloud because we know their technology and unique business needs as they bridge their past with the future,” said David Kenny, senior vice president of IBM Watson and the cloud platform.

“Our continued cloud investment and growing client roster reflect that companies are increasingly seeking hybrid cloud environments that offer cutting-edge tools including AI, analytics, IoT and blockchain to maximise their benefits.”

Pointing to its hybrid cloud’s popularity with enterprises, IBM also revealed that ExxonMobil, Bausch + Lomb and Australian bank Westpac are migrating central workloads to its cloud.

ExxonMobil is using IBM Cloud to underpin a mobile app for motorists developed by IBM Services, eye health firm Bausch + Lomb is using IBM’s cloud with its Stellaris Elite cataract surgical system, and Westpac now deploys applications and customer products on IBM’s cloud.

IBM is also positioning its cloud as the platform to underpin smart building and IoT innovations, as a place to analyse the vast quantity of data such technologies produce.

“Buildings have long mimicked living organisms — plumbing circulates through the walls, wires innervate every room, and concrete and beams provide skeletal support — but until recently, buildings lacked the most critical body part: a brain,” said Bret Greenstein, global VP of Watson IoT.

“The IBM Cloud is the cognitive centre that enables buildings we live and work in to serve our needs in new and unprecedented ways.”

It is supporting elevator manufacturer Kone in analysing the movement of people inside buildings, and up lifts and escalators, to better manage that flow, while UK-based Chameleon Technology is using IBM’s Watson Assistant to allow people to speak to their smart energy meters.

Parallels Mac Management 7 Feature Focus: macOS Imaging via USB Boot Loader

Guest blog post by Timofey Furyaev, Project Manager, Parallels, Inc. The latest release of Parallels® Mac Management for Microsoft® SCCM brings several highly demanded features. The ability to boot from a USB drive during operating system deployment is one of them. Network OS deployment with task sequence support was […]


The business buyer’s guide to remote support software


Dave Mitchell

13 Jun, 2018

Whatever size your business is, your support department needs to be responsive. Resolving problems quickly keeps productivity losses to a minimum. Sadly, most support teams are overworked and understaffed. Few companies can afford to have technicians rushing out to examine users’ systems in person every time an issue is reported.

That applies all the more for staff based in branch or remote offices – or, as is increasingly common, at home. A lengthy road trip is a highly inefficient use of resources, and anyone who’s ever tried to troubleshoot over the telephone will tell you what a frustrating experience this can be.

Happily, there’s a solution: remote support software, which lets technicians “take over” affected systems from afar, and investigate and fix problems without leaving the comfort of their own desk.

What’s wrong with free software?

The first question you might ask is whether you need to spend money on remote support software at all.

It’s true that there are a great many free remote access products available, including the Quick Assist tool that’s built into Windows 10. This allows anyone with a Microsoft account to take full control of a remote computer by simply exchanging a six-digit PIN with a remote user, and it can be very handy on the odd occasion when you need to help out a friend or family member.

The NetOp Remote Control agent provides a plethora of useful support tools

For a business environment, however, it’s a non-starter: these free tools can punch a hole clean through your firewall, and are virtually impossible to manage and audit.

Premium apps, by contrast, are typically designed to be highly secure, with features such as AES 256-bit encrypted connections and endpoint password protection. They can also often enforce access permissions for support staff, allowing you to decide what level of control is permitted for each one. That could be useful if your business offers multiple levels of support: you can allow first-line responders only to passively view a client’s screen, while reserving full remote control for more senior personnel.

On-premises or cloud?

Some remote-support systems are hosted in the cloud by their vendors, while others run inside your company network and are managed by your own IT department.

The on-premises option is most suitable if you need total control over what can be accessed and by whom, and it’s easier to set up than you might fear.

DameWare can use RDP for quick client connections and simple remote control

The only potential gotcha arises when it comes to accessing systems outside the local network. Some packages use a proprietary gateway that links multiple sites together over encrypted links – but check whether this is included as standard, as several only offer it as an option.

If you want to support a range of devices that are spread across multiple locations then a cloud-hosted support solution might be a better choice. These platforms make it easy to access all clients securely from a single web portal, no matter where in the world they’re located.

NetSupport stores hardware and software inventories on its host system

For the best of both worlds, consider a hybrid solution that teams up an on-premises console with a web portal, allowing you to support local systems via a fast LAN connection and use the internet to access remote ones.

Undercover agents

Remote support normally requires a small agent to be installed on client systems, to listen for incoming connections and allow technicians to access the machine. Some systems can provide various functions using Windows’ built-in remote support features, but the key features – such as remote control and file transfer – typically require a proprietary agent.

On-premises solutions typically include tools to automatically push the agent software out onto multiple systems, so you can ensure that all clients are accessible. Alternatively, you can go for an on-demand support approach, where the agent is only loaded for the duration of the session, and automatically removed afterwards. Cloud-hosted solutions often handle this by allowing technicians to send connection requests to the client, possibly by email, who must then allow access.

This ensures that no-one can connect to your client PCs without express authorisation – reassuring, perhaps, if your users are dealing with confidential information. However, it also means that you can’t access remote systems when the user isn’t present. If this is an issue, look for cloud solutions that include an unattended agent which can be loaded permanently on selected systems.

More features

Alongside standard remote-control services, many support products offer a range of useful extra tools, such as a file transfer extension that enables simple drag-and-drop copies between technician and client, facilities for text, audio and video chat between the user and the support agent, Registry editors and session recording.

Hardware and software inventory tools can be very useful too, as they enable technicians to check what’s installed on the user’s PC prior to starting a support session. On-premises solutions can use the host system to store client inventories, for enhanced efficiency and security.

Many products are able to connect to Mac clients as well as Windows systems, although the experience isn’t always as slick. Platforms tend to require the macOS agent to be manually deployed, and advanced features are sometimes unavailable.

And when it comes to iOS devices such as iPads and iPhones, remote control isn’t possible at all, thanks to the operating system’s strict security model. However, many vendors offer free iOS and Android apps that allow you to use your mobile to connect to and control client systems.

Even if remote-support software can’t solve every problem, it can give a huge boost to the efficiency of your IT helpdesk team – and that, in turn, will boost business productivity.

Image: Shutterstock