Whether it’s pocket-sized mobile communicators, cars that can drive themselves or a global information-sharing network, scientists and researchers have a history of turning the marvels of technology dreamed up by science fiction writers into reality – and the crowning achievement of that endeavour is the creation of artificial intelligence.
Although we’re still some way from being served by self-aware robot butlers that can reliably pass the Turing test, AI technology has progressed immeasurably in the last decade alone. AI has moved from being the sole province of research projects working with giant supercomputers to something that all of us carry around in our pockets, and cloud computing has been a huge part of that move from research to reality.
The most fundamental change came when public cloud offerings like Amazon Web Services and Google Cloud Platform became widely available. Developing AI with methods such as deep learning and neural networks requires a considerable amount of compute power. Once it became possible to rent as many servers as you needed from a cloud provider, tasks that were once the preserve of universities and research labs suddenly became accessible to everyone.
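To make that on-demand model concrete, here’s a minimal sketch using boto3, the AWS SDK for Python. The AMI ID is a placeholder and the instance type is illustrative rather than a recommendation; in practice you’d terminate the instance only once your training job had finished.

```python
# A minimal sketch of renting compute on demand with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single GPU-equipped instance for a training job.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder deep-learning AMI
    InstanceType="p3.2xlarge",        # GPU instance type suited to training
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched training instance {instance_id}")

# Shut the instance down when the job is done, paying only for time used.
ec2.terminate_instances(InstanceIds=[instance_id])
```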
Moreover, these servers take advantage of best-in-class hardware from Intel, featuring technical developments specifically designed to enable AI, such as high-performance Xeon Scalable processors and low-latency Optane memory. On top of this, many cloud platform providers have, in recent years, started to cater specifically to machine learning and AI development, offering servers and services tailored to make training deep-learning models quicker and easier than ever.
As these barriers to entry come down, companies and hobbyists around the world have started experimenting with machine learning and AI, exploring the possibilities and pushing the boundaries of what it can do. Much of this research has been shared with the wider community under open source licences. A key example of this is TensorFlow™, a machine learning library developed internally by Google and shared freely with the rest of the world.
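To show just how accessible that tooling now is, here’s a minimal TensorFlow sketch: a small neural network trained on synthetic stand-in data. The shapes and labels are arbitrary, purely for illustration.

```python
# A minimal TensorFlow/Keras sketch trained on random stand-in data.
import numpy as np
import tensorflow as tf

# Toy data: 1,000 samples with 20 features and a binary label.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

# A small feed-forward network built with the Keras API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
```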
Alongside the compute power to train deep-learning models, the cloud has also provided the datasets on which to train them. The development of AI has gone hand in hand with the big data boom, as companies gather and store exponentially more data for analytical purposes. A side effect of this is that there are now huge corpora of data that can be fed into machine learning models to train them on tasks like pattern recognition, clustering and regression.
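One of those tasks, clustering, can be sketched in a few lines with scikit-learn. The data here is synthetic, standing in for the kind of large corpus a company might actually hold.

```python
# A minimal clustering sketch: KMeans groups unlabelled points into clusters.
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for a real corpus: 500 points around two centres.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(250, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(250, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # learned cluster centres
print(kmeans.labels_[:10])       # cluster assignment of the first 10 points
```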
All of this makes it much easier to improve and develop AI technology, but there’s one key reason it’s now a legitimate business tool rather than simply a technical endeavour: the ease of consumption that cloud models offer. It’s far easier for customers and end users to consume AI tools running in the cloud as part of a SaaS application than to run traditional on-premises software.
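From the customer’s side, that consumption model can be as simple as a single HTTP call. The endpoint and response fields below are hypothetical, purely to show the shape of the interaction; the heavy lifting all happens in the vendor’s cloud.

```python
# A minimal sketch of the SaaS consumption model: the client just calls
# a hosted API. The endpoint and response format here are hypothetical.
import requests

response = requests.post(
    "https://api.example-ai-vendor.com/v1/sentiment",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"text": "The new dashboard is fantastic!"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. {"sentiment": "positive", "score": 0.97}
```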
None of the processing is done locally, so there are no hardware requirements, and because the vendor is responsible for maintaining the AI on a day-to-day basis, there’s no need to hire machine learning or AI specialists. With no extra effort or investment required, companies are becoming more and more comfortable with the idea of integrating AI processes into their day-to-day workflows.
The general public has also grown increasingly familiar with AI technology thanks to the growing prevalence of AI-powered digital assistants like Siri, Alexa and Cortana. These services have helped acclimatise people to working with AI, as well as opening their eyes to the benefits offered by the technology.
These factors have made developing commercial AI more viable, resulting in an explosion of AI-enabled tools and services, many of which have been snapped up by cloud giants like Google and Salesforce and integrated into their product portfolios. Many cloud storage providers, for instance, augment their search capabilities with machine vision algorithms that accurately identify objects in photographs or text in documents.
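As a sketch of how that kind of image labelling works in practice, here’s a minimal example using Google’s Cloud Vision client library for Python (google-cloud-vision), assuming credentials are already configured via GOOGLE_APPLICATION_CREDENTIALS.

```python
# A minimal sketch of cloud-based image labelling with google-cloud-vision.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the hosted model which objects appear in the photograph.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```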
AI is also increasingly being used by companies as an initial point of contact for customer service, with chatbots handling Tier One support queries and sales enquiries. A far cry from long-established automated telephone menus, these programs are intelligent and responsive, and are becoming increasingly common.
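The triage logic behind such a bot can be sketched very simply. Real chatbots use trained intent classifiers rather than the deliberately simplified keyword matching below, but it shows the basic shape of Tier One routing; the intents and keywords are invented for illustration.

```python
# A deliberately simplified sketch of Tier One query triage.
INTENT_KEYWORDS = {
    "password_reset": ["password", "locked out", "can't log in"],
    "billing": ["invoice", "charge", "refund"],
    "sales": ["pricing", "demo", "upgrade"],
}

def route_query(message: str) -> str:
    """Return the intent whose keywords appear in the message, if any."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "escalate_to_human"  # anything unrecognised goes to Tier Two

print(route_query("I'm locked out of my account"))  # password_reset
print(route_query("My order arrived damaged"))      # escalate_to_human
```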
It’s not just public clouds that have spurred AI advancements; private and hybrid cloud deployments have also seen a great deal of change. Banking is one area where AI can have a huge impact in terms of analysing vast quantities of data very quickly, but for regulatory reasons, financial firms often can’t – or won’t – use public cloud providers. Instead, these institutions use private clouds to run custom-built or specially adapted machine learning algorithms to sort through their data.
Intel’s advancements in processor technology have brought cloud-scale computing power within the reach of companies operating their own private cloud, meaning that you no longer need a sizeable data centre to run machine learning applications. Instead, you can run AI tasks on as little as one rack, depending on the size of the deployment. This enables you to keep total control of your data and infrastructure, whilst still taking advantage of cloud-style consumption and delivery models.
In the comparatively short time that cloud computing has been a mainstream phenomenon, AI has gone from being the preserve of academics to a day-to-day reality for businesses around the world, used to perform diverse tasks ranging from data analysis to customer service. Machine learning applications are now being developed, deployed and delivered via cloud platforms, empowered and enabled by Intel’s next-generation data centre technologies. Whether you’re looking to run your AI applications on a public, private, virtual private or hybrid cloud, Intel is making your AI smarter, stronger and faster than ever before.