All posts by Keri Allan

Data visibility: The biggest problem with public clouds


Keri Allan

14 May, 2019

Use of public cloud continues to grow. In fact, 84% of businesses had placed additional workloads into the public cloud in 2018, according to a recent report by Dimension Research. Almost a quarter of those (21%) reported that their increase in public cloud workloads was significant.

However, while respondents were almost unanimous (99%) in their belief that cloud visibility is vital to operational control, only 20% of respondents said they were able to access the data they need to monitor public clouds accurately.

“If there’s any part of your business – including your network – which you can’t see, then you can’t determine how it’s performing or if it is exposing your business to risks such as poor user experience or security compromise,” points out Scott Register, vice president of product management at Ixia, which commissioned the report.

This sounds like a major issue and yet surprisingly, it’s nothing new. Tony Lock, distinguished analyst and director of engagement at Freeform Dynamics, has been reporting on visibility issues for over five years, and not just regarding public cloud.

“Believe it or not, despite people having had IT monitoring technology almost since IT began, we still don’t have good visibility in a lot of systems,” he tells us. “Now we’re getting so much more data thrown at us, visibility is even more of a challenge – just trying to work out what’s important through all of the noise.”

He adds that for many years public cloud providers have been slow to improve their services and make it easier for organisations to see what’s happening, largely because the providers handled everything themselves.

“To a degree, you can understand why [providers] didn’t focus on monitoring to begin with, as they’ve got their own internal monitoring systems and they were looking after everything. But if a customer is going to use them for years and years then they want to see what’s in there, how it’s being used and if it’s secure.”

The cost of zero visibility

A lack of visibility in the public cloud is a business risk in terms of security, compliance and governance, but it can also affect business costs. For example, companies may be unaware that they’re paying for idle virtual machines unnecessarily.
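The idle-VM leak is easy to put numbers on. A back-of-the-envelope sketch (the hourly rate and instance count below are hypothetical, not any provider’s real pricing):

```python
# Hypothetical illustration: estimating the monthly cost of idle VMs.
# The rate and instance count are invented for the example.
HOURS_PER_MONTH = 730  # average hours in a month

def idle_vm_cost(idle_vms: int, hourly_rate: float) -> float:
    """Monthly spend on VMs that are running but doing no work."""
    return idle_vms * hourly_rate * HOURS_PER_MONTH

# e.g. a dozen forgotten instances at $0.10/hour
print(round(idle_vm_cost(12, 0.10), 2))  # roughly $876 a month, unnoticed
```

Even at modest rates, a handful of forgotten instances adds up to four figures a year – precisely the kind of spend that stays invisible without monitoring.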

Then there’s performance. Almost half of those that responded to Ixia’s survey stated that a lack of visibility has led to application performance issues. These blind spots hide clues key to identifying the root cause of a performance issue, and can also lead to inaccurate fixes.

Another issue relates to legal requirements and data protection. With a lack of visibility, some businesses may not be aware that they have customer information in the public cloud, which is a problem when “the local regulations and laws state it should not be stored outside of a company’s domain”, highlights Lock.

Then there are the complexities around data protection and where the liability sits should a data breach occur.

“Often a daisy chain of different companies is involved in cloud storage, with standard terms and conditions of business, which exclude liability,” explains BCS Internet Specialist Group committee member, Peter Lewinton. “This can leave the organisation that collected the data [being] liable for the failure of a key supplier somewhere in the chain – sometimes without understanding that this is the position. This applies to all forms of cloud storage, but there’s less control with the public cloud.”

Understandably, security continues to be a big concern for enterprises. The majority (87%) of those questioned by Ixia said they’re concerned that their lack of visibility obscures security threats, but it’s also worth noting that general security concerns regarding public cloud still apply.

What’s the solution?

Lock believes that things are changing and vendors are beginning to listen to the concerns of customers. Vendors have started to make more APIs available and several third-party vendors are also creating software that can run inside virtualised environments to feed back more information to customers. “This move is partly down to customer pressure and partly down to market maturity,” he notes.

Ixia’s Scott Register recommends either a physical or virtual network tap that effectively mirrors traffic on a network segment or physical interface to a downstream device for monitoring.

“These taps are often interconnected with specialised monitoring gear such as network packet brokers, which can provide advanced processing, such as aggregation, decryption, filtering and granular access controls. Once the relevant packets are available, hundreds of vendors offer specialised tools that use the packet data for application or network performance monitoring as well as security analytics.”
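The filtering step Register describes can be pictured in a few lines. The sketch below stands in for a packet broker’s filter rule, with invented Python dicts in place of real captured packets (field names are made up for illustration):

```python
# Toy illustration of packet-broker-style filtering: keep only packets
# matching a monitoring rule (here, HTTPS traffic to a given subnet).
# The packet dicts are invented, not a real capture format.
from ipaddress import ip_address, ip_network

def filter_packets(packets, dst_net="10.0.0.0/24", dst_port=443):
    net = ip_network(dst_net)
    return [p for p in packets
            if ip_address(p["dst_ip"]) in net and p["dst_port"] == dst_port]

packets = [
    {"dst_ip": "10.0.0.5", "dst_port": 443},   # matches the rule
    {"dst_ip": "192.168.1.9", "dst_port": 443},  # wrong subnet
    {"dst_ip": "10.0.0.7", "dst_port": 80},      # wrong port
]
print(filter_packets(packets))  # only the first packet survives
```

A real packet broker does this at line rate in hardware, plus aggregation and decryption, but the principle – forward only the traffic a downstream tool cares about – is the same.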

Are vendors really to blame?

Although many businesses suffer with poor public cloud visibility, Owain Williams, technical lead at Vouchercloud, believes customers are too quick to blame the provider. He argues that there are many reliable vendors already providing the necessary access tools and that a lack of visibility is often down to the customer.

“This is my experience. As such, it’s often entirely solvable from the business side. The main providers already give you the tools you need. Businesses can log every single byte going in and out if they wish – new folders, permission changes, alerts; all the bells and whistles. If the tools themselves are inefficient, then businesses need to re-evaluate their cloud provider.”

Instead, he believes that many of the visibility problems that businesses encounter can be traced back to those managing infrastructure – employees that may be in need of extra training and support.

“Better education for people – those charged with provisioning the infrastructure – is a strong first port of call,” he argues. “It’s about ensuring the businesses and individuals have the right training and experience to make the most of their public cloud service. The tools already exist to assure visibility is as robust as possible – it’s provided by these large public cloud organisations. Invariably, it’s a case of properly identifying and utilising these tools.”
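Williams’ point that businesses “can log every single byte going in and out” amounts to watching the audit trail for events of interest. A toy sketch, using an invented log shape (real providers’ audit logs are far richer):

```python
# Hypothetical audit-log entries; real cloud audit logs use much richer
# schemas. This shape is invented purely for illustration.
import json

LOG = """
[{"event": "PutObject", "user": "alice"},
 {"event": "SetBucketAcl", "user": "bob"},
 {"event": "GetObject", "user": "alice"}]
"""

WATCHED = {"SetBucketAcl", "DeleteBucket"}  # permission-change events

def alerts(raw):
    """Return the log entries that should trigger an alert."""
    return [e for e in json.loads(raw) if e["event"] in WATCHED]

print(alerts(LOG))  # flags bob's permission change
```

The visibility gap, on this view, is rarely that the events aren’t recorded – it’s that nobody has written the rule that watches for them.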

2019’s highest-paying IT certifications


Keri Allan

5 Apr, 2019

In a competitive talent market such as IT, obtaining a certification is a sure way to verify your expertise, demonstrate your knowledge quickly to others, and ultimately make job hunting a far smoother process. Recruiters look for credentials to back up details provided on an applicant’s CV, and many companies request certain types of certification in order for an applicant to even be considered for a role.

According to training provider Global Knowledge, 89% of IT professionals globally hold at least one certification. It recently published its list of the 15 top-paying IT certifications in 2019, showing that employers are focusing on specific areas – in particular, cloud computing, cyber security, networking and project management. In fact, cloud and project management dominated the top five spots.

Global Knowledge 2019 report:

No. Certification Avg. salary (in $)
1. Google Certified Professional Cloud Architect 139,529
2. PMP – Project Management Professional 135,798
3. Certified ScrumMaster 135,441
4. AWS Certified Solutions Architect (Associate) 132,840
5. AWS Certified Developer (Associate) 130,369
6. Microsoft Certified Solutions Expert – Server Infrastructure 121,288
7. ITIL Foundation 120,566
8. Certified Information Security Manager 118,412
9. Certified in Risk and Information Systems Control 117,395
10. Certified Information Systems Security Professional 116,900
11. Certified Ethical Hacker 116,306
12. Citrix Certified Associate – Virtualisation 113,442
13. CompTIA Security+ 110,321
14. CompTIA Network+ 107,143
15. Cisco Certified Network Prof. Routing and Switching 106,957

Although the figures represent the US market, we can see that Google’s own Cloud Architect certification is now the best qualification to pursue in terms of average salary, closely followed by project management qualifications and then the AWS associate-level certifications.

“The two leading areas are cyber security and cloud computing, followed by virtualisation, network and wireless LANs,” notes Zane Schweer, Global Knowledge’s director of marketing communications. “Up and coming certifications focus on AI, cognitive computing, machine learning, IoT, mobility and end-point management.”

Cloud comes out on top

“Cloud computing is paramount to every aspect of modern business,” explains Deshini Newman, managing director EMEA of non-profit organisation (ISC)2. “It’s reflective of the highly agile and cost-effective way that businesses need to work now, and so skilled professionals need to demonstrate that they are proficient in the same platforms, methodologies and approaches towards development, maintenance, detection and implementation.”

Jisc, a non-profit which specialises in further and higher education technology solutions, has joined many other organisations in adopting a cloud-first approach to IT, and so relies heavily on services like Amazon Web Services (AWS) and Microsoft Azure.

“Certified training in either or both of these services is important for a variety of roles,” explains Peter Kent, head of IT governance and communications at Jisc, “either to give the detailed technical know-how to operate them or simply to demonstrate an understanding of how they fit into our infrastructure landscape.”

“Accompanying these, related networking and server certifications such as Cisco Certified Network Associate (CCNA) and Microsoft Certified Solutions Expert (MCSE) are important as many cloud infrastructures still need to work with remaining or hybrid on-premise infrastructures,” he notes.

Security certifications are also high on the most-wanted list, but they are required across a variety of different platforms and disciplines. One of the growth areas (ISC)2 has seen is in cybersecurity certifications in relation to the cloud. “This is something that is reflected by the positioning of the cloud within the Global Knowledge top 15,” Newman points out.

Aside from technical training, ITIL is still considered a key certification as a way of benchmarking an individual’s understanding of the infrastructure and process framework that IT teams have in place.

“But with ITIL v4 just around the corner I’d recommend holding off any training until v4 courses are widely available,” advises Kent.

And it’s not just about the accreditation – it can often also be about the company behind the certification itself. This is part of what makes the most desirable certifications desirable – the credibility and support of the issuing bodies.

The benefits of certification

Global Knowledge’s report highlighted that businesses believe having certified IT professionals on staff offers a number of benefits – most importantly, helping them meet client requirements, close skills gaps and solve technical issues more quickly.

This is great for the company, but what do you gain as an individual? Well, aside from being in higher demand and the ability to perform a job faster, the main answer is a larger paycheque.

“In North America, it’s roughly a 15% increase in salary, while in EMEA it’s 3%,” says Schweer. “We attribute the difference to cost of living and other circumstances unique to each country,” he notes.

Research by (ISC)2 and recruitment firm Mason Frank International also showed similar results.

“In our latest Salesforce salary survey 39% of respondents indicated that their salary had increased since becoming certified and those holding certifications at the rarer end of the spectrum are more likely to benefit from a pay increase,” says director Andy Mason.

“While the exact amount of money an individual can earn will fluctuate from sector to sector, it is clear that certifications in any sector can and do make a big financial difference,” agrees Newman. “That’s on top of setting individuals apart at the top of their profession.”

Does certification create an ‘opportunity shortage’?

However, not everyone regards certifications as the be-all and end-all for recruiting the best possible staff. Some, such as Mango Solutions’ head of data engineering, Mark Sellors, believe they can often lock out candidates who might be perfect for a role.

“This can be troubling for a number of reasons,” he says. “In many cases certifications are worked out in an individual’s personal time. This means those with significant responsibilities outside of their existing job may not be in a position to do additional study, and that’s not to mention the cost of some of these certs.”

He adds that using certifications as a bar candidates must clear can also further reduce gender diversity within the IT space, as a past study by Hewlett Packard found that women are much less likely than men to apply for a job if they don’t meet all of the listed entry requirements.

It’s Sellors’ belief that the problem facing many hiring managers is not just one of talent, but one of opportunity.

“They’re not giving great candidates the opportunity to excel in these roles as they’ve latched on to the idea that talent can be proven with a certificate,” explains Sellors. “Certifications can be useful in certain circumstances – for example when trying to prove a certain degree of knowledge during a career switch, or moving from one technical area to another. They’re also a great way to quickly ramp up knowledge when your existing role shifts in a new direction.

“More often than not, however, they prove little beyond the candidate’s ability to cram for an exam. Deep technical knowledge comes from experience and there’s sadly no shortcut for that.”

2019 will be the year cloud-native becomes the new norm


Keri Allan

8 Jan, 2019

The vast majority of businesses see cloud as a critical component of their digital transformation strategy – some 68% of businesses already have cloud-based systems in place, or are in the process of implementing them, according to technology consultancy Amido.

But more specifically, businesses are recognising the benefits of cloud-native applications: software designed specifically to run on cloud infrastructure.

In 2018, the Cloud Native Computing Foundation (CNCF), a vendor-neutral home for cloud-native projects, saw its end-user community grow to over 50 members. This includes household names such as Uber, Airbnb, Netflix, Adidas, Spotify, Mastercard and Morgan Stanley.

Cloud-native applications offer hyperscale provisioning, resilience, high availability and responsiveness, all of which help businesses operate faster and with greater flexibility. It’s therefore no real surprise that many industry experts believe 2019 is the year cloud-native will become the ‘new normal’.

The benefits of cloud-native technology

“For CIOs, cloud-native is an enabler; a transformative technology,” says Amido’s chief technology officer (CTO) Simon Evan. “They’re using it to do things they can’t do on premise. Driving this are things like AI workloads, which benefit all sectors from finance through to healthcare and retail. You can free up staff from menial tasks, improve customer experience and benefit from predictive analytics,” he highlights.

“Taking a cloud-native approach means businesses can harness the real power of the cloud to their advantage as it offers them faster responses to the changing needs of the business and the market, ensures their technology portfolio is up to date and driving innovation and improves the customer experience while increasing ROI,” adds Puja Prabhakar, senior director, Applications and Infrastructure at consultancy firm Avanade UKI.

Cloud-native technologies can often become “boring” compared to emerging apps, according to CNCF CTO/COO Chris Aniszczyk, as the tech stabilises and matures over the years. However, he argues this shouldn’t be seen as a negative.

“Boring means organisations can focus on delivering business value, rather than spending time on making the technology usable,” he explains.

Experts advise businesses to embrace these ‘boring’ technologies in 2019, particularly the installation and configuration of platform- and container-as-a-service offerings such as Docker, OpenShift and Kubernetes.
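As a concrete flavour of what that configuration work looks like, a Kubernetes Deployment is described declaratively. The sketch below builds a minimal manifest in the standard apps/v1 Deployment shape as a plain Python dict; the names and container image are placeholders:

```python
# Minimal sketch of a Kubernetes Deployment manifest, built as a plain
# dict following the apps/v1 Deployment shape. Name and image are
# placeholders, not a real workload.
def deployment(name, image, replicas=3):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("web", "example/web:1.0")
print(manifest["kind"], manifest["spec"]["replicas"])
```

The “boring” virtue Aniszczyk describes is visible here: the platform takes this declaration and does the scheduling, restarting and scaling, so the team only maintains a few lines of intent.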

“I expect more traction for Kubernetes as more organisations use it for distributed applications across hybrid cloud infrastructure that includes public clouds, private clouds, multiple public clouds, public clouds with on-premise environments and combinations of them all,” says Jay Lyman, principal analyst, Cloud Native and DevOps at 451 Research.

He believes that more organisations will leverage containers and microservices for not only new cloud-native applications but also increasingly those built on traditional and legacy infrastructure.

The rise of serverless in 2019

Prabhakar adds that businesses should also consider how they’re designing their full stack and backend application engineering. Specifically, she believes engineering should be focused on creating applications inherently designed for development on the cloud, such as serverless frameworks, microservices frameworks, API integration frameworks, DevOps, data stores, and machine learning.

Other cloud-native technologies set to take a front-row seat in 2019 include commercialised service mesh offerings, which, according to CNCF’s Aniszczyk, are the next frontier in making service-to-service communication safer, faster and more reliable.

“Service meshes like Linkerd are ready to be used in production deployments and can help businesses scale applications without latency or downtime. They can also be used to help secure traffic between services and applications,” he points out.

Following an explosion of interest in 2018, serverless technologies also look set to pick up momentum in 2019.

“Serverless for enterprise is a huge trend,” says Liz Rice, chair of 2018’s CloudNativeCon and KubeCon events. “We’ll see lots of discussions on how and where enterprises can apply architectures based on serverless functions and perhaps a better understanding of the cultural/DevSecOps implications of serverless functions will emerge in 2019.”

“Serverless won’t be appropriate for all classes of application, and will co-exist alongside container architectures for some time to come,” she adds.
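At its simplest, a serverless function is just a handler the platform invokes once per event. The sketch below uses the common AWS Lambda-style handler(event, context) shape; the event fields themselves are invented for illustration:

```python
# Sketch of a serverless function in the common AWS Lambda handler
# shape: the platform calls handler(event, context) per invocation.
# The event fields here are invented for the example.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

print(handler({"name": "cloud"}))
```

There is no server to provision or patch in this model – which is the appeal – but, as Rice notes, it suits event-driven workloads rather than every class of application.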

Talent and security challenges remain

In December, a flaw in the Kubernetes API server allowing easy access to every machine in a cluster was quickly caught and resolved, making security another hot topic. The community came together to discuss how best to solve the security challenges facing the open source/cloud-native community, and a number of security-related initiatives have been announced to help organisations go beyond what is natively provided by the Kubernetes platform. “And as we go into 2019, I expect we’ll continue to see more efforts crop up,” Aniszczyk says.

In response to all these trends, businesses need to invest not just in technology, but also in acquiring new talent and retraining existing staff in cloud-native methodology and technology.

“Adoption of cloud-native technology will only be held back by the lack of skills in the market,” points out Ilja Summala, CTO of Nordcloud Group.

Lyman agrees that the lack of cloud-native expertise and experience is probably the biggest challenge facing the industry. “Few organisations can find large numbers of Kubernetes and other cloud-native experts and even if they could find them, it is an expensive proposition. This is why including and training existing staff in cloud-native initiatives as much as possible will be critical moving forward.”

He also recommends that talent focus on open source technology.

“End users have never been as participatory and influential as with Kubernetes,” he explains. “There is ample room to get involved with many open source software projects and Kubernetes Special Interest Groups (SIGs), and this is helping the community to focus more directly on the problems that companies are facing and the objectives they are trying to meet.”

However, there’s one other issue that looks set to take longer to resolve. That’s changing the culture of how we work, a big challenge for business that’s not going to be fixed overnight.

“The shift from monolithic/waterfall to agile/DevOps is more about process and organisational psychology than it is about which technologies to employ,” points out Mark Collier, COO of the OpenStack Foundation. “This has been talked about for several years and it’s not going away anytime soon. It’s the big problem that enterprises must address and it’s going to take years to get there as it’s a generational shift in philosophy.”

Academics: Full cloud is like Netflix, bursting is just boring old iPlayer


Keri Allan

12 Jul, 2018

It’s easy to see why cloud bursting – where an application is run in a private cloud or data centre and then ‘bursts’ into a public cloud when demand dictates – could appeal to research universities.

It can provide institutions with an escape valve when their in-house resources are fully committed, helping to potentially speed up research and save costs.
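The ‘escape valve’ idea reduces to a simple placement decision: fill local capacity first, and overflow the rest to public cloud. A toy sketch (the capacity figure is invented):

```python
# Toy cloud-bursting scheduler: run jobs on-premise while cluster
# capacity remains, and burst the overflow to public cloud.
# The capacity figure is invented for illustration.
LOCAL_CAPACITY = 4  # concurrent jobs the in-house cluster can take

def place_jobs(jobs):
    """Split a job list into (run locally, burst to public cloud)."""
    local = jobs[:LOCAL_CAPACITY]
    burst = jobs[LOCAL_CAPACITY:]  # overflow goes to the public cloud
    return local, burst

local, burst = place_jobs([f"job{i}" for i in range(6)])
print(len(local), len(burst))  # 4 local, 2 burst
```

Real schedulers weigh data locality, egress costs and queue priorities rather than a single threshold, which is exactly where the interoperability and pricing complications discussed below come in.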

In recent years adoption of cloud computing has been transforming research and education, and although change within academia can be slow, the latest UK Research & Innovation (UKRI) e-infrastructure report has shown a growing interest in community and public clouds.

“We also see that scientific computing teams at universities and research institutes are starting to look very seriously at virtualising their in-house compute clusters,” says Martin Hamilton, a member of UKRI’s Cloud for Research working group.

Although educational researchers tend to “thrash kit within an inch of its life”, Hamilton says there’s a “growing recognition that having the option of running a virtual machine (VM) image can make it easier for researchers to share and re-use code.”

However, there are divergent opinions within the research community as to how cloud resources are best deployed. While bursting remains a go-to choice for some, others remain hesitant or have avoided the approach entirely in favour of a fully fledged cloud deployment.

Cloud bursting advocates

Two of the world’s biggest champions of the cloud bursting approach are the University of Cambridge and, on the other side of the world, the National University of Singapore (NUS).

“NUS has a wide range of computing requirements, making it impractical for all resources and capacity requirements to be provided in-house,” says Tommy Hor, NUS’ chief information technology officer, speaking to Cloud Pro.

The National University of Singapore deploys cloud bursting to support its research projects

“Our researchers occasionally have ad-hoc service demands that require dedicated computing resources to speed up their work. We have started migrating our in-house pay-per-use service to the cloud, and this will give us greater financial agility and economies of scale.”

The University of Cambridge has gone as far as providing its own cloud bursting capabilities. Its Research Computing Services (RCS) operation has a dedicated private ‘public sector’ cloud designed specifically for scientific and technical computing.

“Researchers from across Cambridge University, plus UK universities and companies, use RCS for cloud bursting,” says Dr Paul Calleja, the university’s director of Research Computing. “Research undertaken includes large-scale genomic analysis for clinical diagnosis and simulations of jet engines.”

Cloud bursting challenges and limitations

But while cloud bursting has potential benefits, there are still problems to be ironed out. This includes interoperability issues between environments, pricing models and security.

“We recently saw a number of Docker images laden with malware removed from the public registry, opening backdoors onto users’ machines and running cryptocurrency mining processes,” says UKRI’s Martin Hamilton.

“Things like this take on an even greater significance when we are talking about compute jobs to calculate stresses on airframes, analyse CT images looking for tumours, or model the effect a new drug will have on the human body.”

For the University of Bristol, cloud bursting is seen as a highly restrictive approach to deployment, one that needlessly increases the complexity of a network.

“In my opinion cloud bursting limits the use of the cloud to being just an extension of a local on-premise compute cluster,” says Dr Christopher Woods, leader of the university’s Research Software Engineering group, which is fully in the cloud.

“It also means you get the worst of both worlds – you’re running both a cluster and a cloud, so have twice the complexity.”

He adds that, in his experience, bursting can introduce problems when it comes to moving data between on-premise and the cloud, and that the “up-front-investment ‘batch queue’ way of using a cluster” isn’t always compatible with the on-demand way of paying for cloud computing services.

A stepping-stone to cloud

Cloud providers and organisations like Jisc are looking to address some of these issues by negotiating data egress waivers and special pricing agreements for universities.

However, as Dr Woods notes, universities may struggle with a change of payment model.

“The biggest issue is the money side. Universities are terribly slow at moving money around so it’s difficult to work out how the money would make its way from a researcher’s grant to the provider.

“A big question is how do they go from CAPEX to OPEX? Maybe this is why cloud bursting can be a good stepping-stone, as it lets universities effectively turn cloud into a CAPEX investment that’s been prepaid for.

“It’s a way to dip their toes in the water and get their heads around new contracts and procurement models,” he says.

Woods considers cloud bursting a “sticking plaster solution” that will disappear as more organisations trust their data to cloud providers and the option becomes cheaper than on-premise.

“My feeling is that the cost of cloud will be competitive by 2020 and that most universities will be fully on cloud by the end of 2025,” he says.

The iPlayer of cloud deployment

Woods says that cloud bursting, by definition, only offers a slice of the flexibility that full cloud deployment brings, something he suggests can be compared to TV streaming services.

“You get to run interactive simulations, interactive data analysis and publish interactive papers that can be re-run and re-used by others. The best way to describe the difference is that the cloud is the ‘Netflix of simulation’, while on-premise is like watching the BBC following a TV schedule.

“Cloud bursting is like iPlayer – a hybrid mix of terrestrial TV and on-demand streaming that’s unsatisfactory compared to just binge-watching whatever you want on Netflix on demand.”

The importance of engineers

Research software engineers like Woods at the University of Bristol are a relatively new kind of academic, using their DevOps mindset and technical knowledge to support other researchers.

Hamilton believes that this new mindset is going to be essential for research in the years to come, helping “researchers get to grips with the tools available and develop their scientific computing applications.”

In Woods’ experience, cloud providers frequently only work with institutions that can support projects with in-house research software engineers.

“You need to have that skill set within the university to make it work,” says Woods. “Academics want to solve a genome – they have no interest in putting together the supercomputer that will do that. You really need that layer of person to lead the way.

“Those institutions that have people that understand software and hardware – and can bring the two together – will be the ones to prosper and take advantage of everything cloud offers,” he adds.


Businesses ‘should already be on their journey to UCaaS’


Keri Allan

19 Jun, 2018

The unified communication and collaboration (UCC) market is seeing a dramatic shift towards cloud-based solutions, with unified communications as a service (UCaaS) leading the way.

The global user base of cloud UCaaS has now surpassed 43 million, with new users estimated to grow at a compound annual growth rate (CAGR) of 23% from 2016 to 2023, according to analyst firm Frost & Sullivan.
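Compound annual growth compounds multiplicatively, so a quick sketch of projecting a base at a given CAGR (illustrative arithmetic only, not the analyst firm’s model):

```python
# Projecting a user base forward at a compound annual growth rate.
# Purely illustrative arithmetic: base * (1 + rate) ** years.
def project(base, cagr, years):
    return base * (1 + cagr) ** years

# e.g. 43 million users growing at 23% a year for five years
print(round(project(43_000_000, 0.23, 5) / 1e6, 1), "million")
```

At 23% a year the base nearly triples over five years, which gives a sense of why vendors are repositioning around the space.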

This move is due in part to growing confidence in cloud solutions and a better understanding of the benefits they can offer, but also to many customer premises equipment (CPE) assets nearing end of life.

Art Schoeller, vice president and principal analyst at Forrester Research, states that more than one in three IT professionals considering UCC will deploy it as a subscription service in their next upgrade cycle.

However, as communication systems have historically had long lifecycles of 10 years or more, with a heavy emphasis on ‘investment protection’, he notes that “this might mean that some [of our respondents] might not move to UCaaS for another five years or so”.

In the meantime, many existing CPE assets are being integrated with cloud during their sunset years, while digital transformation initiatives are also pushing IT departments to look more closely at moving to UCaaS.

All change

Elka Popova, digital transformation vice president at Frost & Sullivan, believes that customer confidence in UCaaS is also growing thanks to a recent swathe of mergers, acquisitions and restructuring projects by IP telephony and UCC providers.

“In 2016 and 2017 the industry was marked by significant merger and acquisition (M&A) and restructuring activity affecting key providers. Their repositioning and strategy realignment is likely to determine the industry’s evolution and growth trajectory over the next few years.

“M&As, bankruptcy protection, internal reorganisation, international expansion and solution repackaging will aim to improve industry health and boost customer confidence in service and company long-term viability.”

The benefits of UCaaS

A key reason why companies are turning to UCaaS is the fact it offers a wide variety of communications technology and collaboration applications and services.

“Organisations are turning to UCaaS to reduce operational costs, expand into new markets or regions, boost creativity and innovation and also improve sales and marketing effectiveness,” points out Rob Arnold, Frost & Sullivan’s connected work industry principal.

Businesses are also looking for something that’s ‘on-demand’, offering greater flexibility over the services they may have been used to in the past.

“Customers like the flexibility of a cloud service, in most cases with little or no upfront cost or hardware investment – depending on the service,” says Cathy Gerosa, head of regulatory affairs at the Federation of Communication Services (FCS).

“The other major benefit for business is the flexibility it gives with today’s workforce, enabling greater collaboration between both internal team members and external parties, regardless of where they’re located.”

Developing a transition plan – things to consider

But as the market shifts towards this subscription service, vendors must support organisations by developing a transition plan that protects their existing investments and offers minimal disruption to the business.

Benefits such as predictable billing, outsourced ownership and the move from CAPEX to OPEX are clear, but understandably, businesses still have concerns – particularly around security, visibility and a potential lack of control.

For many companies, the first step is often a hybrid implementation that spans on-premise and cloud.

“Some organisations still feel that they need to control their own communications, especially from a security and risk perspective,” notes Forrester’s Schoeller.

Plus, there are cons to balance the pros. New endpoint devices may be needed, even if the old system still works well, leading to not only extra spending but changes to the way staff work and how they interact with others.

“People used to maintaining older phone systems have to really shift their mindset, skill set and approach to the job. Plus, the move to UCaaS is very disruptive to the historical distribution channel for communications systems,” Schoeller adds.

The key lies in selecting the right provider and developing a strong partnership in which you can have faith that the technology and implementation are what you need.

“The UCaaS market is relatively new so businesses do need to understand who they are taking their services from,” advises Gerosa. “For example, are they financially stable but at the same time nimble in how services are delivered and developed?”

Vertical is the new black

UCaaS adoption looks set to grow at a steady pace, but analysts believe on-premise solutions will continue to play a role for at least the next decade.

While hybrid may be the first step many organisations take, providers look set to push forward with new technologies, capabilities and offerings, even if change is perhaps slow on the side of customers.

“The next phase in the industry’s evolution will be marked by the emergence of ‘productivity UC’, ‘IoT UC’ and ‘vertical UC’ – vertical is the ‘new black’,” says Popova.

“Tailored service bundles, industry certifications, integration with vertical-specific apps and partnerships with vertical experts will deliver superior value in targeted industries.”

Gerosa believes we’ll soon see an even wider selection of services being offered by providers, with greater emphasis being placed on platform-agnostic applications.

“How businesses consume these services will be interesting with some of the dominant players like Microsoft with their 365 suite developing further,” says Gerosa.

“As we see today with mobile phone apps, the potential in cloud just keeps growing so the ability to integrate these applications regardless of which supplier they are taken from will become more of a requirement.”

“If businesses have not already started their journey into the UCaaS world, we would certainly recommend they start now,” she adds.

Image: Shutterstock

Pushing cloud AI closer to the edge


Keri Allan

12 Apr, 2018

Cloud-based AI services continue to grow in popularity, thanks to their low cost, easy-to-use integration and potential to create complex services.

In the words of Daniel Hulme, senior research associate at UCL, “cloud-based solutions are cheaper, more flexible and more secure” than anything else on the market.

By 2020 it’s believed that as many as 60% of personal technology device vendors will be using third-party AI cloud services to enhance the features they offer in their products. However, we’re also likely to see a significant growth of cloud-based AI services in the business sector.

One of the biggest drivers of this has been the proliferation of virtual personal assistants (VPAs) in the consumer space, made popular by the development of smart speakers by the likes of Amazon and Google.

Users have quickly adopted the technology into their everyday lives, and businesses were quick to realise the potential locked away in these devices, particularly when it comes to delivering new products.

Drivers of cloud-based AI services

Amazon’s Alexa was the first personal assistant to achieve mass-market appeal

“It’s a confluence of factors,” says Philip Carnelley, AVP Enterprise Software Group at analyst firm IDC. “There is no doubt the consumer experience of using Alexa, Siri and Google Now has helped familiarise businesses with the power of AI.

“But there is also a lot of publicity around AI achievements, like DeepMind’s game-winning efforts – AlphaGo winning against the Go champion for example – or Microsoft’s breakthrough efforts in speech recognition.”

He adds that improvements to the underlying platforms, such as the greater availability of infrastructure-as-a-service (IaaS) and new developments in graphical processing units, are making the whole package more cost-effective.

Yet, it’s important to remember that despite there being so much activity in the sector, the technology is still in its infancy.

“AI is still very much a developing market,” says Alan Priestley, research director for technology and service partners at Gartner. “We’re in the very early stages. People are currently building and training AI models, or algorithms, to attempt to do what the human brain does, which is analyse natural content.”

The likes of Google, Amazon and Facebook are leading this early development precisely because they have so much untapped data at their disposal, he adds.

The role of the cloud

Vendors have helped drive AI concepts thanks to open source code

The cloud has become an integral part of this development, primarily because of the vast computing resources at a company’s disposal.

“The hyper-scale vendors have all invested heavily in this and are building application programming interfaces (APIs) to enable themselves – and others – to use services in the cloud that leverage AI capabilities,” says Priestley.

“By virtue of their huge amount of captive compute resource, data and software skill set, [these vendors have been] instrumental in turning some of the AI concepts into reality.”

This includes the development of a host of open source tools that the wider community uses today, such as TensorFlow and MXNet, while the large vendors’ services are frequently used to train AI models.

According to IDC, businesses are already seeing the value of deploying these cloud-based AI solutions. Although less than 10% of European companies use AI in operational systems today, three times that amount are currently experimenting with, piloting or planning AI usage – whether that be to improve sales and marketing, planning and scheduling, or general efficiency.

Benefits to business

Chatbots were an early AI hit within many businesses

“Businesses are seeing early implementations that show how AI-driven solutions, like chatbots, can improve the customer experience and thereby grow businesses – so others want to follow suit,” says Carnelley.

“Unsurprisingly, companies offering AI products and services are growing fast,” he points out.

Indeed, chatbots were one of the earliest AI-powered features to break into the enterprise sphere, and interest looks set to continue.

According to a report published this month by IT company Spiceworks, within the next 12 months, 40% of large businesses expect to implement one or more intelligent assistants or AI chatbots on company-owned devices. They will be joined by 25% of mid-sized companies and 27% of small businesses.

However, organisations are also looking more widely at the many ways AI solutions could help them.

The insurance industry, in particular, is looking at how AI can be used to help predict credit scores and how someone may respond to a premium.

“This is not just making a decision but interpreting the data,” says Priestley. “A lot of this wasn’t originally in digital form, but completed by hand. This has been scanned and stored but until recently it was impossible for computer systems to utilise this information. Now, with AI, technology can extract this data and use it to inform decisions.”

Another example he highlights is the medical sector, which is deploying AI-powered systems to help improve the process of capturing and analysing patient data.

“At the moment, MRI and CT scans are interpreted by a human, but there’s a lot of work underway to apply AI algorithms that improve the interpretation of these images, and diagnosis (via AI),” says Priestley.

Moving to the edge

Self-driving cars will need latency-free analytics

Given the sheer amount of computational power on hand, the development of AI services is almost exclusively taking place in the cloud but, looking forward, experts believe that many will, at least partially, move to the edge.

The latency associated with the cloud will soon become a problem, especially as more devices require intelligent services that are capable of analysing data and delivering information in real time.

“If I’m in a self-driving car it cannot wait to contact the cloud before making a decision on what to do,” says Priestley. “A lot of inferencing will take place in the cloud, but an increasingly large amount of AI deployment will take place in edge devices.

“They’ll still have a cloud connection, but the workload will be distributed between the two, with much of the initial work done at the edge. When the device itself can’t make a decision, it will connect to the ‘higher authority’ – in the form of the cloud – to look at the information and help it make a decision.”
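The hand-off Priestley describes can be sketched in a few lines of Python. This is a minimal illustration only, not a description of any vendor’s system: `edge_model`, `cloud_model`, their fixed return values and the confidence threshold are all hypothetical stand-ins for an on-device network and a remote API.

```python
# Hypothetical sketch of the edge-first pattern: run a lightweight local
# model and defer to the cloud "higher authority" only when the device
# is not confident enough to decide on its own.

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off, not a standard value

def edge_model(image):
    # Stand-in for a compact on-device model; returns (label, confidence).
    return ("pedestrian", 0.65)

def cloud_model(image):
    # Stand-in for a call to a larger, cloud-hosted model.
    return ("cyclist", 0.97)

def classify(image):
    label, confidence = edge_model(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "edge"   # decided locally, no round-trip latency
    # The edge model is unsure, so escalate to the cloud
    label, _ = cloud_model(image)
    return label, "cloud"

print(classify(None))  # -> ('cyclist', 'cloud')
```

The design choice the quote hints at is visible here: the latency-sensitive path stays on the device, and the network round trip is paid only for the minority of inputs the edge model cannot handle.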

Essentially, organisations will use the cloud for what it’s good at – scale, training models, developing APIs and storing data. Yet it’s clear that the era of cloud-only AI is coming to an end.

Image: Shutterstock