Blockchain development trends: C-suite buy-in, logistics and authentication opportunities

Many business leaders have a much better understanding of blockchain technology than just a couple of years ago. There's been a surge in R&D, both internally and in partnership with third parties, and a recognition that blockchain has the potential to be deployed in a variety of commercial use cases.

As the number of blockchain research projects increased, awareness among the pilot participants and elsewhere in their industries gained momentum. Now other companies are beginning to consider whether they, too, should seek to gain a competitive advantage from a proof-of-concept deployment.

Blockchain market development

According to the latest worldwide market study by Juniper Research, 65 percent of survey respondent enterprises with over 10,000 employees are considering or actively engaged in blockchain deployment. This marks a significant rise from 2017 when the corresponding figure was 54 percent.

Moreover, nearly a quarter of companies considering deploying blockchain had moved beyond proof-of-concept into trials and commercial rollouts, with dramatic diversification in use cases over the past year.

Only 15 percent of proposed deployments were now related to payments – compared with 34 percent last year – with significant interest in opportunities across diverse fields including logistics, authentication and smart contracts.

The study findings also identified savings and cost reductions across a range of verticals in areas such as compliance and fraud reduction, including forecast savings of more than $100 billion by 2030 in food exports.

The survey results revealed that nearly half of companies were considering using Ethereum as their blockchain, reflecting the fact that its token standardisation has enabled an ecosystem of distributed applications (dApps) to be built on its chain.

Furthermore, all the responding companies which had already invested over $100,000 in blockchain indicated they would be spending at least this amount again on the technology over the next 12 months.

According to the Juniper assessment, this demonstrates that initial C-suite feedback has been largely positive – sufficiently so for executives to decide to move to the next stage of integration. That said, three-quarters of respondents expect some disruption to internal or external systems.

Outlook for blockchain applications

"The findings illustrate the need for companies to engage in a prolonged period of parallel running new systems alongside the old, to resolve any issues that might arise," said James Moar, senior analyst at Juniper Research. Regardless, we should anticipate more blockchain application development in the future.

The survey participants also acknowledged IBM as the leading vendor for blockchain project planning and deployment, with the tech giant ranked first by 65 percent of respondents — that's nearly 10 times more than the second-place vendor, Microsoft (7 percent).

Why building successful SaaS delivery is easier than you think

Software as a service (SaaS) is becoming ubiquitous. It is estimated that enterprises currently use 16 SaaS applications on average, and one in seven of these businesses believes that more than 80% of their applications will be SaaS-based by 2020.

That assessment seems plausible: the current market for SaaS products and services is worth £54 billion, and it is expected to jump to over £85 billion by 2021.

Based on this upward trajectory, SaaS application providers clearly have a huge opportunity on offer. However, how can they maximise it? 

As ThousandEyes works with some of the top SaaS providers around the world, we’ve gleaned five key, operational ‘habits’ for successful delivery. Importantly, it is key to realise that successful delivery doesn’t just focus on your own company’s culture, or strategic direction, or even your operational knowledge. It also encompasses relationships with customers, vendors and other third-party providers.

Don’t point the finger so quickly

Every SaaS provider will face both application and network issues, regardless of how good their product is. When it comes to actually solving performance issues, however, the network tends to be singled out for blame first, meaning your network team must prove its innocence before your application team will deal with the problem. That kind of culture is a huge distraction from the issue at hand, and it stunts the team collaboration that resolves issues faster. Giving your teams a quick view of network health, with application context, helps define where the issue lies and makes problems far easier to deal with.
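One lightweight way to put the "network health with application context" idea into practice is to pair a network-layer probe with an application-layer probe and triage from both signals at once. The sketch below is purely illustrative – the thresholds and diagnosis labels are assumptions, not part of any particular monitoring product.

```python
# Illustrative triage: combine a network probe (TCP connect time) with an
# application probe (HTTP status and latency) so blame lands on evidence,
# not on the network team by default. Thresholds are arbitrary examples.

def classify(tcp_connect_ms, http_status, http_latency_ms):
    """Return a coarse diagnosis from one network probe plus one app probe."""
    if tcp_connect_ms is None:
        return "network: host unreachable"
    if tcp_connect_ms > 200:
        return "network: slow path to host"
    if http_status is None or http_status >= 500:
        return "application: server error"
    if http_latency_ms > 1000:
        return "application: slow response despite healthy network"
    return "healthy"
```

With both probes in hand, a 503 on a fast network path immediately points at the application tier, while a timed-out TCP connect exonerates it.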

Relationships with third parties need to change

A simple fact for the vast majority of service providers is that they can’t investigate, or even acknowledge, all the performance-related inquiries they receive each day, due to finite resources. This can make resolving issues quite a challenge for you as a SaaS provider, particularly if there’s no actual data to show where a fault lies.

A natural byproduct of this is that you can fall into a quite unproductive routine: you try to escalate problems, yet your service provider prefers to deny, deflect and defer.

Most of your third-party providers will not spend their time troubleshooting for you, particularly if they are responsible for fixing an issue – or, especially, if they may face a service level agreement (SLA) penalty.

Evidence in this scenario is critical. Forget mantras like ‘find and fix’; instead, you need to adopt an ‘evidence and escalate’ approach. This involves having a proven set of steps to collate evidence – even for services or networks you do not control – in order to successfully escalate an issue.
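As a sketch of what an ‘evidence and escalate’ workflow might look like, the snippet below collects per-hop loss and latency measurements and only raises an escalation when the evidence points at hops the provider controls. All names, hop labels and thresholds here are hypothetical.

```python
# Illustrative 'evidence and escalate' flow: gather measurements first,
# escalate only when loss concentrates on hops you do not control.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvidenceBundle:
    provider: str                      # hop-name prefix owned by the provider
    records: List[dict] = field(default_factory=list)

    def add(self, hop: str, loss_pct: float, latency_ms: float) -> None:
        self.records.append(
            {"hop": hop, "loss_pct": loss_pct, "latency_ms": latency_ms}
        )

    def implicates_provider(self, loss_threshold: float = 5.0) -> bool:
        # True only when significant loss appears on the provider's hops.
        external = [r for r in self.records if r["hop"].startswith(self.provider)]
        return any(r["loss_pct"] >= loss_threshold for r in external)

def escalation_ticket(bundle: EvidenceBundle) -> Optional[str]:
    """Build an escalation summary, or return None if evidence is lacking."""
    if not bundle.implicates_provider():
        return None
    worst = max(bundle.records, key=lambda r: r["loss_pct"])
    return (f"Escalating to {bundle.provider}: {worst['loss_pct']}% loss "
            f"at {worst['hop']} ({worst['latency_ms']} ms)")
```

The point of the sketch is the ordering: no ticket is raised until the collected records actually implicate the provider, which is exactly what blunts the deny-deflect-defer response.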

With this approach, you are no longer relying solely on your external providers. Instead, you’ve taken control of the process and the benefits are significant, including reducing the time it takes to resolve problems, while also having the knock-on effect of benefiting the user’s overall digital experience.

Deal with customers in the right way

Every day, SaaS providers maintain an illusion. They present a seamlessly delivered application, yet behind the curtain they work extremely hard to build a unified customer experience despite many complex components and a reliance on third parties. It is a very hard balancing act: providers can’t afford a questionable Internet service provider (ISP), a compromised domain name system (DNS) cache, false route advertisements, or broken API endpoints.

However, when something goes wrong, as a provider, it is essential that you can speedily attribute responsibility to the right part of the user experience. This is not just to resolve the issue itself, but also so you can communicate with users quickly and effectively, which is crucially important as today’s end users demand a huge level of transparency. 

Active network monitoring is the solution here. It can provide an unrivalled, clear view of the application delivery path from your customers all the way to your application servers, giving a precise understanding of where and why something isn’t working and enabling you to update your customers in a transparent and insightful way.

Monitor interservice communication that includes external networks

The majority of SaaS providers now rely on external APIs for their user experience, from payment gateways to customer relationship management (CRM) databases. Yet any time an external API is used, it becomes part of the application stack and thus needs to be managed. Again, you need to keep a close eye on when and why your APIs are available and performing (or not), to deal with any issues impacting the user experience.
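A minimal sketch of treating external APIs as part of your own stack might look like the following: record the outcome of each dependency call, then compute per-dependency availability so a degraded payment gateway or CRM surfaces quickly. The dependency names and the 99% threshold are illustrative assumptions.

```python
# Illustrative dependency monitor: external APIs are tracked like any
# other part of the application stack.

from collections import defaultdict

class DependencyMonitor:
    def __init__(self):
        # dependency name -> list of (ok, latency_ms) samples
        self.calls = defaultdict(list)

    def record(self, name, ok, latency_ms):
        self.calls[name].append((ok, latency_ms))

    def availability(self, name):
        samples = self.calls[name]
        if not samples:
            return None
        return sum(1 for ok, _ in samples if ok) / len(samples)

    def unhealthy(self, threshold=0.99):
        # Dependencies whose observed availability dips below the target.
        return [n for n, s in self.calls.items()
                if s and self.availability(n) < threshold]
```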

The lifecycle can’t end with monitoring your service

Monitoring is sometimes treated as just an operations practice to ensure service uptime. That is short-sighted: having access to data early in the application journey can be really useful, enabling you to make better choices around application architecture and networking.

For example, your choice of the location of data centres or workloads in public cloud providers can have a significant impact on service delivery. Therefore, with issues like this, you need to have detailed, visual, and reportable data that charts every aspect of user experience from web server, to network path, to Internet routing. This information is like gold dust before rolling out a new service, enabling you to remove (as much as possible) the risk from your most important SaaS planning decisions.

One thing to keep in mind about these five habits is that they rely on end-to-end visibility, which leading network monitoring solutions can provide. This enables a huge degree of transparency in performance across not just your own application, but also your providers’. With visibility at the heart of your approach, you can change for the better how you deliver your SaaS application, while also future-proofing your performance to make sure you maximise the huge opportunity developing in the market.

AWS launches into Accenture and Capgemini partnerships

Two partnerships involving Amazon Web Services (AWS) have been unveiled – one with Accenture around healthcare, the other with Capgemini around enterprise migrations.

The first partnership, announced with Accenture and Merck, aims to launch a cloud-based informatics research platform which is designed to help life sciences organisations in the early stages of drug development.

The platform will enable healthcare professionals to analyse and aggregate data from multiple applications through a single set of interfaces. It is being developed by Accenture and AWS, with Merck being the first pharmaceutical company to use it.

Elsewhere, Capgemini and AWS are coming together to build a platform focusing on value-added cloud services for their customers, ranging from SAP migrations, to data centre modernisation and artificial intelligence (AI).

With Capgemini's partnerships with both SAP and AWS in mind, the company is able to migrate the former to the latter as part of the first focus of the initiative. The data centre modernisation will come through leveraging VMware Cloud on AWS – which regular readers of this publication will know all about – to deliver end-to-end hybrid cloud.

"Our clients look for global scale and excellence in digital transformation, enabled by cloud technologies," said Aiman Ezzat, chief operating officer at Capgemini. "With a commitment to scaling our AWS capabilities we can bring to our clients' digital journeys both operational efficiencies and the power of new technologies, such as artificial intelligence and machine learning."

How AI and machine learning are redefining the war for talent

These and many other fascinating insights are from Gartner’s recent research note, Cool Vendors in Human Capital Management for Talent Acquisition (PDF, 13 pp., client access reqd.) that illustrates how AI and machine learning are fundamentally redefining the war for talent. Gartner selected five companies that are setting a rapid pace of innovation in talent management, taking on Human Capital Management’s (HCM) most complex challenges. The five vendors Gartner mentions in the research note are AllyO, Eightfold, jobpal, Knack, and Vettd. Each has concentrated on creating and launching differentiated applications that address urgent needs enterprises have across the talent acquisition landscape. Gartner’s interpretation of the expanding Talent Acquisition Landscape is shown below:

Source: Gartner, Cool Vendors in Human Capital Management for Talent Acquisition, Written by Jason Cerrato, Jeff Freyermuth, John Kostoulas, Helen Poitevin, Ron Hanscome. 7 September 2018

Company growth plans are accelerating the war for talent

The average employee’s tenure at a cloud-based enterprise software company is 19 months; in Silicon Valley this falls to 14 months due to intense competition for talent, according to C-level executives leading these companies. Fast-growing enterprise cloud computing companies, and many other businesses like them, need specific capabilities, skill sets, and associates who know how to unlearn old concepts and learn new ones. Today, across tech and many other industries, every company’s growth strategy is predicated on how well it attracts, engages, screens, interviews, selects and manages talent over associates’ lifecycles.

Of the five companies Gartner names as Cool Vendors in the field of Human Capital Management for Talent Acquisition, Eightfold is the only one achieving personalisation at scale today. Attaining personalisation at scale is essential if any growing business is going to succeed in attracting, acquiring and growing talent that can support their growth goals and strategies. Eightfold’s approach makes it possible to scale personalised responses to specific candidates in a company’s candidate community while defining the ideal candidate for each open position.

Gartner finds Eightfold noteworthy for its AI-based Talent Intelligence Platform that combines analysis of publicly available data, internal data repositories, HCM systems, ATS tools, and spreadsheets then creates ontologies based on organisation-specific success criteria. Each ontology, or area of talent management interest, is customisable for further queries using the app’s easily understood and navigated user interface. Gartner also finds that Eightfold.ai is one of the first examples of a self-updating corporate candidate database. Profiles in the system are now continually updated using external data gathering, without applicants reapplying or submitting updated profiles. The Eightfold.ai Talent Intelligence Platform is shown below:

Taking a data-driven approach to improve diversity

AI and machine learning have the potential to remove conscious and unconscious biases from hiring decisions, leading to hiring decisions based on capabilities and innate skills. Many CEOs and senior management teams are enthusiastically endorsing diversity programs yet struggling to make progress. AI and machine learning-based approaches like Eightfold’s can help to accelerate them to their diversity goals and attain a more egalitarian workplace. Data is the great equaliser, with a proven ability to eradicate conscious and unconscious biases from hiring decisions and enable true diversity by equally evaluating candidates based on their experience, growth potential and strengths.

Conclusion

At the centre of every growing business’s plans is the need to attract, engage, recruit, and retain the highest quality employees possible. As future research in the field of HCM will show, the field is in crisis because it relies more on biases than on solid data. Breaking through the barrier of conscious and unconscious biases will provide contextual intelligence on an applicant’s unique skills, capabilities and growth trajectory, far beyond the scope of any resume or what an ATS can provide. The war for talent is being won today with data and insights that strip away biases to surface prospects who are ready for the challenge of helping their hiring companies grow.

Intel partners with Alibaba for edge computing platform

Intel has identified a $200 billion addressable opportunity in the 'data-centric' economy combining cloud, AI and edge – and the last of these has come to the fore with a new partnership with Alibaba.

The collaboration, revealed at Alibaba's Computing Conference in Hangzhou, will see the two companies launch a joint edge computing platform.

The primary use case is for industrial manufacturing and smart buildings, integrating Intel's hardware, software and AI with Alibaba Cloud's IoT products. "The platform utilises computer vision and AI to convert data at the edge into business insights," the companies note.

The companies are partnering in other ways too – deploying the latest Intel technology within Alibaba to prepare for the 11/11 shopping festival, as well as helping provide content for the Tokyo Olympic Games in 2020.

"Alibaba's highly innovative data-centric computing infrastructure supported by Intel technology enables real-time insight for customers from the cloud to the edge," said Navin Shenoy, Intel EVP and data centre group general manager in a statement. "Our close collaboration with Alibaba from silicon to software to market adoption enables customers to benefit from a broad set of workload-optimised solutions."

Last month, Shenoy told attendees at the Data-Centric Innovation Summit in Santa Clara of Intel's plans to address the 'biggest opportunity' in the company's history. In a subsequent company editorial, he further outlined his vision. "The proliferation of the cloud beyond hyperscale and into the network and out to the edge, the impending transition to 5G, and the growth of AI and analytics have driven a profound shift in the market, creating massive amounts of largely untapped data," he wrote.

This is not the only cloudy partnership Alibaba has tapped into in recent weeks – the company also struck a deal with SAP to launch in China, with the two companies jointly offering ERP suite S/4HANA Cloud in the country.

Cloud security and small businesses – what you need to know to avoid the pitfalls

Today we work in a world that is increasingly connected, convenient and cloud-based. This brings a world of benefits not just for enterprises, but also for small to medium-sized businesses (SMBs).

It’s now easier than ever to share documents in the cloud, video-conference with colleagues across the world, and compile resources so that global teams can quickly access them from shared storage. The downside, unfortunately, is that the use of cloud tools makes it that much easier for hackers to target your organisation: more points of access mean more places to attack. Many SMBs use these collaborative tools but haven’t updated their firewall or network security solutions to adequately combat the rising threats that cloud and collaborative programs bring, and it’s leaving businesses vulnerable.

According to Verizon’s 2018 Data Breach Investigations Report, more than 58 percent of malware attack victims are categorised as small businesses. SMBs have now become more appealing targets for hackers, and it’s costing them dearly.

Taking it to the cloud

Cloud applications have become essential and are business-critical for most organisations. Employees in particular can be spread across many locations – working from home, on the road or in satellite offices. This necessitates a secure, reliable and performant Internet connection for every employee, regardless of location. Internet connectivity is now entwined with business continuity for almost every organisation.

The problem, particularly with SMBs, is that often these organisations have scaled quickly and may be relying on improvised home-office equipment that doesn’t adequately protect their network from threats.

Never neglect the network

Network security is a vital piece of the puzzle. Most wireless routers aimed at consumers or small offices don’t have the right features to safeguard or optimise network traffic. The right next-generation firewall should be able to not only provide foundational security like intrusion prevention and malware protection, but also ensure business continuity with features like bandwidth shaping and the ability to provide failover options between Internet connections so that any business, regardless of size, can stay online and productive.
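To make the business-continuity features above concrete, here is a hedged sketch of two of them: a token bucket for bandwidth shaping, and a simple failover choice between Internet uplinks. Real firewalls apply these per traffic class and drive them from live health probes; the numbers and names below are purely illustrative.

```python
# Illustrative bandwidth shaping: a token bucket admits traffic only
# while tokens remain, refilling at a fixed rate.

class TokenBucket:
    def __init__(self, rate_kbps, burst_kb):
        self.rate = rate_kbps          # refill rate, kB per second
        self.capacity = burst_kb       # maximum burst size, kB
        self.tokens = burst_kb         # start with a full bucket

    def refill(self, seconds):
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def allow(self, packet_kb):
        # Admit the packet only if enough tokens remain.
        if packet_kb <= self.tokens:
            self.tokens -= packet_kb
            return True
        return False

# Illustrative failover: prefer the primary uplink while healthy,
# fall back to the secondary otherwise.
def choose_uplink(primary_healthy, secondary_healthy):
    if primary_healthy:
        return "primary"
    if secondary_healthy:
        return "secondary"
    return "none"
```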

As SMBs grow, security is often one of the areas that isn’t scaled properly or upgraded, in favour of focusing on revenue-generating activities in the short term. Part of this is because learning to protect a growing business against cyberthreats can be daunting – but it doesn’t have to be.

Simplifying solutions

Many organisations struggle with securing multiple locations, whether they have branch offices, remote workers or retail storefronts. Having the right network security solution provider is an important factor. There are ways to avoid installing security solutions at each point of presence, reducing the management overhead, complexity and attack surface.

Solutions like SD-WAN for software-defined networking, along with cloud-hosted and managed firewalls, can simplify and streamline complex security operations. Smaller organisations often feel they have to balance security against complexity and cost.

However, there are convergent technologies like next-generation firewalls and unified threat management solutions designed specifically for organisations with fewer resources and in-house expertise. They can provide the same level of security as complex enterprise solutions, but at a fraction of the cost and with greatly simplified configuration and management requirements.

Maintenance is key

The lifecycle of a firewall is often between three and five years, yet many businesses are unaware of the need to regularly update or replace theirs. These are not “set it and forget it” technologies. At a minimum, administrators must keep threat signatures up to date, along with firmware patches and upgrades. Since this can be an intimidating process, companies should look to software-first approaches to security that provide seamless upgrades to the software platform without the need for downtime.
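As an illustration of that maintenance discipline, a simple staleness check like the one below can flag devices whose signatures or firmware fall outside a policy window. The seven-day and ninety-day windows are assumptions for the sketch, not vendor guidance.

```python
# Illustrative staleness check: flag what is overdue on a device given
# assumed policy windows for signature and firmware updates.

from datetime import date, timedelta

def overdue(last_signature_update, last_firmware_update, today,
            sig_window_days=7, fw_window_days=90):
    """Return which maintenance items are overdue, oldest-policy first."""
    issues = []
    if today - last_signature_update > timedelta(days=sig_window_days):
        issues.append("signatures")
    if today - last_firmware_update > timedelta(days=fw_window_days):
        issues.append("firmware")
    return issues
```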

Businesses don’t start big, they grow. SMBs are the heart and soul of the economy, and their security solutions need to be able to grow with them. Network security is an important place to start as your team works across geographies and time zones. There are plenty of excellent tools to help you manage this process, specifically designed for the needs of growing organisations.

Be certain to do thorough research to find the right fit for your organisation, and then don’t forget to reassess your company's security needs on an ongoing basis to address new threats and the changing dynamics of your organisation.

Google Cloud launches container security tool and more at Tokyo jamboree

Google has rolled out a series of cloudy updates in time for its Cloud Next Tokyo event – around container security, in-memory data, and artificial intelligence (AI).

Container Registry vulnerability scanning, launched in beta, looks to prevent the deployment of vulnerable images by automatically detecting known security vulnerabilities during the continuous integration and delivery (CI/CD) processes.

Regular readers of this publication will certainly be aware of the importance of security in containerisation and DevOps. Indeed, back in June this publication wrote about the various pieces of research around unsecured consoles and dashboards, with companies including Tesla and Weight Watchers affected.

This is where Google wants to shore things up (below). All container images built using its fully managed CI/CD platform, Cloud Build, will now be automatically scanned for OS package vulnerabilities. What's more, vulnerability scanning will also be integrated with Binary Authorization, which ensures that only trusted container images are deployed – without the need for manual intervention.

"When we set out to build vulnerability scanning for container images, we started from the premise that security needs to be built into CI/CD from the very beginning, to cut down on time spent remediating downstream security issues, and to reduce risk exposure," Google wrote in a blog announcing the launch. "Furthermore, security controls need to happen automatically, not as part of some manual, ad-hoc process.

"The system must be able to automatically block vulnerable images based on policies set by the DevSecOps team," the blog adds. "In other words, CI/CD security needs to be comprehensive, from scanning images, to enforcing validation, as part of every CI/CD pipeline."
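The policy gate described in the quote can be illustrated with a small sketch: given the severities found by an image scan, allow deployment only when nothing exceeds the team's threshold. This shows the concept only – it is not the Binary Authorization API, and the severity scale is an assumption.

```python
# Illustrative DevSecOps policy gate: block an image when its scan
# report contains vulnerabilities above an allowed severity.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def image_allowed(scan_findings, max_severity="medium"):
    """scan_findings: list of severity strings from the image scan."""
    limit = SEVERITY_RANK[max_severity]
    return all(SEVERITY_RANK[s] <= limit for s in scan_findings)
```

In a pipeline, a gate like this would run after the scan step and before the deploy step, so vulnerable images never reach the cluster without a human override.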

Cloud Memorystore for Redis, made generally available with these updates, is based on the open source Redis database and automates tasks such as provisioning, scaling, failover and monitoring. New regions supporting the service are Tokyo – as one would expect – Singapore and the Netherlands, taking the total number of supported regions to eight.

The AI-focused announcement was specific to Japan; Google said that it was offering two courses, the Machine Learning with TensorFlow on Google Cloud Platform specialisation, and the Associate Cloud Engineer certification, in Japanese. A new Advanced Solutions Lab (ASL) is also being launched in Tokyo. "In the coming months, the ASL will offer an immersive training experience so that Japanese businesses can learn directly from Google Cloud ML engineers in a classroom setting," the company wrote. "With this training, businesses can build the skills they need to create and deploy machine learning at scale, using the full power of Google Cloud."

Another new feature concerns more effective code search. Cloud Source Repositories, now revamped and available in beta, is aimed at privately hosting, tracking, and managing changes to large codebases on Google Cloud Platform. Its code search capabilities are based on the document indexing and retrieval techniques used in Google Search.

The company is in the midst of its Next world tour – with London on the agenda in October. 

NetApp acquires StackPointCloud for multi-cloud Kubernetes service offering

Another piece of cloud M&A, this time with a Kubernetes feel: hybrid cloud provider NetApp has announced the acquisition of StackPointCloud, with the claim of providing the industry's first complete Kubernetes platform for multi-cloud deployments.

As the company's hosting page puts it, the proposed hook-up is 'the simplest way to deploy a Kubernetes cluster to the clouds'. The NetApp Kubernetes Service is compatible with Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, with more than 7,500 clusters deployed thus far – 5,785 on AWS, 1,286 on Google and 596 on Azure.

If multi-cloud makes sense for organisations seeking greater efficiency across different workloads, and Kubernetes and containers make sense for application development and deployment, why not both?

StackPointCloud's technology offers 'zero to Kubernetes in three clicks', with easy upgrades as a key feature. Others include Istio support, volume support and dashboard capability.

Kubernetes has clearly been the leader in container orchestration – but as Ronald Sens, director EMEA marketing at A10 Networks noted in this publication earlier this week, there is more that can be done. "The key point here is that enterprise organisations are starting to take note and there are signs that the market for Kubernetes is growing very rapidly," wrote Sens.

"This acquisition will benefit customers looking to simplify the delivery of data and applications in clouds, across clouds and hybrid clouds," said Anthony Lye, SVP and general manager of NetApp's cloud data services business unit. "The StackPointCloud Kubernetes as a service platform combined with NetApp's Cloud Data Services creates a complete DevOps solution, so customers can focus on innovation, not administration."

Financial terms of the deal were not disclosed.

Why Kubernetes is helping to make cloud mainstream

There has been a lot of talk in the first half of 2018 about how cloud is being adopted for mission-critical applications and becoming mainstream. Right now, the impact of cloud services, technologies and practices on organisations is rapidly accelerating as we enter the next wave of cloud adoption. To this point, analysts at Forrester predict that the public cloud market will grow by 22 percent in 2018, to $178 billion. This momentum is being driven by companies that recognise the potential benefits of a cloud-based infrastructure: lower operational costs, increased speed of deployment and greater business flexibility.

Today, many companies have moved well beyond the experimental stage and view the cloud as a critical component of their IT strategy, whether they are transitioning their on-premise infrastructure and applications to the cloud or adding cloud-based services as part of a hybrid approach. This transition is being made even easier thanks to Kubernetes, which allows applications to be layered and scaled within containers in the cloud. It works in tandem with the infrastructure provided by the cloud to allow for a more portable, more productive environment.

At the same time, the services, tools and the organisational best practices for cloud continue to evolve to support the needs of large-scale enterprises. With these trends in mind, here are a few thoughts on cloud becoming mainstream and the growing role of Kubernetes in delivering powerful improvements to your infrastructure.

Driving agility in the business

The prime motivator behind the move to cloud for every business is improved operational efficiency. The cloud offers many benefits: easy, near-instantaneous provisioning of compute, storage and networking resources; elastic scaling; and a pay-as-you-go business model. All of these drive agility in the business, improving the flexibility of employees and assisting future expansion.

Containers further allow portability of applications across environments and easy separation of functionality into smaller microservices for more agile development, letting development teams move fast, deploy software efficiently and operate at an unprecedented scale. They are the next step in enterprise hybrid cloud deployment.

Kubernetes dominates container orchestration

The fight for container orchestration dominance has been one of the cloud’s main events for the past two years. The three-way battle between Docker Swarm, Kubernetes and Mesos has been fierce, but Kubernetes is now viewed as the clear winner: its rich set of contributors, rapid development of capabilities and support across many disparate platforms make it the victor.

Nevertheless, to put this into perspective, the overall number of companies using these technologies in earnest is still relatively low. A recent report from Cloud Foundry shows that only 25% are currently using containers, though another research report, from Portworx, found that 69% of companies are ‘making the investment in containers’. The key point is that enterprise organisations are starting to take note, and there are signs that the market for Kubernetes is growing very rapidly.

Kubernetes and the cloud in unison

Kubernetes is unique in that there is no single company behind it. It is a fully open source, community-driven initiative, which has been a large factor in its adoption to date. As an open-source service it offers a lot of flexibility in how it is used: what software Kubernetes works with, whether the infrastructure is private or shared, and which provider it runs on, whether Google or AWS. Kubernetes is especially useful with hybrid or multi-cloud deployments, which are emerging as the most frequently used cloud model for businesses in 2018 – even though containers can become very difficult to manage when so many of them are spread across multiple clouds and infrastructures for a single business.

This is where Kubernetes is a benefit as it manages containers and automates the deployment process for them. Automation saves lots of money for businesses as it improves efficiency and allows IT teams to focus on other areas of the business. This is especially true when good container management means that software deployment through Kubernetes is almost always painless. It could also potentially reduce hardware costs by making more effective use of current hardware. All of this combined pushes Kubernetes into more mainstream deployments with continued growth in large production workloads.

Providing load balancing for Kubernetes in the cloud

With more application workloads moving to containers, Kubernetes is clearly becoming the de facto standard. That said, Kubernetes does not itself provide application load balancing; it is the customer’s responsibility to build this service. In theory, open source application load balancers and traditional application delivery controllers (ADCs) will work in Kubernetes. In practice, unfortunately, they fail to handle the dynamic environment of containers.

So, what are the requirements for load balancing on Kubernetes?

Organisations running applications on Kubernetes that require continuous availability need to consider the following:

  • A scalable, stateless application load balancer built for containers, with SSL termination
  • Centralised management of the application load balancer
  • Application security
  • Application traffic visibility and analytics
  • Automation that monitors container lifecycle events and keeps the application load balancer configuration in sync with the environment
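In stock Kubernetes, the first of these requirements is typically expressed through an Ingress resource, which an ingress controller – whichever load balancer or ADC the organisation has chosen – watches and translates into live load-balancer configuration, keeping it in sync as pods come and go. A minimal, hypothetical example with SSL termination (hostnames, Secret and Service names are illustrative):

```yaml
# Illustrative Ingress: the ingress controller terminates TLS using
# the certificate in the named Secret, then routes HTTP traffic to
# the stateless pods behind the backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # hypothetical name
spec:
  tls:
  - hosts:
    - app.example.com            # hypothetical hostname
    secretName: app-tls          # TLS cert/key stored as a Kubernetes Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app        # hypothetical backend Service
            port:
              number: 8080
```

The other requirements – centralised management, security, analytics – are where implementations differ: the Ingress resource only describes the routing, and each controller decides how to deliver the rest.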

Here at A10 Networks, our Kubernetes solution includes the Lightning ADC, which offers enterprise-grade application load balancing; the Harmony Controller, which provides application and service analytics along with centralised management; and the Ingress Controller, which tightly integrates application load balancing with Kubernetes. This means that IT staff can focus on the application’s business value rather than on the operations of application delivery.

In the cloud world, everything is moving very rapidly, and certainly many organisations are now adopting Kubernetes. I personally believe that this adoption means Kubernetes will be mainstream within the next 12 months as organisations look for innovative ways to consume cloud.

Almost a third of key enterprise IT spending to be cloud based by 2022, says Gartner

Cisco may have said that cloud traffic would represent 95% of total data centre traffic by 2021 – but how much of that will be driven by the enterprise? New figures from Gartner give an intriguing picture.

The analyst firm has forecast that by 2022 more than a quarter (28%) of spending within key enterprise IT markets will be cloud-based, up from 19% this year. The findings, which have been announced in the run up to Gartner’s Symposium/ITxpo in Australia, show an interesting shift with growth in enterprise IT cloud spending now moving more quickly than more traditional non-cloud markets.

Today, application software – such as customer relationship management (CRM) – is driving the majority of enterprise IT spending. It will still be the largest market by 2022, but it will grow considerably more slowly than system infrastructure, given the saturation of the market.

By 2022, Gartner argues, almost half of addressable revenue will be in system infrastructure and infrastructure hardware. This is down to the legacy stack – data centre hardware, operating systems and IT services – being especially inflexible and difficult to switch over. The coming few years will therefore be critical for traditional infrastructure providers.

“The shift of enterprise IT spending to new, cloud-based alternatives is relentless, although it’s occurring over the course of many years due to the nature of traditional enterprise IT,” said Michael Warrilow, Gartner research vice president. “Cloud shift highlights the appeal of greater flexibility and agility, which is perceived as a benefit of on-demand capacity and pay as you go pricing in cloud.

“As cloud becomes increasingly mainstream, it will influence even greater portions of enterprise IT decisions, particularly in system infrastructure as increasing tension becomes apparent between on- and off-premises solutions,” added Warrilow.

The figures come after a previous forecast from the company argued the global public cloud services market would grow 17.3% in 2019 to break the $200 billion mark. Cloud system infrastructure services, or infrastructure as a service, represented the fastest growing segment.