Protecting your company’s crown jewels: Building cloud-based backup and DR into ransomware defence

It’s a sad fact of life that whenever someone owns anything of value, there’s someone else out there who wants to get their hands on it illegally. Today’s corporate crown jewels are the critical data on which organisations depend and the highwaymen are cybercriminals, who have built a lucrative industry from ransomware attacks that disrupt businesses, steal data and aim to extract payment from their victims.

Tackling this scourge is a critical challenge for IT managers on several levels, but when it comes to the crunch, putting solid cloud backup and disaster recovery (DR) plans in place can help businesses keep hold of the aces and hang on to their crown jewels.

Ransomware is on the rise again and it’s getting smarter

Recent figures show that, after a slight lull towards the end of 2017, ransomware attacks have once again accelerated in the first half of 2018, reaching a reported 181.5 million incidents. This rise has been driven by the emergence of ransomware-as-a-service, which now means that almost zero technical expertise is needed to perpetrate an attack – just a target and a willing ransomware provider. 

As well as increasing in volume, attacks are also evolving to become more sophisticated, seeking out and encrypting remote network drives and servers and hunting down and removing shadow copies and backup files. The rationale behind this evolution is simple: to lock down the victim’s recovery options and increase the chances of a ransom being paid.

This alteration in tactics, combined with the risks of business disruption, financial loss and reputational damage associated with cyberattacks, means that IT managers are under greater pressure than ever as they strive to defend against ransomware. And there’s no silver bullet. The various attack vectors and strategies employed by adversaries mean that a multi-layered approach is needed, requiring IT managers to wear a number of hats, from psychologist to detective to – in the final event – emergency services.

The psychologist

A large proportion of ransomware is launched via the actions of an innocent user who trustingly clicks on apparently genuine emails, links and websites. User training that helps employees understand the psychology of ransomware and the tricks attackers might use to target them is the first line of defence for businesses. Awareness of ransomware among the public has increased, partly due to the high-profile WannaCry infection, but at the same time social engineering and phishing techniques have grown more sophisticated, so it’s important to keep users up to date and alert to the ways they could become vulnerable.

The detective

In an ideal world, users wouldn’t be exposed to ransomware attempts in the first place, and that’s where prevention and protection come in. By ramping up endpoint detection capabilities, ensuring newly identified vulnerabilities are quickly patched and operating robust anti-virus and anti-malware software, businesses can detect and mitigate attacks before they can do any harm.

The emergency services

Despite these defensive tactics, the sheer volume and growing sophistication of attacks means businesses need to assume it’ll be a case of when, not if, an attack makes it through. A solid emergency response plan is essential. Three key tools, used in conjunction, can bolster the company’s arsenal, ready to swing into action in the event of a successful attack, to protect access to the organisation’s most valuable data and restore operations with minimal disruption:

  • Snapshots: A SAN/NAS-based snapshot is effectively a point-in-time image of your data. Snapshots can be programmed into the routines of practically any application or storage device and are completely isolated from the data itself, so there’s no way malicious code – whatever its level of sophistication – can detect or remove them.
  • Backups: There is a raft of important reasons why businesses should use backup in ordinary operations, but it is also a great card to have up your sleeve when you want to avoid paying the ransom and instead recover your data from your own sources. Follow the 3-2-1 rule: three copies of your data, on two different media types, with one copy off-site in the cloud (a simple illustration of the rule follows this list). This off-site copy is your insurance policy. It’s “air-gapped” from the business so there is no way that it can be compromised by malicious code that seeks to delete or encrypt locally hosted or networked backup files.
  • Disaster recovery: While it’s not a flood or a fire, a successful ransomware attack could be just as devastating for your business. In fact, given the volume of attacks in progress right now (figures suggest that a company is hit by ransomware every 40 seconds), you’re actually far more likely to find yourself with a ransomware disaster on your hands than a natural one. With disaster recovery set up in the cloud, you can have your systems back up and running in that environment, restored to the point just before the attack locked the system. This isolates your data from the event and minimises both recovery time and data loss – mitigating both the hard and soft costs of system outages and data breaches.
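To make the 3-2-1 rule concrete, here is a minimal sketch in Python of how an inventory of backup copies could be checked against it. The copy records and field names are invented for illustration and don’t reflect any particular vendor’s API.

```python
# Hypothetical inventory of backup copies; the fields are illustrative only.
copies = [
    {"location": "on-site", "medium": "disk", "air_gapped": False},
    {"location": "on-site", "medium": "tape", "air_gapped": False},
    {"location": "cloud", "medium": "object storage", "air_gapped": True},
]

def satisfies_3_2_1(copies):
    """3-2-1 rule: at least 3 copies, on at least 2 media types, with 1 off-site."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["medium"] for c in copies}) >= 2
    offsite = any(c["location"] != "on-site" for c in copies)
    return enough_copies and enough_media and offsite

print(satisfies_3_2_1(copies))  # True for the sample inventory above
```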

Internal and external security threats to companies are occurring with increasing regularity, with malware and viruses a constant challenge. This is why companies need a recovery solution that mitigates the risk of critical data being lost or destroyed in the event of a breach and that can easily restore mailboxes to a point in time before the attack. Backing up your data would be a long process if it had to be done manually. Fortunately, over the years, cloud service providers (CSPs) like iland have adapted their solutions so they can be integrated directly with widely used software suites such as Microsoft Office 365. By automatically backing up your data once a day, the solution eliminates the risk of losing access to and control over Office 365 suite data, including mail, SharePoint and OneDrive – so that users’ data is always hyper-available and protected, avoiding any major disruption to your business.

The layered defence approach should also be applied to backup and recovery. The structure of that strategy revolves around classifying the value of your different data or application tiers and establishing your appetite for disruption for each tier. If you only back up your data overnight, say at 7.00pm, and the ransomware attack takes place at 6.45pm, your business loses a whole day of data. Is that acceptable? If not, you need to modify your schedules to match your risk appetite for the different classifications of data.
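As a simple illustration of that trade-off, the hypothetical calculation below shows the worst-case data loss window implied by a once-a-day backup schedule versus an hourly one; the dates and times are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical schedule: one backup per day at 7.00pm.
last_backup = datetime(2018, 9, 3, 19, 0)
attack_time = datetime(2018, 9, 4, 18, 45)

data_at_risk = attack_time - last_backup
print(data_at_risk)  # 23:45:00 – nearly a full day of changes lost

# A higher-value tier backed up hourly would cap the worst case at one hour.
hourly_loss = timedelta(hours=1)
print(hourly_loss)  # 1:00:00
```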

Testing is critical. If you don’t test your emergency plan regularly, how do you know it will work when it matters? It should be possible to fully test without interrupting the normal flow of business. It’s also worth remembering that ransomware attacks (and indeed other kinds of disaster) don’t happen quarterly, or during office hours, so your testing schedule needs to reflect the real world rather than an artificial timeframe to offer you the best information about the security performance of your system. Finally, take advantage of your cloud provider’s expertise and get them to advise you on the right kind of set-up for your needs – that’s what they’re there for.

Ransomware looks likely to remain the bane of the IT department for the foreseeable future and with attacks growing more sophisticated, it’s time to put cloud-based backup and disaster recovery in place to safeguard your data crown jewels and keep your business up and running.

10 ways to improve cloud ERP with AI and machine learning

The need to capitalise on new digital business models and the growth opportunities they provide is forcing companies to re-evaluate ERP’s role. Made inflexible by years of customisation, legacy ERP systems aren’t delivering what digital business models need today to scale and grow.

Legacy ERP systems were purpose-built to excel at production consistency first at the expense of flexibility and responsiveness to customers’ changing requirements. By taking a business case-based approach to integrating Artificial Intelligence (AI) and machine learning into their platforms, Cloud ERP providers can fill the gap legacy ERP systems can’t.

Closing legacy ERP gaps with greater intelligence and insight

For new digital business models to succeed, companies need to be able to respond to unexpected, unfamiliar and unforeseen dilemmas with smart decisions, fast. That’s not possible today with legacy ERP systems. Legacy IT technology stacks and the ERP systems they are built on aren’t designed to deliver the data needed most.

That’s all changing fast. A clear, compelling business model and successful execution of its related strategies are what all successful Cloud ERP implementations share. Cloud ERP platforms and apps provide organisations the flexibility they need to prioritise growth plans over IT constraints. And many have taken an Application Programming Interface (API) approach to integrate with legacy ERP systems to gain the incremental data these systems provide. In today’s era of Cloud ERP, rip-and-replace isn’t as commonplace as reorganising entire IT architectures for greater speed, scale, and customer transparency using cloud-first platforms.

New business models thrive when an ERP system is constantly learning. That’s one of the greatest gaps between Cloud ERP platforms’ potential and where their legacy counterparts are today. Cloud platforms provide greater integration options and more flexibility to customise applications and improve usability – poor usability being one of the biggest drawbacks of legacy ERP systems. Designed to deliver results by providing AI- and machine learning-based insights, Cloud ERP platforms and apps can rejuvenate ERP systems and their contribution to business growth.

The following are 10 ways to improve cloud ERP with AI and machine learning, bridging the information gap with legacy ERP systems:

Cloud ERP platforms need to create and strengthen a self-learning knowledge system that orchestrates AI and machine learning from the shop floor to the top floor and across supplier networks

Having a cloud-based infrastructure that integrates core ERP web services, apps, and real-time monitoring to deliver a steady stream of data to AI and machine learning algorithms accelerates how quickly the entire system learns. The cloud ERP platform integration roadmap needs to include APIs and web services to connect with the many suppliers and buyer systems outside the walls of a manufacturer while integrating with legacy ERP systems to aggregate and analyse the decades of data they have generated.
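As a rough sketch of what that integration might look like in practice, the Python below pulls records from a hypothetical ERP REST endpoint and a hypothetical shop-floor monitoring feed and joins them into a single dataset for learning algorithms to consume. The URLs, field names and the `work_order_id` join key are assumptions made purely for illustration.

```python
import pandas as pd
import requests

# Hypothetical endpoints and field names – real ERP and monitoring APIs will differ.
ERP_ORDERS_URL = "https://erp.example.com/api/v1/work_orders"
SHOP_FLOOR_URL = "https://monitoring.example.com/api/v1/machine_metrics"

def fetch_records(url, token):
    """Pull a JSON list of records from a (hypothetical) REST endpoint."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

def build_learning_dataset(token):
    """Join ERP work orders to shop-floor metrics so learning algorithms see one view."""
    orders = pd.DataFrame(fetch_records(ERP_ORDERS_URL, token))
    metrics = pd.DataFrame(fetch_records(SHOP_FLOOR_URL, token))
    return orders.merge(metrics, on="work_order_id", how="inner")
```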

Virtual agents have the potential to redefine many areas of manufacturing operations, from pick-by-voice systems to advanced diagnostics

Apple’s Siri, Amazon’s Alexa, Google Voice, and Microsoft Cortana have the potential to be modified to streamline operations tasks and processes, bringing contextual guidance and direction to complex tasks. An example of one task virtual agents are being used for today is guiding production workers to select from the correct product bin as required by the Bill of Materials. Machinery manufacturers are piloting voice agents that can provide detailed work instructions that streamline configure-to-order and engineer-to-order production. Amazon has successfully partnered with automotive manufacturers and has the most design wins as of today. They could easily replicate this success with machinery manufacturers.

Design in the Internet of Things (IoT) support at the data structure level to realise quick wins as data collection pilots go live and scale

Cloud ERP platforms have the potential to capitalise on the massive data stream IoT devices are generating today by designing in support at the data structure level first. Providing IoT-based data to AI and machine learning apps continually will bridge the intelligence gap many companies face today as they pursue new business models. Capgemini’s analysis of IoT use cases highlights how production asset maintenance and asset tracking are quick wins waiting to happen. Cloud ERP platforms can accelerate them by designing in IoT support.

Reducing equipment breakdowns and increasing asset utilisation by analysing machine-level data to determine when a given part needs to be replaced

It’s possible to capture a steady stream of data on each machine’s health using sensors equipped with an IP address. Cloud ERP providers have a great opportunity to capture machine-level data and use machine learning techniques to find patterns in production performance across a production floor’s entire data set. This is especially important in process industries, where machinery breakdowns lead to lost sales. Oil refineries are using machine learning models comprising more than 1,000 variables related to material input, output and process parameters – including weather conditions – to estimate equipment failures.
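A minimal sketch of the idea, using scikit-learn on made-up sensor readings, is shown below; the file name, column names and probability threshold are illustrative assumptions rather than any provider’s actual model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative telemetry export: one row per machine-hour, labelled with whether
# the monitored part failed within the following week.
data = pd.read_csv("machine_telemetry.csv")  # hypothetical file and columns
features = data[["vibration_rms", "bearing_temp_c", "run_hours", "ambient_temp_c"]]
labels = data["failed_within_7_days"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Flag machines whose predicted failure probability exceeds a maintenance threshold.
at_risk = X_test[model.predict_proba(X_test)[:, 1] > 0.7]
print(f"Machines flagged for inspection: {len(at_risk)}")
```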

Designing machine learning algorithms into track-and-traceability to predict which lots from which suppliers are most likely to be of the highest or lowest quality

Machine learning algorithms excel at finding patterns in diverse data sets by continually applying constraint-based algorithms. Suppliers vary widely in their quality and delivery schedule performance levels. Using machine learning, it’s possible to create a track-and-trace application that could indicate which lot from which supplier is the riskiest and those that are of exceptional quality as well.
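One hedged illustration of how such a lot-risk score could be produced is sketched below; the lot attributes and file are invented for the example, and a real track-and-trace application would draw on far richer data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical history of received lots and their incoming-inspection outcome.
lots = pd.read_csv("supplier_lot_history.csv")  # invented file and columns
X = lots[["supplier_defect_rate", "days_late", "lot_size", "coa_deviation"]]
y = lots["failed_inspection"]  # 1 = lot rejected at incoming inspection

risk_model = LogisticRegression(max_iter=1000).fit(X, y)

# Score an incoming lot before it reaches the line.
incoming = pd.DataFrame([{
    "supplier_defect_rate": 0.02, "days_late": 3, "lot_size": 500, "coa_deviation": 0.4
}])
print(f"Predicted risk of rejection: {risk_model.predict_proba(incoming)[0, 1]:.2f}")
```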

AI and machine learning can provide insights into how Overall Equipment Effectiveness (OEE) can be improved that aren’t apparent today

Manufacturers will welcome the opportunity to have greater insights into how they can stabilise then normalise OEE performance across their shop floors. When a cloud ERP platform serves as an always-learning knowledge system, real-time monitoring data from machinery and production assets provide much-needed insights into areas for improvement and what’s going well on the shop floor.
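For reference, OEE is conventionally calculated as the product of availability, performance and quality. The short sketch below computes it from illustrative shift figures, making explicit the three levers any algorithmic improvement has to move.

```python
def oee(planned_minutes, run_minutes, ideal_cycle_minutes, total_count, good_count):
    """Overall Equipment Effectiveness = availability x performance x quality."""
    availability = run_minutes / planned_minutes
    performance = (ideal_cycle_minutes * total_count) / run_minutes
    quality = good_count / total_count
    return availability * performance * quality

# Illustrative shift: 480 planned minutes, 400 minutes actually running,
# 0.5-minute ideal cycle time, 700 parts produced, 660 of them good.
print(f"OEE: {oee(480, 400, 0.5, 700, 660):.1%}")  # roughly 69%
```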

Cloud ERP providers need to pay attention to how they can help close the configuration gap that exists between PLM, CAD, ERP and CRM systems by using AI and machine learning

The most successful product configuration strategies rely on a single, lifecycle-based view of product configurations. They’re able to alleviate the conflicts between how engineering designs a product with CAD and PLM, how sales & marketing sell it with CRM, and how manufacturing builds it with an ERP system. AI and machine learning can enable configuration lifecycle management and avert lost time and sales, streamlining CPQ and product configuration strategies in the process.

Improving demand forecasting accuracy and enabling better collaboration with suppliers based on insights from machine learning-based predictive models is attainable with higher quality data

By creating a self-learning knowledge system, cloud ERP providers can vastly reduce data latency, which leads to higher forecast accuracy. Factoring in sales, marketing, and promotional programs further fine-tunes forecast accuracy.
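As a deliberately simple, assumption-laden sketch, the snippet below fits a basic regression that folds promotional spend into a demand forecast; the columns and figures are invented for illustration, and a production forecaster would use richer time-series methods.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.read_csv("monthly_demand.csv")  # hypothetical sales history
X = history[["month_index", "promo_spend", "orders_last_month"]]
y = history["units_sold"]

forecaster = LinearRegression().fit(X, y)

# Forecast next month, folding in the planned promotional spend.
next_month = pd.DataFrame([{"month_index": 37, "promo_spend": 25000, "orders_last_month": 1180}])
print(f"Forecast units: {forecaster.predict(next_month)[0]:.0f}")
```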

Implementing self-learning algorithms that use production incident reports to predict production problems on assembly lines needs to happen in cloud ERP platforms

A local aircraft manufacturer is doing this today by using predictive modelling and machine learning to compare past incident reports. With legacy ERP systems these problems would have gone undetected and turned into production slowdowns or, worse, the line having to stop.
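A hedged sketch of how past incident reports could feed such a prediction is shown below – a simple text classifier over invented report data, not the manufacturer’s actual system.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical export of past incident reports: free-text description plus a flag
# recording whether the incident escalated into a line stop.
reports = pd.read_csv("incident_reports.csv")

model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
model.fit(reports["description"], reports["caused_line_stop"])

new_report = ["Torque wrench calibration drift noted on station 4 fasteners"]
print(model.predict_proba(new_report)[0, 1])  # probability this escalates to a line stop
```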

Improving product quality by having machine learning algorithms aggregate, analyse and continually learn from supplier inspection, quality control, Return Material Authorisation (RMA) and product failure data

Cloud ERP platforms are in a unique position of being able to scale across the entire lifecycle of a product and capture quality data from the supplier to the customer. With legacy ERP systems, manufacturers most often rely on an analysis of scrap materials by type or cause, followed by RMAs. It’s time to get to the truth about why products fail, and machine learning can deliver the insights to get there.

What’s New in Parallels Desktop 14 for Mac

Parallels Desktop® 14 for Mac is finally here! With Parallels Desktop, you can run Windows, Linux, and other popular operating systems on your Mac® without rebooting. For over 12 years, we’ve been the #1 solution for over 5 million users worldwide. Version 14 has over 50 new features, including performance improvements, graphics improvements, and support […]


Registration Opens for @LeeAtchison Session | @NewRelic #DevOps #Serverless #AWS #APM #Monitoring #DigitalTransformation

Keeping an application running at scale can be a daunting task. When do you need to add more capacity? Larger databases? Additional servers? These questions get harder as the complexity of your application grows. Microservice-based architectures and cloud-based dynamic infrastructures are technologies that help you keep your application running with high availability, even during times of extreme scaling.

But real cloud success, at scale, requires much more than a basic lift-and-shift migration. It requires successfully navigating the world of the dynamic cloud. The dynamic cloud doesn’t just let you scale your applications, but it makes the process much faster and easier. It also allows your development teams to respond to changes faster, and implement these changes faster. Not everyone is ready for moving their applications to a fully dynamic cloud-based environment, but success in the cloud requires it.


Louis Frolio to Present #MachineLearning and #IoT at #CloudEXPO NY | @IBMcloud @FrolioL #IIoT #AI #SmartCities

With the mainstreaming of IoT, connected devices, and sensors, data is being generated at a phenomenal rate, particularly at the edge of the network. IDC’s FutureScape for IoT report found that by 2019, 40% of IoT data will be stored, processed, analyzed and acted upon at the edge of the network where it is created. Why at the edge? Turns out that sensor data, in most cases, is perishable. Its value is realized within a narrow window after its creation. Further, analytics at the edge provides other benefits.


DigiPlex aims to reuse waste data centre heat in Oslo apartments with new partnership

As technology continues to improve, so too does the responsibility of infrastructure providers in ensuring an environmentally-friendly future. Nordic data centre firm DigiPlex has announced a scheme whereby waste heat from its facilities will be reused in residential apartments across Oslo.

The company signed a letter of intent with Fortum Oslo Varme, Norway’s largest district heating supplier, to see to the needs of approximately 5,000 apartments across the Norwegian capital.

DigiPlex insists, in its own words, that ‘a progressive data centre industry must do what it can to reduce its environmental footprint.’ “We are proud to reinforce our leading role in our industry regarding climate change, using renewable power and the waste heat from our data centre at Ulven to keep the citizens of Oslo warm,” said Gisle M. Eckoff, DigiPlex CEO.

“Digitisation must move towards a greener world, and our cooperation with Fortum Oslo Varme is an important step in that direction.”

This is by no means the only initiative being undertaken for a greener industry. Eyebrows may have been raised in June when Microsoft unveiled Project Natick, whereby a data centre was placed underwater, off the Orkney Islands, to provide naturally cooler temperatures. Yet, while it’s worth noting the project is still at the experimental stage, some struggle to see value in it.

Writing for this publication, Joseph Denne, founder and CEO of DADI, argued: “It’s hard to believe that this is a realistic option for the future. Being surrounded by seawater might keep the temperature of the hardware under control without requiring the specialist cooling systems used in conventional server farms, but it also makes servicing a faulty node pretty much impossible, and a lot of energy has to go into making the thing in the first place,” wrote Denne.

“Surely it makes much more sense to maximise the potential of the devices we already have at our disposal, which would otherwise be idle for around three-quarters of their lifetime.”

Of course, much of the innovation is in the Nordic regions where naturally cooler temperatures can be fed in and taken advantage of without greater energy output. Alongside the Norwegian partnership, DigiPlex also has initiatives in place for both Sweden, with district heating provider Stockholm Exergi, and Denmark. Heatwaves aside, the temperate UK can also benefit from this, with Rackspace’s UK data centres being among those with these features built in.

Whatever the approach, the past year has felt as though environmental efforts are being stepped up. Back in April, Google announced it had hit its 100% renewable energy targets, claiming to be the first public cloud provider to do so.

Read more: A data centre with no centre: Why the cloud of the future will live in our homes

Why real digital transformation is hard to achieve

Becoming a digital business is very challenging because it demands new thinking, a willingness to evolve and bold ideas. As market leaders continue to embrace a digital transformation agenda, they're finding that the transition requires significant changes to organisational culture and internal systems.

A recent Gartner survey found that a relatively small number of organisations have been able to successfully scale their digital business initiatives beyond the experimentation and piloting stages.

"The reality is that digital business demands different skills, working practices, organisational models and even cultures," said Marcus Blosch, research vice president at Gartner. "To change an organisation designed for a structured, process-oriented world to one that's designed for ecosystems, adaptation, learning and experimentation is hard."

Gartner has identified six barriers that CIOs must overcome to transform their organisation into a truly digital business. Savvy CEOs and line of business (LoB) leaders will expect meaningful plans to fix these known obstacles to progress.

A change-resisting culture

Digital innovation can be successful only in a culture of collaboration. People have to be able to work across boundaries and explore new ideas. In reality, most IT organisations are stuck in a culture of change-resistant silos and hierarchies.

CIOs aiming to establish a digital culture should start small: Define a digital mindset, assemble a digital innovation team, and shield it from the rest of the organisation to let the new culture develop. Connections between the digital innovation and core teams can then be used to scale new ideas and spread the culture.

Limited sharing and collaboration

The lack of willingness to share and collaborate is a challenge not only at the ecosystem level but also inside the organisation. Issues of ownership and control of processes, information and systems make people reluctant to share their knowledge.

Digital innovation with its collaborative cross-functional teams is often very different from what typical enterprise employees are used to with regards to functions and hierarchies – resistance is inevitable.

The business isn't ready

Many business leaders are caught up in the hype around digital business. But when the CIO or CDO wants to start the transformation process, it turns out that the business doesn't have the forward-thinking talent, skills or resources that are needed to succeed.

"CIOs should address the digital readiness of the organisation to get an understanding of both business and IT readiness," Blosch advised. "Then, focus on the early adopters with the willingness and openness to change and leverage digital. But keep in mind that digital may just not be relevant to certain parts of the organisation."

The ongoing talent gap

Most organisations follow a traditional pattern – organised into functions such as IT, sales and supply chain and largely focused on operations. Change can be slow in this kind of legacy business environment.

Digital business innovation requires an organisation to adopt a different approach. People, processes and technology blend to create new business models and associated services.

Employees need new skills focused on innovation, change and creativity along with the new technologies themselves – such as artificial intelligence (AI) and the Internet of Things (IoT).

Current practices don't support the talent

Having the right talent is essential, and having the right practices lets the talent work effectively. Highly structured and slow traditional processes don't work for digital business. There are no tried-and-tested models to implement; every organisation has to find the practices best suited to its needs.

"Some organisations may shift to a product management-based approach for digital innovations because it allows for multiple iterations. Operational innovations can follow the usual approaches until the digital business team is skilled and experienced enough to extend its reach and share the learned practices with the organisation," Blosch explained.

Change isn't easy

It's often technically challenging and expensive to make digital business work. Developing platforms, changing the organisational structure, creating an ecosystem of partners – all of this effort requires an investment in time, resources and money.

Over the long term, enterprises should build the organisational capabilities that make embracing change simpler and faster. To do that, they should develop a 'platform-based strategy' that supports continuous change and design principles and then innovate on top of that platform, allowing new services to draw from the platform and its core services.

Will Brown Joins @CloudEXPO NY Faculty | @IBMcloud @willb77 #Cloud #API #DevOps #Microservices #DigitalTransformation

Enterprises that want to take advantage of the Digital Economy are faced with the challenge of addressing the demands of multi-speed IT and omni-channel enablement. They are often burdened with applications that are complex, brittle monoliths. This is usually coupled with the need to remediate an existing services layer that is not well constructed and has inadequate governance and management.

These enterprises need to face tremendous disruption as they get re-defined and re-invented to meet the demands of the Digital Economy. The use of a microservices approach exposed through APIs can be the solution these enterprises need to enable them to meet the increased business demands to quickly add new functionality.


Himanshu Chhetri Joins @DevOpsSUMMIT NY Faculty | @Addteq @Atlassian #DevOps #APM #ContinuousDelivery

The DevOps dream promises faster software releases while fostering collaboration and improving quality and customer experience. Docker provides the key capabilities to empower DevOps initiatives. This talk will demonstrate practical tips for using Atlassian tools like Trello, Bitbucket Pipelines and Hipchat to achieve continuous delivery of Docker-based containerized applications. We will also look at how ChatOps enables conversation-driven collaboration and automation for self-provisioning cloud and container infrastructure.


The future of enterprise software: Big data and AI rules okay – and the ‘decentralisation of SaaS’

Machine learning, cloud-native and containers are going to be key growth drivers of the future enterprise software stack – but it could be the end of the road for software as a service (SaaS).

That’s the verdict from an extensive new report by venture capital fund Work-Bench. The full 121-slide analysis (Scribd), titled ‘The Enterprise Almanac: 2018 Edition’, aims to dissect a ‘once in a decade tectonic shift of infrastructure’, focusing on the new wave of services that will power the cloud from the end of this decade onwards.

“Our primary aim is to help founders see the forest from the trees,” wrote Michael Yamnitsky, report author and VC at Work-Bench. “For Fortune 1000 executives and other players in the ecosystem, it will help cut through the noise and marketing hype to see what really matters. It’s wishful thinking, but we also hope new talent gets excited about enterprise.”

If this analysis is anything to go by, there will be plenty to get excited about in the coming years.

Machine learning

Large technology companies are winning at AI, Work-Bench asserts. And why not? This publication has devoted plenty of column inches in recent months to how the hyperscalers are using artificial intelligence and machine learning as a differentiator – indeed, Google Cloud this week launched pre-packaged AI services to try and stay one step ahead of the competition.

It’s not so much of a differentiator if everyone’s getting in on the act, though. And this is where others are struggling. “Despite hopeful promise, startups racing to democratise AI are finding themselves stuck between open source and a cloud place,” the report notes.

It’s a data-driven world, of course – but the disconnect between the ever-increasing amounts of data being crunched and the data scientists available to crunch it is clear. And this is where the Googles, Facebooks, Microsofts and Amazons of this world are again at an advantage – by hoovering up most of the AI talent.

Those who are making strides outside of the behemoths, however, are startups focusing on automated machine learning (AutoML). The key, instead of beating Amazon and Google at their own games with SageMaker, TensorFlow et al, is to focus their products and messaging on BI analysts. Companies such as Tableau have got data visualisation nailed – but what about getting reports in natural language, or surfacing even greater insights? To illustrate this perfectly, Tableau acquired Empirical Systems, an MIT-originated AI startup, in June for this very reason.

“Expect all modern BI vendors to release an AutoML product or buy a startup by [the] end of next year,” Work-Bench concludes.

Cloud-native

Writing for this publication earlier this week, Jimmy Chang, director of products at Workspot, discussed the frustrations of terms such as ‘cloud-native’ and ‘cloud-enabled’ being interchangeable. Being in the virtual desktop business, Chang uses an example from his own industry: only two of the VDI players in the market have genuinely cloud-native products.

It’s important therefore to determine what’s what without the risk of cloud washing. For Work-Bench, this prompts an exploration of cloud infrastructure and software from Amazon Web Services, Microsoft Azure and Google Cloud Platform – a subject which is always good to analyse at the end of each quarter, as regular CloudTech readers will testify.

The Work-Bench analysis certainly makes sense from here. AWS is entrenched as #1, Microsoft at #2 for now, and Google at #3, in spite of the latter two’s continued momentum. ‘Killer products… but where’s the enterprise love?’, the report asks of Google.

The majority of organisations continue to struggle with containerising applications, and the report notes three key strategies. The first is ‘monocloud’ – think Ryanair, GoDaddy – where companies go all-in on one provider of choice. The second is a price broker model, with workloads run wherever they are cheapest – Kubernetes is seen as a key tool here for those who have gotten to grips with it – and the third is a function broker model, with different clouds for different workloads. Remember the brouhaha when it was revealed that long-time AWS house Netflix was running disaster recovery workloads on Google – an arrangement the company stressed had been going on for a while? That model is on its way – and makes good business sense when applicable.

The report also bows to the king of container orchestration in Kubernetes; despite struggles it has a clear market lead, with half of enterprises using containers in some capacity according to 451 Research. But Work-Bench asserts the puck is heading towards the service mesh, a configurable infrastructure layer for microservices applications offering load balancing, encryption, authentication and more. Security will be the killer use case going forward. “Service meshes are like broccoli… you know you need them but only adopt when you feel the pain of not having them,” the report says.

The decentralisation of SaaS

This is arguably the most interesting punt in the report: as software as a service (SaaS) ate infrastructure, infrastructure will go back and eat SaaS.

According to IDC’s most recent figures, software as a service spending globally was at $74.8bn, almost three times the size of infrastructure as a service ($24.9bn). By 2022, IDC predicts SaaS spending to be ahead of IaaS and PaaS combined, at $163bn.

But the biggest players could get too big for their boots, as the report explains. “SaaS vendors are becoming mighty and taking advantage of it – using aggressive tactics to expand dollar share within existing accounts, often by shoving excessive features and extensive contract terms down customers’ throats,” the report notes. “Customers have no choice but to succumb to these closed-ecosystem tactics.”

The reasoning goes back several years and further: SaaS made good economic sense when running infrastructure was expensive and configuration was difficult, but with cloud computing the pendulum has swung.

The report adds that there is one solution: containers. If enterprises are struggling with them today then they will need to act fast, as in Work-Bench’s opinion the closed SaaS model doesn’t quite fit with customisation. “In a world where services written in different languages can easily communicate, proprietary languages that require hiring ‘experts’ will be obsolete.”

The empire strikes back

The report opens with the return of the big traditional enterprise software players – a theme that can also be read as an overarching sentiment of the industry today.

Tellingly, two of the largest software acquisitions of the past six years were closed in the last six months. This is not so much in terms of the amount of money spent – although $7.5bn and $6.5bn were shelled out for GitHub and MuleSoft by Microsoft and Salesforce respectively – but by dividing enterprise value by trailing 12-month revenue.

As venture capitalist Tomasz Tunguz points out, comparing the Microsoft/GitHub deal (24.5x enterprise value to trailing 12-month revenue) and Salesforce/MuleSoft (21.2x) with, for instance, Microsoft’s acquisition of LinkedIn (6.8x) and Cisco’s buy of BroadSoft (5.9x) shows much greater value placed on this year’s buys.

“I expect substantially more acquisitions of the scale and at these multiples through 2018,” Tunguz wrote back in June when disclosing these figures. “The growing sizes of the software market. The desire for continuing growth. The pace of innovation within software. The increasing competition amongst incumbents. A vibrant public market that is continuing to price companies aggressively.

“It’s a great time to sell a fast growing billion-dollar company.”

You can look at the full slides here.

Main pictures credit: Work-Bench