How to leverage cloud architectures for high availability

Cloud architectures are in broad use nowadays, and the cloud offers a plethora of amazing alternatives in terms of services and solutions. However, with great power comes great responsibility, and the cloud is a place where failure can and eventually will occur. When it does, it can spread across the entire architecture fast, possibly causing massive outages that can bring a business to its knees.

Okay, that’s not an overly optimistic scenario – more likely the opposite – but there is no need to fear. This is the nature of almost any architecture – so why should the cloud be any different?

Cloud architects face two distinct problems at scale when preparing for the worst. First, if something unexpected and undesired happens, how can business operations continue as if nothing had happened? Second, if something unexpected and undesired happens and operations cannot continue as usual, how can the architecture be brought up somewhere else, within a reasonable window of time, so that operations resume as usual?

In these terms, we can discuss how to:
– Continue business as usual in the face of an outage
– Resume business as usual, in the shortest time possible, in the face of an unrecoverable outage

The first is covered by high availability (HA), and the second by disaster recovery (DR). Here, we will look at high availability.

The alternatives currently on the table

The cloud offers more than enough to face both scenarios. Most clouds are distributed geographically and technically so as to avoid massive outage scenarios by themselves; at a small scale, clouds provide what are known as Availability Zones (AZs) or Availability Domains (ADs). These are usually separate buildings, or clusters of buildings, in the same geographic area, interconnected yet highly redundant, especially with regard to power, cooling and storage.

At a large scale, clouds are divided into regions – global regions, numbering 10 or 15 for giants such as Google Cloud and Amazon Web Services. These regions are spread geographically across the globe and serve two purposes: isolation in case of disaster, and performance. Customers in different countries and continents are served by the nearest point of presence rather than rerouted to a main one, which keeps latency low and responsiveness high.

Taking all this into consideration, it is the task of the architect to design the service with availability zones and regions in mind, in order to serve customers properly and take advantage of the technologies at hand. Cloud providers do not replicate architectures across regions – that is something architects and engineering teams need to consider and tackle – and the same goes for availability domains, with storage the usual exception; core services such as compute instances and virtual networks are, for the most part, not replicated across ADs or AZs.

The alternatives for high availability involve avoiding single points of failure, testing the resilience of the architecture before deploying to production, and either building master/master, master/slave or active/passive solutions that are always available, or putting automation in place that reduces downtime to a minimum.
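
To illustrate the automation half of that statement, here is a minimal sketch of an active/passive health-check loop in Python. The endpoint URLs, threshold and promotion step are all hypothetical; a real implementation would repoint DNS or a load balancer rather than print a message.

```python
import time

import requests  # third-party: pip install requests

PRIMARY = "https://primary.example.com/health"  # hypothetical endpoints
STANDBY = "https://standby.example.com/health"
FAILURES_BEFORE_FAILOVER = 3  # tolerate transient blips

def healthy(url: str) -> bool:
    """Treat any 2xx response within five seconds as healthy."""
    try:
        return requests.get(url, timeout=5).ok
    except requests.RequestException:
        return False

def promote_standby() -> None:
    """Placeholder: in practice, repoint DNS or the load balancer here."""
    print("Failing over to standby:", STANDBY)

failures = 0
while True:
    failures = 0 if healthy(PRIMARY) else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        promote_standby()
        break
    time.sleep(10)  # poll interval in seconds
```

Requiring several consecutive failures before acting is deliberate: failing over on a single missed probe trades one outage for another.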

What are considered best practices?

The following is a list of best practices for providing HA in the cloud. It is not exhaustive, and much of it also applies, to a lesser degree, to data centre architectures.

  • Distributing load balancers across ADs and watching for single points of failure (SPOF) in the architecture: two is one and one is none
  • If the cloud provider does not automatically provide redundancy across ADs and keep at least three copies of the same data, it may be a good idea to re-evaluate the choice of provider, or to consider a service that does
  • Easy to get in, easy to get out: it is necessary to be certain that, should it become essential to move or redirect services, it is possible to do so with minimum effort
  • Implementing extra monitoring and metrics systems where possible, with good integration – ideally off-the-shelf, through third parties that can provide timely alerts and rich diagnostic information. Platforms such as New Relic, or incident tools such as PagerDuty, can be extremely valuable
  • Keeping the architecture versioned and in IaC (infrastructure as code) form: if an entire region goes away, it will be possible to spin up the entire service in a different region, or even a different cloud, provided data has been replicated and DNS services are elastic
  • Keeping DNS services elastic: this goes without saying, especially given the previous point; flexibility in pointing records in one direction or another is key (see the first sketch after this list)
  • Some clouds do not charge for instances in a stopped state, especially VMs; Oracle, for example, only charges for stopped instances of the Dense or HighIO shapes. It is easy to leverage this and keep a duplicated architecture in two regions; with IaC, this is realistic and also easy to maintain
  • Synchronising necessary and critical data across ADs constantly, in the form of block storage that is ready to use and often unattached; avoiding NVMe if that implies being billed for unused compute resources to which the NVMe drives are attached
  • Leveraging object storage in order to keep data replicated in two or more regions
  • Leveraging cold storage (archive tiers such as Glacier) to retain critical data in several sparse regions; sometimes the price paid to break the minimum retention policy and request a restore is worth it in order to bring a production environment back up
  • Using the APIs and SDKs for automation, by creating HA and failover tools: automation can turn systems into autonomous ones that handle failovers by themselves, and combining this with anomaly detection can be a game changer. Do not rely too heavily on dashboards – most things can be, and some must be, done behind the curtain (the sketches after this list show the idea)
  • Nobody says it is necessary to stick to one cloud: with the power of orchestration, it is simple enough to have infrastructure in more than one cloud at the same time, running comparisons and, if necessary, switching providers
  • Using tools to test the resilience of the infrastructure and the readiness of the engineering team – simulating significant failures in the architecture can yield massive learning (a minimal example follows below)
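
To make the elastic-DNS and automation points concrete, here is a minimal sketch assuming AWS Route 53 and the boto3 SDK; the hosted zone ID, record name and standby IP are placeholder values. Repointing an A record at a standby address is the core of a simple DNS failover.

```python
import boto3  # AWS SDK for Python: pip install boto3

route53 = boto3.client("route53")

def fail_over_dns(zone_id: str, record_name: str, standby_ip: str) -> None:
    """Repoint an A record at the standby address; UPSERT creates or updates."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "Automated failover to standby",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": 60,  # short TTL so resolvers pick up the change quickly
                    "ResourceRecords": [{"Value": standby_ip}],
                },
            }],
        },
    )

# Placeholder values for illustration only
fail_over_dns("Z0000000000000", "app.example.com.", "203.0.113.10")
```

Keeping TTLs short on records that may need to move is what makes DNS “elastic” in practice; a day-long TTL defeats even the fastest failover script.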
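
And as a sketch of the resilience-testing point, the snippet below terminates one randomly chosen instance, again using boto3; the opt-in tag is an assumption of this example, and such an exercise should only ever target infrastructure that is supposed to survive the loss.

```python
import random

import boto3  # pip install boto3

ec2 = boto3.client("ec2")

def terminate_random_instance() -> None:
    """Terminate one running instance that has opted in via a chaos=true tag."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instances:
        print("No opted-in instances found")
        return
    victim = random.choice(instances)
    print("Terminating", victim)
    ec2.terminate_instances(InstanceIds=[victim])

terminate_random_instance()
```

If the architecture truly has no single point of failure, losing any one instance should be a non-event; the learning comes when it is not.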

Conclusion

Best practices are only best practices if applied, and not all of them can be applied in the same architecture or at the same time, so the judgement of experienced architects and engineering teams is always necessary.

That said, most of these points can be applied without significant effort. It takes some hard work and willingness, but the results will be worth their weight in copper.

Happy architecting and keep up the good work.

Five Internet of Things Trends That’ll Influence Businesses in 2018

2018 is a year when many of the trends surrounding the Internet of Things (IoT) are expected to turn from hype to real results. Adoption continues at a brisk pace, and medium and large enterprises are increasingly finding themselves at a decided advantage when bringing IoT systems into their infrastructures. At such a crucial turning point, it’s a good idea to take a look at five of the biggest trends that are expected to dominate this transformation process.


Best Android file managers 2018


Steve Clark

26 Jun, 2018

Whether it’s a quick cut-and-paste job or a dive into the digital depths of your Android smartphone, free file manager apps give you greater control over your documents than Google’s built-in counterpart. However, how do you know which is the one for you?

We’ve put together some of our favourites and ranked them according to their feature set, performance and how easy they are to use. 

Astro File Manager App

With an uncomplicated interface and an emphasis on productivity, Astro’s file manager app ensures you won’t be fumbling your way through yet another tricky file move on your Android phone or tablet.

You’ll find a familiar Home screen populated with the essentials: file types, storage locations, cloud services, recent files, and favourites. Everything you need is accessible from the moment you open the app.

Setup is pleasantly straightforward. Thumb-friendly ‘Add’ buttons jump out from the Home screen offering LAN, FTP, SFTP and SMB server location support. You’re also able to connect to the most common cloud and social services, such as OneDrive, Google Drive, Dropbox and Facebook. That all makes it ridiculously easy to navigate to your destination. Alternatively, if you don’t know where a file is, you can use the search bar.

Peeking under the hood reveals no-nonsense file manager options. Features don’t get much more exotic than .ZIP and .RAR compression and extraction, alongside app back-up and a ‘task killer’ that can force apps to close to protect battery life. However, Astro’s core strength is that it focuses on being a file manager first and foremost, to copy, move and share your apps and documents, with no unnecessary extras – not even ads.

How it can be improved

While the app remains clean and accessible, the Home screen’s horizontal swipe is unnecessarily finicky. Cleaning up space using the SD Card Usage option is a challenge, as file selection loops you back to the file’s folder location; you can get around this by pressing and holding the file to open it, rather than tapping it as normal. The app could benefit from multiple panes to make navigation even quicker, and an increase in cloud storage locations would also be welcome.

Verdict

With streamlined functionality, Astro won’t be for everyone, but it’s that simplicity that gives this app broad appeal, while there are just enough extras to meet most common needs. After all, there’s only so much a file manager app needs to do and Astro does most of it.

Features: 4

Performance: 5

Ease of use: 5

Overall: 5

Total Commander

Total Commander started life as a desktop app and it shows. From design to performance, this feels like a file manager app built for serious users.

You can tell from one glance at the bold, black theme and its Windows-like icons that Total Commander means business. You won’t find flash colour schemes or other such frivolous options here, which is fine as long as you can live with the existing colour scheme.

Total Commander supports FTP, SFTP, WebDAV and LAN connections. You can also connect the app to your OneDrive, Google Drive and Dropbox accounts, but you need to add plugins to the app to get them working. You’ll also find handy extras like creating your own internal commands and a permissions editor.

If you’ve ever used a desktop file manager, you’ll feel right at home here.

How it can be improved

Some terminology, while accurate, isn’t particularly user-friendly because of the transition from PC software to app. Selecting additional plugins takes you to an inelegant external site that’s little more than a wall of text and a few links; bringing those options in-app would refine the process. At present, the app only connects to a small number of cloud services – we’d like to see more.

Verdict

Total Commander isn’t just a name; it’s a declaration of intent. It’s a well-built app that will appeal to those wanting a comprehensive file overview and is a powerful tool to add to Android. However, thanks to its awkward desktop origins, you’ll need to spend a little time learning the app to fully understand its quirks.

Features: 5

Performance: 5

Ease of use: 4

Overall: 4

File Manager

Originally created by Asus for its first-party smartphones, File Manager is now available on other Android devices. Despite its uncreative name and platform-specific origins, the app is surprisingly delightful.

Design-wise, it’s as if Asus decided to improve upon Astro’s minor failings. The app’s striking icons make usage extremely intuitive for Android users. For those sick of manually navigating to common tasks, File Manager also conveniently places key actions like desktop file transfer and storage analyser at the bottom of the Home screen. The analyser itself is a dream: its large tabs and visual reports are designed with smartphones in mind. Elsewhere, a PIN-protected Hidden Cabinet hides private files away from prying eyes.

This is a lightweight app, though, only covering the absolute basics. However, this makes File Manager fast and efficient.

How it can be improved

The app is a tease. It’s so good that in almost every department, you find yourself wishing it offered even more: more actions; more cloud storage options; more network and server connections. As such, clever ideas stashed across the app, such as the Hidden Cabinet, jar with the otherwise very obvious restrictions, making the app feel under-developed. Quick links to additional storage locations on the Home screen would offer a productivity boost.

Verdict

What File Manager does, it does well. Fluid navigation and a single-minded focus on simple file management make the app a joy to use. However, it’s held back by limited capabilities. That’s a shame, but even without expanded functionality, there’s a lot to enjoy using File Manager.

Features: 3

Performance: 5

Ease of use: 5

Overall: 4

Best of the Rest

ES File Explorer

ES File Explorer bills itself as the world’s number one file management app. Efficient in performance and feature-rich, the easy charm of ES should make it the clear Gold winner.

Unfortunately, the ad-supported app forces you to download bloatware to gain access to locked features, which defeats its purpose.

Solid Explorer

Solid Explorer kicks off with a 14-day free trial, with a full upgrade costing a reasonable £1.49. Fitting in well with the Android aesthetic, you won’t have any trouble navigating most of the app. There are some nice bonuses here, too, including individual file encryption. But moving files across to different storage locations isn’t at all smart, which makes the app difficult to recommend, given that it’s a primary function of a file manager.

X-plore File Manager

Aping a desktop file manager, X-plore is similar to Total Commander. It supports more cloud services than any other app on our list, as well as the standard collection of server connections. However, the desktop-style aesthetic is an uneasy fit on Android because the layout is ugly. Also, opening the nested folders soon overcrowds the screen. The dual pane goes some way to addressing this, but it’s not enough to fix all of its problems.


Oracle bundles cloud revenues, claiming it reflects hybrid approach


Bobby Hellard

25 Jun, 2018

Oracle has changed the way it reports cloud revenue figures every quarter by only offering up a combined figure for SaaS, PaaS and IaaS.

The database vendor used to report SaaS numbers on their own, and a combined figure for PaaS and IaaS, but it now reports just one figure for all of these, lumped in with license support. It has also combined new cloud licenses and new on-premise licenses under ‘new software licenses’, without breaking either out.

Oracle co-CEO Safra Catz explained the change in a conference call with analysts last week, transcribed by Seeking Alpha.

“We have now labelled new software licenses as cloud license and on-premise license, and we’ve combined cloud SaaS plus cloud PaaS and IaaS, plus software license updates and product support, into cloud services and license support,” she said.

Catz said the changes were justified because of the company’s recent introduction of the option for on-premise customers to use a bring your own licence (BYOL) model when shifting to Oracle’s cloud.

“BYOL allows customers to move their existing on-premise licenses to the Oracle cloud so long as they continue to pay support for those licenses,” she said.

“BYOL also makes it cost effective for customers to buy new licenses, even if those licenses are only going to be used in the cloud. So some of our customers are buying new licenses and immediately deploying them in the cloud.”

Her argument is that customers are adopting a hybrid approach to buying Oracle kit, where revenues can’t be broken out neatly as cloud or on-premise, so instead, Oracle’s decided to bundle them together.

“To say it another way,” Catz added, “customers are entering into large database contracts where some of those database licences are to be deployed on-premise, while other database licenses are used in the cloud.

“Previously, all of those licenses and its related support revenue would have been counted entirely as on-premise, which clearly it isn’t.”

However, Oracle has doubled down on its cloud strategy over the last few years, aiming to outgrow rivals like Workday and Salesforce, making this a significant change.

Results a year ago saw cloud revenues grow a huge 60% year-on-year to $1.36 billion across SaaS, PaaS and IaaS. SaaS alone grew 75%.

This year Oracle’s financials painted a very different picture. Its bundled cloud services and license support category grew just 8% year-on-year to $6.8 billion, and its license revenues were down 5% to $2.5 billion.

CTO Larry Ellison cited AT&T’s decision to move thousands of databases into Oracle Cloud, saying: “We think that these large scale migrations of Oracle database to the cloud will drive our PaaS and IaaS businesses throughout FY19.”

But in Oracle’s third quarter for the three months up to the end of February 2018, when it was still listing SaaS separately, and IaaS and PaaS together, growth had slowed. SaaS grew 33% year-on-year (compared to 75% in June 2017) and IaaS and PaaS together grew 28%. Overall cloud grew 32%, almost half that 60% figure it recorded in June 2017.


Going up: Public cloud market continues to soar with 2017 ‘pivotal’ year, says IDC

2017 was a ‘pivotal’ year in expanding public cloud service adoption according to IDC – with spending growth remaining at a constant level despite the overall market tripling in size.

The figures, which come from the analyst firm’s latest Worldwide Semiannual Public Cloud Services Tracker, show that while the overall growth rate for 2017 was a little smaller than the previous year, revenue growth of the top 16 providers by market share went up. The top tier vendors now capture more than half (50.7%) of the overall market.

Software as a service (SaaS) remains the largest bucket by some distance at $74.8 billion (£56.5bn) globally, with IDC predicting the SaaS market will hit $163bn by 2022 – well ahead of overall 2017 figures of $116.7bn. Infrastructure as a service (IaaS) spending last year was at $24.9bn, while platform as a service (PaaS) was at $17bn.

PaaS remained the fastest growing of the three markets, showing a 47.1% year-on-year increase, compared with IaaS (39.9%) and SaaS (22.4%). Both PaaS and IaaS, however, saw slightly slower growth than the previous year’s figures of 48% and 45% respectively.

Breaking down the numbers into regional figures, IDC found the US continues to provide the bulk of public cloud services revenue, although again with a minor decline from 62% to 60%. IDC expects this trend to continue in the coming years, citing new regional services and expansion from global players.

These figures make for interesting reading when compared with analysis from Synergy Research also published late last week. Synergy focused more on geographical dispersion, finding that aside from Asia Pacific, Amazon Web Services (AWS) led the way ahead of Microsoft and Google. The anomaly is down to the rise of China – where the top five providers are all local companies – with Alibaba second in APAC and fourth worldwide. Synergy described public cloud as ‘essentially a global market’ that was ‘a game of scale… and to be a market leader demands vast ongoing investments, a global presence and a global brand.’

“2017 saw some intriguing market share shifts among the major players, as all of them have significantly increased their focus on the cloud, and competitive pressure has ratcheted way up,” said Frank Gens, IDC senior vice president and chief analyst. “The next three years will determine IT industry leadership for the next two decades and beyond.”

Three unbeatable security advantages of cloud-based solutions for your business

Cloud-based solutions are more popular than ever. Proponents and opponents have their reasons to keep debates fuelled, but small to mid-sized businesses shouldn’t ignore the security benefits the cloud can offer.

Higher standards

Implementing cloud-based solutions for your business is certain to bring a higher standard of security that your in-house IT team or a locally managed system is unlikely to achieve.

Multi-factor authentication: Small to medium-sized businesses rarely have the time, resources, or skills to implement higher security standards like multi-factor authentication. With hacking techniques becoming more effective every day, your systems and data aren’t necessarily safe behind just a combination of a unique login ID and a complicated password.

Multi-factor authentication verifies user identity via more than one verification method from independent credential categories. These verifications combine something that the user knows (password), something that the user has (hard token), and something that the user is (fingerprint).
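
As a small illustration of the “something that the user has” factor, here is a minimal sketch of time-based one-time passwords (TOTP) using the pyotp library; in practice the secret would be generated once at enrolment and shared with the user’s authenticator app rather than created on the fly.

```python
import pyotp  # third-party: pip install pyotp

# Generated once per user at enrolment and shared with their authenticator app
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # what the user's authenticator app would display right now
print("Current code:", code)
print("Accepted?", totp.verify(code))  # server-side check of the submitted code
```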

Physical security: When it comes to physical security of their data and facilities, small to medium-sized businesses can only do so much to prevent breaches. But cloud computing vendors can employ stronger physical security measures at their facilities to ensure data safety.

IT support providers are also equipped to prevent data loss from natural disasters, power outages, and common errors with well-documented disaster recovery plans.

Security certificates: Businesses can’t afford to take chances with their data, and compliance and security certifications make it easier for organisations to trust cloud computing providers. Providers with cloud security certifications are sure to employ individuals who are qualified and experienced with configuring cloud servers and keeping client data secure.

Some businesses are also required to comply with stringent rules depending on the industry they belong to. For small to medium-sized businesses, acquiring these certifications for themselves can be not only difficult, but expensive.

Less room for error

Advancements in technology reduce the need to rely on humans for many tasks. Since manual effort is not required for tasks that need to be repeated, automating those jobs translates directly into fewer errors.

When it comes to cloud computing, there’s no reason to worry about data being stolen as a result of misplacing storage devices or laptops and mobile phones. Since data is stored on the cloud, the loss of a physical device does not affect the data – though of course it is worth noting that if you lose a device and it contains sensitive data then you could be in trouble with the authorities.

Cloud providers also ensure their employees are on the same page and drawing from a single knowledge base. With experts performing as a team, cloud-based solutions and IT support services can be just what you need to achieve project success.

Patch management

Patch management involves installing and managing patches or code changes on all systems within a network. These patches improve systems, keep them up to date, and fix security vulnerabilities to keep hackers and malware at bay.

Security patches need to be applied diligently to the software products in daily use; it’s also necessary to test the patches to ensure they’ve been applied correctly. Because of this, patch management can be a tedious task for IT admins. And since not all small to medium-sized businesses have the resources to carry out this task, it can eventually put their systems and data at risk.

Cloud-based solutions allow for patch management with comprehensive scanning to identify missing patches. Deployment is efficient, and you can select a patch management tool that offers reporting capabilities to match your business’ unique requirements. Proactive monitoring and timely solutions with managed IT solutions not only mean data security but reduced downtime and increased productivity as well.
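
As a toy illustration of that scanning step, the sketch below compares installed package versions against minimum patched versions; both dictionaries are invented for the example, where a real tool would collect them from the systems themselves and from vendor advisories.

```python
from packaging.version import Version  # pip install packaging

# Invented data: a real scanner would query each system and a vendor feed
installed = {"nginx": "1.14.0", "openssh": "7.6", "postgresql": "10.4"}
patched_minimums = {"nginx": "1.14.2", "openssh": "7.6", "postgresql": "10.5"}

missing = [
    pkg
    for pkg, minimum in patched_minimums.items()
    if pkg in installed and Version(installed[pkg]) < Version(minimum)
]
print("Packages missing patches:", missing)  # ['nginx', 'postgresql']
```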

Registration Opens for CTERA Session on Digital Transformation

For years the world’s most security-focused and distributed organizations – banks, military/defense agencies, global enterprises – have sought to adopt cloud technologies that can reduce costs, future-proof against data growth, and improve user productivity. The challenges of cloud transformation for these kinds of secure organizations have centered around data security, migration from legacy systems, and performance. In our presentation, we will discuss the notion that cloud computing, properly managed, is poised to bring about a digital transformation to enterprise IT. We will discuss the trend, the technology and the timeline for adoption.


View from the airport: HPE Discover 2018


Adam Shepherd

25 Jun, 2018

This year marks my very first HPE Discover, stepping in to cover for IT Pro’s resident HPE expert Jane McCallion, and it’s been a good introduction to the company’s new direction – it’s safe to say that the HPE we saw this week is a rather different beast to the enterprise giant of old.

This year’s event was new CEO Antonio Neri’s first Discover as head of the company, and the first real opportunity for HPE’s customers, partners and staff to get a sense of his leadership style without the shadow of former boss Meg Whitman hanging over him. More than anything else, he came across as profoundly genuine; he’s been with the company for more than 20 years, starting out in the customer service department and working his way up the ranks, and it’s clear that he eats, sleeps, lives and breathes HPE.

He obviously cares deeply about the company, and one of the messages he kept repeating throughout the week was that he’s planning for the long game, rather than chasing short-term successes. As far as I’m concerned, HPE couldn’t be in safer hands from a leadership perspective.

With that said, however, I do have some slight reservations coming away from Discover 2018.

For one thing, the company’s strategy feels somewhat confused – HPE is an infrastructure provider first and foremost, but the company had virtually no new technology to show off. There were some minor updates to its Edgeline systems and new software-defined networking from Aruba, but other than that, the company’s traditional storage and server products hardly got a look-in.

This is slightly troubling for a company whose main business still revolves around these products. HPE has been putting a lot of effort into building out its GreenLake flexible consumption offering – which is a good direction to explore for HPE and its channel partners, especially in light of the growing desire for businesses to shift their spending from CapEx to OpEx.

On the other hand, the fact remains that even with flexible consumption, customers will still need something to consume, and we’re slightly worried that the company may soon end up slipping behind its rivals in traditional infrastructure R&D.

There is one notable exception to this – The Machine.

Long-time HPE followers will know that The Machine is the surprisingly awesome-sounding codename given to the company’s memory-driven computing project, which has had something of a chequered history. Martin Fink, the ex-CTO who was the brains behind the project, retired two years ago, and many believed The Machine had retired with him.

Amazingly, however, this year’s Discover saw HPE actually launch something off the back of the project, in the form of a cloud-based environment designed to let developers play around with memory-driven computing. It may not be quite what we were initially promised – not yet, anyway – but it’s a pleasant surprise to see The Machine still chugging along.

As for the rest of the show, most of the focus was placed on what HPE is branding ‘the intelligent edge’. Translated, this means ‘anything that’s not a data centre or the cloud’. Astute readers will notice that this covers a pretty huge range of products, environments and use-cases, from industrial IoT systems, to office networking, to connected cars and more.

HPE has committed to a $4 billion investment in ‘the intelligent edge’ over the next four years, and while it’s a smart play for the company (not to mention being in line with its previous strategy), I can’t help but worry that covering such a broad area with a single blanket term runs the risk that it’ll lose all meaning.

One thing that was also repeatedly emphasised was HPE’s renewed focus on customers and partners, and unlike some other enterprise companies, it does seem sincere in this regard. Whether or not its more ambitious bets around edge computing and flexible consumption pay off, it seems like HPE has its heart firmly in the right place, and we’ll be watching with interest when Discover Europe rolls around in Autumn.


AWS leads across all geographies in public cloud – with Alibaba second in APAC

Amazon Web Services (AWS) has total geographic dominance in public cloud – but while Microsoft and Google secure second and third place through most of the globe, Alibaba is a clear second in APAC.

This is the key finding from the latest note by Synergy Research, which focused on Q1 data – although the majority of the figures won’t differ from what many market watchers already recognise.

AWS ranks at #1 worldwide, ahead of Microsoft, Google, Alibaba, and IBM. Across North America and EMEA, the top three remain the same but with IBM and Salesforce taking the last two places, while in Latin America the latter two swap over. In APAC, AWS leads Alibaba, ahead of Microsoft and Google, with Tencent taking fifth position – the latter because of its strong market placing in China.

Indeed, in China the top five providers are all local companies – and it is this home-market strength which, as regular readers of this publication will recognise, has enabled Alibaba to expand further afield. As Yeming Wang, general manager of Alibaba Cloud Europe, explained last month: “To go global is definitely a corporate level of strategy.”

“Despite some local data sovereignty and regulatory issues, in most meaningful ways public cloud computing is essentially a global market,” said John Dinsdale, a chief analyst and research director at Synergy Research. “This is a game of scale and to be a market leader demands vast ongoing investments, a global presence and a global brand.

“Of course there will often be local issues that might enable local companies to carve out niche positions for themselves, but they will remain small local players in a specific country or sub-region,” added Dinsdale. “It is also true that in such cases the global leaders can usually deploy different local strategies to enable them to succeed.

“With the glaring exception of China, we view this as a truly global market.”

SMBs now need MSPs more than ever


Maggie Holland

21 Jun, 2018

Small and medium-sized businesses (SMBs) are struggling in the face of growing challenges that are, in some cases, being made much more complex by the cloud rather than simplified.

So claims Datto CEO Austin McChord, who spoke at the firm’s Dattocon event in Austin, Texas this week about how challenging the small business landscape has become – and is still becoming.

Given such a backdrop, SMBs will increasingly turn to managed service providers (MSPs) to provide the added layers of expertise and proficiency they either lack or can’t afford to recruit internally.

“Small businesses are facing challenges. Whether it’s regulation, security or the fact that moving to the cloud makes things more complex not simpler. Many of these small businesses don’t have the knowledge or expertise to navigate this landscape,” McChord said.

“The opportunity is massive. More than $40bn runs through small businesses. And up to 50% of this touches MSPs. By 2022, it’s expected to be north of $72bn.”

Datto announced a series of enhancements to its solution set – both the products MSPs use to serve SMBs and the PSA tools that providers, many of whom are SMBs themselves, use to run their businesses.

It also made good on promises made at the last Dattocon event and in the wake of the Vista Equity Partners acquisition and the merger with Autotask. In particular, it pledged to continue to better support partners so they can, in turn, better serve the varying needs of their own customers.

Choice seemed to be the watchword, and that led to a commitment of greater openness and integration with companies such as Connectwise.

Mark Banfield, Datto’s senior vice president of international, echoed the need for MSPs to support SMBs as they navigate a maze of complexity and uncertainty.

“Certain markets in the UK are dominated by SMBs. Germany, Italy etc are the same. UK MSPs will be the engine to deliver IT services to the SMB market. With such complexity, they need MSPs more than ever.”