ManageEngine OpManager Plus 12.4 review: Ideal for VM monitoring


Dave Mitchell

20 Mar, 2020

Simple licensing and a good set of built-in features make this a fine monitoring choice that’s easy to manage

Price 
£2,920 exc VAT

If you want to keep your licensing simple, OpManager is sure to appeal: pricing is based solely on the number of monitored devices, regardless of how many interfaces or elements each one has, and it starts at just £188 for a perpetual ten-device licence. There’s also an annually licensed option, which includes the NetFlow, IP service-level agreement and deep packet inspection add-ons (which must otherwise be purchased separately). Whichever licensing model you pick, VMware, Hyper-V, XenServer and Nutanix host monitoring come as standard.

This latest version of OpManager also supports Windows Server 2019 systems and improves integration with enterprise storage systems from the likes of Dell EMC and NetApp. Deployment is simpler than ever, thanks to over 8,000 predefined device templates, while dashboards can now be personalised for specific users, and alerting options include Slack and ServiceNow.

The software itself is quite light, so you don’t need to dedicate a host to its use. We had it up and running on a Windows 10 desktop in 15 minutes, and all it added was a single service and a default PostgreSQL database.

On first launch a discovery wizard steps you through entering IP address ranges, providing credentials and setting a schedule for future discovery runs. It took around ten minutes to then scan the lab network.

With this done, it’s time to turn to the OpManager web console. This can be accessed directly on the host or remotely and opens with a smart dashboard view. You can customise this with your choice of over 100 widgets, although you can’t tweak the total number of columns shown – this is determined by the size of the widgets and where you position them.

We found it a breeze to set up multiple dashboards showing details such as CPU and memory usage for individual devices, plus Active Directory availability and alarm summaries. The heatmap widget provides a grid of coloured blocks representing each device and their status, with quick links to each one, and you can set up multiple dashboard views with large-print displays, suitable for support departments.

All features are accessible from a clear ribbon menu across the top, so we were quickly able to find and inspect our switches, printers, Windows servers and Linux-based NAS appliances. The virtualisation dashboard presented plenty of detail about our VMware ESXi and Hyper-V hosts too, and you can drill down to examine host storage devices and resource usage. The VM sprawl display, meanwhile, shows idle VMs, and those with over- and under-provisioned CPU and RAM resources.

While there are a lot of features included in the price, one extra that’s worth considering is the Application Performance Management add-in, which adds details of an impressive range of applications, databases and cloud services. It snaps into the main OpManager web console, and we found it handy for keeping a close eye on our Amazon S3 cloud storage.

As for alerts, you don’t necessarily have to lift a finger: preset warning thresholds are assigned to all devices, and for the next release ManageEngine is working on adaptive thresholds that use AI and ML algorithms. If you want to get more hands-on, you can set up automated responses to specified conditions: the console’s Workflow Builder tool makes it easy to drag and drop conditions and actions, and apply them to critical devices.

Easy to deploy and simple to license, ManageEngine’s OpManager Plus is a great choice for those who want to keep their management burden to a minimum – and its built-in virtualisation monitoring makes it tempting value.

Getting started with Kubernetes


Danny Bradbury

19 Mar, 2020

Container systems like Docker are a popular way to build ‘cloud native’ applications designed for cloud environments from the beginning. You can have thousands of containers in a typical enterprise deployment, and they’re often even more ephemeral than virtual machines, appearing and disappearing in seconds. The problem with containers is that they’re difficult to manage at scale: load balancing and updating each one in turn via the command line is like trying to herd a flock of sheep by dealing with each animal individually.

Enter Kubernetes. If containers are sheep, then Kubernetes is your sheepdog. You can use it to handle tasks across lots of containers and keep them in line. Google created Kubernetes in 2014 and then launched the Cloud Native Computing Foundation (CNCF) in partnership with the Linux Foundation to offer it as an open project for the community. Kubernetes can work with different container systems, but the most common is Docker.

One problem that Kubernetes solves is IP address management. Docker manages its own IP addresses when creating containers, independently of the host virtual server’s IP in a cloud environment. Containers on different nodes may even have the same IP address as each other. This makes it difficult for containers on different nodes to communicate with each other, and because containers on the same host share the same host IP address space, they can’t use the same ports. Two containers on the same node can’t each expose a service over port 80, for example.

Understanding Kubernetes pods and clusters

Kubernetes solves problems like this by grouping containers into pods. Each container in a pod has the same IP address, and they can communicate with each other on localhost. It exposes these pods as services (an example might be a database or a web app). Collections of pods and the nodes they run on are known as clusters, and each container in a clustered pod can talk to containers in other pods using Kubernetes’ built-in name resolution.

You can have multiple pods running on a node (a physical or virtual server). Each node runs its own kubelet, an agent that ensures the containers on that node are running in the desired state, along with a kube-proxy, which handles network communication for the pods. Nodes work together to form a cluster.

Kubernetes manages all this using several components. The first is the network overlay, which handles networking between different pods. You can install a variety with a range of capabilities, including advanced ones like the Istio service mesh.

The second component is etcd, which is a database for all the objects in the cluster, storing their details as a series of key:value pairs. etcd runs on a master node, which is a machine used to administer all the worker nodes in the cluster. The master node contains an API server that acts as an interface for all components in the cluster.

A node controller running on the master node handles when nodes go down, while a service controller manages accounts and access tokens so that pods can authenticate and access each other’s services. A replication controller creates running copies of pods across different nodes that run the same services, sharing workloads and acting as backups.

Installing and running Kubernetes

Installing Kubernetes will be different on each machine. It runs not just on Linux, but also on Windows and macOS. In summary, you’ll install your container system (usually Docker) on your master and worker nodes. You’ll also install Kubernetes on each of these nodes, which means installing these tools: kubeadm for cluster setup, kubectl for cluster control, and kubelet, which registers each node with the Kubernetes API controller.

You’ll enable your kubelet service on each of these nodes so that it’s ready to talk to the API. Then initialise your cluster by running the kubeadm init command on your master node. This will give you a custom kubeadm join command that you can copy and use to join each worker node to the cluster.
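Under those assumptions, the bootstrap sequence might look something like this sketch (the address, pod CIDR and placeholder token are illustrative; the exact flags depend on the network overlay you choose):

```shell
# On the master node: initialise the control plane
# (the pod CIDR shown here suits some overlays, such as Flannel)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm init prints a join command with a fresh token;
# run it on each worker node, for example:
sudo kubeadm join 192.168.1.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master, check that the workers have registered:
kubectl get nodes
```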

After this, you can create a pod. You define the pod’s characteristics using a configuration file known as a PodSpec. This is often written in YAML (“YAML Ain’t Markup Language”), which is a human- and machine-readable configuration format. Your YAML file will define the namespace that your pod exists in (namespaces let you partition a single physical cluster into multiple virtual clusters).

The PodSpec also defines the details for each container inside the pod, including the Docker images on which they’re based. The file can also define a pod-based volume so that the containers can store data on disk and share it. You can create a pod using a single command – kubectl create – passing it the name of your YAML file.
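As a minimal sketch, a PodSpec for a two-container pod sharing a pod-based volume might look like this (the names, namespace and image tags are illustrative assumptions, not taken from the article):

```yaml
# pod.yaml -- a hypothetical two-container pod
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  namespace: demo            # the namespace the pod exists in
spec:
  containers:
  - name: web
    image: nginx:1.17        # Docker image this container is based on
    volumeMounts:
    - name: shared-data      # both containers mount the same volume
      mountPath: /usr/share/nginx/html
  - name: content-writer
    image: busybox:1.31
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}             # pod-scoped volume for sharing data on disk
```

You would then create the pod with `kubectl create -f pod.yaml`.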

Running copies of a pod for resilience and workload sharing is known as replication, and a collection of replicated pods is called a replica set. While you can handle replica sets directly, you’ll often control them using another kind of Kubernetes object known as a deployment. These are objects in the Kubernetes cluster that you use to create and update replica sets, and clear them away when you’re done with them. Replica sets can contain many pods, and a deployment gives you a strategy to update them all (adding a new version, say).

A YAML-based deployment file also contains a PodSpec. After creating a deployment (and therefore its replica pods) using a simple kubectl create command, you can then update the whole deployment by changing the version of the container image it’s using. Do this using kubectl set image, passing it a new version number for the image. The deployment now updates all the pods with the new specification behind the scenes, taking care to keep a percentage of pods running at all times so that the service keeps working.
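A deployment manifest embeds a PodSpec as its template and adds a replica count; this sketch (names and image versions are assumptions) would keep three copies of the pod running:

```yaml
# deployment.yaml -- hypothetical deployment managing a replica set
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # desired number of pod copies
  selector:
    matchLabels:
      app: web             # which pods this deployment manages
  template:                # the embedded PodSpec
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17  # bump this version to trigger a rolling update
        ports:
        - containerPort: 80
```

After `kubectl create -f deployment.yaml`, a rolling update to a newer image would be triggered with something like `kubectl set image deployment/web-deployment web=nginx:1.18`.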

This is all great, but how do we actually talk to and reference these pods? If we have, say, ten web server pods in a replica set, we don’t want to work out which one’s IP to visit. That’s where Kubernetes’ services come in. We define a service that exposes that replica set using a single IP address and a service name like ‘marketing-server’. You can connect to the service’s IP address or the service name (using Kubernetes’ DNS service) and the service interacts with the pods behind the scenes to deliver what you need.
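A service manifest along these lines (the name and ports are illustrative) would give such a replica set a single stable address, reachable inside the cluster by the name `marketing-server` via Kubernetes’ DNS:

```yaml
# service.yaml -- hypothetical service fronting the replicated pods
apiVersion: v1
kind: Service
metadata:
  name: marketing-server
spec:
  selector:
    app: web            # traffic is routed to any pod with this label
  ports:
  - port: 80            # port the service exposes
    targetPort: 80      # port the pods listen on
```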

That’s a short introduction to Kubernetes. As you can imagine, there’s plenty more to learn. If you’re hoping to manage cloud-native services in any significant way, you’re going to bump up against it frequently, so it pays to invest the time in grokking this innovative open source technology as much as you can. With Kubernetes now running on both Amazon Web Services and Azure alongside Google’s cloud service, it’s already behind many of the cloud services that people use today.

Paessler PRTG Network Monitor 19.4 review: Outstanding cloud monitoring


Dave Mitchell

18 Mar, 2020

An affordable and feature-rich monitoring solution that will keep an eye on just about anything on your network

Price 
£3,832 exc VAT

If you have a diverse range of hardware and systems to keep an eye on, Paessler’s PRTG Network Monitor could be the perfect solution. It supports no fewer than 257 sensor types – all of which are included in the standard package – so there’s a good chance it will work with everything on your network.

When we say everything, we mean it. PRTG can keep tabs on servers, switches, routers, Hyper-V, VMware and Citrix XenServer hosts, and plenty of cloud services, business apps and storage providers. And since the software updates itself automatically, you’ll get access to new capabilities as soon as they become available: the latest release brings a new health sensor for Fujitsu’s iRMC server management controller, and even expands into Internet of Things territory with a sensor that collects data from IoT-type devices.

Despite all this, you don’t need a powerful server to install the software. You can run PRTG locally on a modestly specified system, or let Paessler host it in the cloud for you. You don’t pay for more features than you need, either, thanks to Paessler’s sensor-based licensing. At first you might be alarmed to see how quickly your sensor count gets eaten up, but this is because when PRTG runs its first discovery, it automatically assigns a range of sensors to each device it finds: our Hyper-V server alone accounted for 51 sensors. Happily, it was easy to review these and delete the ones we didn’t require so they were available for use elsewhere.

The discovery process also sets default alert triggers for each sensor, with alerting options that include mobile push notifications, email, Slack and Microsoft Teams. You can also set alerts to trigger actions such as restarting a service.

The main web console presents not only a complete status overview of your network, but a helpful tree view with all systems tidily organised into hierarchical groups. Moving devices between groups causes them to automatically inherit settings such as discovery schedules and login credentials. 

Spotting problems is easy, as sensors are colour-coded to indicate whether they’re in up, down, paused or warning states and you can instantly drill down into them for more detail. You can also pull up views of the top ten sensors for uptime, downtime, CPU usage, fastest website responses and more.

Cloud support is a real strength of PRTG. It includes seven different Amazon CloudWatch sensors, plus others for Google Drive, Dropbox and OneDrive, while the SaaS sensor keeps an eye on cloud application platforms such as Office 365. Server hardware gets plenty of attention, too: along with a global IPMI sensor, PRTG can directly monitor Dell’s iDRAC controllers and report on physical storage devices and power.

As an alternative to using the web console, PRTG also comes with native Windows and macOS desktop apps, which replace the older PRTG Enterprise console. Alongside full device tree views, the Windows app allowed us to manage sensors, edit multiple objects, drag and drop devices into new groups and enable system tray alerts. The free Android and iOS apps are excellent, giving you convenient remote access to the PRTG server and all sensor data. With the app loaded on our iPad, we had no problem connecting to the core PRTG server, pulling up sensor data on selected systems and receiving push notifications when sensor thresholds were breached.

Businesses that want to monitor everything on their network without having to worry about extra costs or unsupported devices will find Paessler’s PRTG Network Monitor a fine choice. It dishes out sensors a bit more liberally than you’ll probably want, but these can be easily moved to where they’re needed, resulting in a monitoring solution that’s not only highly capable but good value.

Panda Adaptive Defense 360 review: Security in black and white


Dave Mitchell

17 Mar, 2020

Panda’s innovative cloud endpoint protection service fills the gaps other security solutions leave behind

Price 
£2,147 exc VAT

Panda’s Adaptive Defense 360 (AD360) takes cloud-hosted security to the next level, combining a wealth of endpoint protection features with data control, encryption and patch management tools. This makes it appealing to businesses with GDPR compliance on their minds, as they can protect endpoints from malware, keep them updated with the latest patches and stop data containing PII (personally identifiable information) from leaking, all with a single tool.

AD360’s advanced protection module analyses and classifies every application being run on Windows endpoints and only blocks those it doesn’t know about. It doesn’t block them permanently, though: Panda’s cloud service checks the app’s security posture in the background and, if it’s deemed to be safe, will instruct the endpoint client to allow it through.

AD360’s endpoint protection features are extensive, including file, email and web antivirus, a firewall, web filtering and removable device controls for Windows systems. Exchange servers are supported too, and AD360 provides separate antivirus, antispam and attachment content filtering components.

The data protection module scans protected endpoints using machine learning algorithms and regular expressions to detect PII content in a wide range of file formats. It keeps track of all activity and can tell you what each user has been doing with these files, such as opening, editing and renaming them, sending and receiving them via email, or copying them to removable media.

Panda Adaptive Defense 360 review: Deployment

Deployment is undemanding, thanks to endpoint agents for Windows, macOS, Linux and Android, which can be downloaded from the portal or emailed as a web link. A quicker option for installation on the LAN is to install the agent on one machine first and designate it as a discovery computer.

This scans the network and presents a list of all discovered devices, where you select them and push the agent out remotely. Either way, it only took us a minute to load it on each of our Windows 10 clients after which it contacted the cloud service and applied all our predefined settings. 

All endpoints are dropped into a default group with a base security profile for immediate protection, but you can easily create your own groups, each with a set of custom profiles. These are used to define active security services, firewall rules and update frequency, while web filtering offers over 60 categories to block or allow and can use daily schedules to determine when it’s active.

Initially, you run the advanced protection in ‘audit’ mode where it gathers information about your everyday apps. When you’re ready, you can set it to ‘hardening’ mode which will block unknown external programs until they’ve been assessed, while the ‘lock’ mode includes all local apps as well.

Panda Adaptive Defense 360 review: Patch management

Patch management is an optional feature and requires the endpoint protection or adaptive defense components to be licensed. As with Avast’s Business Patch Management (BPM), it can’t be run on its own but Panda has made a far more professional job of implementing it.

Like Avast’s BPM, Panda requires Windows automatic updates to be disabled, but unlike Avast it handles this for you: when creating a patch management policy, you can request that automatic updates be disabled, and we found this worked perfectly on all our Windows 10 test clients with no manual intervention required.

Profiles set a scan frequency of between once an hour and once a day, and after scanning all our clients, Panda created a list of available updates separated into five criticality levels, along with non-security-related and service pack groups. Tasks are used to deploy patches and include client groups, a schedule, selected patch groups and third-party products from the software inventory that you also want patched.

Panda then just gets on with the job of patching and provides a task status view that shows which clients are patched and those in progress. If users try to reboot their system during this process, they’ll receive a pop-up message advising them that patching is in progress.

Panda Adaptive Defense 360 review: Data control

The data control component is fully integrated into the web portal and uses profiles to determine what it should search for. To scan and index Office documents, each Windows endpoint requires the Microsoft Filter Pack 2.0 to be installed, which we downloaded straight from the AD360 portal.

You can choose to index only text files but if you opt to index everything on each client, the first run will take many hours and possibly a day. Even so, it’s worth the wait as Panda came back with a heap of valuable information about files residing on our clients that contained PII.

The portal separates them into groups such as personal ID, passport, credit card and phone numbers, email addresses and bank account details, and clicking on a graph category takes you to a list of clients with details of the exact file locations. We could run advanced searches on selected clients to look for keywords and phrases in a range of file types, and use the portal to remotely delete unwanted files.

The advanced visualisation tool takes this further, as it’ll tell you what actions have been carried out on these files and when, the application that accessed them, the user responsible and exfiltration risk levels. It provides a lot more information than this though, as it can present detailed reports and graphs on security incidents, malware detections and app controls.

Panda Adaptive Defense 360 review: Verdict

Panda’s Adaptive Defense 360 is a clever cloud security solution that delivers a wealth of endpoint protection features at a great price. It’s easy to deploy and manage, offers sophisticated data control features and whereas other security vendors stumble with patch management, Panda has perfected it.

1&1 Ionos HiDrive Business Pro review: Simple but unsophisticated


Dave Mitchell

13 Mar, 2020

Good, simple cloud file sharing – but administrative controls are minimal and it’s comparatively pricey

Price 
£20 exc VAT per month

One of Europe’s largest hosting companies, 1&1 Ionos is a relative newcomer to the file-sharing party, with its HiDrive service offering a simple cloud file syncing and collaboration solution.

Three plans are available: we tested the top-dog Pro option, which starts at ten users and dishes up 2TB of cloud storage for £20 per month on a one-year contract. That may sound like a bargain, but note that your 2TB isn’t per user, but a total that’s shared across all users.

One notable thing about HiDrive is that it includes a backup service that creates copies of all of your cloud data and retains them for up to a year. This isn’t as smart as the file versioning systems offered by many competitors, but it can be run as often as every four hours, and lets users easily download selected backups from the cloud. The only catch is that these backups count against your storage allocation.

Adding new members to your team is a breeze: invitations can be emailed from the cloud portal and you can choose whether or not each account gets administrative privileges. On opening the invitation, new users will find a link to the web portal, from which they can download the HiDrive desktop app for Windows and choose which cloud folders they want synced to their desktop. Cloud folders can also be conveniently mapped to a local password-protected drive letter. 

If your office runs entirely on Windows, this is great – but be aware that there’s no desktop client for Mac and Linux users, so they will need to use the web portal to get at their data. Alternatively, the administrator can enable access via various protocols, including CIFS/SMB, WebDAV, FTP, SFTP and rsync. Another option is to use the HiDrive mobile apps: the iOS version, running on our iPad, let us view all our cloud data, upload and share files, use the camera to scan documents to the cloud and back the device up.

Another limitation of HiDrive is that it doesn’t give users the ability to share their own personal folders with other team members – something that most competing solutions allow. It does, however, provide a general-access Public folder, which you can make available to all users, and which everyone can optionally synchronise to their desktop like a personal folder. 

While users can’t share folders, they can securely send file links to others – including those without a HiDrive account – directly from either the web portal or Windows Explorer. It’s good to see that, when creating a link, you’re prompted to apply password protection, a download limit and an expiry date. You can also send email requests to non-HiDrive users inviting them to upload files to a password-protected folder.

All data is secured in transit using SSL and encrypted on the HiDrive cloud servers; if you choose the Pro plan then there’s also an end-to-end encryption option, although, surprisingly, it’s actually left to the user to choose whether to apply this and to manage their own encryption keys – something we suspect administrators won’t be delighted about.

The Pro plan also includes a scheduled device backup function, allowing users to have selected local folders automatically copied up to the cloud. Data can be restored from the desktop app or from the portal; again, though, administrators have no control over these processes.

At £20 per month for a shared 2TB of cloud storage, HiDrive Business Pro isn’t the cheapest cloud file-sharing solution out there, and we would be happier if managers were able to take full control of user activities. It is easy to use, though, making it a good fit for smaller businesses seeking uncomplicated file-sharing and syncing services for Windows.

G Suite hits two billion users as remote working surges


Bobby Hellard

13 Mar, 2020

Google’s G Suite service, which includes Gmail, Google Docs, Hangouts and more, surpassed two billion monthly active users at the end of last year, according to its general manager.

Javier Soltero, who is also the vice president of G Suite, made the announcement to Axios on Wednesday.

He declined to give a detailed breakdown of the numbers, according to Axios, so there is little information on what products are used most or how many pay for the service compared with free users.

G Suite has long been seen as the challenger to Microsoft’s Office services, and Soltero knows both extremely well, having left the latter for Google last year.

But with many companies around the world either entering a period of remote working or well into self-isolation, Google’s suite of productivity services is already raking in healthy numbers.

“That’s a staggering number… These products have incredible reach,” he said. “Changing the way people work is something we are uniquely positioned to do.”

At the start of the month, Google announced parts of its enterprise service would be free, for a limited time, to help mitigate the impact of COVID-19 on businesses entering periods of mass remote working. This mainly focused on Hangouts Meet, its video conferencing service.

Likewise, Microsoft is offering a free six-month trial for Teams, according to Business Insider; the offer was originally just for schools and businesses in China but has now been expanded globally, and will also come with an update that lifts the restrictions on the number of users per team.

According to Vox, Teams saw a 500% increase in meeting, calling and conference usage in China towards the end of January, with those numbers likely to be mirrored in Europe in the coming weeks and, possibly, months.

Pentagon to ‘reconsider certain aspects’ of JEDI Microsoft cloud contract award

The Pentagon has asked a federal court for 120 days to ‘reconsider certain aspects’ of the decision to award Microsoft the $10 billion (£7.9bn) federal cloud computing contract.

The ruling, in a court order published on Thursday, noted that Amazon Web Services (AWS), who last month won a temporary injunction against the award, would ‘likely be able to show that the Department of Defense (DoD) erred’ in its evaluation.

Both parties would not be able to re-evaluate their proposals in terms of adding new offerings, aside from one particular price scenario, the order added.

Microsoft had been announced as the winner of the JEDI (Joint Enterprise Defense Infrastructure) contract in October, to the surprise of many in the industry. Of particular interest to pundits was the explanation, in the DoD’s news release, that the award ‘continued [its] strategy of a multi-vendor, multi-cloud environment… as the department’s needs are diverse and cannot be met by any single supplier’.

AWS has been running the CIA’s cloud operations for the past five years, with multiple reports last month saying the agency was looking to upgrade its offering in a ‘tens of billions’ deal. A month later, it was reported that AWS had filed with the US Court of Federal Claims to protest the decision, with chief executive Andy Jassy telling employees at an all-hands meeting that potential presidential interference made the contract process ‘very difficult’ for government agencies.

Jassy also reportedly said during the meeting that AWS was ‘about 24 months ahead of Microsoft’ when it came to functionality and maturity. Per the terms of last month’s injunction, Amazon has put up $42 million to cover costs should the final ruling fall to Microsoft.

Cloud pundit Bill Mew, however, said the update shows how the story has moved away from technology to one purely around procurement. “JEDI has gone from being about the comparative merits of a single cloud or multi-cloud approach to being a case study in procurement dysfunction,” Mew told CloudTech. “The lobbying, dirty tricks and arguments about political bias have completely eclipsed any technology arguments. This in itself shows how badly JEDI has gone off the rails.”

Mew, whose career has not only spanned 15 years at IBM but a stint as an officer in the Royal Navy, analysed the DoD function alongside the UK government’s upcoming review of foreign policy, defence, security and international development. “Compared to JEDI, even UK defence procurement looks good,” he added.

CloudTech has reached out to Amazon and Microsoft for comment and will update this story as and when it arrives.

You can take a look at the court order, as published by the Washington Post, here. (Disclosure: Jeff Bezos, CEO of Amazon, is also owner of the Washington Post).


Pentagon requests time to reconsider Microsoft JEDI bid


Bobby Hellard

13 Mar, 2020

The US Department of Defence (DoD) has requested permission to reconsider parts of its decision to award its $10 billion cloud migration contract to Microsoft, court filings have revealed.

These concern parts of Microsoft’s bid that detail price scenarios and online marketplaces which have been deemed not “technically feasible” by the US Federal Court of Claims.

Work on the Joint Enterprise Defence Infrastructure (JEDI) project was halted in February after AWS launched a legal challenge arguing that the evaluation of the bidding process was flawed; the tech firm also suggested it was subject to unfair political influence.

Federal Claims Judge Patricia Campbell-Smith, who ordered the suspension of Microsoft’s work on JEDI, said that AWS was “likely to succeed” in its legal challenge, as the DoD had improperly evaluated a Microsoft storage price scenario.

Now lawyers for the US government have asked for “120 days to reconsider certain aspects of the challenged agency decision”, according to court filings made late on Thursday.

“DoD does not intend to conduct discussions with offerors or to accept proposal revisions with respect to any aspect of the solicitation other than price scenario,” the filing said, according to Reuters.

There are no exact details on what the issue is with the pricing proposed by Microsoft but the company feels it is an easy problem to solve. A spokesman said in a statement to Bloomberg that it supports the decision to reconsider a small number of factors “as it is likely the fastest way to resolve all issues and quickly provide the needed modern technology to people across our armed forces”.

Political influence, namely from President Donald Trump, is also a significant part of AWS’ legal challenge, but as yet, the courts have not mentioned any action on that element of the case.

“We are pleased that the DoD has acknowledged ‘substantial and legitimate’ issues that affected the JEDI award decision, and that corrective action is necessary,” a spokesman for AWS said to Reuters.

Is the best cloud a small cloud?


David Howell

13 Mar, 2020

Since the inception of the cloud, large monolithic infrastructures have been the norm. Azure, AWS and Google Cloud all offer almost infinite scalability and relatively low cost. However, is the dominance of the big three cloud service providers waning?

Businesses have been increasingly creating smaller hybrid cloud structures to meet their needs. By mixing on-prem and larger hosted platforms, they have been afforded greater choice and the ability to develop specific cloud infrastructures.  

However, are we moving into an era where bespoke cloud services become popular, as businesses look to create ‘boutique’ clouds – offering more personalisation and a specific set of features often linked to one service application?

Speaking to IT Pro, Nick McQuire, senior vice president of enterprise research at CCS Insights, says: “The definition of what a boutique cloud is remains open. You could argue, for instance, that a private managed cloud is also a boutique cloud, so I think we need to define what we mean.”

“I think the future of cloud services is a real mix. The hyperscalers will always be there, as will the hybrid cloud infrastructures,” McQuire continues. “Inside of these, we may see more specialised cloud services, which could be described as ‘boutique’ for specialist sectors such as financial services, or to meet specific regulatory requirements.”

The industry is already seeing a move to multi-cloud deployments: Microsoft’s Azure Arc, for example, focuses on the burgeoning IoT and edge computing space and is currently in public preview. The idea is to bring VMs and containers to any infrastructure, irrespective of size or need. Enterprises with specific requirements for their cloud deployments could create a boutique cloud within the Microsoft environment.

For many industry watchers, the next battle will take place across the multi-cloud, as enterprises continue to focus on building the bespoke services they need. Already the major players are jockeying for position: Microsoft has Azure Arc, Google has debuted Anthos, IBM will run its services on multi-cloud management systems and Cisco has its CloudCenter Suite.

The adoption of cloud services will continue. According to Gartner, by 2022 more than a quarter (28%) of spending on essential IT services will shift to the cloud.

Michael Warrilow, research vice president at Gartner, explains: “Cloud shift highlights the appeal of greater flexibility and agility, which is perceived as a benefit of on-demand capacity and pay-as-you-go pricing in cloud.”

Is smaller better? It all depends on the specific business need. What is certain is that the cloud environment is rapidly changing: we are moving out of the cloud’s first phase of development towards more refined services and flexible infrastructures.

Compact and bijou

Research from Flexera illustrates how hybrid cloud adoption has expanded, with 84% of enterprises having a multi-cloud strategy. The proportion of enterprises with a hybrid approach (combining public and private clouds) grew to 58% last year, according to its survey.

Flexera also revealed: “Among enterprises, the central IT team is typically tasked with assembling a hybrid portfolio of clouds. This year, while 31% of enterprises see public cloud as their top priority, a combined 45% of enterprises see a hybrid cloud or a balanced approach between public and private as the biggest focus. Only 9% of enterprises are focusing on building a private cloud, and 6% see their top priority as using a hosted private cloud.”

The multi-cloud and hybrid cloud have continued to expand to become the dominant form of cloud service infrastructure. But as we move closer to real-world deployments of 5G and edge computing, the multi-cloud may change again to become more boutique as services specialise.

CCS Insights’ McQuire explains: “If your business is a complex IT environment, then existing suppliers like IBM and Red Hat, for instance, will be able to offer you the services your business requires. The mix of on-prem and public cloud isn’t going to go away anytime soon. However, what I think we are beginning to see the first green shoots of is the large players pushing into specific industries. IBM last year, for instance, launched a cloud service aimed at the financial sector.”

Commenting on the research his company conducted with Freeform Dynamics, Hiren Parekh, UK country leader for OVH, adds: “Our results show strong interest in working with specialist cloud providers, with 21% of organisations committed to using providers aligned to specific applications or infrastructure; 18% are committed to using specialist providers focused on particular use cases and 17% are committed to local providers who cover a specific geography. This echoes what we are hearing from our customers, suggesting demand for cloud providers of all sizes and underlining the popularity of the multi-cloud approach.”

A specific need will drive the business case for smaller cloud services. Large cloud deployments can become unwieldy with businesses often feeling they have little control. Hybrid cloud infrastructures have addressed these anxieties to a degree, but we could see more refinement in how companies buy and organise their cloud services over the short term.

A small cloud future

Smaller cloud service providers such as Vultr, Packet, UpCloud and Linode offer compact and specific services, which could define both what the boutique cloud means today and how some cloud services could be bought over the next few years, particularly by smaller businesses.

“I see a cultural shift in risk appetite which we see across the whole spectrum of technology,” says Justin Day, CEO of Cloud Gateway. “Businesses are seeking out smaller cloud service suppliers because they offer better flexibility and more agile working while also being more focused or simply better at delivering more niche services. Because at the end of the day, it’s about allowing businesses of all sizes to get the very best out of their cloud systems and leverage the very best out of those service suppliers.”

Adam Bradley, UK MD of Ekco, believes service levels are pushing companies towards smaller cloud service providers. “I think people are just sick of bad service,” he says. “They are sick of waiting for someone who knows what they are talking about to call them back, and they are tired of having to do all the hard work themselves.”

Cloud services have become a case of horses for courses. CTOs tasked with adopting an agile cloud-based IT infrastructure have often found themselves managing what could be described as cloud sprawl – running several cloud deployments from several vendors. Businesses are now rationalising their use of the cloud.

As cloud services have matured, the door has opened for smaller service providers that can focus on specific sectors or industries. Building cloud services for these highly defined spaces is a crucial trend through 2020. Whether, in the medium term, these boutique vendors can remain viable in the face of the large cloud suppliers’ shift towards multi-cloud and specialist cloud services remains to be seen.

HPE reveals textbook-sized micro server


Daniel Todd

12 Mar, 2020

HPE has expanded its Small Business Solutions portfolio with the new ProLiant MicroServer Gen10 Plus, which the company claims provides industry-leading capabilities to help SMBs drive growth and digital transformation.

Designed to address businesses’ budget, IT and space needs, the system offers automation, remote management and security capabilities, as well as a choice of Intel Pentium and Intel Xeon E processors.

Customers can use the new MicroServer, which is the size of a typical hardback textbook, for less than $20 per month, and HPE says the offering is as easy to set up as a smartphone. The new offering weighs in at just 10 pounds and is a third of the size of existing products on the server market, the firm added.

“We are committed to helping small businesses innovate, serve their customers, and drive growth and digital disruption by empowering them with enterprise-class technologies that uniquely address their needs for IT expertise, budget and space,” said Tim Peters, vice president and general manager at Global SMB & Mid-Market, HPE.

“The design of our latest HPE MicroServer and strategic pricing model was inspired by our SMB customers to meet their expectations for the most economical, secure and easy-to-manage solutions that supports their entire business operation.”

The new ProLiant MicroServer Gen10 Plus offers a range of capabilities that HPE said enable faster performance, data protection, automation and ease of management. For starters, the inclusion of Intel Pentium or Intel Xeon E processors delivers compute support for virtualisation and database workloads while registering at just 36 decibels, allowing versatile placement.

The system is also the first ProLiant MicroServer – and the industry’s only server family – to provide the HPE-exclusive silicon root of trust technology, the company explained, which extends security protection at the silicon level.

Users also get the cloud-based AI management tool HPE InfoSight for Servers, HPE Integrated Lights Out 5 (iLO 5) for remote management, and flexible options for both in-office operations and the cloud.

“SMBs are looking for easy-to-manage solutions that can scale as needed. Solutions such as this one from HPE address this demand with small businesses by delivering enterprise-grade technologies, which combine servers, software, networking and cloud capabilities that are easier for small businesses to adopt and manage regardless of their in-house IT capabilities,” commented Shari Lava, research director, Small Medium Business (SMB) Research Program at IDC.