Six things to keep in mind when shopping for a cloud backup solution

When you’re looking for a cloud backup solution for your business, you’re not just thinking about storing a few files in a remote location. You’re planning your disaster recovery strategy. You want a reliable tool that will store your business data and help you recover it when you need it. The thing is, your data is spread over different types of computers, devices and applications, which makes your strategy more complicated. There are a few things you should keep in mind.

You want a cloud backup solution that can protect local machines

The term ‘local machine’ goes beyond the simple desktop. You’re using computers for all your daily tasks and they’re essential for your business. For example, they host your:

  • Services to manage your network at the office (like DHCP, Wi-Fi, router)
  • Local domain services (like Active Directory, DNS)
  • Print services (sharing multifunctional printers in the company)
  • Physical access control system (surveillance cameras, cards or biometrics to enter company buildings)
  • System management for food and beverage machines

And the list goes on. Any downtime in these services can hurt your company’s productivity. A cloud backup solution for business that helps you do a full backup of the operating system and your computer applications will ensure that you can always get back on your feet.

You want a cloud backup solution that can protect your servers

Your servers can be hardware servers or virtual machines (VMware, Hyper-V, etc.). They can run any Linux distribution or Windows as the operating system. The most important thing is the service they are hosting:

  • Public web site
  • eCommerce application
  • Accounting platform
  • Email system
  • Customer relationship management

Your servers could be on-premises or in the cloud; it doesn’t matter. When you’re looking for a cloud backup solution for business, you want to recover quickly if something goes wrong, like a user error or a ransomware attack.

You want a cloud backup solution that can protect your databases and applications

A modern IT application is made up of several distinct components. It often follows a 3-tier design with a:

  • Presentation interface (the client software or web page with interactive windows)
  • Business logic (a server that receives and processes requests, and pulls the right data)
  • Database tier (the data and its management system)

For example, with your Outlook client software, you write emails that the SMTP email server will send, and those emails will be stored in a mailbox database so you can read them later or forward them to someone else.

Your cloud backup solution for business should be able to handle this complexity as well as its constraints. For example, it should know how to handle files that are ‘open,’ meaning being read or written by the system. It should allow you to back up all the tiers and restore them in the right sequence, so they can run together again after an incident.
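As a rough illustration of that restore ordering, here is a minimal sketch in Python. The tier names and the restore_tier() helper are hypothetical placeholders rather than any vendor’s actual API; the point is simply that the data tier comes back first and the client-facing tier last.

```python
# Minimal sketch of restoring a 3-tier application in dependency order.
# restore_tier() and the tier names are hypothetical placeholders,
# not a real backup product's API.

RESTORE_ORDER = [
    "database",        # data tier first, so the business logic has data to query
    "business_logic",  # application servers next
    "presentation",    # client-facing tier last, once its dependencies are up
]

def restore_tier(tier: str) -> None:
    # Placeholder: a real product would pull the tier's backup image,
    # restore it, and check the service is healthy before moving on.
    print(f"Restoring {tier} tier...")

def restore_application() -> None:
    for tier in RESTORE_ORDER:
        restore_tier(tier)

restore_application()
```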

You want a cloud backup solution that can protect the full diversity of end-user devices

Gone are the days when employees were only using desktops or laptops at the office.

  • Today’s users are also accessing and processing business data from smartphones and tablets
  • They’re working from public or home networks
  • They’re using their personal devices for work
  • Business data is cohabiting with personal files

Every IT administrator wants to make sure the data on these devices is protected. Your cloud backup solution for business needs to back up as much data as possible from mobile devices.

You want a cloud backup solution that offers centralized management and cloud-to-cloud capabilities

IT administrators are sometimes forced to switch from one product to another to do their backup tasks. This is because some vendors will only offer backup for specific scenarios. However, a few vendors will cover them all, so you can orchestrate your backup strategy from a single console.

A cloud backup that offers a centralized management tool will help businesses:

  • Organize the backups based on the type of device or application
  • Prepare different schedules to run the backup tasks
  • Delegate backup management activities to other people
  • Manage backups for different domains
  • Restore data remotely on a device or computer

Being able to back up local machines, virtual machines, cloud servers, applications, databases and mobile devices from a single interface is the IT person’s dream.
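To picture what such a console manages, here is a minimal sketch of how centrally managed backup plans might be described, grouped by workload type with their own schedules and delegated administrators. The field names and values are illustrative assumptions, not any vendor’s configuration schema.

```python
# Illustrative sketch of centrally managed backup plans, grouped by
# workload type. Field names and values are assumptions for illustration,
# not a specific vendor's configuration format.

backup_plans = {
    "workstations": {
        "schedule": "daily 22:00",
        "scope": "full system image",
        "delegated_admin": "helpdesk-team",
    },
    "servers": {
        "schedule": "hourly",
        "scope": "OS, applications and databases",
        "delegated_admin": "infrastructure-team",
    },
    "mobile_devices": {
        "schedule": "daily 12:00",
        "scope": "business data only",
        "delegated_admin": "helpdesk-team",
    },
}

def plans_for(admin_group: str) -> list[str]:
    """Return the workload types a given team has been delegated."""
    return [name for name, plan in backup_plans.items()
            if plan["delegated_admin"] == admin_group]

print(plans_for("helpdesk-team"))  # ['workstations', 'mobile_devices']
```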

You don’t want a file sync and share solution

Many vendors will present their file sync and share solution as a cloud backup solution for business. We both know that it’s not.

A file sync and share solution will only help you:

  • Share files between different users
  • Have multiple users edit the same version of a file in real time
  • Get the latest edited version of a file synced on multiple devices

A file sync and share solution works well for documents, spreadsheets, slide decks and multimedia. It won’t work for backing up servers, workstations, databases and emails.

In other words, you can’t just use Dropbox, Google Drive, OneDrive or similar platforms as a cloud backup solution for your business.

Conclusion

We’ve told you what to keep in mind when you’re shopping for your cloud backup solution for business. Remember, the success of your disaster recovery strategy depends on the solution you choose.

A GDPR Compliance Journey | @DevOpsSummit #BigData #DevOps #FinTech #AI #ML #DX

In preparation for General Data Protection Regulation (GDPR) compliance, a global 100 financial services organization embarked on a journey to assess its core information processing environments with the objective of identifying opportunities to strengthen its data privacy protection programs. This article focuses on the technology challenges, approach, and lessons learned for the centralized testing environment.

[slides] Modernize Your Applications | @CloudExpo @InteractorTeam #DX #AI #IoT #SDN

Most technology leaders, both contemporary and from the hardware era, are reshaping their businesses around software. They hope to capture value from emerging technologies such as IoT, SDN, and AI. Ultimately, irrespective of the vertical, it is about deriving value from independent software applications participating in an ecosystem as one comprehensive solution. In his session at @ThingsExpo, Kausik Sridhar, founder and CTO of Pulzze Systems, discussed how, given the magnitude of today’s application ecosystem, tweaking existing software to stitch various components together leads to sub-optimal solutions. This definitely deserves a re-think, and paves the way for a new breed of lightweight application servers that are micro-services and DevOps ready!

Are availability zones a disaster recovery solution?

I recently read an article which began “you can’t predict a disaster, but you can be prepared for one.” It got me thinking. I can hardly remember a time when disaster recovery was a bigger challenge for infrastructure managers than it is today. In fact, with ever increasing threats to IT systems, a reliable disaster recovery strategy is now absolutely essential for an organisation, regardless of its vertical market.

What does all this have to do with availability zones, I hear you cry? Furthermore, what is an availability zone and is it a good disaster recovery strategy? The purpose of availability zones is to provide better availability while protecting against failure of the underlying platform (the hypervisor, physical server, network, and storage). They give customers more options in the event of a localised data centre fault. Availability zones can also allow customers to use cloud services in two regions simultaneously if these regions are in the same geographic area.

Let us begin our discussion about availability zones by looking at the core capabilities that provide availability and resilience. Dynamic Resource Schedulers (DRS) handle Virtual Machine (VM) placement, that is, deciding which host should run a given VM. A DRS also moves VMs around a cluster based on usage in order to balance out the cluster. High Availability (HA) provides the capability to restart VMs on other hosts in a cluster when either a host fails or a VM crashes for any reason.
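To make those two capabilities concrete, here is a purely conceptual sketch in Python: DRS-style placement picks the least-loaded host, and HA-style recovery restarts a failed host’s VMs elsewhere. The host and VM names are invented and this is not any hypervisor’s actual API.

```python
# Conceptual sketch of DRS-style placement and HA-style restart.
# Not a hypervisor API; just the scheduling logic in miniature.

hosts = {"host-a": ["vm1", "vm2"], "host-b": ["vm3"], "host-c": []}

def place_vm(vm: str) -> str:
    """DRS-style initial placement: pick the host with the fewest VMs."""
    target = min(hosts, key=lambda h: len(hosts[h]))
    hosts[target].append(vm)
    return target

def handle_host_failure(failed: str) -> None:
    """HA-style recovery: restart the failed host's VMs on surviving hosts."""
    orphans = hosts.pop(failed, [])
    for vm in orphans:
        print(f"Restarting {vm} on {place_vm(vm)}")

print(f"vm4 placed on {place_vm('vm4')}")  # lands on the emptiest host
handle_host_failure("host-a")              # vm1 and vm2 restart elsewhere
```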

Now, let us look at the advantages that availability zones offer, as well as areas where they may fall short of constituting an effective disaster recovery strategy. This analysis of availability zone effectiveness will be divided based on three key challenges that cloud providers face: handling crashes or downtime, performing maintenance, and offering sufficient storage.

Crashes or downtime

It is not unusual for a cloud provider to only offer HA and not DRS. In this case, in the event of a host hypervisor crash or deliberate shutdown, VMs are restarted on other hosts because they have shared storage. This is done using an initial placement calculation. However, providers often do not have the ability to move a running VM between hosts in a cluster with no loss of service, and incorporating such a DRS capability would strengthen disaster recovery preparedness.

Maintenance 

There is also a problem with this model around planned maintenance. When hosts are updated, it is not possible to move the VMs that are running on them without loss of service. Therefore, VMs occasionally have the rug pulled out from underneath them.

With this in mind, many service providers talk about a ‘Design for Failure’ model when designing resilient services. In a nutshell, this means designing cloud infrastructure on the premise that parts of it will inevitably fail. Resiliency is provided at the application level. At the very least, this requires the doubling up of all applications, and for many deployments this necessitates additional licensing and additional costs for the VMs themselves.

Storage

Another crucial area to factor into this analysis is persistent storage. In the past, storage was protected using RAID techniques. Yet as we move to the public cloud, object storage has emerged as a popular way of storing data. This method uses the availability zone topology to protect data, but only if you choose it and pay for it. To protect against individual disk failure, three copies of the data are spread across the storage subsystems.
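The overhead of that three-copy approach is easy to quantify with a quick back-of-the-envelope calculation; the 10 TB figure below is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope overhead of three-copy object storage protection.
# The 10 TB of logical data is an illustrative assumption.

logical_tb = 10
copies = 3

raw_tb = logical_tb * copies
overhead_tb = raw_tb - logical_tb

print(f"{logical_tb} TB of data consumes {raw_tb} TB of raw storage "
      f"({overhead_tb} TB of replication overhead)")
```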

For virtual machines requiring persistent storage, Elastic block storage (EBS) is often used, and is replicated within the availability zone to protect against failure of the underlying storage platform. However, EBS storage is not always replicated to other regions.

Regardless, having data replicated to another region does not mean that the VMs are available there. It only guarantees backup storage. VMs would need to be created from the underlying replicated storage. It is also important to note that replicating storage to another availability zone or region only protects against storage subsystem failure. It does not protect against storage corruption, accidental deletion, or recent threats such as ransomware encrypting the files within the storage. To that extent, it does not constitute a disaster recovery solution.

So, we return to our original question: can availability zones theoretically offer the resiliency needed for a good disaster recovery strategy? In the event of a crash, Dynamic Resource Schedulers can be used to move a VM between hosts in a cluster with no loss of service. However, when hosts are being updated, it is very difficult to move the VMs that are running on them without loss of service. As we have just discussed, redundant storage does not guarantee VM availability in other regions. Most importantly, these capabilities do not protect against data corruption or threats such as ransomware that encrypt data. Given this, a disaster recovery solution should be implemented in addition to the use of availability zones.

Cloud-to-cloud disaster recovery as a service (DRaaS) can be adopted between data centres. With the iland DRaaS solution, VMs can be rebooted within seconds in the event of a crash or downtime. iland DRaaS also offers a continuous replication solution with a journal supporting up to 30 days. This means that you can recover data if it is lost or corrupted; for example, you can recover data from a ransomware attack. Self-service testing can also be carried out whenever required, while replication carries on in the background. As customers think about migrating their traditional virtualised services to the public cloud, they need to consider crashes, maintenance, storage, and also a disaster recovery strategy.
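The value of that replication journal is easiest to see with a small sketch: given a set of journaled restore points, you recover to the most recent point taken before the corruption was detected. This is a conceptual illustration only, not the iland product’s actual interface, and the dates are invented.

```python
# Conceptual sketch of journal-based point-in-time recovery: choose the
# latest restore point taken before the corruption (e.g. ransomware
# encryption) was detected. Illustrative only, not a vendor API.

from datetime import datetime

restore_points = [
    datetime(2017, 11, 20, 1, 0),
    datetime(2017, 11, 21, 1, 0),
    datetime(2017, 11, 22, 1, 0),
    datetime(2017, 11, 23, 1, 0),
]

def pick_restore_point(corruption_detected: datetime) -> datetime:
    """Return the most recent journal entry from before the incident."""
    candidates = [p for p in restore_points if p < corruption_detected]
    if not candidates:
        raise ValueError("No clean restore point within the journal window")
    return max(candidates)

incident = datetime(2017, 11, 22, 14, 30)
print(pick_restore_point(incident))  # 2017-11-22 01:00, the last clean point
```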

Symantec chooses AWS as ‘strategic infrastructure provider’ for majority of cloud workloads

It may be holiday season in the US – but two companies who probably won’t be at the same Thanksgiving table are Amazon Web Services (AWS) and Microsoft. The two largest public cloud providers appear to have crossed paths again, this time over security giant Symantec.

Late last night, AWS issued a missive announcing that Symantec has chosen the Seattle firm as its ‘strategic infrastructure provider for the vast majority of its cloud workloads’.

According to the press materials, Symantec has “transformed legacy applications into cloud-based solutions, and built innovative, cloud-native, as well as hybrid offerings” through AWS, adding its relationship was long-term and ‘bi-directional’. Symantec built a data lake on AWS collecting tens of terabytes of data each day from 175 million endpoints and more than 57 million attack sensors.

“Symantec is committed to protecting the cloud generation through our leading security products, as well as leveraging the cloud to deliver our services,” said Raj Patel, Symantec VP cloud platform engineering in a statement. “AWS’s experience serving some of the most risk-sensitive enterprise customers was an important part of the decision to choose AWS as we execute on our enterprise Integrated Cyber Defence strategy.”

This is all well and good, yet just over a month previously Microsoft issued a release titled ‘Symantec powers consumer security with the Microsoft Cloud’, positing that Symantec was “using the Microsoft Azure cloud to help deliver its Norton consumer products to a global community of more than 50 million people and families.”

A rift? Perhaps not. It may not be multi-cloud in the strictest interpretation, but using different cloud providers for different parts of an organisation is not uncommon. Take General Electric (GE) as an example. In 2015, the company moved 300,000 of its employees to Office 365. Last month, GE chose AWS as its preferred cloud provider, according to an Amazon announcement, ‘[continuing] to migrate thousands of core applications’, while, starting next week, customers and developers using GE’s IIoT platform Predix will be able to build industrial apps on Azure.

One other slightly confusing aspect of this AWS announcement revolves around the timing. With AWS re:Invent due to kick off next week, expect a plethora of customer wins, product updates, and perhaps the occasional competitor smackdown. Last year, for instance, saw shipping carrier Matson go all-in, and Workday confirm it was using AWS as its preferred public cloud supplier.

[slides] Nordstrom’s Cloud Transformation | @CloudExpo #DX #Cloud #DevOps

Nordstrom is transforming the way that they do business and the cloud is the key to enabling speed and hyper personalized customer experiences. In his session at 21st Cloud Expo, Ken Schow, VP of Engineering at Nordstrom, discussed some of the key learnings and common pitfalls of large enterprises moving to the cloud. This includes strategies around choosing a cloud provider(s), architecture, and lessons learned. In addition, he covered some of the best practices for structured team migration and discussed ways to control cloud costs.

[slides] Hybrid Cloud-Based Apps | @CloudExpo @Cedexis #APM #Monitoring

The dynamic nature of the cloud means that change is a constant when it comes to modern cloud-based infrastructure. Delivering modern applications to end users, therefore, is a constantly shifting challenge. Delivery automation helps IT Ops teams ensure that apps are providing an optimal end user experience over hybrid-cloud and multi-cloud environments, no matter what the current state of the infrastructure is. It is critical to employ a delivery automation strategy that reflects your business rules, making real-time decisions based on a combination of real user monitoring, synthetic testing, APM, NGINX / local load balancers, and other data sources.

Google announces lower prices for NVIDIA Tesla GPUs

Google has announced price reductions of up to 36% for GPUs attached to on-demand Google Compute Engine virtual machines.

For US regions – Oregon and South Carolina – NVIDIA’s Tesla P100 GPU attached to a VM will cost $1.46 per hour, while the K80 GPU will set users back $0.45 per hour. The P100 and K80 GPUs are also available in Belgium and Taiwan.

The company added that organisations such as Shazam and oilfield services provider Schlumberger were among those taking advantage of GPUs to ‘innovate, accelerate and save money.’ Companies can utilise GPUs from Google in various ways; hardware is passed through directly to the virtual machine to focus on bare metal performance, while faster disk performance can be achieved through attaching up to 3TB of Local SSD to any GPU-enabled virtual machine.

Alongside this, Google added it was lowering the price of preemptible Local SSDs by almost 40% compared to on-demand Local SSDs – equating to $0.048 per GB-month in the US.
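Using the rates quoted above, a quick back-of-the-envelope calculation shows what a full month of continuous use would cost; the 730-hour month and the 1,500 GB SSD size are assumptions chosen for illustration.

```python
# Back-of-the-envelope monthly costs using the prices quoted above.
# The 730-hour month and the 1,500 GB SSD size are assumptions.

P100_PER_HOUR = 1.46                  # USD/hour, US regions
K80_PER_HOUR = 0.45                   # USD/hour, US regions
PREEMPTIBLE_SSD_PER_GB_MONTH = 0.048  # USD/GB-month, US

HOURS_PER_MONTH = 730

p100_monthly = P100_PER_HOUR * HOURS_PER_MONTH      # ~$1,065.80
k80_monthly = K80_PER_HOUR * HOURS_PER_MONTH        # ~$328.50
ssd_monthly = 1500 * PREEMPTIBLE_SSD_PER_GB_MONTH   # $72.00 for 1,500 GB

print(f"P100, full month: ${p100_monthly:,.2f}")
print(f"K80, full month:  ${k80_monthly:,.2f}")
print(f"1,500 GB preemptible Local SSD: ${ssd_monthly:,.2f}")
```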

Google’s focus on making GPUs more affordable is good news for customers, but it’s even better news for NVIDIA. Earlier this month, the company said in a statement that every major cloud provider now offers cloud services based on its products. Alongside this, NVIDIA’s most recent financial results showed record revenues of $2.64 billion, up 32% year-on-year.

“We hope that the price reduction on NVIDIA Tesla GPUs and preemptible Local SSDs unlocks new opportunities and helps you solve more interesting business, engineering and scientific problems,” wrote Chris Kleban, Google product manager in a blog post.

Meg Whitman to step down as HPE chief exec: Analysing the company’s fortunes

Meg Whitman is to step down as CEO of Hewlett Packard Enterprise (HPE), bringing down the curtain on a six-year tenure during which she oversaw one of the largest corporate breakups of recent years.

Whitman had in July stepped down as chairwoman of HP’s board of directors, remaining chief executive of HPE, and had previously faced speculation about her future when the chief executive’s seat at Uber dramatically became available earlier this year.

Whitman’s replacement will be HPE president Antonio Neri, a 22-year HP veteran who will take over in February. “I said for many years that the next leader of HPE should come from within the company and Antonio Neri is exactly the type of leader I had in mind,” Whitman told analysts, as transcribed by Seeking Alpha, adding the board of directors had approved the new boss. “He is a computer engineer by training, has a deep technology background and is passionate about our customers, partners, employees and culture.”

The news of Whitman’s departure inevitably pushed HPE’s fourth quarter results somewhat into the shade. The company posted Q417 combined net revenue of $7.8 billion, up 5% from the previous year, while full year 2017 revenue was at $28.9bn, down from $30.3bn for FY16.

Yet it may be apt here to assess the various initiatives Whitman has put into place to attempt to turn around HP.

First announced three years ago, Hewlett Packard split into two companies in November 2015: HP Inc, which would focus on printers, PCs and the consumer side, and HPE, which would be more attuned to the B2B side of data centres, networking and servers. Whitman told analysts yesterday that the move was “exactly the right decision because it allowed both companies to optimise for strength and invest in core strategies.”

Last year, HPE announced it would merge its enterprise services division with CSC to create a new company, DXC Technology – a move that was finalised in April this year – as well as spinning off its application software business with Micro Focus. On the acquisitions side, HPE has bought networking firm Aruba, hyperconverged infrastructure provider Simplivity, and most recently Cloud Technology Partners to bolster its cloud consulting presence and hybrid IT capabilities.

Analysts continue to position the company at the sharp end of proceedings for cloud infrastructure equipment – a report from Synergy Research in March saw HPE in a three-way tie alongside Cisco and Dell-EMC.

In terms of the company’s position, Whitman said she was proud that HPE was exiting the year with almost $6 billion in net cash as well as ‘reigniting innovation and delivering groundbreaking new technology solutions’. Key to this is ‘The Machine’, a huge single-memory computer which aims to be ‘built for the big data era’ and with a prototype containing 160 terabytes of memory.

This continues to be a key part of HPE’s narrative; a news advisory piece issued by the company last week described the release of high-density compute and storage solutions focused on high performance computing (HPC) and artificial intelligence applications.  “Today, HPE is augmenting its proven supercomputing and large commercial HPC and AI capabilities with new high-density compute and storage solutions to accelerate market adoption by enabling organisations of all sizes to address challenges in HPC, big data, object storage and AI with more choice and flexibility,” said Bill Mannel, HPE VP and general manager of HPC and AI segment solutions.