Cloud disaster recovery on the increase but challenges remain, says CloudEndure

While the vast majority of organisations hope for 99.9% uptime throughout the year, 57% of companies polled by CloudEndure say they had at least one outage in the past three months.

The survey, which quizzed 141 global IT professionals, found that organisations are, in general, gaining confidence in cloud disaster recovery (DR) solutions. This is reflected in the key risks to system availability: the number one risk remains human error, followed by network failures and application bugs, while downtime of cloud providers fell from the third highest risk last year to sixth in 2016.

CloudEndure found the key challenges in meeting availability goals were insufficient IT resources, followed by budget limitations and a lack of in-house expertise. Yet in some cases, it is difficult to perceive how companies assess their uptime levels; 22% of organisations polled say they do not measure service availability at all.

More than half (54%) of respondents use the public cloud for disaster recovery target infrastructure, compared to private cloud (35%) and physical resources (11%). Of the public cloud users, more than half use AWS (53% and 26% overall). VMware vSphere (20% overall) was also highly cited, with Microsoft Hyper-V and Azure (10% overall) trailing.

84% of respondents rate service availability at least seven out of 10 in terms of how critical it is to their customers. 38% say uptime is most critical – 10 out of 10 – while only 4% deem it not critical. 19% say their goal for uptime is the magical ‘five nines’ – 99.999%, or around five minutes of outages per year – while only 3% say they have a goal of 99% uptime, which allows almost 88 hours of downtime.
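
As a back-of-the-envelope check on those targets, the downtime allowance at each availability level falls straight out of the percentage; a minimal sketch, approximating a year as 365 days:

```python
# Allowed downtime per year for a given availability target.
# Figures are approximate (a year is taken as exactly 365 days).

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_pct):
    """Maximum minutes of downtime per year at this availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_minutes(pct):.1f} min/year")
```

At 99.999% the allowance works out to roughly 5.3 minutes a year, while 99% permits about 87.6 hours – close to the figures quoted above.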

“While more companies continue to migrate large portions of their business data to the cloud, our survey findings are clear,” said Ofer Gadish, CloudEndure CEO. “Companies that invest in DR are likely to reap an upside in savings from the cost of downtime disrupting their operations.”

Regular readers of this publication will be aware of how the trend of cloud disaster recovery is shaping up in 2016. Writing in January, Monica Brink, EMEA marketing director at cloud provider iland, explained: “Disaster recovery as a service [has] continued to claw its way to the top of CTO priority lists. With IT budgets tight and faster adoption of cloud in general, we’ve noticed a growing comfort level and confidence in cloud-based disaster recovery solutions.”

Microsoft adds Red Hat Linux, Containers and OneOps options to Azure

Microsoft has launched a trio of initiatives aimed at widening the options for potential clients of its Azure cloud services.

It made the announcements through the Azure Blog, announcing the availability of new Red Hat Enterprise Linux ‘instances’ (i.e. units of computing resources) and a new application lifecycle manager, OneOps, and showcasing a preview of an imminent Azure Container Service.

The Red Hat Enterprise Linux instances are available from the Azure Marketplace, where, according to the blog, 60 percent of the available images are now Linux-based. Microsoft claims its hybrid model can be up and running ‘in minutes’, with the images offered on a pay-as-you-go model with hourly billing.

Among the eligible products are Red Hat Enterprise Linux, Red Hat JBoss Enterprise Application Server, Red Hat JBoss Enterprise Web Server, Red Hat Gluster Storage and Red Hat OpenShift.

“Both Microsoft and I love Linux,” said Corey Sanders, Azure’s Director of Program Management. The new instances will help cloud users cater for on-demand workloads, development and testing, and cloud bursting in a simple, easily quantifiable system, Sanders said. The Red Hat Enterprise Linux 6.7 and 7.2 images are now live in all regions except China and US Government regions.

The imminent Azure Container Service – currently available for preview – will build on previous Docker and Mesosphere initiatives to make it easier to provision clusters of Azure Virtual Machines for containerized applications. The process will be much quicker since the machines come pre-configured with open source components, Sanders said.

Sanders also disclosed that Microsoft has certified for the Azure Marketplace a group of Linux images created by Bitnami. Meanwhile, Microsoft’s new OneOps offering on Azure, which gives clients the use of an open-source cloud and application lifecycle management platform, is a product of a collaboration with the WalmartLabs team (the IT offshoot of retail giant Walmart).

Giant IT distributor Ingram bought by HNA Group for $6 billion

Global technology distributor turned cloud service provider Ingram Micro is to be bought by Chinese conglomerate HNA Group for $6 billion.

Ingram will continue to be based in California but will become a subsidiary of marine logistics specialist Tianjin Tianhai, which is owned by HNA Group, an aviation, tourism and logistics outfit. The deal is expected to close in the second half of 2016. Ingram CEO Alain Monié and his management team will remain, and the business is expected to continue as normal.

“The addition of Ingram Micro would help the logistics sector of HNA Group transform from a logistics operator to a supply chain operator, and provide one-stop services while improving efficiencies,” said Adam Tan, CEO of HNA Group.

Founded in 1989, when IT was a hardware-defined industry, Ingram Micro became the world’s largest wholesale technology products distributor, with clients such as Acer, Apple, Cisco, Hewlett-Packard, IBM, Lenovo, Microsoft and Samsung. It ranks 62nd in the 2015 Fortune 500. As the IT industry evolved to become software-driven, the company has taken steps to transform itself: it announced cloud partnerships with IBM and Parallels at its Ingram Micro Cloud Summit 2014 and promised tools to help its reseller clients make the transition to selling cloud services. To this end, Ingram Micro added three new services: Hosted Exchange, Virtual Private Server and Web Hosting.

However, the funding from the new owners could help it make more of a transition, according to a statement from Alain Monié, Ingram Micro CEO. “Our agreement to join HNA Group delivers near-term and compelling cash value to our stockholders and we expect it to provide exciting new opportunities for our vendors, customers and associates,” said Monié. “Innovation, new services introduction, brand management and ensuring the stability and continuity of the businesses joining their enterprise are fundamental to HNA Group’s overall strategy.”

Analyst Clive Longbottom at Quocirca said TTI (Tianjin Tianhai) seems to be ‘paying high’ in order to gain direct access to western markets. “Whether this will work remains to be seen,” said Longbottom. “Will the US government and all its dependent departments shy away from Ingram now, fearing that the Chinese will be implementing back doors via firmware or software changes?”

For the general user, this will probably make very little difference, the analyst predicted. “It’s likely that TTI will be hands-off, using Ingram to optimise its overall buying power on a global basis, either to provide better margins or to compete on price more effectively where required,” said Longbottom. Ingram shareholders, meanwhile, are getting a 39% premium over Ingram’s closing price.

Operational NFV at 100G: It’s Necessary, But Can It Be Done? By @DanJoeBarry | @CloudExpo #Cloud

We live in a hyper-connected, mobility-enabled world, one in which carriers must make drastic changes to how they do business if they are to survive and thrive in the future. Accordingly, great strides have been made over the last three years to prove the viability of Network Functions Virtualization (NFV). Many Proof of Concept (PoC) trials have proven that workloads can be migrated to virtual environments running on standard hardware, and there are even examples of carrier deployments using NFV.
The next step is to determine how to make NFV work effectively so it will deliver on its promise. The issue is no longer whether a service can be deployed using NFV, but whether we can manage and secure that service in an NFV environment. In other words, the challenge now is to operationalize NFV. How can we ensure that NFV is ready for this challenge?

read more

The Future of Storage: 2016 and Beyond | @CloudExpo #Cloud

A year ago, I wrote a two-part series on how lower cost, higher performance on-premise storage and nearly free cloud-based storage were driving both innovation and disruption in the storage industry. Applying Clayton Christensen’s theory of innovation and disruption to the storage industry, my premise was that flashy startups (e.g. Pure, Nimble, VMem, Tegile and Tintri) that were first to introduce credible data reduction to flash arrays were innovators but not disruptors in the space and would therefore disappoint investors; storage incumbents (e.g. HDS, EMC and IBM) who added data reduction would continue to survive; but the real disruptors would be the cloud players (e.g. Amazon, Google and Microsoft). The combination of struggling share prices, weak earnings reports, recent acquisitions, and raging cloud revenue witnessed throughout 2015 and into 2016 continues to point to Christensen’s theory as the explanation for an ongoing economic transformation that will forever change the storage industry as we know it.

read more

Using the Cloud for Disaster Recovery

Here’s a short video I did discussing how we’ve helped clients use the cloud as a disaster recovery site. This can be a less expensive option that allows for test failover while guaranteeing resources. If you have any questions or would like to talk about disaster recovery in the cloud in more detail, please reach out!

By Chris Chesley, Solutions Architect

Roger Strukhoff Returns as @CloudExpo Conference Chair | @IoT2040 #IoT #Cloud #BigData

SYS-CON Events has announced today that Roger Strukhoff has been named conference chair of Cloud Expo and @ThingsExpo 2016 New York.
The 18th Cloud Expo and 5th @ThingsExpo will take place on June 7-9, 2016, at the Javits Center in New York City, NY.
“The Internet of Things brings trillions of dollars of opportunity to developers and enterprise IT, no matter how you measure it,” stated Roger Strukhoff. “More importantly, it leverages the power of devices and the Internet to enable us all to improve the state of the world and lives of people.”

read more

Nokia creates foundations for launching telcos into the cloud

Nokia’s Data Center Services division has unveiled plans to launch mobile telcos into the cloud. Plans include a custom-made multivendor infrastructure to support its transformation consulting services.

These services aim to help telecoms operators re-shape their people and processes for the new cloud-centric comms industry. In a statement, Nokia explained that its new managed cloud operations aim to make the introduction of multi-vendor hybrid operations, cloud data centres and virtualised network functions (VNFs) as painless as possible.

The networking vendor is expanding its cloud services portfolio with the launch of three professional services. Nokia Data Center services will offer development and operations (DevOps) services, with a brief to help telcos use cloud technology to launch services as quickly as possible.

Secondly, the Nokia Cloud Transformation Consulting services aim to help operators make the fullest use of telco cloud opportunities. Nokia said it is using expertise from the Bell Labs Consulting practice to support operators and enterprises in addressing cloud transformation.

Finally, the Managed Cloud Operations managed service will help telcos run hybrid operations across hardware, cloudware and application layer management, without the build-up of information silos that have traditionally hamstrung telcos turned comms service providers.

To support the data centre services, Nokia is creating a design facility in the UK, backed by delivery depots across the globe. To complement its services portfolio, Nokia has invited partners such as global supply chain specialist Sanmina to focus on Data Center services.

The service is needed because 62% of operators are very likely to rely on network equipment providers for data centre transformation, according to Heavy Reading research figures quoted by Nokia.

Meanwhile, in a related announcement Nokia said it will simplify networks with a new Shared Data Layer, a central point of storage for all the data used by Virtualized Network Functions (VNFs). This could free VNFs from the need to manage their own data, creating so-called stateless VNFs that are simpler and have the capacity for rapid expansion or contraction.

The result is a more flexible, programmable network for 5G that can minimise latency and maximise network speeds in order to cater for the Internet of Things (IoT). The network also becomes more reliable, as a replacement for a failed stateless VNF can be activated instantly and given access to the shared data, maintaining seamless service continuity.
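
The stateless-VNF idea can be pictured with a short sketch; all class and method names here are illustrative stand-ins, not Nokia’s actual Shared Data Layer API:

```python
# Sketch of the stateless-VNF pattern: the network function keeps no
# session state of its own; all state lives in a shared data layer, so
# a replacement instance can pick up where a failed one left off.

class SharedDataLayer:
    """Stand-in for the central state store shared by all VNF instances."""
    def __init__(self):
        self._store = {}

    def get(self, session_id):
        return self._store.get(session_id, {})

    def put(self, session_id, state):
        self._store[session_id] = state

class StatelessVNF:
    """A VNF instance that reads and writes state externally on every call."""
    def __init__(self, sdl):
        self.sdl = sdl  # no local session state is held

    def handle_packet(self, session_id, payload):
        state = self.sdl.get(session_id)           # fetch shared state
        state["packets"] = state.get("packets", 0) + 1
        self.sdl.put(session_id, state)            # write it straight back
        return f"session {session_id}: packet {state['packets']} ({payload})"

sdl = SharedDataLayer()
vnf_a = StatelessVNF(sdl)
vnf_a.handle_packet("s1", "hello")
# Simulate failover: a brand-new instance sees the same session state.
vnf_b = StatelessVNF(sdl)
print(vnf_b.handle_packet("s1", "world"))  # continues at packet 2
```

Because the replacement instance holds no state of its own, scaling out or recovering is just a matter of starting another instance pointed at the same store.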

Cost-Effective IoT Devices | @ThingsExpo #IoT #M2M #API #InternetOfThings

The best practices for building IoT applications with Go code that attendees can use to build their own IoT applications.
In his session at @ThingsExpo, Indraneel Mitra, Senior Solutions Architect & Technology Evangelist at Cognizant, will provide valuable information and resources for both novice and experienced developers on how to get started with IoT and Golang in a day.
He will provide information on how to use Intel Arduino Kit, Go Robotics API and AWS IoT stack to build an application that gathers, analyzes, and acts on data generated by connected devices.

read more

How to ensure enterprise cloud app use complies with the GDPR

After months of fine tuning and approvals from various bodies, the EU General Data Protection Regulation (GDPR) is almost upon us.

When the GDPR finally becomes law in spring of this year after passing a final stage of approval, organisations will have two years to comply with the regulation. Two years might seem plenty of time, but the complex picture of cloud use in modern enterprises means that GDPR compliance will be a challenge. A recent Netskope and YouGov survey found that almost 80% of IT pros in medium and large companies are not confident of ensuring compliance with the regulation in time for the expected deadline of spring 2018.

Enterprise cloud app use is a significant potential stumbling block for organisations seeking GDPR compliance, not least because cloud apps create unstructured data which is more difficult to manage but still explicitly included in the regulation. The survey found that almost a third of IT pros admit to knowing unauthorised cloud apps are in use within the organisation, but only 7% have a solution in place to deal with this phenomenon – also known as shadow IT.

Cloud app use provides such huge productivity gains that blocking apps isn’t an option. Companies must discover how to continue using cloud apps while ensuring protection of structured and unstructured data, both in-transit and at-rest. So how can organisations ensure compliance and continue using cloud apps?

The GDPR requires organisations to take active measures to protect the data they hold. They won’t comply with the GDPR through legal arrangements like policies, protocols and contracts alone. Rather, companies must take deliberate organisational and technical measures to ensure data protection and compliance in all areas. This is known as data protection by design, and goes beyond traditional security measures aimed at confidentiality, integrity and availability of the data.

Controlling and securing data in cloud apps will be central to GDPR compliance, so managing an organisation’s interactions with the cloud is a good place to start. This can be achieved by:

  • Discovering and monitoring all cloud applications in use by employees
  • Knowing which personal data are processed in the cloud by employees – for example, customer information such as name, address, bank details, or other forms of personally identifiable information (PII)
  • Securing data by setting up policies which ensure that unmanaged cloud services are not being used to store and process PII. The policy should be granular enough to stop the unwanted behaviour while allowing compliant use of the cloud to continue
  • Coaching users to adopt the services you sanction
  • Using a cloud access security broker to assess the enterprise-readiness of all cloud apps and cloud services to ensure that all data are protected when in transit or at rest
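
To illustrate the kind of granular policy described in the steps above, here is a minimal sketch; the app names and the crude PII pattern are invented for the example, and a real cloud access security broker would be far more sophisticated:

```python
import re

# Illustrative policy check: block uploads containing PII to unmanaged
# (unsanctioned) cloud apps, but allow non-PII traffic to continue.

SANCTIONED_APPS = {"corp-storage", "corp-crm"}  # hypothetical managed apps

# Very crude PII detector: looks for email addresses or IBAN-like strings.
PII_PATTERN = re.compile(
    r"[\w.+-]+@[\w-]+\.[\w.]+"        # email address
    r"|[A-Z]{2}\d{2}[A-Z0-9]{10,30}"  # IBAN-like bank identifier
)

def evaluate_upload(app, content):
    """Return 'allow' or 'block' for an upload, per the granular policy."""
    if app in SANCTIONED_APPS:
        return "allow"   # managed app: always allowed
    if PII_PATTERN.search(content):
        return "block"   # PII heading to shadow IT: stopped
    return "allow"       # non-PII shadow-IT use continues

print(evaluate_upload("random-share", "meeting notes"))     # allow
print(evaluate_upload("random-share", "jane@example.com"))  # block
print(evaluate_upload("corp-storage", "jane@example.com"))  # allow
```

The point of the granularity is visible in the three cases: only the combination of PII and an unmanaged service is stopped, so productive cloud use carries on.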

The complications arising from the use of cloud and shadow IT mean that personal data is harder to track and control than ever before. The GDPR will have significant and wide-ranging consequences for both cloud-consuming organisations and cloud vendors, and security teams will need to make the most of the two-year grace period before penalties for non-compliance come into force. Examining an organisation’s cloud app use is a great place to start.