Case Study: Lush Handmade Cosmetics

“With Parallels Remote Application Server, licensing costs have significantly reduced. We are able to easily create a stable network environment that is easy to deploy and manage.” ~ Dale Hobbs, Manager, Network and Security Systems. Lush Handmade Cosmetics chose Parallels Remote Application Server over Citrix to overcome Citrix-related licensing issues. This move resulted in […]

The post Case Study: Lush Handmade Cosmetics appeared first on Parallels Blog.

Customer Centricity Tips for Digital Transformation | @CloudExpo #Cloud

One of the most important tenets of digital transformation is that it’s customer-driven. In fact, the only reason technology is involved at all is because today’s customers demand technology-based interactions with the companies they do business with.

It’s no surprise, therefore, that we at Intellyx agree with Patrick Maes, CTO, ANZ Bank, when he said, “the fundamental element in digital transformation is extreme customer centricity.”

So true – but note the insightful twist that Maes added to the customer-driven digital mantra: extreme.

In the context of digital transformation, then, what are some examples of customer centricity we would consider to be extreme? Here’s our take.

read more

Disaster recovery: How to reduce the business risk at distance

(c)iStock.com/natasaadzic

Geographic distance is a necessity because disaster recovery data centres have to be placed outside the circle of disruption.

The meaning of the term depends on the type of disaster: it could be a natural phenomenon such as an earthquake, a volcanic eruption, a flood or a fire, but calamities are caused by human error too, so the definition of the circle of disruption varies. In the past, data centres were typically kept around 30 miles apart, as this was the accepted wisdom at the time. Today, however, the circle’s radius can be 100 miles or more, and in many people’s view a radius of 20 or 30 miles is too close for comfort for auditors, putting business continuity at risk.

With natural disasters and global warming in mind, David Trossell, CEO of self-configuring infrastructure optimised networks (SCION) vendor Bridgeworks, considers what an adequate distance between data centres is to ensure that business goes on, regardless of what happens in the vicinity of one of an organisation’s data centres:

“Many CIOs are faced with the dilemma of how to balance the need to have two data centres located within the same metro area to ensure synchronisation for failover capability, yet in their hearts they know that both sites will probably be within the circle of disruption,” Trossell explains. He adds that, in order to ensure their survival, they should be thinking about the minimum distance a tertiary DR site needs to be from the edge of the circle.

“After all, Hurricane Sandy ripped through 24 US states, covering hundreds of miles of the East Coast of the USA, and caused approximately $75bn worth of damage. Earthquakes are a major issue throughout much of the world too, so much so that DR data centres need to be located on different tectonic plates,” he explains.

A lack of technology and resources is often the reason why data centres are placed close to each other within a circle of disruption. “There are, for example, green data centres in Scandinavia and Iceland which are extremely energy efficient, but people are put off because they don’t think there is technology available to transfer data fast enough – and yet these data centres are massively competitive”, says Claire Buchanan, chief commercial officer at Bridgeworks.

Customer risk matrix

Michael Winterson, EMEA managing director at Equinix, says that as a data centre provider, his company provides a physical location to its customers. “When we are talking with any one of our clients, we usually respond to their pre-defined risk matrix, and so they’ll ask that we need to be a minimum of ‘x’ and a maximum of ‘y’ kilometres away by fibre or line of sight”, he explains. His company then vets the ‘red flags’ to determine which data centres fall within or outside the criteria set by each customer. Whenever Equinix goes outside the criteria, research is undertaken to justify why a particular data centre site will be adequate.

“We will operate within a circle of disruption that has been identified by a client, but a lot of our enterprise account clients opt for Cardiff because they are happy to operate at distances in excess of 100 miles from each data centre”, says Winterson. Referring back to Fukushima and to Hurricane Sandy, he claims that all of Equinix’s New York and Tokyo data centres were able to provide 100% uptime, but that some customers still experienced difficulties with transportation and with access to networks and to their primary data centres.

“If you operate your IT system in a standard office block that runs on potentially three or four hours of generator power, within that time you’re now in the dark. So we saw a large number of customers who tried to physically displace themselves to our data centres to be able to operate their equipment directly over the Wi-Fi network in our data centres, but quite often they would have difficulty moving because of public transportation issues in areas that were blocked off”, explains Winterson. Customers responded by moving control of their systems in Equinix’s data centres to another remote office, so that they could access the data centre systems remotely.

Responding to disasters

Since Fukushima, his company has responded by building data centres in Osaka, because Tokyo presents a risk to business continuity at the information technology and network layers. Tokyo is not only an earthquake zone; power outages in Japan’s national grid could affect both its east and west coasts. The idea in such cases is to get outside the circle of disruption, yet Equinix’s New Jersey-based ‘New York’ and Washington DC data centres are “unfortunately” located within circles of disruption – close to their epicentres – because that is where customers elect to put their co-location facilities.

“In the City of London for instance, for active-active solutions our London data centres are in Slough, and they are adequately placed within 65 kilometres of each other by fibre optic cable, and it is generally considered that you can run an active-active solution across that distance with the right equipment”, he says. In Europe, customers are taking a two-city solution and looking at the four hubs of telecommunications and technology – London, Frankfurt, Amsterdam and Paris – because they are roughly 20 milliseconds apart from each other over an Ethernet connection.

Internet limitations

With regard to the latency created by distance, Clive Longbottom, client service director at analyst firm Quocirca, says: “The speed of light means that every circumnavigation of the planet creates latency of 133 milliseconds; however, the internet does not work at the speed of light, and so there are bandwidth issues that cause jitter and collisions.”

He then explains that active processing is applied to the packets of data in transit, which increases the latency within a system, and says that it’s impossible to say “exactly what level of latency any data centre will encounter in all circumstances as there are far too many variables to deal with.”
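The 133 millisecond figure is easy to sanity-check: divide the Earth’s circumference by the propagation speed to get the one-way, round-the-world delay. The short sketch below is a back-of-the-envelope check (illustrative, not from the article), and also shows the figure for optical fibre, where light travels at roughly two-thirds of its vacuum speed:

```python
# Back-of-the-envelope check of round-the-world propagation delay.
EARTH_CIRCUMFERENCE_KM = 40_075          # equatorial circumference
C_VACUUM_KM_S = 299_792                  # speed of light in vacuum
C_FIBRE_KM_S = C_VACUUM_KM_S / 1.47      # optical fibre, refractive index ~1.47

for label, speed in [("vacuum", C_VACUUM_KM_S), ("fibre", C_FIBRE_KM_S)]:
    delay_ms = EARTH_CIRCUMFERENCE_KM / speed * 1000
    print(f"One circumnavigation in {label}: ~{delay_ms:.0f} ms one-way")

# vacuum: ~134 ms -- matching the figure quoted above
# fibre : ~196 ms -- and real links add switching, routing and queuing on top
```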

Longbottom also thinks that live mirroring is now possible over hundreds of kilometres, so long as the latency is controlled by using packet shaping and other wide area network acceleration approaches. Longer distances, he says, may require a store-and-forward multi-link approach, which will need active boxes between the source and target data centres to “ensure that what is received is what was sent”.

Jittering networks

Trossell explains that jitter occurs when packets of data arrive slightly out of time. The issue is caused, he says, by data passing through different switches and connections, which can cause performance problems in the same way that packet loss does. “Packet loss occurs when the line is overloaded – this is more commonly known as congestion – and this causes considerable performance drop-offs, which don’t necessarily reduce if the data centres are positioned closer together.”

“The solution is to have the ability to mitigate latency, and to handle jitter and packet loss”, says Buchanan, who advises that this needs to be done intelligently, smartly and without human intervention to minimise the associated costs and risks. “This gives IT executives the freedom of choice as to where they place their data centres – protecting their businesses and the new currency of data”, she adds.

Mitigating latency

A SCION solution such as WANrockIT offers a way to mitigate the latency issues created when data centres are placed outside of a circle of disruption and at a distance from each other. “From a CIO’s perspective, by using machine intelligence the software learns and makes the right decision in a micro-second according to the state of the network and the flow of the data no matter whether it’s day or night”, Buchanan explains. She also claims that a properly architected SCION can remove the perception of distance as an inhibitor for DR planning.

“At this stage, be cautious; however, it does have its place, and making sure that there is a solid plan B behind SCION’s plan A means that SCIONs can take away a lot of the uncertainty in existing, more manual approaches”, suggests Longbottom.

One company that has explored the benefits of a SCION solution is CVS Healthcare. “The main thrust was that CVS could not move their data fast enough: instead of being able to do a 430 GB back-up, they could only manage 50 GB in 12 hours because their data centres were 2,800 miles away – creating latency of 86 milliseconds. This put their business at risk, due to the distance involved”, explains Buchanan.

Their interim solution was to send the data offsite to Iron Mountain, but CVS wasn’t happy with this as it didn’t meet their recovery requirements. Using their existing 600Mb pipe with WANrockIT at each end of the network, CVS was able to reduce the 50 GB back-up from 12 hours to just 45 minutes, irrespective of the data type. Had this been a 10 Gb pipe, the whole process would have taken just 27 seconds. This order-of-magnitude improvement enabled the company to do full 430 GB back-ups on a nightly basis in just four hours. The issues associated with distance and latency were therefore mitigated.
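To see why 86 milliseconds of latency throttles a backup so badly, it helps to look at the bandwidth-delay product: a single TCP stream can never move data faster than its window size divided by the round-trip time, however fat the pipe is. The sketch below is illustrative only – the TCP window size is an assumption for the example, not a figure from CVS’s actual configuration:

```python
# Illustrative throughput maths for a high-latency link.
RTT_S = 0.086                 # 86 ms round-trip time over ~2,800 miles
WINDOW_BYTES = 64 * 1024      # classic 64 KB TCP window (assumed, for illustration)
LINK_BPS = 600e6              # 600 Mb pipe

# A single TCP stream is capped at window / RTT, regardless of link speed.
tcp_cap_bps = WINDOW_BYTES * 8 / RTT_S
print(f"Single-stream cap: {tcp_cap_bps / 1e6:.1f} Mbit/s")                  # ~6.1 Mbit/s

backup_bits = 50 * 8e9        # 50 GB backup
print(f"50 GB at the TCP cap:    {backup_bits / tcp_cap_bps / 3600:.1f} h")  # ~18 h
print(f"50 GB at full line rate: {backup_bits / LINK_BPS / 60:.1f} min")     # ~11 min
```

Techniques that keep the link full despite the latency – larger windows, or many transfers kept in flight in parallel, which is broadly the territory WAN acceleration and SCION products operate in – are what close the gap between those two figures.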

The technology used within SCIONs – machine intelligence – will have its doubters, as does anything new. However, in a world of increasingly available large bandwidth, enormous data volumes and the need for velocity, it’s time to consider what technology can do to help businesses underpin a DR data centre strategy based upon the recommendations and best-practice guidelines we have learnt from disasters like Hurricane Sandy.

Despite all mankind’s achievements, Hurricane Sandy taught us many lessons about the extensive destructive and disruptive power of nature. Having wrought devastation across 24 states, it has dramatically challenged the traditional perception of what a typical circle of disruption is when planning for DR. Metro-connected sites for failover continuity have to stay, due to the requirement for low-delta synchronicity, but this is not a sufficient or suitable practice for DR. Sandy has taught us that DR sites must now be located hundreds of miles away if we are to survive.

Report assesses how ISVs take their cloud solutions to market

(c)iStock.com/TheaDesign

A report from the Cloud Technology Alliance has assessed how independent software vendors (ISVs) take their cloud solutions to market, and found that the age-old discussion over who owns the customer relationship remains unresolved.

In total, 39 companies responded to the survey, with 72% of respondents based in North America and 26% in EMEA. Of the respondents, only 36% say they expect channel partners to be self-sufficient in closing business, while 33% of ISVs surveyed expect their channel partners to support and bill their customers. Only one in 10 ISVs expect their partners to close upsell opportunities and renewals.

Yet ISVs are not as rigorous as one might expect when reviewing their channel partners for company fit and performance. Only 60% of ISVs surveyed say they review and cut non-performing partners, with only 14% doing so systematically. The numbers differ by ecosystem: 65% of Google ISVs say they review their partners’ performance, compared to only 54% of Microsoft ISVs.

The report was broken down into seven categories, covering respondents’ demographics and go-to-market strategies; how ISVs work with the channel; best practices for channel recruiting and program structures; channel conflict; future investments; and achieving vendor-channel alignment.

The majority of ISVs surveyed use some sort of free version of their product to go to market; 61% offer free trials and 16% leverage a freemium pricing model. The majority of respondents price on a per-user basis, while others – most notably in the Microsoft ecosystem – price their solutions based on the total number of employees in an organisation.

31% of those polled – mostly Google Apps for Work ISVs – do not work with the channel. “These companies are likely in the early stages of launching their products or have optimised for e-commerce”, the report notes. Of the remainder, 28% have been working in the channel for more than three years, compared to 25% between one and three years and 17% for less than 12 months. 47% of ISVs reported they receive less than a quarter of their revenues through channel partners.

The report’s assessment of the differences between Microsoft and Google houses, as well as of the disparity between channel partners and vendors, puts forward several recommendations for achieving vendor-channel alignment. Sources of friction include expectations around vendors providing their channel partners with leads, and who holds responsibility for customer renewals. The report argues that having proper channel managers in place can help improve business planning, training and forecasting.

Why Cloud Anti-Virus Engines Are Critical in the Fight Against Malware | @CloudExpo #Cloud

The next few years could see a paradigm shift in the way anti-virus applications work. A number of businesses have started migrating from traditional desktop-based anti-virus packages to “lighter” software apps that process desktop security in the cloud. At the outset, this change is not entirely apparent – end users still need to install software on their local desktop systems. However, the processing of information is increasingly being ported to the cloud.
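A common pattern behind these “lighter” clients is to fingerprint files locally and ask a cloud reputation service for a verdict, so the heavyweight signature database and analysis stay server-side. The sketch below illustrates that idea only; the endpoint URL and response format are hypothetical placeholders, not any particular vendor’s API:

```python
import hashlib
import requests  # third-party HTTP library

# Hypothetical cloud reputation endpoint -- a placeholder, not a real service.
REPUTATION_URL = "https://av-cloud.example.com/v1/lookup"

def file_sha256(path: str) -> str:
    """Hash the file locally; only the digest leaves the machine."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_file(path: str) -> str:
    """Ask the cloud service for a verdict on the file's hash."""
    resp = requests.get(REPUTATION_URL, params={"sha256": file_sha256(path)}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("verdict", "unknown")   # e.g. "clean" or "malicious"
```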

read more

Redefining Airline-Passenger Experience | @ThingsExpo #IoT #M2M #BigData

Today air travel is a minefield of delays, hassles and customer disappointment. Airlines struggle to revitalize the experience. GE and M2Mi will demonstrate practical examples of how IoT solutions are helping airlines bring back personalization, reduce trip time and improve reliability.
In their session at @ThingsExpo, Shyam Varan Nath, Principal Architect with GE, and Dr. Sarah Cooper, M2Mi’s VP Business Development and Engineering, explored the IoT cloud-based platform technologies driving this change including privacy controls, data transparency and integration of real time context with predictive analytics.
They concluded with a look forward to tomorrow’s Smart Airports where airlines use connected baggage to predict plane fuel levels, security is ubiquitous and your seat remembers you.

read more

IoT = Cloud + Big Data + Analytics | @ThingsExpo #Cloud #IoT #BigData

Two weeks ago (November 3-5), I attended the Cloud Expo Silicon Valley as a speaker, where I presented on the security and privacy due diligence requirements for cloud solutions.
Cloud security is a topical issue for every CIO, CISO, and technology buyer. Decision-makers are always looking for insights on how to mitigate the security risks of implementing and using cloud solutions. Based on the presentation topics covered at the conference, as well as the general discussions heard between sessions, I wanted to share some of my observations on emerging trends. As cyber security serves as a foundation and necessary defense for customers and cloud solutions, cyber becomes more and more commoditized. The real challenge is how we deliver confidence and trust to customers throughout their journey with our organizations.

read more

Cisco promises breakthrough software for cloud-scale networking

Cisco claims it has invented a way to integrate and simplify web-scale networks to make them twice as cost effective and much more scalable.

The networking vendor has worked with the world’s top hyperscale web companies to help service providers create faster, simpler clouds from its IOS XR network operating system using popular IT configuration and management tools.

By making networks more programmable, providers can create a form of liquidity in cloud services that will allow them to pool and converge their data centre and wide area network (WAN) architectures, it claims.

As a result of this collaboration, new features will appear in Cisco’s IOS XR software which, Cisco claims, would halve the cost of running today’s networks over the course of five years by doubling network efficiency and performance.

However, network running costs for cloud operators are expected to soar in future due to predicted surges in data demand. Total global data centre traffic is projected to triple by the end of 2019 (from 3.4 to 10.4 zettabytes), according to the Cisco Global Cloud Index figures for 2014-2019. With 83% of total data centre traffic expected to come from the cloud by 2019, the improvement in manageability will help to rein in soaring costs, according to Cisco.

The investment in IOS XR will also help cloud and data centre operators to make a smoother, less expensive, transition to cloud-scale networking in future, Cisco claims.

Cisco IOS XR software, which currently runs on over 50,000 live network routers, will benefit from a number of technical improvements, including new modularity, more service agility, higher levels of automation, and convergence with third-party application hosting.

Cisco said its software development kits and the DevNet Developer Program will encourage service providers to create large-scale automation and predictable network programmability, with higher levels of visibility and control. The aim, says Cisco, is to cater for any data model, any encoding method and any transport method.
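Model-driven interfaces such as NETCONF are one common way this kind of network programmability is exposed to standard tooling. The sketch below is a generic illustration under that assumption – it uses the open-source ncclient Python library against a NETCONF-enabled router, with placeholder connection details, and is not a description of Cisco’s own SDKs:

```python
from ncclient import manager  # open-source NETCONF client library

# Connection details are placeholders for a NETCONF-enabled router.
with manager.connect(
    host="router.example.net",
    port=830,                 # standard NETCONF-over-SSH port
    username="admin",
    password="secret",
    hostkey_verify=False,
) as session:
    # Retrieve the running configuration as model-driven XML.
    reply = session.get_config(source="running")
    print(reply.data_xml[:500])  # show the first part of the configuration
```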

“The network is cloud computing’s final frontier, at technology, people and process levels,” said Laurent Lachal, senior analyst of infrastructure solutions at analyst Ovum.  “It needs to be built for scale and ruthlessly automated.”

EMC announces new protection for data as cloud hybrids become the norm

Storage vendor EMC has created a new product range to protect data as it moves in and out of the various parts of a hybrid cloud.

On Tuesday it announced new products and services designed to integrate primary storage and data protection systems across private and public clouds. The aim is to combine the flexibility of public cloud services with the control and security of a private cloud infrastructure.

The new offerings carry out one of three functions, characterised as tiering data across diverse storage infrastructures, protecting data in transit to and from the cloud, and protecting data once it is static in the cloud.

EMC says that by integrating its VMAX systems through new improvements to its FAST.X tiering systems it can make it cheaper for customers to prioritise their storage according to the expense of the medium. The new additions to the management system have now automated the tiering of public clouds and cater for both EMC and non-EMC storage systems.

The new level of protection for data as it travels in and out of the cloud is provided by CloudBoost 2.0. This, claims EMC, will work with EMC’s Data Protection Suite and Data Domain so that private cloud users can move data safely to the cheaper media in the public cloud for long-term data retention.

Once resident in the public cloud, data can now be better protected as a result of new Spanning product features, which cater for different regional conditions across the European Union. Spanning Backup for Salesforce now offers better SaaS data restoration options, so it’s easier to restore lost or deleted data. Spanning’s new European data destination option will also aid compliance with European data sovereignty laws and regulations. Meanwhile, the Data Protection as a Service (DPaaS) offering for private clouds now has better capacity management, secure multi-tenancy and a dense shelf configuration that EMC says will ‘dramatically’ cut the cost of ownership.

Meanwhile, EMC also announced a new generation of its NetWorker data protection software.  NetWorker 9 has a new universal policy engine to automate and simplify data protection regardless of where the data resides.

“Tiering is critical to business in our own data centres,” said Arrian Mehis, general manager of VMware Cloud practice at Rackspace, “and in the data centres of our customers.”


Gemalto and NetApp to create secure cloud storage hybrid for AWS customers

Security vendor Gemalto and NetApp are to jointly create an integrated, encrypted key management system for securing data for Amazon Web Services (AWS) customers. The aim is to save time and improve security for end users by simplifying the process of securing virtual data.

The two vendors, both AWS network partners, are to blend Gemalto’s SafeNet Virtual KeySecure and NetApp’s Cloud ONTAP as a unified service to be offered on the AWS Marketplace.

The SafeNet Virtual KeySecure for NetApp Cloud ONTAP (SVKNCO) service promises to make storing and encrypting data and applications much easier for companies using virtual environments. The system will pay for itself, claim the vendors, through the productivity gains and raised levels of security created when users enjoy more governance over their stored data.

The SVKNCO creates these benefits, it’s claimed, by centralising management and making it easy to create customisable security policies for data access in the cloud. It achieves this by combining NetApp’s modern storage infrastructure with Gemalto’s SafeNet key management. The hybrid of the two systems can protect customers’ data and encryption keys against unauthorised access, while giving them the most cost effective storage options at all times.
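The usual pattern underneath this kind of centralised key management is envelope encryption: each object is encrypted with its own data key, and that data key is itself encrypted with a master key the customer controls, keeping storage and key custody separate. The sketch below illustrates the general pattern using Python’s cryptography package; it is purely illustrative and not the SafeNet KeySecure or Cloud ONTAP API:

```python
from cryptography.fernet import Fernet  # symmetric authenticated encryption

# Master key: held in the customer's key manager, never stored alongside the data.
master_key = Fernet.generate_key()
kms = Fernet(master_key)

def encrypt_object(plaintext: bytes) -> tuple[bytes, bytes]:
    """Envelope encryption: wrap each object's data key with the master key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = kms.encrypt(data_key)        # only the wrapped key is stored
    return ciphertext, wrapped_key

def decrypt_object(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = kms.decrypt(wrapped_key)        # requires access to the master key
    return Fernet(data_key).decrypt(ciphertext)

blob, key_blob = encrypt_object(b"customer record")
assert decrypt_object(blob, key_blob) == b"customer record"
```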

It’s about creating top levels of security, but not at ‘any cost’ according to Todd Moore, VP of Data Encryption Product Management at Gemalto. “AWS users can now turn to NetApp to manage, store and protect their data more confidently, while completely owning their encryption keys,” said Moore.

Meanwhile, data centre infrastructure vendor Nutanix has also announced that its Community Edition is to be made available for AWS customers. The free software tool aims to help AWS customers speed up the evaluation process when weighing their options for buying infrastructure.