Category Archives: Storage

Google, Amazon Outages a Real Threat For Those Who Rely On Cloud Storage

Guest Post by Simon Bain, CEO of SearchYourCloud.

It was only down for a few minutes, but Google was down. This follows hot on the heels of the other major cloud provider, Amazon, being down for a couple of hours earlier in August. Even a relatively short outage like this can be a real problem for organizations that rely on these services to store their enterprise information. I am not a great lover of multi-device synchronization, what with all those versions kicking around your systems! However, done well, it could be one of the technologies that save ‘Cloud Stores’ from the idiosyncrasies of the Internet and a connected life.

We seem to be in the silly season of outages, with Amazon, Microsoft and Google all stating that their problems were caused by a switch being replaced or an update going wrong.

These outages may seem small to the supplier, but they are massive for the customer, who is unable to access sales data or invoices for a few hours.

This, however, should not stop people from using these services. But it should make them shop around and look at what is really on offer. A service that does not have synchronization may well sound great, but if you do not have a local copy of your document on the device that you are actually working on, and your connection goes down for whatever reason, then your work will stop.
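To sketch what “done well” might look like (a minimal illustration of the offline-first idea in general, not how SearchYourCloud actually works), the trick is to treat the local copy as authoritative and queue uploads for whenever connectivity returns:

```python
from pathlib import Path

class OfflineFirstStore:
    """Toy offline-first document store: the local copy is authoritative,
    and uploads are queued until connectivity returns."""

    def __init__(self, local_dir: str):
        self.local_dir = Path(local_dir)
        self.local_dir.mkdir(parents=True, exist_ok=True)
        self.pending: list[str] = []  # uploads waiting for connectivity

    def save(self, name: str, content: str) -> None:
        # Always write locally first, so work never stops during an outage.
        (self.local_dir / name).write_text(content)
        self.pending.append(name)

    def flush(self, upload) -> None:
        # Call with a real upload function once the cloud is reachable again.
        while self.pending:
            name = self.pending.pop(0)
            upload(name, (self.local_dir / name).read_text())
```

With this pattern, an outage only delays the sync; it never blocks the user from saving and continuing to work.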

SearchYourCloud Inc. has recently launched SearchYourCloud, a new application that enables people to securely find and access information stored in Dropbox, Box, GDrive, Microsoft Exchange, SharePoint or Outlook.com with a single search. It runs on Windows PCs and iOS devices, and will be available for other clouds later in the year.

SearchYourCloud enables users to not only find what they are searching for, but also protects their data and privacy in the cloud.

Simon Bain

Simon Bain is Chief Architect and CEO of SearchYourCloud, and also serves on the Board of the Sun Microsystems User Group.

Top 10 Ways to Kill Your VDI Project

By Francis Czekalski, Consulting Architect, LogicsOne

Earlier this month I presented at GreenPages’ annual Summit Event. My breakout presentation this year was an End User Computing Super Session. In this video, I summarize the ‘top 10 ways to kill your VDI project.’

If you’re interested in learning more, download this free on-demand webinar where I share some real world VDI battlefield stories.

http://www.youtube.com/watch?v=y9w1o0O8IaI


Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. I'm checking in again quickly to summarize Tuesday's general session. VMware's COO Carl Eschenbach took the stage and informed the audience that there were 22,500 people in attendance, a new record for VMware, making it the single largest IT infrastructure event of the year. Thirty-three of those attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday's announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full sessions here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started with showing the service catalogue & the options to deploy an app to a private or public cloud, including the costs of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
  • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
  • Showed the auto-scale option
  • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
  • The demo showed how the product provides self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a conversation about why networking has become the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo, you may ask? An L2 switch, an L3 router, a firewall & a load balancer – all of it automated and deployed with no human intervention
  • Carl then went through the difference in physical provisioning vs. logical provisioning with NSX & abstracting the network off the physical devices.
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo showing how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules which tend to be rigid and complex –> Answer is policy based automation via vCAC
  • Showed example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically due to automated proactive response to detected issues.  (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between private on prem vSphere environment up to vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)


Let me know if there are questions on any of these demos.

Deutsche Börse Launching Cloud Capacity Trading Exchange

Deutsche Börse says it will launch a trading venue for outsourced cloud storage and cloud computing capacity at the beginning of 2014. Deutsche Börse Cloud Exchange AG is a new joint venture formed together with Berlin-based Zimory GmbH to create the first “neutral, secure and transparent trading venue” for cloud computing resources.

The primary users for the new trading venue will be companies, public sector agencies and also organisations such as research institutes that need additional storage and computing resources, or have excess capacity that they want to offer on the market.

“With its great expertise in operating markets, Deutsche Börse is making it possible for the first time to standardise and trade fully electronically IT capacity in the same way as securities, energy and commodities,” said Michael Osterloh, Member of the Board of Deutsche Börse Cloud Exchange.

Questions Around Uptime Guarantees

Some manufacturers recently have made an impact with a “five nines” uptime guarantee, so I thought I’d provide some perspective. Most recently, I’ve come in contact with Hitachi’s guarantee. I quickly checked with a few other manufacturers (e.g. Dell EqualLogic) to see if they offer that guarantee for their storage arrays, and many do…but realistically, no one can guarantee uptime because “uptime” really needs to be measured from the host or application perspective. Read below for additional factors that impact storage uptime.

Five Nines is 5.26 minutes of downtime per year, or 25.9 seconds a month.

Four Nines is 52.6 minutes/year, which is one hour of maintenance, roughly.
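The arithmetic behind those numbers is easy to verify; here is a quick sketch (plain Python, nothing vendor-specific):

```python
# Downtime allowed per year and per 30-day month for a given availability.
def downtime_budget(availability_pct: float) -> tuple[float, float]:
    unavailable = 1 - availability_pct / 100
    per_year_min = unavailable * 365 * 24 * 60        # minutes per year
    per_month_sec = unavailable * 30 * 24 * 60 * 60   # seconds per 30-day month
    return per_year_min, per_month_sec

for nines, pct in [("Five nines", 99.999), ("Four nines", 99.99)]:
    year_min, month_sec = downtime_budget(pct)
    print(f"{nines}: {year_min:.2f} min/year, {month_sec:.1f} s/month")
# Five nines: 5.26 min/year, 25.9 s/month
# Four nines: 52.56 min/year, 259.2 s/month
```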

Array controller failover in EQL and other dual-controller, modular arrays (EMC, HDS, etc.) is automated to eliminate downtime. That is really just the beginning of the story. The discussion with my clients often comes down to a clarification of what uptime means – and besides uninterrupted connectivity to storage, data loss (due to corruption, user error, drive failure, etc.) is often closely linked in people’s minds, but is really a completely separate issue.

What are the teeth in the uptime guarantee? If the array does go down, does the manufacturer pay the customer money to make up for downtime and lost data?


There are other array considerations that impact “uptime” besides upgrade or failover.

  • Multiple drive failures are a real possibility, since most drives are purchased in batches. How does the guarantee cover this?
  • Very large drives must be in a suitable RAID configuration to improve the chances that a RAID rebuild will be completed before another URE (unrecoverable read error) occurs. How does the guarantee cover this?
  • Dual controller failures do happen to all the array makers, although I don’t recall this happening with EQL. Even a VMAX went down in Virginia within the last couple of years. How does the guarantee cover this?


The uptime “promise” doesn’t include all the connected components. Nearly every environment has something with a single path, a SPOF, or some other configuration issue that must be addressed to ensure uninterrupted storage connectivity.

  • Are applications, hosts, network and storage all capable of automated failover at sub-10 ms speeds? For a heavily loaded Oracle database server to keep working through a dual array controller “failure” (which is what an upgrade resembles), it must be connected to the array via multiple paths, using all available paths.
  • Some operating systems (Windows, for example) don’t support automatic retry of paths, nor do all applications resume processing automatically without I/O errors, outright failures or reboots.
  • You often need to make temporary changes to OS & iSCSI initiator configurations to support an upgrade – e.g. changing timeout values (see the sketch after this list).
  • Also, the MPIO software makes a difference. Dell EQL MEM helps a great deal in a VMware cluster to ensure proper path failover, as do EMC PowerPath and Hitachi Dynamic Link Manager. Dell offers an MS MPIO extension and DSM plugin to help Windows recover from a path loss in a more resilient fashion.
  • Network considerations are paramount, too.
    • Network switches often take 30 seconds to a few minutes to come back up after a power cycle or reboot.
    • Also in the network: if non-stacked switches are used, RSTP must be enabled. If it isn’t, or anything else is misconfigured, connectivity to storage will be lost.
    • Flow control must be enabled, among other considerations (disable unicast storm control, for example), to ensure that the network is resilient enough.
    • Link aggregation, if not using stacked switches, must be dynamic, or the iSCSI network might not support failover redundancy.
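As a back-of-the-envelope illustration of the timeout point above (the numbers here are illustrative assumptions, not vendor specifications): the host’s disk I/O timeout has to comfortably exceed the array’s worst-case controller failover time, or the OS will surface I/O errors during an otherwise “non-disruptive” upgrade.

```python
# Sanity-check host disk timeouts against worst-case controller failover.
# All numbers are illustrative; use your vendor's documented values.
def timeout_ok(host_disk_timeout_s: float,
               controller_failover_s: float,
               safety_margin: float = 2.0) -> bool:
    """True if the host should ride through a controller failover."""
    return host_disk_timeout_s >= controller_failover_s * safety_margin

# Example: the Windows disk TimeOutValue is often raised to 60 s for iSCSI;
# assume a 15-second worst-case failover for a modular dual-controller array.
print(timeout_ok(60, 15))   # True  -> host should ride it out
print(timeout_ok(30, 25))   # False -> expect I/O errors or app failures
```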


Nearly every array manufacturer will say that upgrades are non-disruptive, but that is true only at the most simplistic level. Upgrades to a unified storage array, for example, will almost always involve disruption to file system presentation. Clustered or multi-engine frame arrays (HP 3PAR, EMC VMAX, NetApp, Hitachi VSP) offer the best hope of achieving five nines, or even better. We have customers with VMAX and Symmetrix arrays that have had 100% uptime for a few years, but those arrays are multi-million dollar investments. Dual-controller modular arrays, like those from EMC and HDS, can’t really offer that level of redundancy, and that includes EQL.

If the environment is very carefully and correctly set up for automated failover, as noted above, then those 5 nines can be achieved, but not really guaranteed.


DigitalOcean Launches Incentivized Customer Referral Program

DigitalOcean, a New York-based cloud server and hosting provider, has launched a commission-based customer referral program. The program is open to registered DigitalOcean customers and pays a $10 commission for each newly acquired customer whose billing totals $10. Registered users are provided with their own unique referral link that allows them to track the customers they’ve brought in, as well as their commission totals.

Boasting over 190,000 Linux-based cloud servers launched since inception, DigitalOcean is a TechStars startup accelerator graduate. Each SSD “Droplet” – the company’s term for its cloud servers – provides fast disk and network performance, coupled with the capability to easily migrate and resize existing Droplets with a single click.

“We love listening to our customers and our new referral program is one way we can give back,” says Ben Uretsky, CEO of DigitalOcean. “Referrals have been a huge driver of success for DigitalOcean. We want to give back to our loyal customers by rewarding them for continuing to spread the word and help our business grow.”

Additional cloud hosting plans range from the entry-level 512 MB of RAM starting at $5 per month up to a maximum capacity of 96 GB of RAM and 10 TB of bandwidth transfer.

EMC World 2013 Recap

By Randy Weis, Consulting Architect, LogicsOne


The EMC World conference, held last week in Las Vegas, demonstrated EMC’s strong leadership position in the virtualization, storage and software-defined datacenter markets.

Seriously, this is not the Kool-Aid talking. Before anyone jumps in to point out how all the competitors are better at this or that, or how being a partner or customer of EMC has its challenges, I’d like to refer you to a previous blog I wrote about EMC: “EMC Leads the Storage Market for a Reason.” I won’t recap everything, but that blog talks about business success, not technical wizardry. Do the other major storage and virtualization vendors have solutions and products in these areas? Absolutely, and I promise to bring my opinions and facts around those topics to this blog soon.

What I found exciting about this conference was how EMC is presenting a more cohesive and integrated approach to the items listed below. The XtremIO product has been greatly improved – some might say to the point that it is really usable now. I’d say the same about the EMC DR and BC solutions built on RecoverPoint and VPLEX: VPLEX is affordable and ready to be integrated into the VNX line. The VNX product line is mature now, and you can expect announcements around a major refresh this year. I’d say the same about the BRS line – no great product announcements, but better integration and pricing that helps customers and solution providers alike.

There are a few items I’d like to bullet for you:

  1. Storage Virtualization – EMC has finally figured out that DataCore is onto something, and spent considerable time promoting ViPR at EMC World. This technology (while 12 years to market behind DataCore) will open the eyes of the entire datacenter virtualization market to the possibilities of a storage hypervisor. What VMware did for computing, this technology will do for storage – storage resources deployed automatically, independent of the array manufacturer, with high-value software features running on anything/anywhere (see the sketch after this list). There are pluses and minuses to this new EMC product and approach, but this technology area will soon become a hot strategy for IT spending. Everyone needs to start understanding why EMC finally thinks this is a worthwhile investment and is making it a priority. To echo what I said in that prior blog, “Thank goodness for choices and competition!” Take a fresh look at DataCore and compare it to the new EMC offering. What’s better? What’s worse?
  2. Business Continuity and Highly Available Datacenters – Linking datacenters to turn DR sites into an active computing resource is within reach of non-enterprise organizations now – midmarket, commercial, healthcare, SMB – however you want to define it.
    1. VPLEX links datacenters together (with some networking help) so that applications can run on any available compute or storage resource in any location – a significant advance in building private cloud computing. It is now licensed to work with VNX systems, is much cheaper, and can be built into any quote. We will start looking for ways to build this into various solution strategies – DR, BC, array migration, storage refreshes, stretch clusters, you name it. VPLEX is also a very good solution for any datacenter facing a major storage migration due to a storage refresh or datacenter move, as well as a tool to manage heterogeneous storage.
    2. RecoverPoint is going virtual – it is the leading replication tool for SRM, is integrated with VPLEX, and will now be available as a virtual appliance. RP has also developed multi-site capabilities, with up to five sites and 8 RP “appliances” per site, in fan-in or fan-out configurations.
    3. Usability of both has improved, thanks to standardized management in Unisphere editions for both products.
  3. High Performance Storage and Computing – server-side flash, flash cache virtualization and workload-crushing all-flash arrays in the XtremSF, XtremSW and XtremIO product line (formerly known as VFCache). As usual, the second release nails it for EMC. GreenPages was recently recognized as a global leader in mission-critical application virtualization, and this fits right in. Put simply: put an SSD card in a vSphere host and boost SQL/Oracle/Exchange performance over 100% in some cases. The big gap was in HA/DRS/vMotion – the host cache was a local resource, so vMotion was broken, along with HA and DRS. The new release virtualizes the cache so that VMs assigned local cache will see that cache even if they move. This isn’t an all-or-nothing solution – you can designate the mission-critical apps to use the cache and tie them to a subset of the cluster. This makes the strategy affordable and granular.
  4. Isilon – this best-in-class NAS system keeps getting better. Clearly defined use cases, much better VMware integration and more successful implementations make this product the one to beat in the scale-out NAS market.
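To make the storage-hypervisor idea in item 1 concrete, here is a toy sketch (my own illustration, not ViPR’s or DataCore’s actual API): the point is that provisioning is expressed as policy, and the abstraction layer decides which array fulfills it.

```python
# Toy "storage hypervisor": callers ask for capacity and a service level;
# the abstraction layer picks a backend array. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Array:
    name: str
    vendor: str
    tier: str        # e.g. "flash", "sas", "nl-sas"
    free_gb: int

class StorageHypervisor:
    def __init__(self, arrays: list[Array]):
        self.arrays = arrays

    def provision(self, size_gb: int, tier: str) -> str:
        """Place a volume on any array that satisfies the policy,
        regardless of the array manufacturer."""
        for a in self.arrays:
            if a.tier == tier and a.free_gb >= size_gb:
                a.free_gb -= size_gb
                return f"{size_gb} GB {tier} volume placed on {a.name} ({a.vendor})"
        raise RuntimeError("no array satisfies the requested policy")

pool = StorageHypervisor([
    Array("array-a", "EMC", "flash", 500),
    Array("array-b", "HDS", "sas", 2000),
])
print(pool.provision(100, "sas"))  # lands on array-b; the caller never cared which
```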


Another whole article can be written about ViPR, EMC’s brand-new storage virtualization tool, and that will be coming soon. As promised, I’ll also take a look at the competitive offerings from HP and Dell, at least in the storage virtualization, DR/BC, server-side flash and scale-out NAS solution areas, as well as cloud storage integration strategies. Till then, thanks for reading, and please share your thoughts.

Huh? What’s the Network Have to Do with It?

By Nate Schnable, Sr. Solutions Architect

Having been in this field for 17 years, I am still amazed that people always tend to forget about the network. Everything a user accesses on their device that isn’t installed or stored locally depends on the network more than on any other element of the environment. The network is responsible for the quick and reliable transport of data, which means the user experience while working with remote files and applications depends almost completely on it.

However, this isn’t always obvious to everyone. As a result, people will rarely ask for network-related services, because they aren’t aware the network is the cause of their problems. Whether it is a storage, compute, virtualization or IP telephony initiative, all of these types of projects rely heavily on the network to function properly. In fact, the network is the only element of a customer’s environment that touches every other component. Its stability can make or break the project’s success and the all-important user experience.

In a VoIP initiative we have to ensure, amongst many things, that proper QoS policies are set up – so let’s hope you are not running on some dumb hubs. Power over Ethernet (PoE) for the phones should be available, unless you want to use power bricks or some type of mid-span device (yuck). I used to work for a Fortune 50 insurance company, and one day an employee decided to plug both of the ports on their phone into the network because it would make the experience even better – not so much. They brought down that whole environment. We made some changes after that to keep it from happening again!

In a disaster recovery project we have to look at distances and the resulting latencies between locations. What is the bandwidth, and how much data do you need to back up? Do we have Layer 2 handoffs between sites, or is it more of a traditional L3 site-to-site connection?
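Back-of-the-envelope math goes a long way here. A rough sketch (the utilization figure and fiber propagation speed are generic planning assumptions, not measured values):

```python
# Rough DR planning math: replication time and inter-site latency.
def transfer_hours(data_gb: float, link_mbps: float, utilization: float = 0.7) -> float:
    """Hours to move data_gb over a link, assuming only a fraction is usable."""
    return (data_gb * 8 * 1000) / (link_mbps * utilization) / 3600

def fiber_rtt_ms(distance_km: float) -> float:
    """Approximate round-trip time: light in fiber covers ~200 km per ms one way."""
    return 2 * distance_km / 200

print(f"{transfer_hours(5000, 1000):.1f} h")  # 5 TB over a GigE link: ~15.9 h
print(f"{fiber_rtt_ms(400):.1f} ms RTT")      # 400 km between sites: ~4.0 ms
```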

If we are implementing a new iSCSI SAN, do we need ten gig or one gig? Do your switches support jumbo frames and flow control? Hope that your iSCSI switches are truly stackable, because otherwise spanning tree could leave some of those paths redundant but not active.
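One quick sanity check worth scripting: verify that jumbo frames actually pass end-to-end before trusting the MTU setting. A small sketch (Linux ping flags; 8972 payload bytes plus 28 bytes of IP/ICMP headers equals a 9000-byte frame; the target address is a placeholder):

```python
# Verify a 9000-byte MTU path to an iSCSI target (Linux).
# "-M do" sets the don't-fragment bit, so oversized frames fail loudly.
import subprocess

def jumbo_frames_ok(target_ip: str) -> bool:
    result = subprocess.run(
        ["ping", "-c", "3", "-M", "do", "-s", "8972", target_ip],
        capture_output=True,
    )
    return result.returncode == 0

print(jumbo_frames_ok("192.168.10.50"))  # hypothetical SAN target address
```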

I was reading the other day that sales of smart phones and tablets will reach approximately 1.2 billion in 2013. Some of these will most certainly end up on your wireless networks. How to manage that is definitely a topic for another day.

In the end it just makes sense that you really need to consider the network implications before jumping into almost any type of IT initiative.  Just because those green lights are flickering doesn’t mean it’s all good.


To learn more about how GreenPages’ Networking Practice can help your organization, fill out this form and someone will be in touch with you shortly.

Protecting and Preserving Our Digital Lives is a Task We Want to Have Already Done

I once read that a favorite writer of mine, when told by people he met at cocktail parties how much they “wanted to write,” would reply, “No, you want to have written.”

Protecting and preserving our digital lives is much the same — we want to have already taken care of it. We don’t actually want to go through the hassle of doing it.

An article by Rick Broida in PC World sums it up thus:

There are two kinds of people in the world: Those who have lost critical data, and those who will. In other words, if you use technology long enough and neglect to back up your data, you’re guaranteed to have at least one extremely bad day.

The article goes on to outline “How to build a bulletproof cloud backup system without spending a dime.” There’s a lot to do, and it all takes effort, but he’s right. Whether you take all of his recommendations or only some, it’s a good place to start thinking about the steps you (we) all need to take.
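Even a few lines of script beat no backup at all. As one starting point (a deliberately minimal sketch – the paths are placeholders, and a real plan should add versioning and an offsite copy), something like this copies anything changed since the last run into a second location, such as a cloud-synced folder:

```python
# Minimal incremental backup: copy files modified since the last run
# into a destination folder (e.g. one synced to a cloud drive).
import shutil
import time
from pathlib import Path

SOURCE = Path.home() / "Documents"            # placeholder source folder
DEST = Path.home() / "CloudDrive" / "backup"  # placeholder destination
STAMP = DEST / ".last_backup"

def backup() -> None:
    last_run = STAMP.stat().st_mtime if STAMP.exists() else 0.0
    for src in SOURCE.rglob("*"):
        if src.is_file() and src.stat().st_mtime > last_run:
            target = DEST / src.relative_to(SOURCE)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)  # copy2 preserves timestamps
    STAMP.parent.mkdir(parents=True, exist_ok=True)
    STAMP.write_text(time.strftime("%Y-%m-%d %H:%M:%S"))

backup()
```

Run it on a schedule and you have, in miniature, the "want to have already done it" problem solved.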

Here’s an idea: Come up with a plan and implement it in pieces until you get to the point where you know you are ready for the digital disaster that is out there waiting for us all.