Category Archive: Storage

The Notion of the File is Fading Away

The most interesting takeaway from a Wired article on Box’s move to include collaborative editing in its file sharing service:

“…what’s happening now is that the applications are becoming the primary portals to our data, and the notion of the file is fading away. As Levie indicates, you never browse a PC-like file system on your phone. You access your data through applications, and so often, that data resides not on your local device, but on a cloud service somewhere across the net.”

Read the article.

 

Drew Houston’s Y Combinator Pitch for Dropbox

Here are some choice tidbits from Drew Houston’s application for Y Combinator backing:

What is your company going to make?  
Dropbox synchronizes files across your/your team’s computers. It’s much better than uploading or email, because it’s automatic, integrated into Windows, and fits into the way you already work. There’s also a web interface, and the files are securely backed up to Amazon S3. Dropbox is kind of like taking the best elements of subversion, trac and rsync and making them “just work” for the average individual or team. Hackers have access to these tools, but normal people don’t.

There are lots of interesting possible features. One is syncing Google Docs/Spreadsheets (or other office web apps) to local .doc and .xls files for offline access, which would be strategically important as few web apps deal with the offline problem.

What’s new about what you’re doing?  
Most small teams have a few basic needs: (1) team members need their important stuff in front of them wherever they are, (2) everyone needs to be working on the latest version of a given document (and ideally can track what’s changed), (3) and team data needs to be protected from disaster. There are sync tools (e.g. beinsync, Foldershare), there are backup tools (Carbonite, Mozy), and there are web uploading/publishing tools (box.net, etc.), but there’s no good integrated solution.

Dropbox solves all these needs, and doesn’t need configuration or babysitting. Put another way, it takes concepts that are proven winners from the dev community (version control, changelogs/trac, rsync, etc.) and puts them in a package that my little sister can figure out (she uses Dropbox to keep track of her high school term papers, and doesn’t need to burn CDs or carry USB sticks anymore.)

At a higher level, online storage and local disks are big and cheap. But the internet links in between have been and will continue to be slow in comparison. In “the future”, you won’t have to move your data around manually. The concept that I’m most excited about is that the core technology in Dropbox — continuous efficient sync with compression and binary diffs — is what will get us there.

What do you understand about your business that other companies in it just don’t get?  
Competing products work at the wrong layer of abstraction and/or force the user to constantly think and do things. The “online disk drive” abstraction sucks, because you can’t work offline and the OS support is extremely brittle. Anything that depends on manual emailing/uploading (i.e. anything web-based) is a non-starter, because it’s basically doing version control in your head. But virtually all competing services involve one or the other.

With Dropbox, you hit “Save”, as you normally would, and everything just works, even with large files (thanks to binary diffs).
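
To make the binary-diff idea concrete, here is a minimal sketch of fixed-block delta sync in Python. It is purely illustrative: it is not Dropbox's actual protocol, real tools like rsync use rolling checksums rather than fixed block boundaries, and the 4 KB block size is an arbitrary assumption.

import hashlib

BLOCK = 4096  # arbitrary block size for illustration

def block_index(old: bytes) -> dict:
    # Hash every fixed-size block of the file the server already has.
    return {hashlib.sha1(old[i:i + BLOCK]).hexdigest(): i
            for i in range(0, len(old), BLOCK)}

def delta(old: bytes, new: bytes) -> list:
    # Describe the new file as "copy" references to known blocks plus literal data.
    known = block_index(old)
    ops = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        digest = hashlib.sha1(chunk).hexdigest()
        if digest in known:
            ops.append(("copy", known[digest], len(chunk)))   # nothing to upload
        else:
            ops.append(("literal", chunk))                    # only this must be sent
    return ops

if __name__ == "__main__":
    old = b"A" * 8192 + b"B" * 4096
    new = b"A" * 8192 + b"C" * 4096   # only the last 4 KB changed
    to_send = [op for op in delta(old, new) if op[0] == "literal"]
    print(len(to_send), "of", len(delta(old, new)), "blocks need uploading")  # 1 of 3

A real implementation would also compress the literal chunks and use a rolling checksum so that insertions which shift block boundaries are still detected.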

What are people forced to do now because what you plan to make doesn’t exist yet?
Email themselves attachments. Upload stuff to online storage sites or use online drives like Xdrive, which don’t work on planes. Carry around USB drives, which can be lost, stolen, or break/get bad sectors. Waste time revising the wrong versions of given documents, resulting in Frankendocuments that contain some changes but lose others. My friend Reuben is switching his financial consulting company from a PHP-based CMS to a beta of Dropbox because all they used it for was file sharing. Techies often hack together brittle solutions involving web hosting, rsync, and cron jobs.

Want more detail? Read the full application.

Aspera Drive Offers Sharing, Collaboration Platform For Big Data

Aspera, Inc. today announced the beta availability of Aspera Drive, its new unified sharing and collaboration platform for big data. The platform combines desktop explorer integration with high performance and ease of use, transparent support for on-premise and cloud storage, and built-in security, management and access control.

The Aspera platform allows for transfer and synchronization of file sets of any size and any number, with maximum speed and robustness at any distance, and with the full access control, privacy and security of Aspera technology. Its architecture allows the platform to be deployed on-premise, in the cloud, or in a hybrid model.

Aspera Drive brings remote file browsing, transfer, synchronization, and package sending and receiving to the desktop, browser and mobile device. A backend architecture and API allows for fine-grained, centralized control over content access, security and bandwidth, regardless of where the content is stored – on premises or in the cloud.

Think You Know About Storage Devices? Take a Quiz To Find Out How Much

“From punched cards to disks, from CDs to SSDs, impressive technological progress has enabled us to store more and more data – and access it with increasing speed. But how well do you know these storage devices that make our lives so much easier?”

Tech Week Europe has an online quiz to test your knowledge.

Google, Amazon Outages a Real Threat For Those Who Rely On Cloud Storage

Guest Post by Simon Bain, CEO of SearchYourCloud.

It was only for a few minutes, but Google was down. This follows hot on the heels of the other major cloud provider, Amazon, being down for a couple of hours earlier in August. Even a relatively short outage like this could be a real problem for organizations that rely on these services to store their enterprise information. I am not a great lover of multi-device synchronization (all those versions kicking around your systems!). However, if done well, it could be one of the technologies that helps save ‘Cloud Stores’ from the idiosyncrasies of the Internet and a connected life.

We currently seem to be in the silly season of outages, with Amazon, Microsoft and Google all stating that their problems were caused by a switch being replaced or an update going wrong.

These outages may seem small to the supplier, but they are massive for the customer, who is unable to access sales data or invoices for a few hours.

This, however, should not stop people from using these services. But it should make them shop around and look at what is really on offer. A service that does not have synchronization may well sound great. But if you do not have a local copy of your document on the device you are actually working on, and your connection goes down for whatever reason, then your work will stop.

SearchYourCloud Inc. has recently launched SearchYourCloud, a new application that enables people to securely find and access information stored in Dropbox, Box, GDrive, Microsoft Exchange, SharePoint or Outlook.com with a single search, from either a Windows PC or any iOS device. SearchYourCloud will also be available for other clouds later in the year.

SearchYourCloud enables users to not only find what they are searching for, but also protects their data and privacy in the cloud.

Simon Bain

Simon Bain is Chief Architect and CEO of SearchYourCloud, and also serves on the Board of the Sun Microsystems User Group.

Top 10 Ways to Kill Your VDI Project

By Francis Czekalski, Consulting Architect, LogicsOne

Earlier this month I presented at GreenPages’ annual Summit Event. My breakout presentation this year was an End User Computing Super Session. In this video, I summarize the ‘top 10 ways to kill your VDI project.’

If you’re interested in learning more, download this free on-demand webinar where I share some real world VDI battlefield stories.

http://www.youtube.com/watch?v=y9w1o0O8IaI

 

 

Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday's General Session. VMware's COO Carl Eschenbach took the stage and informed the audience that there are 22,500 people in attendance, a new record for VMware that makes this the single largest IT infrastructure event of the year. Thirty-three of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday's announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full presentation here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started by showing the service catalogue & the options to deploy an app to a private or public cloud, along with the cost of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
    • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
    • Showed the auto-scale option
    • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
    • The demo showed how the product provided self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a conversation around why networking is the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo you may ask? L2 switch, L3 router, firewall, & load balancer. All of this was automated and deployed with no human intervention
  • Carl then went through the difference in physical provisioning vs. logical provisioning with NSX & abstracting the network off the physical devices.
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo to show you how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required
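
A quick aside on the "how many copies" knob: with mirroring, tolerating N host failures means keeping N+1 copies of each object, which eats into raw capacity accordingly. The sketch below uses made-up capacity figures and ignores witness components, slack space and minimum host counts; it is only meant to show the trade-off, not to quote anything from the session.

def usable_tb(raw_per_host_tb, hosts, failures_to_tolerate):
    # Under simple mirroring, each object is stored failures_to_tolerate + 1 times.
    copies = failures_to_tolerate + 1
    return raw_per_host_tb * hosts / copies

# Hypothetical 4-host cluster with 10 TB of local disk per host (40 TB raw).
for ftt in (0, 1, 2):
    print(f"FTT={ftt}: ~{usable_tb(10, 4, ftt):.1f} TB usable")
# FTT=0: 40.0 TB, FTT=1: 20.0 TB, FTT=2: ~13.3 TB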

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules, which tend to be rigid and complex –> the answer is policy based automation via vCAC (a generic sketch of such a policy follows this list)
  • Showed an example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically thanks to an automated proactive response to the detected issue (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between private on prem vSphere environment up to vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)
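
As promised above, here is a generic sketch of what a threshold-driven scale-out policy boils down to. It deliberately avoids any real vCOPS or vCAC API; the metric name, thresholds and instance counts are all illustrative stand-ins.

from dataclasses import dataclass

@dataclass
class ScaleOutPolicy:
    metric: str          # e.g. CPU utilisation of the app tier
    high_water: float    # scale out when the metric exceeds this
    max_instances: int   # hard cap so automation cannot run away

def desired_instances(policy: ScaleOutPolicy, observed: float, current: int) -> int:
    # Policy evaluation: purely a function of the observed metric and the current count.
    if observed > policy.high_water and current < policy.max_instances:
        return current + 1
    return current

policy = ScaleOutPolicy(metric="cpu_utilization", high_water=0.80, max_instances=5)
print(desired_instances(policy, observed=0.92, current=2))  # -> 3: provision one more VM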

 

Let me know if there are questions on any of these demos.

Deutsche Börse Launching Cloud Capacity Trading Exchange

Deutsche Börse says it will launch a trading venue for outsourced cloud storage and cloud computing capacity at the beginning of 2014. Deutsche Börse Cloud Exchange AG is a new joint venture formed together with Berlin-based Zimory GmbH to create the first “neutral, secure and transparent trading venue” for cloud computing resources.

The primary users for the new trading venue will be companies, public sector agencies and also organisations such as research institutes that need additional storage and computing resources, or have excess capacity that they want to offer on the market.

“With its great expertise in operating markets, Deutsche Börse is making it possible for the first time to standardise and trade fully electronically IT capacity in the same way as securities, energy and commodities,” said Michael Osterloh, Member of the Board of Deutsche Börse Cloud Exchange.

Questions Around Uptime Guarantees

Some manufacturers have recently made an impact with a “five nines” uptime guarantee, so I thought I'd provide some perspective. Most recently, I've come in contact with Hitachi's guarantee. I quickly checked with a few other manufacturers (e.g. Dell EqualLogic) to see if they offer that guarantee for their storage arrays, and many do…but realistically, no one can guarantee uptime, because “uptime” really needs to be measured from the host or application perspective. Read below for additional factors that impact storage uptime.

Five Nines is 5.26 minutes of downtime per year, or 25.9 seconds a month.

Four Nines is 52.6 minutes/year, which is one hour of maintenance, roughly.
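
Those figures are easy to verify: the downtime budget is simply (1 minus availability) times the length of the period. A quick check, assuming a 365-day year and a 30-day month (which is where the 25.9-second figure comes from):

def downtime_seconds(nines, period_seconds):
    # Downtime budget for an availability of e.g. 99.999% (five nines).
    unavailability = 10 ** (-nines)
    return unavailability * period_seconds

YEAR = 365 * 24 * 3600
MONTH = 30 * 24 * 3600

print(downtime_seconds(5, YEAR) / 60)   # ~5.26 minutes per year
print(downtime_seconds(5, MONTH))       # ~25.9 seconds per 30-day month
print(downtime_seconds(4, YEAR) / 60)   # ~52.6 minutes per year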

Array controller failover in EQL and other dual controller, modular arrays (EMC, HDS, etc.) is automated to eliminate downtime. That is really just the beginning of the story. The discussion with my clients often comes down to a clarification of what uptime means – and besides uninterrupted connectivity to storage, data loss (due to corruption, user error, drive failure, etc.) is often closely linked in people’s minds, but is really a completely separate issue.

What are the teeth in the uptime guarantee? If the array does go down, does the manufacturer pay the customer money to make up for downtime and lost data?

{Register for our upcoming webinar on June 12th, “What’s Missing in Hybrid Cloud Management – Leveraging Cloud Brokerage,” featuring guest speakers from Forrester and Gravitant}

There are other array considerations that impact “uptime” besides upgrade or failover.

  • Multiple drive failures are a real possibility, since most drives are purchased in batches. How does the guarantee cover this?
  • Very large drives must be in a suitable RAID configuration to improve the chances that a RAID rebuild will complete before another URE (unrecoverable read error) occurs (a rough calculation is sketched after this list). How does the guarantee cover this?
  • Dual controller failures do happen to all the array makers, although I don’t recall this happening with EQL. Even a VMAX went down in Virginia once, in the last couple of years. How does the guarantee cover this?
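
To put a rough number on the URE point above: with the commonly quoted consumer-drive rate of one unrecoverable read error per 10^14 bits, a large single-parity rebuild has a very real chance of hitting one. The drive sizes below are hypothetical and the independence assumption is a simplification, so treat this as a sketch rather than a prediction.

import math

def p_rebuild_without_ure(bytes_to_read, ure_per_bit=1e-14):
    # Model bit errors as independent: (1 - p)^bits is approximately exp(-p * bits).
    return math.exp(-ure_per_bit * bytes_to_read * 8)

# Hypothetical RAID 5 group of 8 x 4 TB drives: ~28 TB of surviving data must be read
# to rebuild the failed member.
print(f"{p_rebuild_without_ure(7 * 4e12):.0%}")  # roughly a 1-in-10 chance of success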

 

The uptime “promise” doesn't include all the connected components. Nearly every environment has something with a single path, SPOF or other configuration issue that must be addressed to ensure uninterrupted storage connectivity.

  • Are applications, hosts, network and storage all capable of automated failover at sub-10 ms speeds? For a heavily loaded Oracle database server to continue working in a dual array controller “failure” (which is what an upgrade resembles), it must be connected via multiple paths to an array, using all available paths.
  • Some operating systems don’t support an automatic retry of paths (Windows), nor do all applications resume processing automatically without IO errors, outright failures or reboots.
  • You often need to make temporary changes in OS & iSCSI initiator configurations to support an upgrade – e.g. changing timeout values (a rough way to sanity-check these timings is sketched after this list).
  • Also, the MPIO software makes a difference. Dell EQL MEM helps a great deal in a VMware cluster to ensure proper path failover, as do EMC PowerPath and Hitachi Dynamic Link Manager. Dell offers an MS MPIO extension and DSM plugin to help Windows recover from a path loss in a more resilient fashion.
  • Network considerations are paramount, too.
    • Network switches often take 30 seconds to a few minutes to reboot after a power cycle or reboot.
    • Also in the network, if non-stacked switches are used, RSTP must be enabled. If it is not, or anything else is misconfigured, connectivity to storage will be lost.
    • Flow Control must be enabled, among other considerations (disable unicast storm control, for example), to ensure that the network is resilient enough.
    • Link aggregation, if not using stacked switches, must be dynamic or the iSCSI network might not support failover redundancy.
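
A simple way to reason about the path-failover items above is to compare the worst-case recovery time of the storage path against the shortest I/O timeout anywhere in the stack. The figures below are illustrative assumptions, not vendor numbers.

# Illustrative, made-up worst-case figures (in seconds) for each recovery step.
recovery_steps = {
    "array controller failover": 15,
    "switch reconvergence (RSTP)": 30,
    "MPIO path retry": 5,
}

# Illustrative timeouts above the storage layer.
io_timeouts = {
    "guest OS disk timeout": 60,
    "iSCSI initiator login timeout": 45,
    "application query timeout": 30,
}

worst_case = sum(recovery_steps.values())
weakest = min(io_timeouts, key=io_timeouts.get)

print(f"worst-case path recovery: {worst_case}s")
if worst_case > io_timeouts[weakest]:
    print(f"'{weakest}' ({io_timeouts[weakest]}s) expires first -> expect I/O errors")

If the comparison fails, the fix is usually to raise the offending timeout for the duration of the upgrade, which is exactly the kind of temporary change mentioned above.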

 

Nearly every array manufacturer will say that upgrades are non-disruptive, but that is true only at the most simplistic level. Upgrades to a unified storage array, for example, will almost always involve disruption to file system presentation. Clustered or multi-engine frame arrays (HP 3PAR, EMC VMAX, NetApp, Hitachi VSP) offer the best hope of achieving five nines, or even greater. We have customers with VMAX and Symmetrix arrays that have had 100% uptime for a few years, but those arrays are multi-million dollar investments. Dual controller modular arrays, like those from EMC and HDS, can't really offer that level of redundancy, and that includes EQL.

If the environment is very carefully and correctly set up for automated failover, as noted above, then those 5 nines can be achieved, but not really guaranteed.