Category Archives: Storage

The Death of DAS?

 

For over a decade, Direct Attached Storage (DAS) has been a no-brainer for many organizations: simple, fast and cost-effective. But as applications, compute and storage move to the cloud, DAS is looking like less and less of a sure bet. In fact, it’s looking more like a liability. Migrating from traditional DAS models to cloud storage, however, is not as difficult or complex as it seems. The good news for VARs and service providers is that, with solid integration and some lateral thinking, they can help customers with large DAS estates get the best use out of what may initially seem to be redundant technology.

 

In this recent piece published on Channel Pro, John Zanni, vice president of service provider marketing and alliances at Parallels, takes a look at the drawbacks of DAS in a cloud environment – and what alternatives are out there.

 

The Death of DAS?


Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with Datacenter federation…what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see the use of multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seemed that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down, and simplicity and integration are moving up – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active-active data centers as sort of an expected norm vs. an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various XtremIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.
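To make the hybrid-array idea concrete, here is a rough, hypothetical sketch of frequency-based tiering – the general approach that software like FAST automates, not EMC’s actual implementation. The class name, capacity and block IDs are purely illustrative.

```python
from collections import Counter

class HybridTier:
    """Toy model of flash/disk tiering: promote the hottest blocks to a
    small flash tier based on observed access frequency (illustrative only)."""

    def __init__(self, flash_blocks=100):
        self.flash_capacity = flash_blocks   # small, fast tier
        self.access_counts = Counter()       # per-block access history
        self.flash = set()                   # block IDs currently on flash

    def read(self, block_id):
        self.access_counts[block_id] += 1
        # A real array would return data; here we just report which tier served it.
        return "flash" if block_id in self.flash else "disk"

    def rebalance(self):
        # Periodically move the most frequently accessed blocks onto flash.
        hottest = [b for b, _ in self.access_counts.most_common(self.flash_capacity)]
        self.flash = set(hottest)

# Usage: a skewed workload ends up served mostly from the small flash tier.
tiering = HybridTier(flash_blocks=2)
for _ in range(50):
    tiering.read("block-A")
tiering.read("block-B")
tiering.rebalance()
print(tiering.read("block-A"))   # -> "flash"
print(tiering.read("block-Z"))   # -> "disk"
```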

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.
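The always-on deduplication mentioned above can be pictured with a short, hypothetical sketch: fingerprint each incoming block and physically store only the blocks that haven’t been seen before. This is a conceptual illustration of inline dedup, not XtremIO’s actual engine.

```python
import hashlib

class DedupStore:
    """Toy inline deduplication: identical blocks are stored once and
    referenced by their content fingerprint (illustrative only)."""

    def __init__(self):
        self.blocks = {}    # fingerprint -> block data (stored once)
        self.volume = []    # logical volume = ordered list of fingerprints

    def write(self, data: bytes):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:          # only new, unique blocks consume space
            self.blocks[fp] = data
        self.volume.append(fp)

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
for _ in range(10):
    store.write(b"A" * 4096)                    # ten identical 4 KB writes...
print(len(store.volume), "logical blocks")      # -> 10 logical blocks
print(store.physical_bytes(), "bytes stored")   # -> 4096 bytes physically stored
```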

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.
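A rough way to picture that division of labor: server-side flash behaves like a read cache, while every write goes through to the array so the persistent copy always lives on shared storage. The sketch below is a generic write-through cache under that assumption, not XtremSW itself.

```python
class WriteThroughCache:
    """Toy write-through cache: reads are served from local flash when possible;
    writes always land on the backing array first (illustrative only)."""

    def __init__(self, array):
        self.array = array      # dict standing in for the shared array
        self.flash = {}         # local server-side flash cache

    def write(self, key, value):
        self.array[key] = value   # persistent copy goes to the array...
        self.flash[key] = value   # ...and the cache is updated, never the only copy

    def read(self, key):
        if key in self.flash:     # fast path: served locally at "bus speed"
            return self.flash[key]
        value = self.array[key]   # slow path: fetch over the network
        self.flash[key] = value   # populate the cache for next time
        return value

array = {}
cache = WriteThroughCache(array)
cache.write("lun0:block42", b"data")
assert array["lun0:block42"] == b"data"   # the array always holds the persistent copy
```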

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo-full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting times indeed…

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Want 100 GB of Free Cloud Storage For Life?

Zoolz is promoting their cloud backup service with an offer to give the first million users 100 GB for free. For life. The catch? It uses AWS Glacier, Amazon’s cheaper alternative to S3. Glacier, of course, enforces a delay of 3 to 5 hours to retrieve files, and there are limits on monthly retrieval. But for the right purposes (like “Store & Ignore”) it might be a real deal if you act soon enough. Their intro video explains how it works.
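For anyone curious why retrieval takes hours: Glacier is asynchronous, so you request a retrieval job, wait for it to complete, and then download the output. Below is a minimal sketch using the AWS SDK for Python (boto3); the vault name and archive ID are placeholders, not anything specific to Zoolz.

```python
import time
import boto3

glacier = boto3.client("glacier")

# Placeholders: substitute your own vault name and archive ID.
VAULT = "my-backup-vault"
ARCHIVE_ID = "example-archive-id"

# Step 1: ask Glacier to stage the archive for download.
job = glacier.initiate_job(
    accountId="-",   # "-" means the account of the current credentials
    vaultName=VAULT,
    jobParameters={"Type": "archive-retrieval", "ArchiveId": ARCHIVE_ID},
)

# Step 2: poll until the job completes (typically takes hours).
while True:
    status = glacier.describe_job(accountId="-", vaultName=VAULT, jobId=job["jobId"])
    if status["Completed"]:
        break
    time.sleep(900)  # check every 15 minutes

# Step 3: download the staged archive.
output = glacier.get_job_output(accountId="-", vaultName=VAULT, jobId=job["jobId"])
with open("restored-file", "wb") as f:
    f.write(output["body"].read())
```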

EMC Leads the Storage Market for a Reason

By Randy Weis, Consulting Architect, LogicsOne

There are reasons that EMC is a leader in the market. Is it because they come out first with the latest and greatest technological innovation? No, or at least not commonly. Is it because they rapidly turn over their old technology and do sweeping replacements of their product lines with the new stuff? No. It’s because they invest significantly in working out what will work commercially and what won’t, and in integrating the technology that passes that test into traditional storage platforms and evolving product lines.

Storage Admins and Enterprise Datacenter Architects are notoriously conservative and resistant to change. It is purely economics that drives most of the change in datacenters, not the open-source geeks (I mean that with respect), mad scientists and marketing wizards who are churning out and hyping revolutionary technology. The battle for market leadership and ever-greater profits will always dominate the storage technology market. Why is anyone in business but to make money?

Our job as consulting technologists and architects is to match the technology with the business needs, not to deploy the cool stuff because we think it blows the doors off the “old” stuff. I’d venture to say that most of the world’s data sits on regular spinning disk, and a very large chunk of it sits behind EMC disk. The shift to new technology will always be led by trailblazers and startups – people who can’t afford the traditional enterprise datacenter technology, people who accept the risk involved with new technology because the potential reward is great enough. Once the technology blender is done chewing up the weaker offerings, smart, business-oriented CIOs and IT directors will integrate the surviving innovations, leveraging proven manufacturers with consistent support and financial histories.

Those manufacturers that cling to the old ways of doing business (think enterprise software licensing models) are doomed to see ever-diminishing returns until they are blown apart into more nimble and creative fragments that can then begin to re-invent themselves into more relevant, yet reliable, technology vendors. EMC has avoided the problems that have plagued other vendors and continued to evolve and grow, although they will never make everyone happy (I don’t think they are trying to!). HP has had many ups and downs, and perhaps more downs, due to a lack of consistent leadership and vision. Are they on the right track with 3PAR? It is a heck of a lot more likely than it was before the acquisition, but they need to get a few miles behind them to prove that they will continue to innovate and support the technology while delivering business value, continued development and excellent post-sales support. Dell’s investments in Compellent, particularly, bode very well for the re-invention of the commodity manufacturer into a true enterprise solution provider and manufacturer. The Compellent technology, revolutionary and “risky” a few years ago, is proving to be a very solid technology that innovates while providing proven business value. Thank goodness for choices and competition! EMC is better because they take the success of their competitors at HP and Dell seriously.

If I were starting up a company now, using Kickstarter or other venture investment capital, I would choose the new products, the brand new storage or software that promises the same performance and reliability as the enterprise products at a much lower cost, knowing that I am exposed to these risks:

  • the company may not last long (poor management, acts of god, fickle investors) or
  • the support might frankly suck, or
  • engineering development will diminish as the vendor investors wait for the acquisition to get the quick payoff.

Meanwhile, large commercial organizations are starting to adopt cloud, flash and virtualization technologies precisely for all the reasons above. Their leadership needs to drive efficiency into datacenter technologies to increase speed to market and improve profitability. As the bleeding edge becomes the smart bet, brought to market by the market-leading vendors, we will continue to see success where Business Value and Innovation intersect.

Why Apple, Not Dropbox, Amazon or Google Drive, is Dominating Cloud Storage

Apple is dominating the cloud storage wars, followed by Dropbox, Amazon and Google, according to Strategy Analytics’ ‘Cloud Media Services’ survey. Cloud storage is overwhelmingly dominated by music; around 90% of Apple, Amazon and Google’s cloud users store music. Even Dropbox – which has no associated content ecosystem – sees around 45% of its users storing music files. Dropbox’s recent acquisition of Audiogalaxy will add a much-needed native music player to the platform in the coming months.

In a recent study of almost 2,300 connected Americans, Strategy Analytics found that 27% have used Apple’s iCloud, followed by 17% for Dropbox, 15% for Amazon Cloud Drive and 10% for Google Play.

Usage of cloud storage is heavily skewed towards younger people, in particular 20-24 year olds, whilst Apple’s service is the only one with more female than male users. Amongst the big four, Google’s is the one most heavily skewed towards males.

“Music is currently the key battleground in the war for cloud domination. Google is tempting users by giving away free storage for 20,000 songs which can be streamed to any Android device, a feature both Amazon and Apple charge annual subscriptions for,” observes Ed Barton, Strategy Analytics’ Director of Digital Media. “However, the growth of video streaming and the desire to access content via a growing range of devices will see services such as the Hollywood-backed digital movie initiative Ultraviolet – currently used by 4% of Americans – increase market share.”

Barton continues, “The cloud’s role in the race to win over consumers’ digital media libraries has evolved from a value added service for digital content purchases to a feature-rich and increasingly device agnostic digital locker for music and movies. Dropbox being used by 1 in 6 Americans shows that an integrated content storefront isn’t essential to build a large user base, however we expect competition to intensify sharply over the coming years.”

Strategy Analytics found that, the big four cloud storage services aside, recognition of other brands was uniformly low. Furthermore, 55% of connected Americans have never used a cloud storage service – although, amongst consumers who have used one, one third (33%) had done so in the last week.

“There needs to be considerable investment in evangelizing these services to a potentially willing yet largely oblivious audience,” suggests Barton. “Given the size of bet Hollywood is making with Ultraviolet, this will be essential to their success given a crowded market and widespread apathy. However, more fundamental questions remain – is the use of more than one cloud service going to be too much for consumers to handle and will consolidation in such a fragmented market become inevitable?”

Barton concludes, “Although cloud storage is fast becoming a key pillar of digital platform strategies for the world’s leading device manufacturers and digital content distributors, there’s still a lot of work to do in educating consumers – particularly those over 45. With over half of consumers yet to use any consumer cloud based service, 2013 predictions for the ‘year of the cloud’ seem unrealistic. However given the market influence of the leading players pushing the concept, in particular Apple, Amazon, Google and Ultraviolet, I won’t be surprised to see mainstream adoption and usage spike within the next two to three years in the key US market.”

Google Says Drive Problem Resolved, Wants to Hear From You if You Still Have a Problem

According to Google, the outage for some Google Drive users should be completely resolved.

Still having a problem? Then Google wants to hear about it:

The problem with Google Drive should be resolved. We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority at Google, and we are making continuous improvements to make our systems better. If you are still experiencing an issue, please contact us via the Google Help Center.

Google Drive Outage Updates

From the Google App Status Dashboard:

March 18, 2013 7:17:00 AM PDT

We’re investigating reports of an issue with Google Drive. We will provide more information shortly.

 March 18, 2013 8:10:00 AM PDT

We’re aware of a problem with Google Drive affecting a significant subset of users. The affected users are unable to access Google Drive. We will provide an update by March 18, 2013 9:10:00 AM PDT detailing when we expect to resolve the problem. Please note that this resolution time is an estimate and may change.

March 18, 2013 8:55:00 AM PDT

Google Drive service has already been restored for some users, and we expect a resolution for all users within the next 1 hours. Please note this time frame is an estimate and may change.

CloudBerry Adds SFTP to Explorer 3.8

CloudBerry Lab, a provider of backup and management solutions for public cloud storage services, has added SFTP support to the newest release of CloudBerry Explorer, version 3.8, an application for accessing, moving and managing data in remote locations such as FTP servers and public cloud storage services, including Amazon S3, Amazon Glacier, Windows Azure, OpenStack and others.

In the new version of CloudBerry Explorer, an SFTP server is supported as one of the remote location options. Users can now perform file access, file transfer and file management operations between SFTP servers and local storage.

Secure File Transfer Protocol (SFTP), also known as SSH File Transfer Protocol, is an extension of the SSH-2 protocol that provides a secure file transfer capability. The protocol assumes that it runs over a secure channel, such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.
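As a quick illustration of how SFTP rides on an authenticated SSH session, here is a minimal sketch using the third-party paramiko library – not CloudBerry’s own code. The host, credentials and paths are placeholders.

```python
import paramiko

# Placeholders: substitute a real SFTP host and credentials.
HOST, PORT = "sftp.example.com", 22
USER, PASSWORD = "backupuser", "secret"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
ssh.connect(HOST, port=PORT, username=USER, password=PASSWORD)  # SSH authenticates the user first

sftp = ssh.open_sftp()   # SFTP then runs over the established SSH channel
sftp.put("local-backup.zip", "/remote/backups/local-backup.zip")  # upload a file
print(sftp.listdir("/remote/backups"))                            # manage remote files
sftp.get("/remote/backups/old.zip", "old.zip")                    # download a file
sftp.close()
ssh.close()
```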

Riverbed’s Whitewater Adds AWS Glacier, Google Storage Support

Riverbed Technology today announced Whitewater Operating System (WWOS) version 2.1 with support for Amazon Glacier storage and Google Cloud Storage. WWOS 2.1 increases operational cost savings and delivers high data durability from cloud storage services, improving disaster recovery readiness. In addition, Riverbed introduced larger virtual Whitewater appliances that allow customers to support larger data sets, improve disaster recovery capabilities, and manage multiple Whitewater appliances from a single window with a management console. These enhancements to the Whitewater cloud storage product family help enterprises use cloud storage to meet critical backup requirements, modernize data management strategies, and overcome challenges created by data growth.

“Once created, most unstructured data is rarely accessed after 30-90 days. Leveraging the cloud for storing these data sets makes a lot of sense, particularly given the attractive prices of storage services designed for long-term retention, such as Amazon Glacier,” said Dan Iacono, research director from IDC’s storage practice. “The ability of cloud storage devices to cache locally and provide access to recent data provides real benefits from an operational cost perspective to avoid unnecessary transfer costs from the cloud.”

Cloud Storage Ecosystem Expansion

Riverbed is offering customers choice and flexibility for data protection by adding Amazon Glacier and Google Cloud storage to its Whitewater cloud storage ecosystem. Now, Whitewater customers using Amazon Glacier cloud storage have immediate access to recent backup data while enjoying pricing from Amazon as low as one cent per gigabyte per month — approximately eight times cheaper than other currently available cloud storage offerings.

In addition, the extremely high data durability offered by Amazon cloud storage services and the ability to access the data from any location with an Internet connection greatly improves an organization’s disaster recovery (DR) readiness.

Larger Virtual Whitewater Appliances

With the introduction of the larger virtual Whitewater appliances, Riverbed allows customers preferring virtual appliances to protect larger data sets as well as simplify disaster recovery. The new virtual Whitewater appliances support local cache sizes of four or eight terabytes and integrate seamlessly with leading data protection applications as well as all popular cloud storage services. To streamline management for enterprise-wide deployments, WWOS 2.1 includes new management capabilities that enable monitoring and administration of all Whitewater devices from a single console, with one-click drill-down into any appliance.

“We have been successfully using Riverbed Whitewater appliances for backup with Amazon S3 in our facilities in Germany, Switzerland, and the U.S. since June 2012,” said Drew Bartow, senior information technology engineer at Tipper Tie. “We were eager to test the Whitewater 3010 appliance with Amazon Glacier and the total time to configure and start moving data to Glacier was just 24 minutes. With Glacier and Whitewater we could potentially save considerably on backup storage costs.”

“The features in WWOS 2.1 and the larger virtual appliances drastically change the economics of data protection,” said Ray Villeneuve, vice president of corporate development at Riverbed. “With our advanced, in-line deduplication and optimization technologies, Whitewater shrinks data stored in the cloud by up to 30 times on average — for example, Whitewater customers can now store up to 100 terabytes of backup data that is not regularly accessed in Amazon Glacier for as little as $2,500 per year. The operational cost savings and high data durability from cloud storage services improve disaster recovery readiness and will continue to rapidly accelerate the movement from tape-based and replicated disk systems to cloud storage.”
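To see how deduplication drives the economics quoted above, here is a back-of-the-envelope sketch. The one-cent-per-gigabyte-per-month price comes from the article; the deduplication ratio is a variable, and a real bill would also include request, retrieval and transfer fees.

```python
def glacier_annual_cost(logical_tb, dedup_ratio, price_per_gb_month=0.01):
    """Rough annual storage cost for deduplicated backups in Amazon Glacier.
    Ignores request, retrieval, and transfer fees (illustrative only)."""
    stored_gb = logical_tb * 1024 / dedup_ratio   # physical data left after dedup
    return stored_gb * price_per_gb_month * 12

# 100 TB of backups at a modest 5:1 reduction works out to roughly $2,450/year;
# higher dedup ratios push the figure down further.
print(round(glacier_annual_cost(100, 5)))    # ~2458
print(round(glacier_annual_cost(100, 30)))   # ~410
```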