Category Archives: Storage

DigitalOcean Launches Incentivized Customer Referral Program

DigitalOcean, a New York-based cloud server and hosting provider, has launched a commission-based customer referral program. The program is open to registered DigitalOcean customers and pays them a $10 commission for each newly acquired customer whose billing totals $10. Registered users get their own unique referral link that lets them track the customers they’ve brought in, as well as their commission totals.

Boasting over 190,000 Linux-based cloud servers launched since its inception, DigitalOcean is a graduate of the TechStars startup accelerator. Each SSD “Droplet” (the company’s term for its cloud servers) delivers solid disk and network performance, coupled with the ability to easily migrate and resize existing Droplets with a single click.

“We love listening to our customers and our new referral program is one way we can give back,” says Ben Uretsky, CEO of DigitalOcean. “Referrals have been a huge driver of success for DigitalOcean. We want to give back to our loyal customers by rewarding them for continuing to spread the word and help our business grow.”

Additional cloud hosting plans range from 512 MB of RAM starting at $5 per month up to a maximum capacity of 96 GB of RAM and 10 TB of bandwidth transfer.

EMC World 2013 Recap

By Randy Weis, Consulting Architect, LogicsOne


The EMC World conference held last week in Las Vegas demonstrated EMC’s strong leadership position in the virtualization, storage and software-defined datacenter markets.

Seriously, this is not the Kool-Aid talking. Before anyone jumps in to point out how all the competitors are better at this or that, or how being a partner or customer of EMC has its challenges, I’d like to refer you to a previous blog I wrote about EMC: “EMC Leads the Storage Market for a Reason.” I won’t recap everything, but that blog talks about business success, not technical wizardry. Do the other major storage and virtualization vendors have solutions and products in these areas? Absolutely, and I promise to bring my opinions and facts around those topics to this blog soon.

What I found exciting about this conference was how EMC is presenting a more cohesive and integrated approach to the items listed below. The XtremIO product has been greatly improved; some might say it is finally ready for real-world use. I’d say the same about the EMC disaster recovery (DR) and business continuity (BC) solutions built on RecoverPoint and VPLEX – VPLEX is affordable and ready to be integrated into the VNX line. The VNX product line is mature now, and you can expect announcements around a major refresh this year. The same goes for the BRS line – no great product announcements, but better integration and pricing that help customers and solution providers alike.

There are a few items I’d like to bullet for you:

  1. Storage Virtualization – EMC has finally figured out that DataCore is onto something, and it spent considerable time promoting ViPR at EMC World. This technology (though it comes to market 12 years after DataCore) will open the eyes of the entire datacenter virtualization market to the possibilities of a storage hypervisor. What VMware did for computing, this technology will do for storage: storage resources deployed automatically, independent of the array manufacturer, with high-value software features running on anything, anywhere. There are pluses and minuses to this new EMC product and approach, but this technology area will soon become a hot strategy for IT spending. Everyone needs to start understanding why EMC finally thinks this is a worthwhile investment and is making it a priority. To echo what I said in that prior blog, “Thank goodness for choices and competition!” Take a fresh look at DataCore and compare it to the new EMC offering. What’s better? What’s worse?
  2. Business Continuity and Highly Available Datacenters – Linking datacenters to turn DR sites into an active computing resource is now within reach of non-enterprise organizations – midmarket, commercial, healthcare, SMB – however you want to define it.
    1. VPLEX links datacenters together (with some networking help) so that applications can run on any available compute or storage resource in any location – a significant advance in building private cloud computing. This is now licensed to work with VNX systems, is much cheaper and can be built into any quote. We will start looking for ways to build this into various solutions strategies – DR, BC, array migration, storage refreshes, stretch clusters, you name it.  VPLEX is also a very good solution for any datacenter in need of a major storage migration due to storage refresh or datacenter migration, as well as a tool to manage heterogeneous storage.
    2. RecoverPoint is going virtual – the leading replication tool for SRM is integrated with VPLEX and will now be available as a virtual appliance. RecoverPoint has also developed multi-site capabilities, supporting up to five sites and eight RP “appliances” per site in fan-in or fan-out configurations.
    3. Usability of both has improved, with management standardized through Unisphere editions for both products.
  3. High Performance Storage and Computing – Server-side flash, flash cache virtualization and workload-crushing all-flash arrays in the XtremSF, XtremSW and XtremIO product line (the server-side pieces were formerly known as VFCache). As usual, the second release nails it for EMC. GreenPages was recently recognized as a global leader in mission-critical application virtualization, and this fits right in. Put simply, put an SSD card in a vSphere host and you can boost SQL/Oracle/Exchange performance by over 100% in some cases. The big gap was in HA/DRS/vMotion: the host cache was a local resource, so vMotion was broken, along with HA and DRS. The new release virtualizes the cache so that VMs assigned local cache will still see that cache even if they move. This isn’t an all-or-nothing solution – you can designate the mission-critical apps to use the cache and tie them to a subset of the cluster, which makes the strategy affordable and granular (see the sketch after this list).
  4. Isilon – This best-in-class NAS system keeps getting better. Clearly defined use cases, much better VMware integration and more successful implementations make this product the one to beat in the scale-out NAS market.
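
To make the server-side caching idea in item 3 concrete, here is a minimal sketch of a read cache with LRU eviction. It is illustrative only; the class and its behavior are assumptions used to teach the concept, not how XtremSW actually works internally.

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy model of a server-side flash read cache (illustrative sketch only)."""

    def __init__(self, capacity_blocks, array):
        self.capacity = capacity_blocks
        self.array = array          # backing array, modeled here as a dict of blocks
        self.cache = OrderedDict()  # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # cache hit: served from local flash
            return self.cache[block_id]
        data = self.array[block_id]           # cache miss: fetch from the slower array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block
        return data

# Hypothetical usage: a 3-block cache in front of a 10-block "array"
array = {i: ("block-%d" % i).encode() for i in range(10)}
cache = FlashReadCache(capacity_blocks=3, array=array)
cache.read(1); cache.read(2); cache.read(1)   # block 1 is now the hottest entry
```

The point of virtualizing the cache, as described above, is that a mapping like this follows the VM instead of being pinned to a single host.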


Another whole article could be written about ViPR, EMC’s brand-new storage virtualization tool, and that will be coming soon. As promised, I’ll also take a look at the competitive offerings from HP and Dell, at least, in the storage virtualization, DR/BC, server-side flash and scale-out NAS solution areas, as well as cloud storage integration strategies. Until then, thanks for reading, and please share your thoughts.

Huh? What’s the Network Have to Do with It?

By Nate Schnable, Sr. Solutions Architect

Having been in this field for 17 years, I’m still amazed that people tend to forget about the network. Everything a user accesses on a device that isn’t installed or stored locally depends on the network more than on any other element of the environment. The network is responsible for the quick and reliable transport of data, which means the user experience with remote files and applications depends almost completely on it.

However, this isn’t always obvious to everyone, so people rarely ask for network-related services; they simply aren’t aware the network is the cause of their problems. Whether it is a storage, compute, virtualization or IP telephony initiative, all of these projects rely heavily on the network to function properly. In fact, the network is the only element of a customer’s environment that touches every other component. Its stability can make or break a project’s success and the all-important user experience.

In a VoIP initiative we have to ensure, among many other things, that proper QoS policies are set up – so let’s hope you are not running on dumb hubs. Power over Ethernet (PoE) for the phones should be available, unless you want to use power bricks or some type of mid-span device (yuck). I used to work for a Fortune 50 insurance company, and one day an employee decided to plug both of the ports on their phone into the network because it would make the experience even better – not so much. They brought down that whole environment. We made some changes after that to keep it from happening again!

In a disaster recovery project we have to look at the distances, and resulting latencies, between locations. What is the bandwidth, and how much data do you need to back up? Do we have Layer 2 handoffs between sites, or is it a more traditional Layer 3 site-to-site connection?
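
Those bandwidth and data-volume questions are worth a back-of-the-envelope check before any DR design. Here is a minimal sketch; the 70% link-efficiency factor is an assumed placeholder, not a measured value:

```python
def backup_window_hours(data_gb, link_mbps, efficiency=0.7):
    """Estimate hours needed to push data_gb across a WAN link.

    efficiency discounts protocol overhead and contention (assumed value).
    """
    data_megabits = data_gb * 8 * 1000       # GB -> megabits (decimal units)
    effective_mbps = link_mbps * efficiency  # usable throughput after overhead
    return data_megabits / effective_mbps / 3600

# Example: 2 TB of changed data over a 100 Mbps link
print(f"{backup_window_hours(2000, 100):.1f} hours")  # ~63.5 hours
```

If the answer doesn’t fit the available window, you’ll be shopping for more bandwidth, deduplication or replication seeding before the project starts.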

If we are implementing a new iSCSI SAN, do we need ten-gig or one-gig links? Do your switches support jumbo frames and flow control? Hope that your iSCSI switches are truly stackable, because otherwise spanning tree could leave some of those paths redundant but not active.
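
A quick way to verify jumbo frames end to end is a do-not-fragment ping sized to the full MTU. This sketch assumes Linux ping flags and a placeholder address; other operating systems use different switches:

```python
import subprocess

def jumbo_frames_ok(target_ip, mtu=9000, overhead=28):
    """Ping with the DF bit set; payload = MTU minus 28 bytes of IPv4 + ICMP headers."""
    payload = mtu - overhead
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "3", "-s", str(payload), target_ip],
        capture_output=True,
    )
    return result.returncode == 0  # success means no hop fragmented or dropped the frame

# Placeholder address for an iSCSI target portal
print(jumbo_frames_ok("192.168.10.20"))
```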

I was reading the other day that sales of smartphones and tablets would reach approximately 1.2 billion in 2013. Some of these will most certainly end up on your wireless networks. How to manage that is definitely a topic for another day.

In the end, it just makes sense to consider the network implications before jumping into almost any type of IT initiative. Just because those green lights are flickering doesn’t mean it’s all good.


To learn more about how GreenPages Networking Practice can help your organization, fill out this form and someone will be in touch with you shortly.

Protecting and Preserving Our Digital Lives is a Task We Want to Have Already Done

I once read that a favorite writer of mine, when told by people he met at cocktail parties how much they “wanted to write,” would reply, “No, you want to have written.”

Protecting and preserving our digital lives is much the same — we want to have already taken care of it. We don’t actually want to go through the hassle of doing it.

An article by Rick Broida in PC World sums it up thus:

There are two kinds of people in the world: Those who have lost critical data, and those who will. In other words, if you use technology long enough and neglect to back up your data, you’re guaranteed to have at least one extremely bad day.

The article goes on to outline “How to build a bulletproof cloud backup system without spending a dime.” There’s a lot to do and it all takes effort, but he’s right. Whether you take all of his recommendations or just some, it’s a good place to start thinking about the steps you (we) all need to take.

Here’s an idea: Come up with a plan and implement it in pieces until you get to the point where you know you are ready for the digital disaster that is out there waiting for us all.
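
As a concrete example of one such piece, here is a minimal sketch of an incremental copy script; the paths are placeholders, and a real backup plan adds versioning, verification and an offsite copy:

```python
import hashlib
import shutil
from pathlib import Path

def incremental_backup(src_dir, dst_dir):
    """Copy only files whose content changed since the last run."""
    src, dst = Path(src_dir), Path(dst_dir)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if target.exists():
            # Compare content fingerprints; skip files that are unchanged
            if (hashlib.sha256(f.read_bytes()).hexdigest()
                    == hashlib.sha256(target.read_bytes()).hexdigest()):
                continue
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copies data plus timestamps and permissions

incremental_backup(Path.home() / "Documents", "/mnt/backup/Documents")
```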


The Death of DAS?


For over a decade, Direct Attached Storage (DAS) has been a no-brainer for many organizations: simple, fast and cost-effective. But as applications, compute and storage move to the cloud, DAS is looking like less and less of a sure bet. In fact, it’s looking more like a liability. Migrating from traditional DAS models to cloud storage is not as difficult or complex as it seems, though, and the good news for VARs and service providers is that they can make recommendations to customers with large DAS estates that, given solid integration and some lateral thinking, will let those customers get the best use out of what may initially seem to be redundant technology.


In this recent piece published on Channel Pro, John Zanni, vice president of service provider marketing and alliances at Parallels, takes a look at the drawbacks of DAS in a cloud environment – and what alternatives are out there.




Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with Datacenter federation…what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seemed that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – toward a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down, and the simplicity and integration are moving up – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active-active data centers as sort of an expected norm rather than an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various XtremIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.
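
To illustrate the kind of decision that intelligent tiering software makes, here is a toy model; it is an assumption-laden sketch of the general idea, not how FAST is actually implemented. It counts accesses per extent and promotes the hottest extents to the small flash tier:

```python
from collections import Counter

class TieringEngine:
    """Toy model of automated storage tiering (illustrative sketch only)."""

    def __init__(self, flash_extents):
        self.flash_extents = flash_extents  # how many extents fit on the flash tier
        self.heat = Counter()               # extent_id -> access count

    def record_io(self, extent_id):
        self.heat[extent_id] += 1

    def placement(self):
        # Promote the most frequently accessed extents; everything else stays on disk
        hot = {e for e, _ in self.heat.most_common(self.flash_extents)}
        return {e: ("flash" if e in hot else "disk") for e in self.heat}

engine = TieringEngine(flash_extents=2)
for extent in [1, 1, 1, 2, 2, 3, 4]:
    engine.record_io(extent)
print(engine.placement())  # extents 1 and 2 land on flash; 3 and 4 stay on disk
```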

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.
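
The always-on deduplication point is easy to see with a toy content-addressed block store; this is illustrative only, since a real array fingerprints blocks inline along optimized code paths. Identical blocks are stored exactly once, so the physical footprint can be far smaller than the logical one:

```python
import hashlib

class DedupStore:
    """Toy inline-deduplicating block store (illustrative sketch only)."""

    BLOCK = 4096  # fixed 4 KB blocks, a common dedup granularity

    def __init__(self):
        self.blocks = {}   # fingerprint -> block data, stored once
        self.volume = []   # logical block map: a list of fingerprints

    def write(self, data):
        for i in range(0, len(data), self.BLOCK):
            chunk = data[i:i + self.BLOCK]
            fp = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(fp, chunk)  # duplicate chunks cost no new space
            self.volume.append(fp)

    def physical_bytes(self):
        return sum(len(c) for c in self.blocks.values())

store = DedupStore()
store.write(b"A" * 4096 * 100)    # 100 identical logical blocks...
print(store.physical_bytes())     # ...stored physically as one 4 KB block: 4096
```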

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire, et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting times indeed…

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Want 100 GB of Free Cloud Storage For Life?

Zoolz is promoting its cloud backup service with an offer to give the first million users 100 GB for free. For life. The catch? It uses AWS Glacier, Amazon’s cheaper alternative to S3. Glacier, of course, enforces a delay of 3 to 5 hours to retrieve files, and there are limits on monthly retrieval. But for the right purposes (like “Store & Ignore”) it might be a real deal if you act soon enough.
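
For a sense of what that 3-to-5-hour delay means in practice, a Glacier retrieval is an asynchronous job you initiate and then poll. Here is a hedged sketch using boto3; the vault name and archive ID are placeholders:

```python
import time

import boto3

glacier = boto3.client("glacier")

# Kick off an archive retrieval (the archive ID below is a placeholder)
job = glacier.initiate_job(
    accountId="-",
    vaultName="my-vault",
    jobParameters={"Type": "archive-retrieval", "ArchiveId": "EXAMPLE_ARCHIVE_ID"},
)

# Poll until Glacier stages the archive -- typically hours, not seconds
while True:
    status = glacier.describe_job(accountId="-", vaultName="my-vault", jobId=job["jobId"])
    if status["Completed"]:
        break
    time.sleep(900)  # check every 15 minutes

output = glacier.get_job_output(accountId="-", vaultName="my-vault", jobId=job["jobId"])
data = output["body"].read()  # the retrieved archive bytes
```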

EMC Leads the Storage Market for a Reason

By Randy Weis, Consulting Architect, LogicsOne

There are reasons that EMC is a leader in the market. Is it because they come out first with the latest and greatest technological innovation? No, or at least not commonly. Is it because they rapidly turn over their old technology and do sweeping replacements of their product lines with the new stuff? No. It’s because they invest significantly in working out what will work commercially and what won’t, and in integrating the stuff that passes that test into traditional storage technology and evolving product lines.

Storage admins and enterprise datacenter architects are notoriously conservative and resistant to change. It is purely economics that drives most of the change in datacenters, not the open source geeks (I mean that with respect), mad scientists and marketing wizards who are churning out and hyping revolutionary technology. The battle for market leadership and ever-greater profits will always dominate the storage technology market. Why is anyone in business but to make money?

Our job as consulting technologists and architects is to match the technology with the business needs, not to deploy the cool stuff because we think it blows the doors off of the “old” stuff. I’d venture to say that most of the world’s data sits on regular spinning disk, and a very large chunk of that behind EMC disk. The shift to new technology will always be led by trailblazers and startups, people who can’t afford the traditional enterprise datacenter technology, people that accept the risk involved with new technology because the potential reward is great enough. Once the technology blender is done chewing up the weaker offerings, smart business oriented CIOs and IT directors will integrate the surviving innovations, leveraging proven manufacturers that have consistent support and financial history.

Those manufacturers that cling to the old ways of doing business (think enterprise software licensing models) are doomed to see ever-diminishing returns until they are blown apart into more nimble and creative fragments that can then begin to re-invent themselves into more relevant, yet reliable, technology vendors. EMC has avoided the problems that have plagued other vendors and continued to evolve and grow, although they will never make everyone happy (I don’t think they are trying to!). HP has had many ups and downs, and perhaps more downs, due to a lack of consistent leadership and vision. Are they on the right track with 3PAR? It is a heck of a lot more likely than it was before the acquisition, but they need to get a few miles behind them to prove that they will continue to innovate and support the technology while delivering business value, continued development and excellent post-sales support. Dell’s investments in Compellent, particularly, bode very well for the re-invention of the commodity manufacturer into a true enterprise solution provider and manufacturer. The Compellent technology, revolutionary and “risky” a few years ago, is proving to be a very solid technology that innovates while providing proven business value. Thank goodness for choices and competition! EMC is better because they take the success of their competitors at HP and Dell seriously.

If I were starting up a company now, using Kickstarter or other venture capital, I would choose the new products, the brand-new storage or software that promises the same performance and reliability as the enterprise products at a much lower cost, knowing that I am exposed to these risks:

  • the company may not last long (poor management, acts of God, fickle investors), or
  • the support might, frankly, suck, or
  • engineering development will diminish as the vendor’s investors wait for an acquisition and a quick payoff.

Meanwhile, large commercial organizations are starting to adopt cloud, flash and virtualization technologies precisely for all of the reasons above. Their leadership needs datacenter technologies that increase speed to market and improve profitability. As the bleeding edge becomes the smart bet, brought to market by the market-leading vendors, we will continue to see success where business value and innovation intersect.

Why Apple, Not Dropbox, Amazon or Google Drive, is Dominating Cloud Storage

Apple is dominating the cloud storage wars, followed by Dropbox, Amazon and Google, according to Strategy Analytics’ ‘Cloud Media Services’ survey. Cloud storage is overwhelmingly dominated by music: around 90% of Apple, Amazon and Google cloud users store music. Even Dropbox – which has no associated content ecosystem – sees around 45% of its users storing music files. Dropbox’s recent acquisition of Audiogalaxy will add a much-needed native music player to the platform in the coming months.

In a recent study of almost 2,300 connected Americans, Strategy Analytics found that 27% have used Apple’s iCloud, followed by 17% for Dropbox, 15% for Amazon Cloud Drive and 10% for Google Play.

Usage of cloud storage is heavily skewed towards younger people, in particular 20-24 year olds, whilst Apple’s service is the only one with more female than male users. Amongst the big four, Google’s is the one most heavily skewed towards males.

“Music is currently the key battleground in the war for cloud domination. Google is tempting users by giving away free storage for 20,000 songs which can be streamed to any Android device, a feature both Amazon and Apple charge annual subscriptions for,” observes Ed Barton, Strategy Analytics’ Director of Digital Media. “However, the growth of video streaming and the desire to access content via a growing range of devices will see services such as the Hollywood-backed digital movie initiative Ultraviolet – currently used by 4% of Americans – increase market share.”

Barton continues, “The cloud’s role in the race to win over consumers’ digital media libraries has evolved from a value added service for digital content purchases to a feature-rich and increasingly device agnostic digital locker for music and movies. Dropbox being used by 1 in 6 Americans shows that an integrated content storefront isn’t essential to build a large user base, however we expect competition to intensify sharply over the coming years.”

Strategy Analytics found that, the big four cloud storage services aside, recognition of other brands was uniformly low. Furthermore, 55% of connected Americans have never used a cloud storage service – although, among consumers who have used one, a third (33%) had done so in the last week.

“There needs to be considerable investment in evangelizing these services to a potentially willing yet largely oblivious audience,” suggests Barton. “Given the size of bet Hollywood is making with Ultraviolet, this will be essential to their success given a crowded market and widespread apathy. However, more fundamental questions remain – is the use of more than one cloud service going to be too much for consumers to handle and will consolidation in such a fragmented market become inevitable?”

Barton concludes, “Although cloud storage is fast becoming a key pillar of digital platform strategies for the world’s leading device manufacturers and digital content distributors, there’s still a lot of work to do in educating consumers – particularly those over 45. With over half of consumers yet to use any consumer cloud based service, 2013 predictions for the ‘year of the cloud’ seem unrealistic. However given the market influence of the leading players pushing the concept, in particular Apple, Amazon, Google and Ultraviolet, I won’t be surprised to see mainstream adoption and usage spike within the next two to three years in the key US market.”