Category Archives: Disaster Recovery

6 Key Questions When Considering a DRaaS Solution


I recently sat down with our very own Tim Ferris—Solutions Architect, Yankees fan (don’t hold that against him 🙂), and DRaaS guru. We talked about some of the common questions customers ask when considering DRaaS and the common themes Tim sees when helping customers plan and implement DRaaS solutions. Check out our conversation below about the key questions to ask when considering a DRaaS solution.

  1. Does your company really need a DRaaS solution?

There are a variety of reasons why a DRaaS solution isn’t always the best fit for an organization. An offsite disaster recovery strategy may not be a good match if the business is site dependent. Cost is another factor: companies need to make the business determination whether disaster recovery is something strategic to invest in versus putting that cost into a comprehensive insurance policy.

  2. How are DRaaS solutions priced?

Traditionally, a huge barrier to offsite DR adoption has been price; DRaaS, however, makes DR and the supporting infrastructure much more affordable and attractive to companies. DRaaS billing and pricing remain a challenge, though, because pricing models vary widely across providers. This can be a huge point of contention and another reason to use a solution provider who can model out true cost comparisons and estimates across various cloud partners.

  3. Is your company ready to embrace a modern DRaaS strategy?

For many traditional IT organizations, the move to DRaaS can be intimidating because you’re moving your DR environment off-site to a third party. You may also be concerned about losing data stewardship and need to understand the trade-offs that running on shared infrastructure can pose. In addition, some applications have physical dependencies that can’t be handled by virtual DRaaS, so evaluating your application portfolio is crucial. Finally, eliminating most of your capex and turning it into a monthly recurring cost can be valuable to many companies.

  4. How simple is it to implement a DRaaS solution?

There’s a lot of marketing hype around this idea that DRaaS solutions are very simple: “Buy our DRaaS solution and we’ll have you up in an hour!” While many providers can technically get the DRaaS framework up quickly, there are a lot of variables that are unique to each company. (See #5) Because DRaaS is not one-size-fits-all, many companies work with IT solution providers (like GreenPages) to help create and implement a DR migration plan and implementation strategy. Compounding the issue is that the DRaaS solution provider market is very crowded so it can be challenging to navigate the options—it’s important to choose based on your company’s specific requirements.

Tweet This: “Because DRaaS is not one-size-fits-all, many companies work with IT solution providers to help create and implement a DR migration plan and implementation strategy.” via @GreenPagesIT

  5. What sorts of barriers or common problems will I encounter?

As an organization, you must make sure you have completed a business impact analysis and defined your overarching disaster recovery requirements before someone can come in and implement the technical solution. Another prerequisite is understanding the interdependencies of all your applications so that you aren’t just replicating VMs, but are protecting the business solutions and applications critical to the company. While ongoing management isn’t a barrier to DRaaS, testing can be challenging no matter what DR solution you implement. (See #6.)

  6. Can’t I just have a backup solution rather than a DRaaS solution?

Most companies do have a backup solution but not always a practical DR plan. Restoring from backup tape can take days to weeks. A true DRaaS solution would provide recovery within minutes to hours. Backup is vitally important, but you may need the combination of backup with DRaaS to restore your systems properly, as these systems complement each other. Another important thing to keep in mind is that many companies do have a DR plan but have never tested it. Without testing, it’s not a plan, it’s just a theory. In addition, you will learn plenty of helpful and interesting information when you test your plan. Most importantly, you don’t want to learn that your DRaaS plan was faulty on the day you push the DR button due to an actual emergency.
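As a rough, hypothetical sketch of where the “days to weeks” can come from (the data size, tape throughput and retrieval delay below are illustrative assumptions, not figures from the interview):

```python
# Hypothetical restore-time comparison behind the "days vs. minutes-to-hours" claim.
# None of these figures come from the interview; they only illustrate the arithmetic.

data_tb = 20                    # assumed size of the systems to restore
tape_mb_per_s = 150             # assumed sustained tape restore throughput
offsite_retrieval_hours = 24    # assumed time to recall tapes from offsite storage

restore_hours = (data_tb * 1_000_000) / tape_mb_per_s / 3600
total_days = (offsite_retrieval_hours + restore_hours) / 24

print(f"Tape restore alone: ~{restore_hours:.0f} hours")
print(f"With retrieval and rebuild logistics: ~{total_days:.1f}+ days")
# A DRaaS failover, by contrast, boots replicated VMs at the provider in minutes to hours.
```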

Tweet This: “Most importantly, you don’t want to learn that your DRaaS plan was faulty on the day you push the DR button due to an actual emergency.” Tim Ferris, @GreenPagesIT

Thanks for checking out our blog post! If you have any more questions about implementing DRaaS or would like to speak to a technologist, please reach out to us or click below.

By Jake Cryan, Digital Marketing Specialist

The economics of disaster recovery

Companies increasingly need constant access to data, and the cost of losing this access – downtime – can be catastrophic. Large organizations can quickly find themselves in the eye of a storm when software glitches strike. Downtime can result in lost revenue, shaken customer loyalty and significant reputational damage.

In August 2013, the NASDAQ electronic exchange went down for 3 hours and 11 minutes, halting trading in Apple, Facebook, Google and 3,200 other companies. The outage resulted in the loss of millions of dollars, paralyzing trading in stocks with a combined value of more than $5.9 trillion. The Royal Bank of Scotland has now had five outages in three years, including one on the most popular shopping day of the year. Bloomberg also experienced a global outage in April 2015, leaving its terminals unavailable worldwide. Disaster recovery for these firms is not a luxury but an absolute necessity.

Yet whilst the costs of downtime are significant, disaster recovery is becoming more and more expensive as data volumes grow: by 2020 the average business will have to manage fifty times more information than it does today. Downtime costs companies on average $5,600 per minute, yet the cost of disaster recovery systems can be crippling as companies build redundant storage systems that rarely get used. As a result, disaster recovery has traditionally been a luxury only deep-pocketed organizations could afford, given the investment in equipment, effort and expertise needed to formulate a comprehensive disaster recovery plan.
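To put those figures together, here is a quick back-of-the-envelope sketch (the $5,600-per-minute average is the figure cited above; the outage durations are purely illustrative):

```python
# Rough downtime-cost sketch using the average of $5,600 per minute cited above.
# The outage durations below are illustrative, not taken from any specific incident.

COST_PER_MINUTE = 5_600  # USD

def downtime_cost(minutes: float) -> float:
    """Estimated revenue impact of an outage of the given length."""
    return minutes * COST_PER_MINUTE

for label, minutes in [("15-minute blip", 15),
                       ("4-hour outage", 4 * 60),
                       ("36-hour event", 36 * 60)]:
    print(f"{label:>15}: ~${downtime_cost(minutes):,.0f}")
```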

Cloud computing is now making disaster recovery available to all by removing the need for a dedicated remote location and hardware altogether. The fast retrieval of files in the cloud allows companies to avoid fines for missing compliance deadlines. Furthermore, the cloud’s pay-for-use model means organizations need only pay for protection when they need it and still have backup and recovery assets standing by. It also means firms can add any amount of data quickly, as well as easily expire and delete data. Compare this to traditional backup methods, where it is easy to miss files, data is only current to the last backup (which is increasingly insufficient as more data is captured via web transactions) and recovery times are longer.

Netflix shifted its streaming service to Amazon Web Services after a 2008 outage in its DVD operation, when a major database corruption left it unable to ship to customers for three days. Netflix says the cloud allows it to meet increasing demand at a lower price than it would have paid if it still operated its own data centres. It has robustly tested the resilience of its AWS deployment with failure-injection tools such as “Chaos Monkey”, the “Simian Army” and “Chaos Kong”, the last of which simulates an outage affecting an entire Amazon region.

Traditionally it has been difficult for organizations like Netflix to migrate to the cloud for disaster recovery as they have grappled with how to move petabytes of data that is transactional and hence continually in use. With technology such as WANdisco’s Fusion active replication making it easy to move large volumes of data to the cloud whilst transactions continue, companies can now move critical applications and processes seamlessly, enabling disaster recovery migration. In certain circumstances a move to the cloud even offers a chance to upgrade security, with industry-recognized audits making it much more secure than on-site servers.

Society’s growing reliance on crucial computer systems means that even short periods of downtime can result in significant financial loss or, in some cases, even put human lives at risk. In spite of this, many companies have been reluctant to allocate funding for disaster recovery because management often does not fully understand the risks. Time and time again, network computing infrastructure has proven inadequate. Cloud computing offers an opportunity to step up to a higher level of recovery capability at a cost that is palatable to nearly any sized business. The economics of disaster recovery in the cloud are such that businesses today cannot afford not to use it.

Written by David Richards, Co-Founder, President and Chief Executive of WANdisco.

Using the Cloud for Disaster Recovery

Here’s a short video I did discussing how we’ve helped clients use the cloud as a disaster recovery site. This can be a less expensive option that allows for test fail over while guaranteeing resources. If you have any questions or would like to talk about disaster recovery in the cloud in more detail, please reach out!


Or click to watch on YouTube

 

By Chris Chesley, Solutions Architect

 

Lessons from the Holborn fire: how disaster recovery as a service helps with business continuity

Disaster recovery is creeping up on the priority list for enterprises


The recent fire in Holborn highlighted an important lesson in business continuity and disaster recovery (BC/DR) planning: when a prompt evacuation is necessary ‒ whether because of a fire, flood or other disaster ‒ you need to be able to relocate operations without advance notice.

The fire, which was caused by a ruptured gas main, led to the evacuation of 5,000 people from nearby buildings, and nearly 2,000 customers experienced power outages. Some people lost Internet and mobile connectivity as well.

While firefighters worked to stifle the flames, restaurants and theatres were forced to turn away patrons and cancel performances, with no way to preserve their revenue streams. The numerous legal and financial firms in the area, at least, had the option to relocate their business operations. Some did, relying on cloud-based services to resume their operations remotely. But those who depended on physical resources on-site were, like the restaurants and theatres, forced to bide their time while the fire was extinguished.

These organisations’ disparate experiences reveal the increasing role of cloud-based solutions ‒ particularly disaster recovery as a service (DRaaS) solutions ‒ in BC/DR strategies.

The benefits of DRaaS

Today, an increasing number of businesses are turning to the cloud for disaster recovery. The DRaaS market is expected to experience a compound annual growth rate of 55.2 per cent from 2013 to 2018, according to global research company MarketsandMarkets.

The appeal of DRaaS solutions is that they provide the ability to recover key IT systems and data quickly, which is crucial to meeting your customers’ expectations for high availability. To meet these demands within the context of a realistic recovery time frame, you should establish two recovery time objectives (RTOs): one for operational issues that are specific to your individual environment (e.g., a server outage) and another for regional disasters (e.g., a fire). RTOs for operational issues are typically the most aggressive (0-4 hours). You have a bit more leeway when dealing with disasters affecting your facility, but RTOs should ideally remain under 24 hours.
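As a minimal sketch of how those two targets might be written down and checked against DR test results (the tier names and the check below are hypothetical; only the 0-4 hour and under-24-hour figures come from the guidance above):

```python
from dataclasses import dataclass

# Hypothetical encoding of the two RTO tiers described above.
# Only the time targets (0-4 hours operational, <24 hours regional) come from the text.

@dataclass
class RecoveryObjective:
    scope: str          # e.g. "operational" or "regional disaster"
    rto_hours: float    # maximum acceptable time to restore service

RTO_POLICY = [
    RecoveryObjective(scope="operational (e.g. server outage)", rto_hours=4),
    RecoveryObjective(scope="regional disaster (e.g. fire)", rto_hours=24),
]

def meets_policy(scope: str, actual_recovery_hours: float) -> bool:
    """Check a tested recovery time against the declared objective for its scope."""
    target = next(o for o in RTO_POLICY if o.scope.startswith(scope))
    return actual_recovery_hours <= target.rto_hours

# Example: a DR test that brought systems back in 6 hours after a simulated fire.
print(meets_policy("regional", 6))     # True: within the 24-hour target
print(meets_policy("operational", 6))  # False: misses the 4-hour target
```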

DRaaS solutions’ centralised management capabilities allow the provider to assist with restoring not only data but your entire IT environment, including applications, operating systems and systems configurations. Typically systems can be restored to physical hardware, virtual machines or another cloud environment. This service enables faster recovery times and eases the burden on your in-house IT staff by eliminating the need to reconfigure your servers, PCs and other hardware when restoring data and applications. In addition, it allows your employees to resume operations quickly, since you can access the environment from anywhere with a suitable Internet connection.

Scalability is another key benefit of DRaaS solutions. According to a survey by 451 Research, the amount of data storage professionals manage has grown from 215 TB in 2012 to 285 TB in 2014. To accommodate this storage growth, companies storing backups in physical servers have to purchase and configure additional servers. Unfortunately, increasing storage capacity can be hindered by companies’ shrinking storage budgets and, in some cases, lack of available rack space.

DRaaS addresses this issue by allowing you to scale your storage space as needed. For some businesses, the solution is more cost-effective than dedicated on-premise data centres or colocation solutions, because cloud providers typically charge only for the capacity used. Redundant data elimination and compression maximise storage space and further minimise cost.
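As a rough illustration of how data reduction and pay-per-use pricing interact (the reduction ratios and per-GB price are hypothetical; only the 285 TB starting point is the 451 Research figure cited above):

```python
# Hypothetical figures to illustrate how deduplication and compression
# reduce billed cloud-backup capacity; the ratios and price are assumptions.

protected_data_tb = 285          # the 451 Research 2014 average cited above
dedup_ratio = 3.0                # assumed 3:1 reduction from deduplication
compression_ratio = 1.5          # assumed additional 1.5:1 from compression
price_per_gb_month = 0.03        # assumed provider price, USD

stored_tb = protected_data_tb / (dedup_ratio * compression_ratio)
monthly_cost = stored_tb * 1024 * price_per_gb_month

print(f"Stored after reduction: {stored_tb:.1f} TB")
print(f"Approximate monthly bill: ${monthly_cost:,.0f}")
```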

When data needs to be maintained on-site

Standard DRaaS delivery models are able to help many businesses meet their BC/DR goals, but what if your organisation needs to keep data or applications on-site? Perhaps you have rigorous RTOs for specific data sets, and meeting those recovery time frames requires an on-premise backup solution. Or maybe you have unique applications that are difficult to run in a mixture of physical and virtual environments. In these cases, your business can leverage a hybrid DRaaS strategy which allows you to store critical data in an on-site appliance, offloading data to the cloud as needed.

You might be wondering, though, what happens to the data stored in an appliance in the event that you have to evacuate your facility. The answer depends on the type of service the vendor provides for the appliance. If you’re unable to access the appliance, recovering the data would require you to either access an alternate backup stored at an off-site location or wait until you regain access to your facility, assuming it’s still intact. For this reason, it’s important to carefully evaluate potential hybrid-infrastructure DRaaS providers.

DRaaS as part of a comprehensive BC/DR strategy

In order for DRaaS to be most effective for remote recovery, the solution must be part of a comprehensive BC/DR strategy. After all, what good is restored data if employees don’t have the rest of the tools and information they need to do their jobs? These additional resources could include the following:

• Alternate workspace arrangements

• Provisions for backup Internet connectivity

• Remote network access solutions

• Guidelines for using personal devices

• Backup telephony solutions

The Holborn fire was finally extinguished 36 hours after it erupted, but not before landing a blow on the local economy to the tune of £40 million. Businesses using cloud services as part of a larger business continuity strategy, however, were able to maintain continuity of operations and minimise their lost revenue. With the right resources in place, evacuating your building doesn’t have to mean abandoning your business.

By Matt Kingswood, head of managed services, IT Specialists (ITS)

WordPress whiz Pantheon buys NodeSquirrel in cloud backup play

Pantheon has acquired NodeSquirrel, a cloud backup tech specialist


Pantheon, a large website management platform for Drupal and WordPress-based sites, has acquired NodeSquirrel, a hosting provider specialising in open source, cloud-based data backup technology.

NodeSquirrel provides hosting and data backup and recovery services to over 300,000 websites, and the acquisition will see Pantheon offer NodeSquirrel to its own customers for free. Some of the core NodeSquirrel team will also join Pantheon following the acquisition.

“We have always had a big vision for what could be possible with NodeSquirrel. With Pantheon’s support, those dreams are going to become reality. Our shared vision of great, easy-to-use tools for developers and agencies makes this an incredible opportunity. We are excited to join the Pantheon team,” said Drew Gorton, co-founder of NodeSquirrel.

The move will give NodeSquirrel scale and Pantheon a new value-adding service to offer to existing customers. The company also said it plans larger investments in data backup and restore technology designed to handle larger file footprints and incremental backups.

Zack Rosen, Pantheon co-founder and chief executive officer, said: “It’s 2015 and people are still storing backups of their website locally. If anything happens, whether that is a security attack or a natural disaster, those websites are not protected. Secure, reliable offsite backups are a fundamental best practice.”

“Acquiring NodeSquirrel gives Pantheon the ability to make secure offsite backups freely available to every Drupal website on the planet,” Rosen added.

Cloud-based data backup and recovery services are being deployed more and more in a bid to complement both online and on-premise systems. Click here to learn more about how to use the cloud for backup.

Cloud Management, Business Continuity & Other 2013 Accomplishments

By Matt Mock, IT Director

It was a very busy year at GreenPages for our internal IT department. With 2013 coming to a close, I wanted to highlight some of the major projects we worked on over the course of the year. The four biggest projects we tackled were using a cloud management solution, improving our business continuity plan, moving our datacenter, and creating and implementing a BYOD policy.

Cloud Management as a Service

GreenPages now offers a Cloud Management as a Service (CMaaS) solution to our clients. We implemented the solution internally late last year, but really started utilizing it as a customer would this year by increasing what was being monitored and managed. We decided to put Exchange under the “Fully Managed” package of CMaaS. Exchange requires a lot of attention and effort; instead of hiring a full-time Exchange admin, we were able to offload that piece with CMaaS, as our Managed Services team does all the health checks to make sure any new configuration changes are correct. This resulted in considerable cost savings.

Having access to the team 24/7 is a colossal luxury. Before using CMaaS, if an issue popped up at 3 in the morning, we would find out about it the next morning and have to try to fix the problem during business hours. I don’t think I need to explain to anyone the hassle of fixing an issue with frustrated coworkers who are unable to do their jobs. Now, if an issue arises in the middle of the night, the problem has already been fixed before anyone shows up to start working. The Managed Services team also researches and remediates bugs that come up. This happened to us when we ran into some issues with Apple iOS calendaring: the Managed Services team did the research to determine the cause and went in and fixed the problem. If my team had tried to do this, it would have taken us 2-3 days of wasted time. Instead, we could focus on some of our other strategic projects. In fact, we are holding a webinar on December 19th that will cover strategies and benefits to being the ‘first-to-know,’ and we will also provide a demo of the CMaaS Enterprise Command Center.

We also went live with fully automated patching, which requires zero intervention from my team. Furthermore, we leveraged CMaaS to spin up a fully managed Linux environment. It’s safe to say that if we hadn’t implemented CMaaS, we would not have been able to accomplish all of our strategic goals for this year.

{Download this free whitepaper to learn more about how organizations can revolutionize the way they manage hybrid cloud environments}

Business Continuity Plan

We also determined that we needed to update our disaster recovery plan to a true, robust business continuity plan. A main driver of this was our more diverse office model. Not only were more people working remotely as our workforce expanded, but we now have office locations up and down the East Coast in Kittery, Boston, Attleboro, New York City, Atlanta, and Tampa. We needed to ensure that we could continue to provide top quality service to our customers if an event were to occur. My team took a careful look at our then-current infrastructure setup. After examining our policies and plans, we generated new ones around the optimal outcome we wanted and then adjusted the infrastructure to match. A large part of this included changing providers for our data and voice, which included moving our datacenter.

Datacenter Move

In 2013 we wanted more robust datacenter facilities. Ultimately, we were able to get into an extremely redundant and secure datacenter at the Markley Group in Boston that also provided us with cost savings. Furthermore, Markley is a large carrier hotel, which gives us additional savings on circuit costs. With this move we’re able to further our capabilities of delivering to our customers 24/7. Another benefit our new datacenter offered was excess office space, so if there ever were an event at one of our GreenPages locations, we would have a place to send people to work. I recently wrote a post that describes the datacenter move in more detail.

BYOD Policy

As 2013 ends, we are finishing our first full year with our BYOD policy. We are taking this time to look back, identify any issues with the policies or procedures, and adjust for the next year. Our plan is to ensure that year two is even more streamlined. I answered questions in a recent Q&A explaining our BYOD initiative in more detail.

I’m pretty happy looking back at the work we accomplished in 2013. As with any year, there were bumps along the way and things we didn’t get to that we wanted to. All in all though, we accomplished some very strategic projects that have set us up for success in the future. I think that we will start out 2014 with increased employee satisfaction, increased productivity of our IT department, and of course noticeable cost savings. Here’s to a successful 2014!

Is your IT team the first-to-know when an IT outage happens? Or, do you find out about it from your end users? Is your expert IT staff stretched thin doing first-level incident support? Could they be working on strategic IT projects that generate revenue? Register for our upcoming webinar to learn more!

 

Huh? What’s the Network Have to Do with It?

By Nate Schnable, Sr. Solutions Architect

Having been in this field for 17 years, it still amazes me that people tend to forget about the network. Everything a user accesses on their device that isn’t installed or stored locally depends on the network more than any other element of the environment. The network is responsible for the quick and reliable transport of data, which means the user experience while working with remote files and applications almost completely depends on it.

However, this isn’t always obvious to everyone. As a result, people rarely ask for network-related services because they aren’t aware the network is the cause of their problems. Whether it is a storage, compute, virtualization or IP telephony initiative, all of these types of projects rely heavily on the network to function properly. In fact, the network is the only element of a customer’s environment that touches every other component. Its stability can make or break a project’s success and the all-important user experience.

In a VoIP initiative we have to consider, amongst many things, that proper QoS policies be set up – so let’s hope you are not running on some dumb hubs. Power over Ethernet (PoE) for the phones should be available unless you want to use power bricks or some type of mid-span device (yuck). I used to work for a Fortune 50 insurance company, and one day an employee decided to plug both of the ports on their phone into the network because they thought it would make the experience even better – not so much. They brought down that whole environment. We made some changes after that to keep it from happening again!

In a disaster recovery project we have to look at distances and the resulting latencies between locations. What is the bandwidth, and how much data do you need to back up? Do we have Layer 2 handoffs between sites, or is it more of a traditional L3 site-to-site connection?
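To make that concrete, here is the kind of back-of-the-envelope sizing math involved (the change rate, link speed and efficiency factor are hypothetical, not figures from this post):

```python
# Hypothetical numbers to illustrate sizing a DR replication link;
# none of these figures come from the post itself.

daily_change_gb = 500        # assumed daily changed data to replicate
link_mbps = 200              # assumed usable WAN bandwidth in megabits/second
efficiency = 0.7             # assumed protocol/dedup overhead factor

effective_mbps = link_mbps * efficiency
seconds = (daily_change_gb * 8 * 1000) / effective_mbps   # GB -> megabits
hours = seconds / 3600

print(f"Replicating {daily_change_gb} GB over a {link_mbps} Mb/s link "
      f"takes roughly {hours:.1f} hours/day")
# If that number approaches 24 hours, the link (or the change rate) has to change.
```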

If we are implementing a new iSCSI SAN, do we need ten gig or one gig? Do your switches support jumbo frames and flow control? Hope that your iSCSI switches are truly stackable, because spanning tree could leave some of those paths redundant but not active.

I was reading the other day that sales of smartphones and tablets would reach approximately 1.2 billion in 2013. Some of these will most certainly end up on your wireless networks. How to manage that is definitely a topic for another day.

In the end it just makes sense that you really need to consider the network implications before jumping into almost any type of IT initiative.  Just because those green lights are flickering doesn’t mean it’s all good.

 

To learn more about how GreenPages Networking Practice can help your organization, fill out this form and someone will be in touch with you shortly.

Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with Datacenter federation…what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see using multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seems that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (not-disruptively and at a distance) using RecoverPoint.

The costs are coming down, the simplicity and integration is moving up – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active-active data centers as sort of an expected norm vs. an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various ExtremeIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, GreenPlum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo-full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting time indeed ….

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Disaster Recovery in the Cloud, or DRaaS: Revisited

By Randy Weis

The idea of offering Disaster Recovery services has been around as long as SunGard or IBM BCRS (Business Continuity & Resiliency Services). Disclaimer: I worked for the company that became IBM Information Protection Services in 2008, a part of BCRS.

It seems inevitable that Cloud Computing and Cloud Storage should have an impact on the kinds of solutions that small, medium and large companies would find attractive and would fit their requirements. Those cloud-based DR services are not taking the world by storm, however. Why is that?

Cloud infrastructure seems perfectly suited for economical DR solutions, yet I would bet that none of the people reading this blog has found a reasonable selection of cloud-based DR services in the market. That is not to say that there aren’t DR “As a Service” companies, but the offerings are limited. Again, why is that?

Much like Cloud Computing in general, the recent emergence of enabling technologies was preceded by a relatively long period of commercial product development. In other words, virtualization of computing resources promised “cloud” long before we actually could make it work commercially. I use the term “we” loosely…Seriously, GreenPages announced a cloud-centric solutions approach more than a year before vCloud Director was even released. Why? We saw the potential, but we had to watch for, evaluate, and observe real-world performance in the emerging commercial implementations of self-service computing tools in a virtualized datacenter marketplace. We are now doing the same thing in the evolving solutions marketplace around derivative applications such as DR and archiving.

I looked into helping put together a DR solution leveraging cloud computing and cloud storage offered by one of our technology partners that provides IaaS (Infrastructure as a Service). I had operational and engineering support from all parties in this project and we ran into a couple of significant obstacles that do not seem to be resolved in the industry.

Bottom line:

  1. A DR solution in the cloud, involving recovering virtual servers in a cloud computing infrastructure, requires administrative access to the storage as well as the virtual computing environment (like being in vCenter).
  2. Equally important, if the solution involves recovering data from backups, is the requirement that there be a high speed, low latency (I call this “back-end”) connection between the cloud storage where the backups are kept and the cloud computing environment. This is only present in Amazon at last check (a couple of months ago), and you pay extra for that connection. I also call this “locality.”
  3. The Service Provider needs the operational workflow to do this. Everything I worked out with our IaaS partners was a manual process that went way outside normal workflow and ticketing. The interfaces the customer uses to access computing and storage were separate and radically different. You couldn’t even see the capacity you consumed in cloud storage without opening a ticket. From the SP side, there was no mechanism for the customer-required notification of DR tasks the provider would need to perform. When you get to billing, forget it. Everyone admitted that this was not planned for at all in the cloud computing and operational support design.

Let me break this down:

  • Cloud Computing typically has high speed storage to host the guest servers.
  • Cloud Storage typically has “slow” storage, on separate systems and sometimes separate locations from a cloud computing infrastructure. This is true with most IaaS providers, although some Amazon sites have S3 and EC2 in the same building and they built a network to connect them (LOCALITY).
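To see why that locality matters, here is a rough transfer-time comparison (the 5 TB backup set and the link speeds are hypothetical, chosen only to show the order-of-magnitude gap):

```python
# Hypothetical comparison of pulling a backup set from cloud storage into
# cloud compute with and without a local ("back-end") connection.
# The 5 TB size and the link speeds are illustrative, not from the post.

backup_tb = 5

def restore_hours(size_tb: float, link_gbps: float) -> float:
    """Time to move size_tb terabytes over a link of link_gbps gigabits/second."""
    gigabits = size_tb * 1000 * 8          # decimal TB -> gigabits
    return gigabits / link_gbps / 3600

print(f"Over a 10 Gb/s back-end LAN: {restore_hours(backup_tb, 10):.1f} hours")
print(f"Over a 200 Mb/s WAN path:    {restore_hours(backup_tb, 0.2):.0f} hours")
```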

Scenario 1: Recovering virtual machines and data from backup images

Scenario 2: Replication based on virtual server-based tools (e.g. Veeam Backup & Replication) or host-based replication

Scenario 3: SRM, array or host replication

Scenario 1: Backup Recovery. I worked hard on this with a partner. This is how it would go:

  1. Back up VMs at customer site; send backup or copy of it to cloud storage.
  2. Set up a cloud computing account with an AD server and a backup server.
  3. Connect the backup server to the cloud storage backup repository (first problem)
    • Unless the cloud computing system has a back end connection at LAN speed to the cloud storage, this is a showstopper. It would take days to do this without a high degree of locality.
    • The provider’s suggested solutions when we asked about this:
      • Open a trouble ticket to have the backups dumped to USB drives, shipped or carried to the cloud computing area and connected into the customer workspace. Yikes.
      • We will build a back end connection where we have both cloud storage and cloud computing in the same building—not possible in every location, so the “access anywhere” part of a cloud wouldn’t apply.

4. Restore the data to the cloud computing environment (second problem)

    • What is the “restore target”? If the DR site were a typical hosted or colo site, the customer backup server would have the connection and authorization to recover the guest server images to the datastores, and the ability to create additional datastores. In vCenter, the Veeam server would have the vCenter credentials and access to the vCenter storage plugins to provision the datastores as needed and to start up the VMs after restoring/importing the files. In a Cloud Computing service, your backup server does NOT have that connection or authorization.
    • How can the customer backup server get the rights to import VMs directly into the virtual VMware cluster? The process to provision VMs in most cloud computing environments is to use your templates, their templates, or “upload” an OVF or other type of file format. This won’t work with a backup product such as Veeam or CommVault.

5. Recover the restored images as running VMs in the cloud computing environment (third problem), tied to item #4.

    • Administrative access to provision datastores on the fly and to turn on and configure the machines is not there. The customer (or GreenPages) doesn’t own the multitenant architecture.
    • The use of vCloud Director ought to be an enabler, but the storage plugins, and rights to import into storage, don’t really exist for vCloud. Networking changes need to be accounted for and scripted if possible.

Scenario 2: Replication by VM. This has cost issues more than anything else.

    • If you want to replicate directly into a cloud, you will need to provision the VMs and pay for their resources as if they were “hot.” It would be nice if there were a lower “DR Tier” for pricing—if the VMs are for DR, you don’t get charged full rates until you turn them on and use them for production.
      • How do you negotiate that?
      •  How does the SP know when they get turned on?
      • How does this fit into their billing cycle?
    • If it is treated as a hot site (or warm), then the cost of the DR site equals that of production until you solve these issues.
    • Networking is an issue, too, since you don’t want to turn that on until you declare a disaster.
      • Does the SP allow you to turn up networking without a ticket?
      • How do you handle DNS updates if your external access depends on root server DNS records being updated—really short TTL? Yikes, again.
    • Host-based replication (e.g. WANsync, VMware)—you need a host you can replicate to. Your own host. The issues are cost and scalability.

Scenario 3: SRM. This should be baked into any serious DR solution, from a carrier or service provider, but many of the same issues apply.

    • SRM based on host array replication has complications. Technically, this can be solved by the provider by putting (for example) EMC VPLEX and RecoverPoint appliances at every customer production site so that you can replicate from dissimilar storage to the SP IDC. But, they need to set up this many-to-one relationship on arrays that are part of the cloud computing solution, or at least a DR cloud computing cluster. Most SPs don’t have this. There are other brands/technologies to do this, but the basic configuration challenge remains—many-to-one replication into a multi-tenant storage array.
    • SRM based on VMware host replication has administrative access issues as well. SRM at the DR site has to either accommodate multi-tenancy, or each customer gets their own SRM target. Also, you need a host target. Do you rent it all the time? You have to, since you can’t do that in a multi-tenant environment. Cost, scalability, again!
    • Either way, now the big red button gets pushed. Now what?
      • All the protection groups exist on storage and in cloud computing. You are now paying for a duplicate environment in the cloud, not an economically sustainable approach unless you have a “DR Tier” of pricing (see Scenario 2).
      • All the SRM scripts kick in—VMs are coming up in order in protection groups, IP addresses and DNS are being updated, CPU loads and network traffic climb…what impact is this?
      • How does that button get pushed? Does the SP need to push it? Can the customer do it?

These are the main issues as I see it, and there is still more to it. Using vCloud Director is not the same as using vCenter. Everything I’ve described was designed to be used in a vCenter-managed system, not a multi-tenant system with fenced-in rights and networks, with shared storage infrastructure. The APIs are not there, and if they were, imagine the chaos and impact on random DR tests on production cloud computing systems, not managed and controlled by the service provider. What if a real disaster hit in New England, and a hundred customers needed to spin up all their VMs in a few hours? They aren’t all in one datacenter, but if one provider that set this up had dozens, that is a huge hit. They need to have all the capacity in reserve, or syndicate it like IBM or SunGard do. That is the equivalent of thin-provisioning your datacenter.

This conversation, as many I’ve had in the last two years, ends somewhat unsatisfactorily with the conclusion that there is no clear solution—today. The journey to discovering or designing a DRaaS is important, and it needs to be documented, as we have done here with this blog and in other presentations and meetings. The industry will overcome these obstacles, but the customer must remain informed and persistent. The goal of an economically sustainable DRaaS solution can only be achieved by market pressure and creative vendors. We will do our part by being your vigilant and dedicated cloud services broker and solution services provider.


IceWEB Adding NovaStor Backup to Storage Appliances

IceWEB Inc. today announced it will bundle NovaStor’s Advanced Backup software with IceWEB’s Unified Storage Appliances.

“NovaSTOR is an excellent companion product for our IceWEB appliances because of its broad set of market applications,” said Rob Howe, IceWEB, CEO. “Their products span the same market spaces as ours—small, medium, large and cloud-based enterprises. They cover Windows, Linux, VMWare and Unix servers and clients in a myriad of configurations, mirroring our model. Because it is not always possible or cost-effective for companies to enable total unified storage protocols everywhere in their enterprise, we are providing significant additional value to our customers by enabling them to utilize NovaStor to target an IceWEB appliance in their network in order to satisfy their backup and disaster recovery needs for every location. Enabling them to have the ability to utilize a world-class product like NovaSTOR is yet another area in which IceWEB brings them excellent value when they purchase IceWEB products,” Howe concluded.