Tag Archives: cloud

Moving Email to the Cloud, Part 1

By Chris Chesley, Solutions Architect

Many of our clients are choosing not to manage Exchange day to day or upgrade it every 3-5 years. They do this by having Microsoft host their mail in Office 365. Is this right for your business? How do you tie this into your existing infrastructure and still have access to email regardless of the status of your onsite services?

The different plans for Microsoft Office 365 can be confusing, but regardless of what plan you get, the Exchange Online choices boil down to two options. Exchange Plan 1 offers a 50GB mailbox per user, ActiveSync, Outlook Web Access, Calendar, and all of the other features you are currently getting with an on-premises Exchange implementation. Additionally, you get antivirus and antispam protection. All of this costs $4 per user per month.

Exchange Plan 2 offers the exact same features as Plan 1, with the additions of unlimited archiving, legal hold capabilities, compliance support tools, and advanced voice support. This plan is $8 per user per month.
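
To put those prices in perspective: a hypothetical 100-user shop would pay 100 × $4 × 12 = $4,800 per year on Plan 1, versus 100 × $8 × 12 = $9,600 per year on Plan 2, before any promotional or volume pricing.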

All of the other Office 365 plans that include Exchange are built on either Plan 1 or Plan 2. For example, the E3 plan (Enterprise Plan 3) includes Exchange Plan 2, SharePoint Plan 2, Lync Plan 2, and Office Professional Plus for 5 devices per user. You can take any plan, break it down into its component parts, and fully understand what you’re getting.

If you are looking to move email to the cloud and are currently using Exchange, who better to host your Exchange than Microsoft?  Office 365 is an even better choice if you are using, or plan on using, SharePoint or Lync.  All of these technologies are available in the current plans or individually through Office 365.

I’ve helped many clients make this transition, so if you have any questions or if there’s any confusion around the Office 365 plans, feel free to reach out.

My next blog will be on the 3 different authentication methods in Office 365.

Top 10 Ways to Kill Your VDI Project

By Francis Czekalski, Consulting Architect, LogicsOne

Earlier this month I presented at GreenPages’ annual Summit Event. My breakout presentation this year was an End User Computing Super Session. In this video, I summarize the ‘top 10 ways to kill your VDI project.’

If you’re interested in learning more, download this free on-demand webinar where I share some real world VDI battlefield stories.

http://www.youtube.com/watch?v=y9w1o0O8IaI

A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll give a summary of my cloud adoption advice, and cover some of the feedback that I heard from customers during my discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS).

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […]  from $10 to $2 per hour” – ZDNet,  July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host the infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but it is now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest-cost provider? Well, the answer to that question is quite the debate in cloud computing (we’ll put that aside for now). Suffice it to say that internal corporate IT departments are now competing with outside providers to deliver IaaS and other services to the business, and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, storage becomes increasingly less costly, etc.).

Now here’s the punch line, and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by moving closer and closer to the requirements of the business, and may even be able to anticipate those requirements or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services from outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time, manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile; it just isn’t worth it for GM or Ford to produce those things. They source windshields, brake calipers, and other components from companies who specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling, and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it; cloud may eventually make sense for your shop even if there is no fit for it today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint a picture of how well your department can make informed sourcing decisions (a rough scoring sketch follows the list below). Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time
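
As a rough illustration of the scoring idea above, here is a minimal Python sketch. The 0-5 scale, the simple-average weighting, and the example scores are all illustrative assumptions on my part, not a formal ITIL scoring model.

```python
# Illustrative only: self-assessment scores (0-5) for a few of the ITIL
# areas listed above, averaged into a per-area maturity figure.
CRITERIA = ["repeatability", "documentation", "measurement", "continuous_improvement"]

assessment = {
    "Configuration Management": {"repeatability": 3, "documentation": 2,
                                 "measurement": 1, "continuous_improvement": 1},
    "Change Management":        {"repeatability": 4, "documentation": 4,
                                 "measurement": 2, "continuous_improvement": 2},
    "Capacity Management":      {"repeatability": 2, "documentation": 1,
                                 "measurement": 3, "continuous_improvement": 1},
}

def maturity(scores):
    """Average the four criterion scores into one 0-5 maturity figure."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Print lowest-maturity areas first; those are the remediation candidates
# that could form one cornerstone of a cloud strategy.
for area, scores in sorted(assessment.items(), key=lambda kv: maturity(kv[1])):
    flag = "  <- remediation candidate" if maturity(scores) < 2.5 else ""
    print(f"{area:26s} {maturity(scores):.2f}{flag}")
```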

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.

Case Study: Service Catalog and Private Cloud

By implementing a Service Catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!

Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.

Technical enablement of Microsoft SMB cloud services is easy. Selling to SMBs is not. Parallels has the solution.

Starting a hosting or cloud business is easy. Whether you are a small hoster, web designer, infrastructure provider, managed service provider, or telco, it just takes buying the right software to automate those services, and Parallels has a portfolio that covers small businesses through large ones. That does not mean you will be successful, though.

Service providers fail because they think that putting up a catalog is sufficient. It is not. The most successful providers have an end-to-end marketing and sales enablement plan to go with the offer. They take into account up-sell and cross-sell scenarios. They have a strong feedback loop so that they can tune their offers.

We at Parallels believe that it is part of our responsibility to give you the tools you need to be successful. That is why we created PartnerNet, the place for our partners to get all the best practices and information needed to succeed.

We will be bringing a taste of PartnerNet to you at Microsoft WPC. Come meet with Birger Steen, our CEO; Mauro Meanti, SVP and GM, SP Business; and, of course, yours truly while we present “Succeeding in the SMB Cloud with Microsoft and Parallels” on Monday, July 8 at 4:30 PM in the Hilton Americas. After that session, you will know what it takes to sell those Microsoft cloud services and more. You can also come to the booth and drop off your business card for a chance to win a Surface!

See you there!

John Zanni, Vice President SP Marketing and Alliances

How RIM Can Improve Efficiency and Add Value To Your IT Ops

This is a guest post from Chris Joseph, VP, Product Management & Marketing, NetEnrich

Cloud, virtualization, and hybrid IT technologies are being used in small and large IT enterprises everywhere both to modernize and to achieve business goals and objectives. As such, a top concern for today’s IT leaders is whether the investments being made in these technologies are delivering on the promise of IT modernization. Another concern is finding ways to free up IT funds currently spent on routine maintenance of IT infrastructure so that they can be invested in new and strategic IT modernization projects.

Don’t Waste Time, Money and Talent on Blinking Lights

Everyone knows that IT organizations simply can’t afford to have a team of people dedicated to watching for blinking lights and waiting for something to fix.  It’s a waste of talent and will quickly burn through even the most generous of IT budgets. Yet, according to a Gartner study, 80% of an enterprise IT budget is generally spent on routine IT, while only 20% is spent on new and strategic projects.

If this scenario sounds familiar, then you may want to consider taking a long and hard look at third-party Remote Infrastructure Management (RIM) services for your IT infrastructure management. In fact, RIM services have been shown to reduce spending on routine IT operations by 30-40%, but how is this possible?

(1) First of all, RIM services rationalize, consolidate, and integrate the tools used to monitor and manage IT infrastructure within an enterprise. According to Enterprise Management Associates, a leading IT and data management research and consulting firm, a typical enterprise has nearly 11 such tools running in its environment, typically including IT Operations Management (ITOM) tools and IT Service Management (ITSM) tools. As any IT professional can attest, while there is significant overlap, some of these tools tend to be deficient in their capabilities, and they can be a significant source of noise and distraction, especially when it comes to false alerts and tickets. Yet, through RIM, IT organizations can eliminate many of these tools and consolidate their IT operations into a single-pane-of-glass view, which can result in significant cost savings.

(2) Secondly, by leveraging RIM, IT teams can be restructured and organized into shared services delivery groups, which can result in better utilization of skilled resources while supporting the transformation of IT into a new model that acts as a service provider to business units. Combine these elements of RIM with remote service delivery, and not only will you improve economies of scale and scope, but you will also promote cost savings.

(3) Thirdly, RIM services consistently look to automation, analytics, and best practices to promote cost savings in the enterprise. Manual processes and runbooks are not only costly, but also time-consuming and error-prone. Yet, to automate processes effectively, IT organizations must rely on methodologies, scripts, and tools. This is where RIM comes into play. In fact, within any enterprise, 60-80% of manual processes and runbooks can easily be automated with RIM.
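
To make that concrete, here is a hedged sketch of what automating one common runbook (“a filesystem is filling up”) might look like. The threshold, log directory, and remediation step are hypothetical and not drawn from any particular RIM toolset.

```python
# Hypothetical runbook automation: if a volume is nearly full, apply a
# standard remediation (pruning rotated logs) before escalating to a human.
import os
import shutil

THRESHOLD = 0.90            # escalate when the volume is more than 90% full
LOG_DIR = "/var/log/app"    # hypothetical directory that is safe to prune

def disk_usage_fraction(path):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def prune_rotated_logs(directory):
    if not os.path.isdir(directory):
        return
    for name in os.listdir(directory):
        if name.endswith(".log.old"):
            os.remove(os.path.join(directory, name))

if disk_usage_fraction("/") > THRESHOLD:
    prune_rotated_logs(LOG_DIR)
    if disk_usage_fraction("/") > THRESHOLD:
        # Remediation didn't help; this is where a ticket would be opened
        # instead of paging someone to watch a blinking light.
        print("Escalating: volume still above threshold after pruning")
```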

Download this free whitepaper to learn how to avoid focusing on “keeping the lights on” so your team can focus on strategic initiatives.

Beyond Cost Savings and Greater Efficiency: Building a Case for RIM

In addition to reducing routine spending and improving the efficiency of your IT operations, there are several other benefits to leveraging third-party RIM services:

  • 24×7 IT operations support. Third-party RIM services often provide 24×7 IT ops support. IT organizations benefit from around-the-clock monitoring and management of their IT infrastructures without additional headcount or strain on internal resources, which saves operating costs.
  • Be the first to know. 24×7 IT operations support means that you are always the first to know when customer-facing IT systems such as the company’s website, online shopping portal, mobile apps, and cloud-based solutions go down. And in many cases, the issue is resolved by RIM services teams before end users have time to notice.
  • Skills and expertise. Third-party RIM services can provide your IT organization with certified engineers in various IT infrastructure domains. These engineers are responsible for monitoring, alerting, triaging, ticketing, incident management, and the escalation of critical outages or errors to you and your IT staff, if they cannot be immediately resolved. In addition, they may also be available on an on-demand basis if you are looking for skills and expertise in a specific domain.

The bottom line: by leveraging RIM services, IT organizations like yours can not only enhance their service capabilities and bolster service levels, but also say goodbye to the fire drills and late-night calls that plague IT staff. Proactive management of your IT infrastructure through RIM ensures that it is always running at peak performance.

To hear more from Chris, visit the NetEnrich blog

To learn more about how GreenPages can help you monitor and manage your IT Operations fill out this form

Part 2: Want to Go Cloud? What’s the Use Case?

By Lawrence Kohan, Senior Consultant, LogicsOne

Recap:

In Part 1 of this blog post, I started by reiterating the importance of having a strategy for leveraging the Cloud before attempting to migrate services to it, in order to achieve the best results. Using an example use case, I showed the basic pros and cons of moving a company’s e-mail services to the Cloud. Then, delving further into the additional factors to consider based on the size and breadth of the company, I showed that in that particular scenario an e-mail migration to the Cloud would provide more benefit to small businesses and startups than to medium or large enterprises, for whom such a migration may actually be more detrimental than helpful.

Use the Cloud to level the playing field!

Historically, small businesses have typically been at a disadvantage to their larger counterparts, as they generally have less capital to work with. However, the Cloud Era may prove to be the great equalizer. The nimbleness and portability of a small business may prove to be quite an advantage when it comes to reducing operating costs to gain a competitive edge. A small business with a small systems footprint may be able to consider strategies for moving most, if not all, of its systems to the Cloud. A successful migration would greatly reduce company overhead and administrative burden, and could even free up office space and real estate as decommissioned server rooms are repurposed. Thus, a small business is able to leverage the Cloud to gain a competitive advantage in a way that is (most likely) not an option for a medium or large enterprise.

So, what is a good Cloud use case for a medium to large business?

The Cloud can’t be all things to all people.  However, the Cloud can be many things to many people.  While the enterprise may not have the same options as the small business, they still have many options available to them to reduce their costs or expand their resources to accommodate their needs in a cost-effective way.

Enterprise Use Case 1: Using IaaS for public website hosting

A good low-risk Cloud option that an enterprise can readily consider is moving non-critical, non-confidential informational data to the Cloud. A good candidate for initial Cloud migration would be a corporate website with marketing materials or information about product or service offerings. It is important that a company’s website, containing product photos, advertising information, hours of operation, and location and contact information, is available 24/7 for customers and potential customers. In this case, the enterprise can leverage a Cloud Service Provider’s Infrastructure as a Service (IaaS) offering to host its website. For a monthly service fee, the Cloud Service Provider will host the enterprise’s website on redundant, highly available infrastructure and proactively monitor the site to ensure maximum uptime. (The enterprise should consider the Cloud Service Provider’s SLA when determining its uptime needs.)

With this strategy, the enterprise is able to ensure maximum uptime for its important revenue-generating web materials, while offloading the costs associated with hosting and maintaining the website. At the same time, the data being presented online is not confidential in nature, so there is little risk in having it hosted externally. This is an ideal use case for a Public Cloud.

In addition to the above, a Hybrid Cloud approach can also be adopted: the public-facing website could conduct e-commerce transactions by redirecting purchase requests to privately hosted e-commerce applications and customer databases that are secure and PCI compliant.  Thus, we have an effective, hybrid use of Cloud resources to leverage high availability, while still keeping confidential customer and credit card data secure and internally hosted. We’ll actually be hosting a webinar tomorrow with guest speakers from Forrester Research and Gravitant that will talk about hybrid cloud management. If you’re interested in learning more about how to properly manage your IT environment, I’d highly recommend sitting in.

Enterprise Use Case 2: Using Cloud Bursting to accommodate increased resource demands as needed

Another good Public Cloud use case: let’s say a company, operating at maximum capacity, has periodic or seasonal needs to accommodate spikes in workload. This could be increased demand on applications and infrastructure, or extra staff needed to perform basic clerical or administrative functions on a limited basis. It would be a substantial investment to procure additional office space and computer hardware for limited use, not to mention the additional expense of maintaining that hardware and office space. In such a case, an enterprise using a Cloud Service Provider’s IaaS would be able to rapidly provision virtual servers and desktops that can be accessed via space-saving thin clients, or even remotely. Once the project is completed, those virtual machines can be deleted. Upon future need, new virtual machines can easily be provisioned in the same way. And most importantly, the company only pays for what it needs, when it needs it. This is another great way for an enterprise to leverage the Cloud’s elasticity to accommodate its dynamic needs!
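
As a minimal sketch of that burst-and-release pattern, here is what provisioning and later deleting temporary capacity might look like against a public IaaS API. I’m using AWS EC2 via boto3 purely as a stand-in provider; the AMI ID and instance type are placeholders, and any provider with a provisioning API follows the same create-on-demand, delete-when-done shape.

```python
# Sketch of "cloud bursting": create temporary capacity for a seasonal
# spike, then terminate it so the company only pays for what it used.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Burst: provision five temporary workers (placeholder image and size).
resp = ec2.run_instances(
    ImageId="ami-00000000",
    InstanceType="t3.medium",
    MinCount=5,
    MaxCount=5,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# ... run the seasonal workload on these instances ...

# Release: delete the instances once the project is completed.
ec2.terminate_instances(InstanceIds=instance_ids)
```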

Enterprise Use Case 3: Fenced testing environments for application development

Application teams often need to simulate production conditions for testing without affecting actual production. When dealing with traditional hardware infrastructure, setting up a dedicated development infrastructure can be an expensive and time-consuming proposition. In addition, the Apps team may require many identical setups for multiple teams’ testing, or to simulate many scenarios using the same parameters, such as IP and MAC addresses. With traditional hardware setups, this is extremely difficult to achieve in a productive, isolated manner. However, with Cloud services such as VMware’s vCloud Suite, isolated, fenced applications can be provisioned and mass-produced quickly for an Apps team’s use without affecting production, and then rapidly decommissioned as well. In this particular example use case of the vCloud Suite, VMware’s Chargeback Manager can also be used to get a handle on the costs associated with development environment setup, and can then provide showback and chargeback reports to a department, organization, or other business entity. This is yet another good example of an efficient and cost-effective use of the Cloud to solve a complex business need.

Consider your strategy first!  Then, use the Cloud to your advantage!

So, as we have seen, the Cloud offers various time-saving, flexible, and efficient solutions that can accommodate businesses of any size or nature. However, a successful transition to the Cloud depends, more than anything else, on the initial planning and strategy that goes into its adoption.

Of course, there are many other options and variables to consider in a Cloud adoption strategy, such as choice of providers, consulting services, etc.  However, before even looking into the various Cloud vendors and options, start out by asking the important internal questions, first:

  • What are our business goals?
  • What are our intended use case(s) for the Cloud?
  • What are we looking to achieve from its use?
  • What is the problem that we are trying to solve?  (And is the Cloud the right choice for that particular problem?)
  • What type of Cloud service would address our need? (Public, Private, Hybrid?)
  • What is our timetable for transition to the Cloud?
  • What is our plan?  Is it feasible?
  • What is our contingency plan?  (How do we back up and/or back out?)

When a company has solid answers to questions such as these, it is ready to begin its own journey to the cloud.

Last chance to register for tomorrow’s webinar on leveraging cloud brokerage. Speakers from GreenPages, Forrester Research, and Gravitant.

Is There Such a Thing as Just-In-Time IT?

By Praveen Asthana, Chief Marketing Officer, Gravitant

The concept of “Just-in-Time” was pioneered in the manufacturing supply chain as a critical way to reduce costs by minimizing inventory.   Implementing a just-in-time system that can handle unexpected demand is not a trivial undertaking.  It requires the confluence of a number of disciplines such as analytics, statistics, sourcing, procurement, production management, brokerage and economics.

An interesting new idea is to take this concept pioneered in manufacturing and apply it to Information Technology resources. Doing so can provide an effective way to meet dynamically changing needs while minimizing the inventory of unused IT resources across a set of cloud service platforms and providers.

Case Study: Election Day 2012

With the growing popularity of e-voting and use of the Internet as an information resource on candidates and issues, the Secretary of State’s office for one of the most populous U.S. states knew that demand for IT resources would go up significantly on election day.  But they didn’t know exactly how much, and they didn’t want to buy extra infrastructure for a temporary surge in demand.  Even if they could come up with a good guess for the demand, deploying the right amount of resources in a timely manner would be challenging.  Given the time it normally took (months) to deploy and provision new servers, the Secretary of State’s office knew they couldn’t use traditional means to procure compute and storage capacity to meet this demand.

As it turned out, demand went up over 1000% to over five million hits on the state voting web site by noon on Election Day.

Fortunately, the state had deployed a novel capability, based on a cloud brokerage and management platform, to seamlessly provision IT resources in real time from multiple public cloud sources to meet the variability in demand. As a result, this demand was fully met without complicated planning or buying unneeded infrastructure. I’ll actually be speaking on a webinar on June 12th with Chris Ward, CTO at GreenPages-LogicsOne, and Dave Bartoletti, a Senior Analyst at Forrester Research, about leveraging cloud brokerage and the impact it can have on managing your IT environment.

Minutes, not months: that’s what enterprise users want when it comes to having IT resources available to meet changing business needs or develop new applications.

However, users find this to be an extraordinary challenge. Most IT departments today struggle with rigid processes, a round-robin of tasks and approvals across multiple silos and departments, and manual provisioning steps. All of this adds significant time to the deployment of IT resources, resulting in users waiting months before the resources they need become available.

How do users respond to such delays? By going around their IT departments and directly accessing cloud services. Often termed ‘rogue IT’ or ‘shadow IT,’ such out-of-process actions expose the company to financial, security, and operational risk.

The Solution: Just-in-time IT with Real-Time Governance

Just-in-time IT is not merely about using private or public cloud services.   It is about engineering the end-to-end IT supply chain so it can be agile and respond immediately to dynamic business needs.  To achieve this in practice, you need:

  1. Effective assessment and strategy
  2. Self-service catalog of available IT resources (sketched, along with item 6, in the example after this list)
  3. Collaborative solution design
  4. Rapid approval workflow
  5. Sourcing platform that allows you to select the right supply chain partners for your business need or workload profile
  6. Single-button provisioning of resources
  7. Transparency across the IT supply chain
  8. Sophisticated supply-demand analytics
  9. Elastic source for resources
  10. Governance: dynamic control of resources based on goal-based optimization of budget, resource usage, and SLAs
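
As a toy sketch of how items 2 and 6 might fit together, here is a self-service catalog with “single button” provisioning. The offerings, prices, and provision() hook are invented for illustration; a real platform would route through the approval workflow (item 4) and a provider API.

```python
# Toy model of a self-service catalog (item 2) with one-step provisioning
# (item 6). All entries and costs are invented for illustration.
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str
    cpu: int
    memory_gb: int
    monthly_cost: float
    provider: str   # internal private cloud or an external IaaS provider

CATALOG = [
    CatalogItem("small-web", cpu=2, memory_gb=4,  monthly_cost=40.0,  provider="internal"),
    CatalogItem("large-db",  cpu=8, memory_gb=32, monthly_cost=310.0, provider="external"),
]

def provision(item, requester):
    # A real platform would invoke the approval workflow and the provider's
    # API here; this stub just records the request and its chargeback.
    print(f"{requester} provisioned {item.name} from {item.provider} "
          f"(${item.monthly_cost}/month charged back)")

provision(CATALOG[0], requester="app-team")
```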

The first critical aspect of a real-time IT supply chain is the identification, sourcing, and procurement of best-fit cloud platforms and providers (internal or external) to meet your unique business needs.

The second critical aspect of ensuring just-in-time IT is effective is real-time governance, for this is the mechanism by which you truly manage the elasticity of cloud resources and ensure that IT resource inventory is minimized. This also has the additional benefit of eliminating shadow or rogue IT.

As I mentioned above, if you’re interested in learning more on this topic I would highly recommend registering for the upcoming webinar “What’s Missing In Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage” being held on June 12th. This should be a great session and there will be time for Q & A at the end.

About the Author:

Praveen Asthana is Chief Marketing Officer of Gravitant (www.gravitant.com), a cloud services brokerage and management company.  Prior to joining Gravitant, Praveen was Vice President of Marketing and Strategy for Dell’s $13B Enterprise Solutions Division.

Adobe’s Out of Box Thinking and Into the Cloud

By Rob O’Shaughnessy, Software Licensing Specialist

I attended Adobe’s MAX conference in rainy LA last week, and I felt bad, as a local, that a lot of travelers had to witness our once-a-quarter rainfall; however, with all the forest fires raging around SoCal, it was an unexpected relief. Adobe put out some fires of their own by providing some great insight into what they are doing for the software community.

It was the first time that partners and Adobe sales team members were invited to this mostly technical event. The room was divided between the cool hipster “Creatives” and the button-up-suit-with-no-tie sales people. It was like a 7th grade dance before the first slow song was played, but we were all there for the same purpose: to find out what’s going on with Creative Cloud.

So let’s back up, in case you haven’t heard of Creative Cloud. Several months ago Adobe began offering a subscription-based licensing model for their creative products. Creative Cloud is essentially everything that’s included in the Creative Suite Master Collection, licensed on a subscription basis for a monthly fee. Like Creative Suite, it’s still an on-premises product, so ultimately the big difference between the two boils down to how you want to purchase it: subscribe to it or own it.

The biggest announcement at MAX was that, moving forward, Adobe will no longer provide future releases of Creative Suite or other CS products. Like Rocky, Creative Suite has ended at version 6, so if you want the latest and greatest technology and features you will need to move to the Creative Cloud. Also, if you like boxed product, Adobe will no longer be offering shrink-wrap either. Customers will now need to purchase a volume license or jump into the Cloud.

In my opinion this is a good thing: as a Creative, it’s important to be up to date with all the latest enhancements that Adobe provides, and a subscription gives you access to cutting-edge technology instantly as it comes out, instead of waiting every 18 months for Adobe to compile a list of enhancements and release an upgrade. Plus, the promo price until August 31st ($39.99 per month) is less than what I spend at the local pub, err, I mean coffee shop.

If you’re interested in Creative Cloud and want to learn more about subscribing new users and co-terming future users, please fill out this form.

The Death of DAS?

For over a decade, Direct Attached Storage (DAS) has been a no-brainer for many organizations: simple, fast, and cost-effective. But as applications, compute, and storage move to the cloud, DAS is looking like less and less of a sure bet. In fact, it’s looking more like a liability. But migrating from traditional DAS models to cloud storage is not as difficult or complex as it seems, and the good news for VARs and service providers is that they can make recommendations to customers with large DAS estates which, given solid integration and lateral thinking, will allow them to get the best use out of what may initially seem to be redundant technology.

In this recent piece published on Channel Pro, John Zanni, vice president of service provider marketing and alliances at Parallels, takes a look at the drawbacks of DAS in a cloud environment and at what alternatives are out there.

The Death of DAS?


Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with datacenter federation… what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see the use of multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seemed that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down, and the simplicity and integration are moving up – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active-active data centers as sort of an expected norm vs. an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various XtremIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800 pound gorilla, and you’re going to have a zoo-full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of the sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire, et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting times indeed…

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.