Tag archive: cloud

The Death of DAS?

 

For over a decade, Direct Attached Storage (DAS) has been a no-brainer for many organizations: simple, fast and cost-effective. But as applications, compute and storage move to the cloud, DAS is looking like less and less of a sure bet; in fact, it is starting to look like a liability. Migrating from traditional DAS models to cloud storage is not as difficult or complex as it seems, however, and the good news for VARs and service providers is that, with solid integration and some lateral thinking, they can help customers with large DAS estates get real value out of technology that may initially appear redundant.

 

In this recent piece published on Channel Pro, John Zanni, vice president of service provider marketing and alliances at Parallels, takes a look at the drawbacks of DAS in a cloud environment – and at what alternatives are out there.

 

The Death of DAS?


Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with Datacenter federation…what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see using multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seemed that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down and the simplicity and integration are moving up – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active-active data centers as sort of an expected norm vs. an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various XtremIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost-effective than trying to get the same result from physical disks alone. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.
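To put rough numbers behind that last point, here is a back-of-the-envelope sketch of how always-on deduplication changes the effective cost per usable gigabyte. The per-gigabyte prices and the 4:1 reduction ratio are illustrative assumptions, not EMC figures.

```python
# Illustrative only: effective cost per usable GB when deduplication is always on.
# The $/GB figures and the 4:1 reduction ratio are assumed numbers, not vendor pricing.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Raw media cost divided by the data-reduction ratio."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 10.00   # assumed raw $/GB for enterprise flash
disk_raw = 1.50     # assumed raw $/GB for spinning disk

print(f"Flash with 4:1 dedup: ${effective_cost_per_gb(flash_raw, 4.0):.2f} per usable GB")
print(f"Disk with no dedup:   ${effective_cost_per_gb(disk_raw, 1.0):.2f} per usable GB")
# With these assumed numbers, always-on dedup pulls all-flash within striking
# distance of raw disk on a usable-capacity basis, which is the economic point above.
```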

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire, et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting times indeed…

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Cloud Corner Video: Keys to Hybrid Cloud Management

http://www.youtube.com/watch?v=QIEGDZ30H2Q

 

GreenPages CEO Ron Dupler and LogicsOne Executive Vice President and Managing Director Kevin Hall sit down to talk about the current state of the cloud market, the challenges IT decision makers are facing today with hybrid cloud environments, and a revolutionary new Cloud Management as a Service offering.

If you’re looking for more information on hybrid cloud management, download this free whitepaper.

 

Or, if you would like someone to contact you about GreenPages Cloud Management as a Service offering, fill out this form.

Breaking Down the Management Barriers to Adopting Hybrid Cloud Technologies

By Geoff Smith, Sr. Solutions Architect

It is inarguable that change is sweeping the IT industry.  Over the last five years a number of new technologies that provide huge technological advantages (and create management headaches) have been developed.  We have attempted to leverage these advances to the benefit of our organizations, while at the same time struggling with how to incorporate them into our established IT management methodologies.  Do we need to throw out our mature management protocols in order to partake in the advantages provided by these new technologies, or can we modify our core management approaches and leverage similar advances in management methodologies to provide a more extensible platform that enables adoption of advanced computing architectures?

Cloud computing is one such advance.  One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  Technically, cloud services are beyond our direct control because we do not “own” the underlying infrastructure and have limited say in how those services are designed and deployed.  But are they beyond our ability to evaluate and influence?

There are obvious challenges in enabling these technologies within our organizations.  Cloud services are provided and managed by those from whom we consume them, not within our four-walled datacenter.  Users utilizing cloud services may do so outside of IT control.  And what happens when data and service consumption crosses that void beyond our current management capabilities?

{Download this free whitepaper to learn more about GreenPages Cloud Management as a Service offering; a revolutionary way organizations can manage hybrid cloud environments}

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  We still have to manage and maintain our own datacenters as they exist today.  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage externally consumed resources as part of your overall architecture, you step outside of the physical and virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT services.  We need to expand our scope and visibility to include both the cloud services that are part of our delivery and connectivity mechanisms, and the end points used to consume our data and services.  This leads to a fundamental shift in daily operations and management.  Going forward, we need to be able to measure our service effectiveness end to end, even if in between they travel through systems not our own.

So the root question is, how do we accomplish this?  There are four distinct areas of change that we need to consider:

  • Tools – the toolsets we utilize to perform our management processes need to both understand these new technologies, and expand our end-to-end visibility and evaluation capabilities
  • Techniques – we need to modify the way we perform our daily IT functions and apply our organizational policies in order to consider the new computing platforms we will be consuming.  Our ability to validate, influence and directly control IT consumption will vary; however, our underlying responsibilities to deliver effective and efficient services to our organizations should not
  • Talent – we are faced with adopting not only new technologies, but also new sets of responsibilities within our IT support organizations.  The entire lifecycle of IT is moving under the responsibility of the support organization.  We can develop the appropriate internal talent or we can extend our teams with external support organizations, but in either case the talent needed will expand in proportion to the capabilities of the platforms we are enabling
  • Transparency – the success of enabling new technologies will be gauged on how well those technologies meet business needs.  Through comprehensive analysis, reporting and auditing, IT will be able to demonstrate the value of both the technology decisions and the management structures

First and foremost, we must modify our concepts of what is critical to monitor and manage.  We need to be able to move our viewpoints from individual silos of technology to a higher level of awareness.  No longer can we isolate what is happening at the network layer from what is transpiring within our storage facilities.  The scope of what we are responsible for is expanding, and the key metrics are changing.  No longer is availability the key success factor.  Usability is how our teams will be judged.

In the past, a successful IT team may have strived for five 9s of availability.  In this new paradigm, availability is now a foundational expectation.  The ability of our delivered services to be used in a manner that enables the business to meet its objectives will become the new measuring stick.  Business units will define what the acceptable usability metrics are, basing them on how they leverage these services to complete their tasks.  IT will in fact be driven to meet these service level agreements.
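For context on how little room “five 9s” actually leaves, here is a quick sketch of the allowed downtime per year at each availability level (assuming a 365-day year).

```python
# Allowed downtime per year for common availability targets (365-day year assumed).
MINUTES_PER_YEAR = 365 * 24 * 60

for label, availability in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability * 100:.3f}%): about {downtime_minutes:.1f} minutes of downtime per year")
# three 9s: ~525.6 min, four 9s: ~52.6 min, five 9s: ~5.3 min per year
```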

Secondly, we have to enable our support teams to work effectively with these new technologies.  This is a multifaceted issue, consisting of providing the right tools, processes and talent.  Tools will need to expand our ability to view, interface with, and influence systems and services beyond our traditional reach.  Where possible, the tools should provide an essential level of management across all platforms regardless of where those services are delivered from (internal, SaaS, PaaS, IaaS).  Likewise, our processes for responding to, managing, and remediating events will need to change.  Tighter enforcement of service level commitments and the ability to validate them will be key.  Our staff will need to be authorized to take appropriate actions to resolve issues directly, limiting escalations and handoffs.  And we will need to provide the talent (internally or via partners) necessary to deliver on the entire IT lifecycle, including provisioning, de-provisioning and procurement.

Last, IT will be required to prove the effectiveness not only of its support teams, but also of its selection of cloud-based service providers.  The fact that we consume external services does not release us from the requirements of service delivery to our organizations.  Our focus will need to shift toward demonstrating that service usability requirements have been met.  This will require transparency between our internally delivered systems and our externally consumed services.

This is a transition, not a light-switch event.  And as such, our approach to management change must mirror that pace.  Our priorities and focus will need to shift in concert with our shift from delivered services toward consumed services.

Would you like to learn more about our Cloud Management as a Service offering? Fill out this form and we will get in touch with you shortly!

Guest Post: Why Midmarket Business Needs Cloud Services in 2013

Guest Post: Grant Davis

This is a guest post and does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

The global market is becoming more competitive by the second, requiring businesses to operate very efficiently with regard to organizational structure. Businesses, specifically midmarket-sized ones, are faced with tall tasks in 2013. As an enterprise grows, its information increases along with its operations. A growing company requires higher-level data management, and this leads to more intricate demands when it comes to IT organization and communication.

If a midmarket business is firing on all cylinders, acquiring new clients and consumers by the day, there is only so much that an unorganized or incapable operations model can withstand. The IT department can only cope with so many networks and so much data. A commonality among growing businesses in America today is the adoption of cloud services. Cloud offers a way to outsource data and network management, freeing up resources and time for more intricate and fundamental aspects of the business.

Below I list the main ways that midmarket businesses can benefit from cloud services in 2013, and why each is critical.

1. Cost.

Cloud services can be financially viable in the right situation. Using an outsourced data storage center can decrease the cost of real estate, software and employee payroll. For one, a midmarket business that works with a cloud vendor does not have to physically house as much data. This is a substantial benefit, mainly because of the physical space savings, but also because of the operational costs that come with a larger footprint and higher energy consumption.

Secondly, a cloud provider would be responsible for software agreements and also network operations. This takes a huge burden off a midmarket business, as serious growth tends to take focus away from standard processing issues.  Shifting this responsibility to the cloud provider also reduces cost in the sense that a business can reduce or reallocate IT staff for better efficiency. It can also benefit an enterprise not to have to worry about multiple SLAs with various software providers; it can save money to have the agreements consolidated and maintained by the cloud vendor.

Why this is crucial: A midmarket business can only reach maximum efficiency if all of the parts are in place. Part of this is allocating resources in a way that gets the most out of each aspect of the company. If a Data Modeler or System Admin can be utilized more effectively in this crucial phase of business development, maybe it’s better to outsource their daily role to a cloud vendor. It’s possible that their creativity and focus need to be directed to an area of the business other than process management. Innovation is key right now, and this is part of the process.

2. Flexibility.

Cloud services may be a good idea for IT decision makers within a midmarket business because they allow employees to be more flexible. For instance, a cloud vendor allows immediate access to business information from various portals, including mobile devices. In 2013, a typical cloud vendor seamlessly supplies business leadership and operations teams with the ability to access information from every angle of their daily routines. This is a huge benefit at a time when nearly everything is immediate and happens in real time.

Also, because a midmarket business is often trying to compete and extend its reach in a competitive market, traveling off location will be much less detrimental to work efficiency. If the company CIO is going to a tradeshow in Phoenix, they should still be able to access any processes being maintained by the cloud vendor.

Why this is crucial: Midmarket business in 2013 requires collaboration to be successful. With information stored in a cloud storage center, it can be accessed from diverse locations. This increases both internal and external business collaboration. The modern market is far too demanding to tolerate inefficiency, and flexibility is directly related to efficiency when it comes to a growing business and data management.

3. Scalability.

IT is the backbone of most business operations.  Modern information is too complex to handle manually, and we rely on computers and networks to transport and maintain data. An additional advantage of a midmarket business acquiring cloud services is that it can scale IT services up or down based on specific need. For instance, if the business sees stagnant growth over the holiday season, it can scale back its service agreement with the vendor to save money during that time. Similarly, if business continues to grow, the cloud service can easily expand to accommodate the new volume of data management it performs for the business. This is not as viable with internal data management, as new software and hardware need to be purchased with each major change in IT requirements, which can lead to wasted money and lost resources.

Why this is crucial: Business in general is too unpredictable in the current economy to assume anything, even when it comes to IT requirements. Cloud vendors allow leeway with regard to data storage, and this matters for a midmarket business because oftentimes these businesses don’t have the margin of error to make up for any inefficiency. They need the exact amount of storage they need, when they need it.

Conclusions

Midmarket business is important to the United States economy and affects the lives of many people. Usually these operations are on the cusp of doing something significant, and proper organization within IT can help allocate resources to the right areas for maximum production and business model maintenance. It’s time for IT and business leadership to take note of this and move to action in early 2013.

 

Grant Davis is a Data Modeler by day and a writer by night. His passion for computers started when he discovered instant messaging in junior high school. When Grant isn’t trying to climb through the computer screen he writes for BMC, a leading mainframe management provider.

Are you a midmarket organization looking to bring your IT environment to the next level? Click to learn more about how GreenPages can help!

 

Guest Post: Who Controls the Cloud Market – Providers or Consumers?

Guest Post: Ilyas Iyoob, Director, Advanced Analytics and Sr. Research Scientist, PhD at Gravitant

We first went from reserving cloud capacity to securing capacity on-demand, and then we even started to bid for unused capacity in the spot market – all in an effort to decrease cost in the cloud.  Can we take this one step further?  Instead of us bidding for capacity, wouldn’t it be interesting if we can get providers to bid for our demand?

Retail Supply Chain Market Analogy

In fact, this is a common phenomenon in the retail supply chain industry.  For example, Walmart has a large amount of freight that needs to be shipped between different cities over the course of the year.  So, every year an auction is conducted in which Walmart lists all their shipments, and carriers such as JB Hunt, Schneider, Yellow, etc. bid for the opportunity to carry these shipments using their fleets of trucks.  The reason carriers bid for retailer demand is that, in general, capacity exceeds demand in the retail industry.

Cloud Computing Market

Keeping this in mind, let us now take a look at the Cloud Computing Market.  Does capacity exceed demand or is it the other way around?  A quick way to find out is by observing spot prices in the cloud market.  In today’s market, Amazon’s Spot Instances are 86% cheaper than their on-demand instances, and Enomaly’s SpotCloud also shows lower spot prices across the board.  This leads us to believe that capacity exceeds demand in the cloud market as well.  A related indicator is the predominance of data center consolidation initiatives in both the commercial and government marketplaces.
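To make the spot-versus-on-demand gap concrete, here is a small sketch of what an 86% discount means for a steady workload; the hourly rate and the hours are assumed numbers for illustration, not current Amazon pricing.

```python
# Illustrative comparison of on-demand vs. spot cost for a steady monthly workload.
# The hourly rate is an assumption for the sketch; only the 86% discount comes from the post.
ON_DEMAND_RATE = 0.50      # assumed $/hour for an on-demand instance
SPOT_DISCOUNT = 0.86       # "spot instances are 86% cheaper than on-demand"
HOURS_PER_MONTH = 730

on_demand_cost = ON_DEMAND_RATE * HOURS_PER_MONTH
spot_cost = ON_DEMAND_RATE * (1 - SPOT_DISCOUNT) * HOURS_PER_MONTH

print(f"On-demand: ${on_demand_cost:,.2f} per month")
print(f"Spot:      ${spot_cost:,.2f} per month")
# A gap this wide is the market signaling that capacity exceeds demand:
# providers are effectively competing (bidding down) for the consumer's workload.
```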

Since capacity exceeds demand, consumers have the upper hand and are in control of the cloud market at the moment.  Moreover, they should be able to replicate what is being done in the retail supply chain industry.  In other words, cloud consumers should be able to auction off their demand to the best-fit, lowest-price cloud provider.

So, …

Consumers should seize the opportunity and control the market while the odds are in their favor, i.e. while demand < capacity.  At the same time, Service Integrators and Value Added Resellers can help Enterprise IT consumers in this process by conducting primary-market auctions using Cloud Service Brokerage technology.

This post was originally published on Gravitant’s blog.

Be Nimble, Be Quick: A CRN Interview with GreenPages’ CEO

CRN Senior Editor and industry veteran Steve Burke sat down with GreenPages’ CEO Ron Dupler to discuss shifts in ideology in the industry as well as GreenPages new Cloud Management as a Service (CMaaS) offering. The interview, which was originally posted on CRN.com, is below. What are your thoughts on Ron’s views of the changing dynamics of IT?

 

CRN: Talk about your new cloud offering.

Dupler: It is available today. We can support physical, virtual and cloud-based infrastructure through a single pane of glass today. We are actually using the technology internally as well.

There is another part of CMaaS that goes into cloud governance and governance models in a cloud world and cloud services brokerage. That is what we are integrating and bringing to market very soon.

CRN: How big a game-changer is CMaaS?

Dupler: I think we are going to be well out in front of the market with this. I personally believe we can go have discussions right now and bring technologies to bear to support those discussions that no one else in the industry can right now.

That said, we know that the pace of innovation is rapid and we expect other organizations are trying to work on these types of initiatives as well. But we believe we’ll be out front certainly for this year.

CRN: How does the solution provider business model change from 2013 to 2018?

Dupler: The way we are looking at our job and the job of the solution provider channel over the next several years through 2018 is to provide IT plan, build, run and governance services for the cloud world.

The big change is that the solution provider channel for many years has made their money off the fact that infrastructure fundamentally doesn’t work very well. And it has been all about architecting and integrating physical technologies and software platforms to support the apps and data that really add value for the business.

When we move to the cloud world, this is now about integrating service platforms as opposed to physical technologies. So it is about architecting and integrating on-premise and cloud service platforms really to create IT-as-a-Service to support the apps and data for the platform. That is the transition that is under way.

CRN: Does the GreenPages brand become bigger than the vendor brand, and how does that affect vendor relations in the CMaaS era?

Dupler: We continue to closely evaluate all our key partner relationships. That is managed very closely. What we try to do is make sure we are partnered with the right companies that are really leading this transformation. And our number one partner, because they are driving this transformation, is VMware. With this whole software-defined data center concept and initiative, VMware has really laid out a great vision for where this market is going.

Does Size Matter?

CRN: There is a prevailing view that solution providers need to go big or go home, with many solution providers selling their businesses. Do you see scale becoming more important — that you need to scale?

Dupler: No. People have been saying that for years. It is all about customer value and the talent of your team, if you are adding value for clients. You need to be able to service the client community. And they care about quality of service and the ability of your team. Not necessarily that you are huge. I have been down the M&A road and, as you know, we do M&A here on a smaller scale. And I will tell you there are pros and cons to it. You aggregate talent, but you also have got the inertia of pulling companies together and integrating companies and people and executive teams and getting through that.

I absolutely do not subscribe, and never have subscribed, to the idea that size in itself gives competitive advantage. There are some advantages, but there are also costs to doing that.

CRN: What is the ultimate measure for success in this new world?

Dupler: It is a combination of three things: technology, and I will firmly say it doesn’t have to be homegrown. It could be homegrown or it could be commercial off-the-shelf. It is the way the technology is leveraged and having the technologies with the ability to drive the services you are trying to provide. What we are trying to do with CMaaS is single pane of glass management for the physical, virtual and cloud infrastructure, which I have mentioned, as well as cloud service brokerage and cloud governance services. You can either develop those on your own or integrate partner technologies or both, but you need the supporting technology base and you need people and you need process.

CRN: How big a transition is this and what percentage of VARs do you think will make it to 2018?

Dupler: The companies that I think are going to have a huge challenge are the big product-centric organizations right now. The DMR [direct market reseller] community. They have some big challenges ahead of them over time. All these guys are trying to come up with cloud strategies as well.

Right now there is a premium on being nimble. That is the word of the day for me in 2013. Nimble. You need nimble people and you need a nimble business organization because things are moving faster than they ever have. You just have to have a culture and people that can change quickly.

Going back to “is it good just to be big?”: sometimes it is hard to maintain [that agility] as you get really big. The magnitude of the change that is required to succeed over the next five years is extremely significant. And people that aren’t already under way with that change have a big challenge ahead of them.

CRN: What is the pace of change like managing in this business as a CEO vs. five years ago?

Dupler: It is exponential.

CRN: Is it tougher to manage in an environment like this?

Dupler: You say it is tougher, but there is more opportunity than ever because of the pace of change to really differentiate yourself. So it can be challenging but it is also very stimulating and exciting.

CRN: Give me five tips you need to thrive in 2018.

Dupler: First of all, you need hybrid cloud management capabilities.

Number two, you need cloud services brokerage capabilities. It is ultimately an ability to provide a platform for clients to acquire as-a-service technologies from GreenPages. To be able to sell the various forms of infrastructure, platform and software as a service.

Number three is cloud architecture and integration capabilities.

Fourth, product revenue and profit streams are no longer central to supporting the business. The service model needs to become a profitable, thriving stand-alone entity without the product revenue streams.

The fifth thing, and it is the biggest challenge: one piece is migrating your technology organization; the next is creating a services-based sales culture.

CRN: Talk about how big a change that is.

Dupler: It is a huge change. Again, if people are not already under way with this change they have a huge challenge ahead of them. Everybody I speak with in the industry — whether it is at [UBM Tech Channel’s] BoB conference or at partner advisory councils — everybody is challenged with this right now. The sales force in the solution provider industry has been old-paradigm, physical-technology-based and needs to move into a world where it is leading with professional and managed services. And that game is very different. So I think there are two ways to address that: one is hiring new types of talent, the other is helping the talent we all have transform. It is going to be a combination of both that gets us ultimately where we need to be.

CRN: What do you think is the biggest mistake being made right now by competitors or vendors?

Dupler: What I see is people who are afraid to embrace the change that is under way and are really hanging on to the past. The biggest mistake I see right now is people continuing to evangelize solutions to customers that aren’t necessarily right for the customer, but conform to what they know and drive the most profit for their organizations.

Short-term gain isn’t going to drive long-term customer value. And we need to lead the customers forward through this transformation as opposed to perpetuating the past. The market needs leadership right now. The biggest challenge for people is not moving fast enough to transform their businesses.

This interview was originally posted on CRN.com

To learn more about GreenPages’ CMaaS offering click here!

Disaster Recovery in the Cloud, or DRaaS: Revisited

By Randy Weis

The idea of offering Disaster Recovery services has been around as long as SunGard or IBM BCRS (Business Continuity & Resiliency Services). Disclaimer: I worked for the company that became IBM Information Protection Services in 2008, a part of BCRS.

It seems inevitable that Cloud Computing and Cloud Storage should have an impact on the kinds of solutions that small, medium and large companies would find attractive and would fit their requirements. Those cloud-based DR services are not taking the world by storm, however. Why is that?

Cloud infrastructure seems perfectly suited for economical DR solutions, yet I would bet that none of the people reading this blog has found a reasonable selection of cloud-based DR services in the market. That is not to say that there aren’t DR “As a Service” companies, but the offerings are limited. Again, why is that?

Much like Cloud Computing in general, the recent emergence of enabling technologies was preceded by a relatively long period of commercial product development. In other words, virtualization of computing resources promised “cloud” long before we actually could make it work commercially. I use the term “we” loosely…Seriously, GreenPages announced a cloud-centric solutions approach more than a year before vCloud Director was even released. Why? We saw the potential, but we had to watch for, evaluate, and observe real-world performance in the emerging commercial implementations of self-service computing tools in a virtualized datacenter marketplace. We are now doing the same thing in the evolving solutions marketplace around derivative applications such as DR and archiving.

I looked into helping put together a DR solution leveraging cloud computing and cloud storage offered by one of our technology partners that provides IaaS (Infrastructure as a Service). I had operational and engineering support from all parties in this project and we ran into a couple of significant obstacles that do not seem to be resolved in the industry.

Bottom line:

  1. A DR solution in the cloud, involving recovering virtual servers in a cloud computing infrastructure, requires administrative access to the storage as well as the virtual computing environment (like being in vCenter).
  2. Equally important, if the solution involves recovering data from backups, is the requirement that there be a high-speed, low-latency (I call this “back-end”) connection between the cloud storage where the backups are kept and the cloud computing environment. This is only present in Amazon at last check (a couple of months ago), and you pay extra for that connection. I also call this “locality.”
  3. The Service Provider needs the operational workflow to do this. Everything I worked out with our IaaS partners was a manual process that went way outside normal workflow and ticketing. The interfaces for the customer to access computing and storage were separate and radically different. You couldn’t even see the capacity you consumed in cloud storage without opening a ticket. From the SP side, there was no mechanism to notify them of the DR tasks the customer needed them to perform. When you get to billing, forget it. Everyone admitted that this was not planned for at all in the cloud computing and operational support design.

Let me break this down:

  • Cloud Computing typically has high speed storage to host the guest servers.
  • Cloud Storage typically has “slow” storage, on separate systems and sometimes separate locations from a cloud computing infrastructure. This is true with most IaaS providers, although some Amazon sites have S3 and EC2 in the same building and they built a network to connect them (LOCALITY).

Scenario 1: Recovering virtual machines and data from backup images

Scenario 2: Replication based on virtual server-based tools (e.g. Veeam Backup & Replication) or host-based replication

Scenario 3: SRM, array or host replication

Scenario 1: Backup Recovery. I worked hard on this with a partner. This is how it would go:

  1. Back up VMs at customer site; send backup or copy of it to cloud storage.
  2. Set up a cloud computing account with an AD server and a backup server.
  3. Connect the backup server to the cloud storage backup repository (first problem)
    • Unless the cloud computing system has a back-end connection at LAN speed to the cloud storage, this is a showstopper. It would take days to do this without a high degree of locality.
    • The provider’s suggested solutions when asked about this:
      • Open a trouble ticket to have the backups dumped to USB drives, shipped or carried to the cloud computing area and connected into the customer workspace. Yikes.
      • We will build a back-end connection where we have both cloud storage and cloud computing in the same building—not possible in every location, so the “access anywhere” part of a cloud wouldn’t apply.

  4. Restore the data to the cloud computing environment (second problem)

    • What is the “restore target”? If the DR site were a typical hosted or colo site, the customer backup server would have the connection and authorization to recover the guest server images to the datastores, and the ability to create additional datastores. In vCenter, the Veeam server would have the vCenter credentials and access to the vCenter storage plugins to provision the datastores as needed and to start up the VMs after restoring/importing the files. In a Cloud Computing service, your backup server does NOT have that connection or authorization.
    • How can the customer backup server get the rights to import VMs directly into the virtual VMware cluster? The process to provision VMs in most cloud computing environments is to use your templates, their templates, or “upload” an OVF or other type of file format. This won’t work with a backup product such as Veeam or CommVault (see the sketch after this list).

  5. Recover the restored images as running VMs in the cloud computing environment (third problem), tied to item #4.

    • Administrative access to provision datastores on the fly and to turn on and configure the machines is not there. The customer (or GreenPages) doesn’t own the multitenant architecture.
    • The use of vCloud Director ought to be an enabler, but the storage plugins, and rights to import into storage, don’t really exist for vCloud. Networking changes need to be accounted for and scripted if possible.
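To make the access problem in items 4 and 5 concrete, here is a minimal pyVmomi-style sketch of what a backup product effectively needs to do at the recovery site: connect to vCenter with administrative credentials and register restored VMX files into the inventory. The hostname, credentials, datastore path and inventory lookups are placeholders for illustration; the point is that a multi-tenant cloud computing service does not expose this level of access to a customer’s backup server.

```python
# Sketch only: the vCenter-level access a restore into "someone else's cloud" would need.
# Hostname, credentials, datastore path and inventory lookups are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.provider.example", user="tenant-admin",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

datacenter = content.rootFolder.childEntity[0]   # needs inventory visibility
vm_folder = datacenter.vmFolder                  # needs VirtualMachine.Inventory.Register
cluster = datacenter.hostFolder.childEntity[0]   # needs Resource.AssignVMToPool
pool = cluster.resourcePool

# Register a restored VM's files (already written to a datastore the backup server
# would also have to be allowed to create or grow -- Datastore.AllocateSpace).
task = vm_folder.RegisterVM_Task(
    path="[restore-datastore] restored-vm/restored-vm.vmx",
    name="restored-vm",
    asTemplate=False,
    pool=pool)

# In a multi-tenant IaaS offering, none of these privileges -- datastore provisioning,
# folder-level VM registration, resource pool assignment -- are handed to the tenant,
# which is exactly the gap described in items 4 and 5 above.
Disconnect(si)
```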

Scenario 2: Replication by VM. This has cost issues more than anything else.

    • If you want to replicate directly into a cloud, you will need to provision the VMs and pay for their resources as if they were “hot.” It would be nice if there were a lower “DR tier” of pricing—if the VMs are for DR, you don’t get charged full rates until you turn them on and use them for production.
      • How do you negotiate that?
      •  How does the SP know when they get turned on?
      • How does this fit into their billing cycle?
    • If it is treated as a hot site (or warm), then the cost of the DR site equals that of production until you solve these issues.
    • Networking is an issue, too, since you don’t want to turn that on until you declare a disaster.
      • Does the SP allow you to turn up networking without a ticket?
      • How do you handle DNS updates if your external access depends on public DNS records being updated—really short TTLs? Yikes, again. (A sketch follows this list.)
    • Host-based replication (e.g. WANsync, VMware)—you need a host you can replicate to. Your own host. The issues are cost and scalability.
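On the DNS point above, the usual mitigation is to keep the public records on a very short TTL ahead of time and push an update the moment you declare a disaster. Here is a hedged dnspython sketch; the zone, name server and addresses are placeholders, and it assumes the authoritative server (or the provider’s management API equivalent) accepts dynamic updates from you.

```python
# Sketch only: flip a public DNS record to the DR site with a short TTL.
# Zone, name server and addresses are placeholders; assumes dynamic updates are allowed
# (in practice, via a TSIG key or your DNS provider's management API).
import dns.update
import dns.query
import dns.rcode

ZONE = "example.com"
NAMESERVER = "198.51.100.53"     # authoritative server accepting updates
DR_SITE_IP = "203.0.113.10"      # address of the recovered environment
TTL_SECONDS = 60                 # must already be low *before* the disaster,
                                 # or resolvers will keep caching the dead site

update = dns.update.Update(ZONE)
update.replace("www", TTL_SECONDS, "A", DR_SITE_IP)   # point www.example.com at DR
response = dns.query.tcp(update, NAMESERVER, timeout=10)
print("Update response:", dns.rcode.to_text(response.rcode()))
```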

Scenario 3: SRM. This should be baked into any serious DR solution, from a carrier or service provider, but many of the same issues apply.

    • SRM based on host array replication has complications. Technically, this can be solved by the provider by putting (for example) EMC VPLEX and RecoverPoint appliances at every customer production site so that you can replicate from dissimilar storage to the SP IDC. But, they need to set up this many-to-one relationship on arrays that are part of the cloud computing solution, or at least a DR cloud computing cluster. Most SPs don’t have this. There are other brands/technologies to do this, but the basic configuration challenge remains—many-to-one replication into a multi-tenant storage array.
    • SRM based on VMware host replication has administrative access issues as well. SRM at the DR site has to either accommodate multi-tenancy, or each customer gets their own SRM target. Also, you need a host target. Do you rent it all the time? You have to, since you can’t do that in a multi-tenant environment. Cost, scalability, again!
    • Either way, now the big red button gets pushed. Now what?
      • All the protection groups exist on storage and in cloud computing. You are now paying for a duplicate environment in the cloud, not an economically sustainable approach unless you have a “DR Tier” of pricing (see Scenario 2).
      • All the SRM scripts kick in—VMs are coming up in order in protection groups, IP addresses and DNS are being updated, CPU loads and network traffic climb…what impact is this?
      • How does that button get pushed? Does the SP need to push it? Can the customer do it?

These are the main issues as I see it, and there is still more to it. Using vCloud Director is not the same as using vCenter. Everything I’ve described was designed to be used in a vCenter-managed system, not a multi-tenant system with fenced-in rights and networks, with shared storage infrastructure. The APIs are not there, and if they were, imagine the chaos and impact on random DR tests on production cloud computing systems, not managed and controlled by the service provider. What if a real disaster hit in New England, and a hundred customers needed to spin up all their VMs in a few hours? They aren’t all in one datacenter, but if one provider that set this up had dozens, that is a huge hit. They need to have all the capacity in reserve, or syndicate it like IBM or SunGard do. That is the equivalent of thin-provisioning your datacenter.

This conversation, like many I’ve had in the last two years, ends somewhat unsatisfactorily with the conclusion that there is no clear solution—today. The journey to discovering or designing a DRaaS is important, and it needs to be documented, as we have done here with this blog and in other presentations and meetings. The industry will overcome these obstacles, but the customer must remain informed and persistent. The goal of an economically sustainable DRaaS solution can only be achieved by market pressure and creative vendors. We will do our part by being your vigilant and dedicated cloud services broker and solution services provider.


Guest Post: A Wrinkle in the IT Universe

By Kai Gray, VP of Operations at Carbonite

I feel like tectonic plates are shifting beneath the IT world. I’ve been struggling to put my finger on what it is that is making me feel this way, but slowly things have started to come into focus. These are my thoughts on how cloud computing has forever changed the economics of IT by shifting the balance of power.

The cloud has fundamentally changed business models; it has shifted time-to-market, entry points and who can do what. These byproducts of massive elasticity are wrapped up in an even greater evolutionary change that is occurring right now: The cloud is having a pronounced impact on the supply chain, which will amount to a tidal wave of changes in the near-term that will cause huge pain for some and spawn incredible innovation and wealth for others. As I see it, the cloud has started a chain of events that will change our industry forever:

1) Big IT used to rule the datacenter. Not long ago, large infrastructure companies were at the heart of IT. The EMCs, Dells, Ciscos, HPs and IBMs were responsible for designing, sourcing, supplying and configuring the hardware that was behind nearly all of the computing and storage power in the world. Every server closet was packed full of name-brand equipment and the datacenter was no different. A quick tour of any datacenter would – and still will – showcase the wares of these behemoths of the IT world. These companies developed sophisticated supply and sales channels that produced high-margin businesses built on some very good products. This extended from the OEMs and ODMs that produced the bent metal to the VARs and distributors who then sold the finished products. Think of DeBeers, the diamond mine owner and distributor. What is the difference between a company like HP and DeBeers? Not very much, but the cloud began to change all that.

2) Cloud Computing. Slowly we got introduced to the notion of cloud computing. We started using products that put the resource away from us, and (slowly) we became comfortable with not needing to touch the hardware. Our email “lived” somewhere else, our backups “lived” somewhere else and our computing cycles “lived” somewhere else. With each incremental step, our comfort levels rose until it stopped being a question and turned into an expectation. This process set off a dramatic shift in supply chain economics.

3) Supply Chain Economics. The confluence of massive demand coupled with near-free products (driven by a need to expand customer acquisition) changed how people had to think about infrastructure. All of a sudden, cloud providers had to think about infrastructure in terms of true scalability. This meant acquiring and managing massive amounts of infrastructure at the lowest possible cost. This was/is fundamentally different from the way the HPs and Dells and Ciscos thought about the world. All of a sudden, those providers were unable to address the needs of this new market in an effective way. This isn’t to say that the big IT companies can’t, just that it’s hard for them. It’s hard to accept shrinking margin and “openness.”  The people brave enough to promote such wild ideas are branded as heretics and accused of rocking the boat (even as the boat is sinking). Eventually the economic and scale requirements forced cloud providers to tackle the supply chain and go direct.

4) Going Direct. As cloud providers begin to develop strong supply chain relationships and build up their competencies around hardware engineering and logistics, they begin to become more ingrained with the ODMs (http://en.wikipedia.org/wiki/Original_design_manufacturer) and other primary suppliers. Huge initiatives came into existence from the likes of Amazon, Google and Facebook that are focused on driving down the cost of everything. For example, Google began working directly with Intel and AMD to develop custom chipsets that allow them to run at efficiency levels never before seen, and Facebook started the Open Compute Project that seeks to open-source design schematics that were once locked in vaults.

In short, the supply chain envelope gets pushed by anyone focused on cost and large-scale.

…and here it gets interesting.

Cloud providers now account for more supplier revenue than the Big IT companies. Or, maybe better stated — cloud providers account for more hope of revenue (HoR) than Big IT. So, what does that mean? That means that the Big IT companies no longer receive the biggest discounts available from the suppliers. The biggest discounts are going to the end users and the low-margin companies built solely on servicing the infrastructure needs of cloud providers. This means that Big IT is at even more of a competitive disadvantage than they already were. The cycle is now in full swing. If you think this isn’t what is happening, just look at HP and Dell right now. They don’t know how to interact with a huge set of end users without caving in their margins and cannibalizing their existing businesses. Some will choose to amputate while others will go down kicking, but margin declines and openness of information will take their toll with excruciating pain.

What comes of all this? I don’t know. But here are my observations:

1) Access to the commodity providers (ODMs and suppliers) is relatively closed. To be at all interesting to ODMs and suppliers you have to be doing things at enough volume that it is worthwhile for them to engage with you. That will change. The commodity suppliers will learn how to work in different markets, but there will be huge opportunity for companies that help them get there. When access to ODMs and direct suppliers gets opened up to traditional enterprise companies so they can truly and easily take advantage of commodity hardware through direct access to suppliers, then, as they say, goodnight.

2) Companies that provide some basic interfaces between the suppliers and the small(er) consumers will do extremely well. For me, this means configuration management of some sort, but it could be anything that helps accelerate the linkage between supplier and end user. The day will come when small IT shops have direct access to suppliers and are able to custom-build hardware in the same way that huge cloud providers do today. Some might argue that there is no need for small shops to do this — that they can use other cloud providers, that it’s too time-consuming to do it on their own, and that their needs are not unique enough to support such a relationship. Yes, yes, and yes… for right now. Make it easy for companies to realize the cost and management efficiencies of direct supplier access and I don’t know of anyone that wouldn’t take you up on that. Maybe this is the evolution of the “private cloud” concept, but all I know is that, right now, the “private cloud” talk is being dominated by the Big IT folks, so the conflict of interest is too great.

3) It’s all about the network. I don’t think the network is being addressed in the same way as other infrastructure components. I almost never hear about commodity “networks,” yet I constantly hear about commodity “hardware.” I’m not sure why. Maybe Cisco and Juniper and the other network providers are good at deflecting, or maybe it’s too hard a problem to be solved, or maybe the cost isn’t a focal point (yet). Whatever the reason, I think this is a huge problem/opportunity. Without the network, everything else can just go away. Period. The entire conversation driving commodity-whatever is predicated on delivering lots of data to people at very low cost. The same rules that drive commoditization need to be applied to the network, and right now I only know of one or two huge companies that are even thinking in these terms.

There are always multiple themes in play at any given time that, when looking back, we summarize as change. People say that the Internet changed everything. And, before that, the PC changed everything. What we’re actually describing is a series of changes that happened over a period of time that have the cumulative effect of making us say, “How did we ever do X without Y?” I believe that the commoditization of infrastructure is just one theme among the change that will be described as Cloud Computing. I contend, however, the day is almost upon us when everybody, from giant companies to the SMB, will say, “Why did we ever buy anything but custom hardware directly from the manufacturer?”

This post originally appeared on kaigray.com.  It does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

To learn more about GreenPages’ Cloud Computing Practice, click here.

Just providing best-of-breed is no longer good enough

By John Zanni, Vice President, Marketing and Alliances, Parallels

 

In this ever-changing cloud environment, service providers are telling us that whenever they think they have a handle on what SMBs want, SMBs indicate their “wants” are expanding. What this means is that service providers cannot linger on what was a key service last year. SMBs are constantly trying to grow their business and deepen their understanding of their customers. So as their customers branch out into new territories, whether accounting, health care, entertainment or retail (you get the picture), SMBs will look to service providers to be nimble enough to accommodate those developments with cloud offerings they can use – and use with ease. In fact, best-of-breed is no longer as relevant or as meaningful as specificity and ease of use.

 

Luckily for everyone, need generates innovation and development. There is a burgeoning market of cloud service applications for a world of vertical markets, and many SMBs are looking for the application that specifically serves their needs rather than the most well-known or most often used applications.

 

From open source applications to complex solutions, ISVs can use the Application Packaging Standard (APS) to create any application that is needed or can be invented. (You can learn more about APS at appstandards.org.) Of note, this is an open standard, and Parallels does not need to approve an application for it to become available through APS.

 

There are a number of examples of such offerings in the Parallels APS catalogue, including:

  • MoySklad – a Russian business that produces a contact resource management and accounting service
  • SpamExperts – an anti-virus/anti-spam/archiving solution very popular throughout Europe
  • BackupAgent – produces backup services for hosters and service providers and is popular in Europe and Asia.

 

Service providers have access to these cloud services and applications and can easily enable them through Parallels Plesk Panel or Parallels Automation; it then simply becomes a matter of marketing those applications to the customers with those particular requirements.

 

For service providers looking for more information on how to grow their business by bundling new applications that live in the cloud, Parallels Summit 2013, Feb 4-6 at Caesars Palace, Las Vegas, is the place to be. Hundreds of ISVs will be demonstrating their services. There will be technical, developer, and business tracks on how to enable and promote applications in the cloud, along with best practices for working with Parallels products that push your business up the ladder. Be there to experience and assess what you could use for your customers.