Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with datacenter federation… what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see the use of multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seemed that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down, and the simplicity and integration are moving up – meaning that these environments are far easier to justify, deploy and manage than just a few years ago. Before long, I think we’ll see active/active data centers as sort of the expected norm rather than the exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various XtremIO products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost-effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.
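
To make the tiering idea concrete, here is a minimal sketch of how a hybrid array might decide what lives on flash: extents with the most I/O in the last measurement window get promoted, and cold extents get demoted. It is an illustrative simplification with made-up thresholds and capacities, written in Python for readability, and not a description of how FAST itself is implemented.

# Illustrative sketch of hybrid-array tiering (not EMC FAST itself): extents
# with the most I/O in the last window are kept on a small flash tier;
# everything else stays on, or returns to, capacity disk.
from collections import defaultdict

FLASH_CAPACITY_EXTENTS = 1000   # assumed flash tier size, in extents
PROMOTE_THRESHOLD = 100         # assumed I/Os per window needed to count as "hot"

class TieringEngine:
    def __init__(self):
        self.access_counts = defaultdict(int)   # extent_id -> I/Os this window
        self.flash_tier = set()                 # extent_ids currently on flash

    def record_io(self, extent_id):
        """Called on every read or write to an extent."""
        self.access_counts[extent_id] += 1

    def rebalance(self):
        """Periodic policy pass: keep only the hottest extents on flash."""
        ranked = sorted(self.access_counts, key=self.access_counts.get, reverse=True)
        wanted = {e for e in ranked[:FLASH_CAPACITY_EXTENTS]
                  if self.access_counts[e] >= PROMOTE_THRESHOLD}
        for extent in self.flash_tier - wanted:
            self.flash_tier.discard(extent)     # demote (actual data movement omitted)
        for extent in wanted - self.flash_tier:
            self.flash_tier.add(extent)         # promote (actual data movement omitted)
        self.access_counts.clear()              # start a new measurement window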

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.
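
As a rough illustration of why always-on inline deduplication changes the economics, here is a minimal sketch of content-addressed block storage: each incoming block is fingerprinted, and identical blocks are stored once and referenced many times. This is my own simplification for illustration, not XtremIO’s actual data path.

# Simplified sketch of inline block deduplication (illustrative only, not
# XtremIO's actual engine): each incoming block is fingerprinted, and
# identical blocks are stored once and referenced many times.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # fingerprint -> raw block data (physical capacity)
        self.volume = {}    # logical block address -> fingerprint

    def write(self, lba, data):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:       # only brand-new content consumes capacity
            self.blocks[fp] = data
        self.volume[lba] = fp

    def read(self, lba):
        return self.blocks[self.volume[lba]]

    def physical_blocks(self):
        return len(self.blocks)         # capacity actually consumed after dedup

# Writing the same 4 KB pattern to 100 logical addresses stores one physical block.
store = DedupStore()
for lba in range(100):
    store.write(lba, b"\x00" * 4096)
assert store.physical_blocks() == 1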

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.
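
One way to picture that division of labor is a generic write-through server-side cache: reads are served from local flash when possible, while writes always go to the array so the persistent copy lives there. The sketch below is an assumption-laden illustration, not XtremSW’s actual behavior, and the server_cached hint passed to the array is a made-up parameter to show how the two sides could avoid double-caching.

# Generic write-through server-side flash cache (illustrative sketch only,
# not how XtremSW is actually implemented): reads are served from local
# flash when possible; writes always go to the array so the persistent,
# resilient copy lives there. The server_cached flag is a made-up hint
# showing how the array could avoid re-caching blocks the server already holds.
class ServerFlashCache:
    def __init__(self, array, capacity_blocks):
        self.array = array            # back-end array interface (assumed)
        self.capacity = capacity_blocks
        self.cache = {}               # lba -> data

    def read(self, lba):
        if lba in self.cache:         # cache hit, served at bus speed
            return self.cache[lba]
        data = self.array.read(lba)   # miss: fetch over the storage network
        self._insert(lba, data)
        return data

    def write(self, lba, data):
        # Write-through: the persistent copy always lands on the array first.
        self.array.write(lba, data, server_cached=True)
        self._insert(lba, data)

    def _insert(self, lba, data):
        if lba not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # naive FIFO eviction for brevity
        self.cache[lba] = data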

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage”?

So, to answer your question more directly, I think the biggest misconception arises when – without talking about very specific use cases – we devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting times indeed…

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Telecom Reaps Benefits from Service Virtualization

TTNET, the largest internet service provider in Turkey, with six million subscribers, significantly improved application deployment while cutting costs and time to delivery.
What was the situation there before you became more automated, before you started to use more software tools?
We’re the leading ISP in Turkey. We deploy more than 200 applications per year, and we have to provide better and faster services to our customers every week, every month. Before HP Service Virtualization (SV), we had to use other test infrastructures in our test cases.

read more

Cloud Computing and 4G: Perfect Together

Just as a team is only as strong as its weakest player, the more robust a mobile broadband service, the more beneficial it is for accessing and running applications in the cloud.
There’s a strong connection between cloud computing services and mobile broadband, including 4G services.
Mobile broadband provides a connection to anyone anywhere within its coverage area. And the latest mobile broadband technology, Long Term Evolution (LTE), often referred to as 4G, provides a mobile broadband connection that is sufficiently fast for most cloud applications, according to an article on CFOWorld.com.
A Canadian consultancy, Maravedis, put forth a study called “4G + Cloud = Innovation & Profits?” It focused on “how 4G wireless combined with cloud computing will drive a new wave of innovation and ROI for operator content, services and applications.”
It identified the main challenges to offering cloud services as being “strategy, investment, and business model, followed closely by network performance.”
It suggested that mobile network operators seeking to exploit the synergies between 4G and cloud computing should focus on “industries that are communication intensive and/or positioned for transformation through innovative telecom and mobility solutions.”

read more

Expertise-as-a-Service: How Much Is It Worth?

The benefits your company can access don’t end with simply outsourcing your cloud technology. By leveraging your cloud provider for more than just the technology, you can gain a wealth of experience and know-how (read: tech support) that can help your business get more value from your vendor relationship.

The tendency for companies to pursue this path is well established at this point, but in the cloud era, managed service providers have become ground zero for both technological and experiential value for the companies that use them.
That’s where IT strategy is now, but how does this compare to what IT used to be?

read more

Welcome to Data Land

Many software vendors, analysts and journalists are overusing the term “Data Governance” in today’s complex business and IT environments. However, it has become one of the primary goals and drivers for data-related IT projects, while at the same time being one of the most difficult to define, measure and quantify. What real meaning can we give to the concept of Data Governance? What is its importance, impact and meaning for the enterprise?
To try to restore some meaning and context to Data Governance, let’s go back to the semantics through an analogy that everyone can understand, and that is insightful down to the smallest detail.
Welcome to Data Land… If data are its citizens, the governance of such a country would aim at ensuring that these data co-existed peacefully, stayed healthy, enriched themselves, did not live on top of each other, did not destroy each other in case of conflict and, most importantly, worked together every year to improve the GDP of Data Land. This means creating value through the use and action of everyone. Of course, bioethics laws would prevent the cloning or duplication of its inhabitants. Data governance would then define itself as a framework that intends to ensure the efficient management of data in the enterprise. Putting data under governance prevents its chaotic generation and use.

read more

Why cloud computing is accelerating in the enterprise

Translating time into dollars matters far more to many CEOs I’ve spoken with than what platform their applications are running on.

What matters most is getting all they can out of every hour their business is operating.  They are all focused on getting beyond the constraints that held their growth back in the past – everyone wants a growth accelerator today. 

For manufacturers especially, this includes applications with depth of functionality that can be quickly deployed regionally, and in more cases than ever, globally as well.  Line-of-business leaders want applications that make an immediate impact on their entire value chain.

Just having a cloud strategy is not enough for any enterprise software company anymore. Owning the pain prospects and customers go through daily to get work done is all that matters. 

Every application and platform component needs to contribute to the goal of reducing customers’ challenges of doing business …

Is the CFO the last man standing when it comes to the cloud?

The use of cloud technology by corporations has mushroomed in recent years. As the concept of cloud has gained greater acceptance at a consumer level, companies have increasingly recognised the benefits that cloud-based systems can bring to their businesses.

As a result, many have moved away from technology which is installed in-house and instead have adopted systems like Salesforce.com for customer relationship management and NetSuite for ERP.

Yet while the use of cloud technology has become mainstream elsewhere in the corporate environment, the areas of finance and treasury have tended to lag behind. All too often the CFO is the last person in the organisation to take their technology into the cloud.

Barriers to adoption

With cloud technology becoming a mainstream option for other corporate functions, why are CFOs holding back?

For one thing, they tend to be more risk averse than their peers across the organisation. The CFO …

Shared Hosting Transformation and the Control Panel

Structure Research, an independent research and consulting firm with a focus on the hosting and cloud segments within the Internet infrastructure market, recently released an opinion piece titled “Shared hosting transformation and the control panel.”

The blog article provides fresh insight into the future of shared hosting with single-server panel technology vs. multi-server technology, such as Parallels Plesk Automation. Structure Research shares how management tools have not kept up with the growth of many hosters. Check out Structure Research’s insights on why hosters’ ability to “scale through operational efficiency will be paramount.”

Good stuff.

Cloud Expo New York: Rethink IT and Reinvent Business with IBM SmartCloud

On today’s Smarter Planet, enterprises are turning to the cloud to revamp existing business models and manage an unprecedented rate of change. In fact, the number of enterprises moving to cloud computing will double within the next few years as they seek to transition IT from a cost center to a strategic center of business innovation.
In his General Session at the 12th International Cloud Expo, Craig Sowell, Vice President for IBM SmartCloud Marketing, will discuss how the IBM SmartCloud family, including PureSystems, helps enterprises optimize their traditional IT environments while driving rapid business innovation through new engagement models. In addition, he will illustrate through specific client examples the value of open standards initiatives like OpenStack and TOSCA, and how they are crucial to an interoperable and agile cloud computing infrastructure.

read more