Category Archives: Featured

EMC World 2013 Recap

By Randy Weis, Consulting Architect, LogicsOne

 

The EMC World conference held last week in Las Vegas demonstrated EMC’s strong leadership position in the Virtualization, Storage, and Software-Defined Datacenter markets.

Seriously, this is not the Kool-Aid talking. Before anyone jumps in to point out how all the competitors are better at this or that, or how being a partner or customer of EMC has its challenges, I’d like to refer you to a previous blog I wrote about EMC: “EMC Leads the Storage Market for a Reason.” I won’t recap everything, but that blog talks about business success, not technical wizardry. Do the other major storage and virtualization vendors have solutions and products in these areas? Absolutely, and I promise to bring my opinions and facts around those topics to this blog soon.

What I found exciting about this conference was how EMC is presenting a more cohesive and integrated approach to the items listed below. The XtremIO product has been greatly improved – some might say it is finally, genuinely usable. I’d say the same about the EMC DR and BC solutions built on RecoverPoint and VPLEX – VPLEX is affordable and ready to be integrated into the VNX line. The VNX product line is mature now, and you can expect announcements around a major refresh this year. I’d say the same about the BRS line – no great product announcements, but better integration and pricing that helps customers and solution providers alike.

There are a few items I’d like to bullet for you:

  1. Storage Virtualization – EMC has finally figured out that DataCore is onto something, and spent considerable time promoting ViPR at EMC World. This technology (though it arrives some 12 years behind DataCore) will open the eyes of the entire datacenter virtualization market to the possibilities of a storage hypervisor. What VMware did for computing, this technology will do for storage – storage resources deployed automatically, independent of the array manufacturer, with high-value software features running on anything, anywhere. There are pluses and minuses to this new EMC product and approach, but this technology area will soon become a hot strategy for IT spending. Everyone needs to start understanding why EMC finally thinks this is a worthwhile investment and is making it a priority. To echo what I said in that prior blog, “Thank goodness for choices and competition!” Take a fresh look at DataCore and compare it to the new EMC offering. What’s better? What’s worse?
  2. Business Continuity and Highly Available Datacenters: Linking datacenters to turn DR sites into an active computing resource is within reach of non-enterprise organizations now – midmarket, commercial, healthcare, SMB – however you want to define it.
    1. VPLEX links datacenters together (with some networking help) so that applications can run on any available compute or storage resource in any location – a significant advance in building private cloud computing. It is now licensed to work with VNX systems, is much cheaper, and can be built into any quote. We will start looking for ways to build it into various solution strategies – DR, BC, array migration, storage refreshes, stretch clusters, you name it. VPLEX is also a very good solution for any datacenter facing a major storage migration due to a storage refresh or datacenter move, as well as a tool to manage heterogeneous storage.
    2. RecoverPoint is going virtual – this is the leading replication tool for SRM, is integrated with VPLEX, and will now be available as a virtual appliance. RecoverPoint has also developed multi-site capabilities, with up to five sites and 8 RP “appliances” per site, in fan-in or fan-out configurations.
    3. Usability has improved for both products, which now share standardized management through Unisphere editions.
  3. High Performance Storage and Computing – Server-side flash, flash cache virtualization, and workload-crushing all-flash arrays in the XtremSF, XtremSW, and XtremIO product line (the server-side pieces were formerly known as VFCache). As usual, the second release nails it for EMC. GreenPages was recently recognized as a global leader in mission-critical application virtualization, and this fits right in. Put simply: put an SSD card in a vSphere host and boost SQL, Oracle, or Exchange performance by over 100% in some cases. The big gap was in HA/DRS/vMotion – the host cache was a local resource, so vMotion was broken, along with HA and DRS. The new release virtualizes the cache so that VMs assigned local cache will see that cache even if they move. This isn’t an all-or-nothing solution – you can designate the mission-critical apps to use the cache and tie them to a subset of the cluster. That makes the strategy affordable and granular.
  4. Isilon – This best-in-class NAS system keeps getting better. Clearly defined use cases, much better VMware integration, and more successful implementations make this product the one to beat in the scale-out NAS market.

 

Another whole article can be written about ViPR, EMC’s brand new storage virtualization tool, and that will be coming up soon. As promised, I’ll also take a look at the competitive offerings from HP and Dell, at least, in the storage virtualization, DR/BC, server-side flash, and scale-out NAS areas, as well as cloud storage integration strategies. Till then, thanks for reading and please share your thoughts.

Huh? What’s the Network Have to Do with It?

By Nate Schnable, Sr. Solutions Architect

Having been in this field for 17 years, I’m still amazed that people tend to forget about the network. Everything a user accesses on their device that isn’t installed or stored locally depends on the network more than on any other element of the environment. The network is responsible for the quick and reliable transport of data, which means the user experience while working with remote files and applications depends almost completely on it.

However, this isn’t always obvious to everyone. Users will rarely ask for network-related services because they aren’t aware the network is the cause of their problems. Whether it is a storage, compute, virtualization, or IP telephony initiative, all of these types of projects rely heavily on the network to function properly. In fact, the network is the only element of a customer’s environment that touches every other component. Its stability can make or break a project’s success and the all-important user experience.

In a VoIP initiative we have to make sure, among many other things, that proper QoS policies are set up – so let’s hope you are not running on some dumb hubs. Power over Ethernet (PoE) for the phones should be available unless you want to use power bricks or some type of mid-span device (yuck). I used to work for a Fortune 50 insurance company, and one day an employee decided to plug both of the ports on their phone into the network, figuring it would make the experience even better – not so much. It brought down the whole environment. We made some changes after that to keep it from happening again!

In a Disaster Recovery project we have to take a look at distances and subsequent latencies between locations.  What is the bandwidth and how much data do you need to back up?   Do we have Layer 2 handoffs between sites or is it more of a traditional L3 site to site connection?
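Even rough math here is revealing. As a quick illustration – with invented figures, purely to show the arithmetic – here’s the kind of back-of-the-envelope replication-window calculation worth running before anything else:

```python
# Back-of-the-envelope DR replication window -- all figures invented.
data_gb = 500        # nightly change rate to copy off-site
link_mbps = 100      # site-to-site WAN bandwidth
efficiency = 0.7     # assumed real-world protocol overhead and contention

effective_mbps = link_mbps * efficiency
hours = (data_gb * 8 * 1000) / effective_mbps / 3600
print(f"~{hours:.1f} hours to move {data_gb} GB")  # ~15.9 hours
```

If that window doesn’t fit the business’s recovery objectives, the design conversation changes before any hardware gets ordered.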

If we are implementing a new iSCSI SAN, do we need ten gig or one gig? Do your switches support jumbo frames and flow control? Hope that your iSCSI switches are truly stackable, because otherwise spanning tree could leave some of those redundant paths blocked rather than active.

I was reading the other day that sales of smart phones and tablets would reach approximately 1.2 billion in 2013. Some of these will most certainly end up on your wireless networks. How to manage that is definitely a topic for another day.

In the end it just makes sense that you really need to consider the network implications before jumping into almost any type of IT initiative.  Just because those green lights are flickering doesn’t mean it’s all good.

 

To learn more about how GreenPages Networking Practice can help your organization, fill out this form and someone will be in touch with you shortly.

Cloud Computing and the Changing Role of IT

By John Dixon, Consulting Architect, LogicsOne

On Tuesday, April 29th, I participated in another tweetchat hosted by Cloud Commons. As usual, it was an hour of rapid-fire conversation and discussion among some really smart people. This time, the topic was “cloud computing and the changing role of IT,” and there were some great takeaways from the dialogue. Below are the six questions that were asked during the chat, as well as a quick summary of my thoughts and the thoughts of the group as a whole.

  1. How is cloud computing changing the role of IT?
  2. Besides cloud, what other trends are influential in changing the role of IT?
  3. What steps should the IT department take to become a trusted advisor to the business?
  4. How should the IT department engage with the business on cloud purchases?
  5. Should the IT department make reining in rogue cloud services a top priority?
  6. How can the CIO promote innovation in the era of lower IT spending?

 

Question 1: How is cloud computing changing the role of IT?

  • The main point I wanted to get across in question one was that corporate IT is no longer just a provider of technology, but, rather, they are a provider of IT services.
  • IT needs to be relevant to the business. They can do this by developing valuable service products
  • IT now needs to be extremely proactive. No more sitting around waiting for something to go wrong… instead get out in front of demands from the business – understand the business’s specific issues, and proactively evaluate emerging technology that may be of benefit
  • All in all, I’d say most of the group was on the same page for this answer

 

Question 2: Besides cloud, what other trends are influential in changing the role of IT?

  • The most popular answers from participants were: big data, analytics, virtualization, mobility, BYOD, and DevOps. It seemed like every answer had at least one of these included in it.
  • A couple others I threw out were distributed workforce and telecommuters, social media, and the overall increased reliance on IT for everything

 

Question 3: What steps should the IT department take to become a trusted advisor to the business?

  • The key here is that IT should not try to ALIGN to the business’s demands…IT should PARTNER with the business
  • Another point I brought up was that IT needs to show the business that IT is another provider in a competitive market – corporate IT needs to show that it delivers more value than alternative providers. After giving this answer, I got a couple of questions wondering why IT should compete with 3rd parties rather than leverage them. My point was that cloud opens up competition in the market for IT services and that the business now has a choice of where and how to procure services. At this point it’s a reality: corporate IT is just another competitor in a cloud world.
  • A great answer from Jackie Kahle (@jackiekahle) was to tell the business something they don’t know about their customers by providing data-driven insights. In her opinion, and I agree, this will encourage the business to turn to corporate IT more often.
  • Another good answer from George Hulme (@georgevhulme) was to give users and the business viable alternatives with clear risk/reward/benefits delineated.

 

Question 4: How should the IT department engage with the business on cloud purchases?

  • My first answer was that IT should source their products and services with the “provider of best fit.” I got the following reply: “that implies choosing best of breed vs. integrated. Cloud practically makes best of breed a foregone conclusion.” The point I was trying to make, and the answer I provided, was that there are varying levels of cloud providers out there so IT departments still need to choose wisely.
  • Andi Mann (@AndiMann) suggested departments need to honestly evaluate their own ability to deliver. He stated in-house IT is not always best and that organizations need to proactively look for cloud to do better. Again, a point I agreed with.

 

Question 5: Should the IT department make reining in rogue cloud services a top priority?

  • No! Enable and harness their creativity by asking them to use a cloud portal sponsored by corporate IT!
  • IT should treat the business like a customer.
  • The majority of the group agreed that embracing rogue IT was the correct strategy here…not attempting to rein it in.

 

Question 6: How can the CIO promote innovation in the era of lower IT spending?

  • Ah, the CIO’s favorite saying…”Doing more with less”
  • Provide a means for “safe” Rogue IT (more on that in my summary)
  • Another concept that was echoed by some members of the chat was the idea of adopting a fail-fast culture. Cloud can enable faster deployments, which allows you to try more things quickly, and if you do fail, you can move on. This increases the pace of innovation by enabling the business to take on more “risky” projects – the software development projects that are great ideas but may not have a clear ROI.

 

My summary

Especially during the past year, in tweetchats and various other forums, consensus on the use and benefits of cloud computing has been building. The most significant points:

  • Corporate IT should be a provider of whole IT services “products” and not just technology – and cloud computing can enable this
  • Cloud opens up the business to a competitive market for IT services, of which traditional corporate IT is only one option (thus the role of corporate IT evolves from technology center to order-taker to broker of services)
  • Rogue IT is not necessarily a bad thing; some of the best solutions may come out of rogue projects

 

GreenPages has been having internal discussions, and discussions with customers, around the concepts highlighted in this tweetchat for some time now. Because of where the market is heading (as voiced by the thought leaders who took part in this chat), we have developed our Cloud Management as a Service (CMaaS) offering. The product addresses the top issues that are now coming to light – transforming corporate IT into a provider in a competitive market, allowing for a safe place to innovate without being encumbered by policy and process (addressing rogue IT), and, going a step further, enabling consistent management across cloud environments. The premise behind CMaaS is to turn cloud inside out – to manage your internal environment as if it were already deployed in a cloud environment. Take a look at the whitepaper I wrote about the concepts behind cloud management as a service and let me know what you think. I’d be very interested to hear people’s takes on whether or not a product like this can address some of the needs in the marketplace today.

 

If you would like to learn more about CMaaS, fill out this form and someone will be in touch with you shortly.

 

Catching up with Chuck Hollis: A Storage Discussion

Things are moving fast in the IT world. Recently, we caught up with Chuck Hollis (EMC’s Global Marketing CTO and popular industry blogger) to discuss a variety of topics including datacenter federation, Solid State Drives, and misperceptions surrounding cloud storage.

JTC: Let’s start off with Datacenter federation…what is coming down the road for running active/active datacenters with both HA and DR?

Chuck: I suppose the first thing that’s worth pointing out is that we’re starting to see using multiple data centers as an opportunity, as opposed to some sort of problem to overcome. Five years ago, it seems that everyone wanted to collapse into one or two data centers. Now, it’s pretty clear that the pendulum is starting to move in the other direction – using a number of smaller locations that are geographically dispersed.

The motivations are pretty clear as well: separation gives you additional protection, for certain applications users get better experiences when they’re close to their data, and so on. And, of course, there are so many options these days for hosting, managed private cloud services and the like. No need to own all your data centers anymore!

As a result, we want to think of our “pool of resources” as not just the stuff sitting in a single data center, but the stuff in all of our locations. We want to load balance, we want to failover, we want to recover from a disaster and so on – and not require separate technology stacks.

We’re now at a point where the technologies are coming together nicely to do just that. In the EMC world, that would be products like VPLEX and RecoverPoint, tightly integrated with VMware from an operations perspective. I’m impressed that we have a non-trivial number of customers that are routinely doing live migrations at metro distances using VPLEX or testing their failover capabilities (non-disruptively and at a distance) using RecoverPoint.

The costs are coming down and the simplicity and integration are moving up – meaning that these environments are far easier to justify, deploy, and manage than just a few years ago. Before long, I think we’ll see active-active data centers as an expected norm rather than an exception.

JTC: How is SSD being leveraged in total data solutions now, with the rollout of the various Xtrem products?

Chuck: Well, I think most people realize we’re in the midst of a rather substantial storage technology shift. Flash (in all its forms) is now preferred for performance, disks for capacity.

The first wave of flash adoption was combining flash and disk inside the array (using intelligent software), usually dubbed a “hybrid array”. These have proven to be very, very popular: with the right software, a little bit of flash in your array can result in an eye-popping performance boost and be far more cost effective than trying to use only physical disks to do so. In the EMC portfolio, this would be FAST on either a VNX or VMAX. The approach has proven so popular that most modern storage arrays have at least some sort of ability to mix flash and disk.

The second wave is upon us now: putting flash cards directly into the server to deliver even more cost-effective performance. With this approach, storage is accessed at bus speed, not network speed – so once again you get an incredible boost in performance, even as compared to the hybrid arrays. Keep in mind, though: today this server-based flash storage is primarily used as a cache, and not as persistent and resilient storage – there’s still a need for external arrays in most situations. In the EMC portfolio, that would be the XtremSF hardware and XtremSW software – again, very popular with the performance-focused crowd.

The third wave will get underway later this year: all-flash array designs that leave behind the need to support spinning disks. Without dragging you through the details, if you design an array to support flash and only flash, you can do some pretty impactful things in terms of performance, functionality, cost-effectiveness and the like. I think the most exciting example right now is the XtremIO array which we’ve started to deliver to customers. Performance-wise, it spans the gap between hybrid arrays and server flash, delivering predictable performance largely regardless of how you’re accessing the data. You can turn on all the bells and whistles (snaps, etc.) and run them at full-bore. And data deduplication is assumed to be on all the time, making the economics a lot more approachable.
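To see why always-on deduplication changes the economics, consider a toy content-addressed block store – a generic sketch of the concept, not how XtremIO or any particular array implements it:

```python
# Toy content-addressed block store -- a generic illustration of inline
# deduplication, not any specific array's implementation.
import hashlib

store = {}    # fingerprint -> unique physical block
volume = []   # logical volume = ordered list of fingerprints

def write_block(data: bytes) -> None:
    fp = hashlib.sha256(data).hexdigest()
    if fp not in store:       # only new, unique content consumes capacity
        store[fp] = data
    volume.append(fp)

for _ in range(10):           # ten logical copies of the same 4 KB block...
    write_block(b"A" * 4096)

print(f"logical blocks: {len(volume)}, physical blocks: {len(store)}")
# -> logical blocks: 10, physical blocks: 1
```

Ten logical writes, one physical block – which is why dedup-always-on designs can quote much friendlier effective cost per gigabyte.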

The good news: it’s pretty clear that the industry is moving to flash. The challenging part? Working with customers hand-in-hand to figure out how to get there in a logical and justifiable fashion. And that’s where I think strong partners like GreenPages can really help.

JTC: How do those new products tie into FAST on the array side, with software on the hosts, SSD cards for the servers and SSD arrays?

Chuck: Well, at one level, it’s important that the arrays know about the server-side flash, and vice-versa.

Let’s start with something simple like management: you want to get a single picture of how everything is connected – something we’ve put in our management products like Unisphere. Going farther, the server flash should know when to write persistent data to the array and not keep it locally – that’s what XtremSW does among other things. The array, in turn, shouldn’t be trying to cache data that’s already being cached by the server-side flash – that would be wasteful.

Another way of looking at it is that the new “storage stack” extends beyond the array, across the network and into the server itself. The software algorithms have to know this. The configuration and management tools have to know this. As a result, the storage team and the server team have to work together in new ways. Again, working with a partner that understands these issues is very, very helpful.

JTC: What’s the biggest misperception about cloud storage right now?

Chuck: Anytime you use the word “cloud,” you’re opening yourself up for all sorts of misconceptions, and cloud storage is no exception. The only reasonable way to talk about the subject is by looking at different use cases vs. attempting to establish what I believe is a non-existent category.

Here’s an example: we’ve got many customers who’ve decided to use an external service for longer-term data archiving: you know, the stuff you can’t throw away, but nobody is expected to use. They get this data out of their environment by handing it off to a service provider, and then take the bill and pass it on directly to the users who are demanding the service. From my perspective, that’s a win-win for everyone involved.

Can you call that “cloud storage”? Perhaps.

Or, more recently, let’s take Syncplicity, EMC’s product for enterprise sync-and-share. There are two options for where the user data sits: either an external cloud storage service, or an internal one based on Atmos or Isilon. Both are very specific examples of “cloud storage,” but the decision as to whether you do it internally or externally is driven by security policy, costs and a bunch of other factors.

Other examples include global enterprises that need to move content around the globe, or perhaps someone who wants to stash a safety copy of their backups at a remote location. Are these “cloud storage?”

So, to answer your question more directly, I think the biggest misconception is that – without talking about very specific use cases – we sort of devolve into a hand-waving and philosophy exercise. Is cloud a technology and operational model, or is it simply a convenient consumption model?

The technologies and operational models are identical for everyone, whether you do it yourself or purchase it as a service from an external provider.

JTC: Talk about Big Data and how EMC solutions are addressing that market (Isilon, Greenplum, what else?).

Chuck: If you thought that “cloud” caused misperceptions, it’s even worse for “big data.” I try to break it down into the macro and the micro.

At the macro level, information is becoming the new wealth. Instead of it being just an adjunct to the business process, it *is* the business process. The more information that can be harnessed, the better your process can be. That leads us to a discussion around big data analytics, which is shaping up to be the “killer app” for the next decade. Business people are starting to realize that building better predictive models can fundamentally change how they do business, and now the race is on. Talk to anyone in healthcare, financial services, retail, etc. – the IT investment pattern has clearly started to shift as a result.

From an IT perspective, the existing challenges can get much, much more challenging. Any big data app is the new 800-pound gorilla, and you’re going to have a zoo-full of them. It’s not unusual to see a 10x or 100x spike in the demand for storage resources when this happens. All of a sudden, you start looking for new scale-out storage technologies (like Isilon, for example) and better ways to manage things. Whatever you were doing for the last few years won’t work at all going forward.

There’s a new software stack in play: think Hadoop, HDFS, a slew of analytical tools, collaborative environments – and an entirely new class of production-grade predictive analytics applications that get created. That’s why EMC and VMware formed Pivotal from existing assets like Greenplum, GemFire, et al. – there was nothing in the market that addressed this new need, and did it in a cloud-agnostic manner.

Finally, we have to keep in mind that the business wants “big answers”, and not “big data.” There’s a serious organizational journey involved in building these environments, extracting new insights, and operationalizing the results. Most customers need outside help to get there faster, and we see our partner community starting to respond in kind.

If you’d like a historical perspective, think back to where the internet was in 1995. It was new, it was exotic, and we all wondered how things would change as a result. It’s now 2013, and we’re looking at big data as a potentially more impactful example. We all can see the amazing power; how do we put it to work in our respective organizations?

Exciting time indeed ….

Chuck is the Global Marketing CTO at EMC. You can read more from Chuck on his blog and follow him on Twitter at @chuckhollis.

Cloud Security: From Hacking the Mainframe to Protecting Identity

By Andi Mann, Vice President, Strategic Solutions at CA

Cloud computing, mobility, and the Internet of Things are leading us towards a more technology-driven world. In my last blog, I wrote about how the Internet of Things will change our everyday lives, but with these new technologies come new risks to the organization.

To understand how recent trends are shifting security, let’s revisit the golden age of hacking movies from the ‘80s and ‘90s. A recent post by Alexis Madrigal of The Atlantic sums up this era of Hollywood hackers by saying that “the mainframe was unhackable unless [the hackers] were in the room, in which case, it was simple.” That’s not far off from how IT security was structured in those years. Enterprises secured data by keeping everything inside a corporate firewall and only granting access to employees within the perimeter. Typically, the perimeter extended as far as the walls of the building.

When the cloud emerged on the scene, every IT professional said that it was too risky and introduced too many points of vulnerability. They weren’t wrong, but the advantages of the cloud, such as increased productivity, collaboration, and innovation, weren’t about to be ignored by the business. If the IT department just said no to cloud, the business could go elsewhere for their IT services – after all, the cloud doesn’t care who signs the checks. In fact, a recent survey revealed that in 60% of organizations, the business occasionally “circumvents IT and purchases technology on their own to support a project,” a practice commonly referred to as rogue IT, and another recent study found a direct correlation between rogue IT and data loss. This is obviously something that the IT department can’t ignore.

Identity is the New Perimeter

The proliferation of cloud-connected devices and users accessing data from outside the firewall demands a shift in the way we secure data. Security is no longer about locking down the perimeter – it’s about understanding who is accessing the information and what data they’re allowed to access. IT needs to implement an identity-centric approach to securing data, but according to a recent Ponemon study, only 29% of organizations are confident that they can authenticate users in the cloud. At first glance, that appears to be a shockingly low number, but if you think about it, how do you verify identity? Usernames and passwords, while still the norm, are not sufficient to prove identity. And sure, you can identify a device connected to the network, but can you verify the identity of the person using it?

In a recent @CloudCommons tweetchat on cloud security, the issue of proving the identity of cloud users kept cropping up.

Today’s hackers don’t need to break into your data center to steal your data. They just need an access point and your username and password. That’s why identity and access management is such a critical component of IT security. New technologies are emerging to meet the security challenge, such as strong authentication software that analyzes risk and looks for irregularities when a user tries to access data. If a user tries to access data from a new device, the strong authentication software will recognize that it’s a new device and extra authentication flows kick in that require the user to further verify their identity.
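Conceptually, the flow looks something like the sketch below (the names and logic are illustrative, not any vendor’s actual product or API):

```python
# Sketch of risk-based ("step-up") authentication -- names and logic
# are illustrative, not a real vendor API.

known_devices = {"alice": {"laptop-1f3a"}}   # device fingerprints seen before

def assess_login(user: str, device_id: str, password_ok: bool) -> str:
    """Decide the outcome of a login attempt."""
    if not password_ok:
        return "deny"
    if device_id not in known_devices.get(user, set()):
        return "step-up"   # unfamiliar device: demand a second factor
    return "allow"

def complete_step_up(user: str, device_id: str, otp_ok: bool) -> str:
    """After the extra factor passes, remember the device for next time."""
    if otp_ok:
        known_devices.setdefault(user, set()).add(device_id)
        return "allow"
    return "deny"

print(assess_login("alice", "laptop-1f3a", True))      # allow (known device)
print(assess_login("alice", "tablet-9c2e", True))      # step-up (new device)
print(complete_step_up("alice", "tablet-9c2e", True))  # allow (now trusted)
```

The key point is that a correct password alone no longer ends the conversation – the context of the request (here, just the device; in real products, also location, time, and behavior) raises or lowers the bar.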

What IT should be doing now to secure identity

To take advantage of cloud computing, mobility, and the Internet of Things in a secure way, the IT department needs to implement these types of new and innovative technologies that focus on verifying identity. In addition to implementing new technologies, the IT department needs to enact a broader cloud and mobile device strategy that puts the right policies and procedures in place and focuses on educating employees to minimize risk. Those in charge of IT security must also establish a trust framework that enforces how you identify, secure and authenticate new employees and devices.

Cloud computing, mobile devices, and the Internet of Things can’t be ignored by IT and the sooner a trust framework and a cloud security strategy is established, the sooner your organization can take advantage of new and innovative technologies, allowing the business to reap the benefits of cloud, mobile, and the Internet of Things, while keeping the data safe and sound. And to me, that sounds like a blockbuster for IT.

 

Andi Mann is vice president of Strategic Solutions at CA Technologies. With over 25 years’ experience across four continents, Andi has deep expertise in enterprise software on cloud, mainframe, midrange, server, and desktop systems. Andi has worked within IT for global corporations, with software vendors, and as a leading industry analyst. He has been published in the New York Times, USA Today, Forbes, CIO, the Wall Street Journal, and more, and has presented worldwide on virtualization, cloud, automation, and IT management. Andi is a co-author of the popular handbook ‘Visible Ops – Private Cloud’ and the IT leader’s guide to business innovation, ‘The Innovative CIO’. He blogs at https://pleasediscuss.com/andimann and tweets as @AndiMann.

 

 

 

Cloud Corner Video- Keys to Hybrid Cloud Management

http://www.youtube.com/watch?v=QIEGDZ30H2Q

 

GreenPages CEO Ron Dupler and LogicsOne Executive Vice President and Managing Director Kevin Hall sit down to talk about the current state of the cloud market, the challenges IT decision makers are facing today regarding hybrid cloud environments, as well as a revolutionary new Cloud Management as a Service offering.

If you’re looking for more information on hybrid cloud management, download this free whitepaper.

 

Or, if you would like someone to contact you about GreenPages Cloud Management as a Service offering, fill out this form.

Cloudviews Recap: The Enterprise Cloud

By John Dixon, Consulting Architect, LogicsOne

A few weeks ago, I took part in another engaging tweetchat on Cloud Computing. The topic: the enterprise cloud. Transcript here: http://storify.com/CloudCommons/cloudviews-tweetchat-enterprise-cloud

I’ll be recapping the responses to each question posed in the tweetchat and giving an expanded response from the GreenPages perspective. As usual with tweetchats hosted by CloudCommons, the questions are presented a few days in advance of the event. This time around, there were six:

  1. How should an enterprise get started with cloud computing?
  2. Is security still the “just because” reason for not migrating to the cloud?
  3. Who is responsible for setting the cloud strategy in the enterprise?
  4. What’s the best way for enterprises to measure cloud ROI?
  5. What are the top 3 factors enterprises should consider before moving to a cloud model?
  6. How should an enterprise measure the success of its cloud implementation?

Before we jump into each question, let me say that the Cloud Commons tweetchats are getting better and better. I try to participate in each one, and I find the different perspectives very interesting. The dynamic on Twitter makes these conversations pretty intense, and we always cover a lot of ground in just 60 minutes. Thanks to all of the regulars who participate. And if you haven’t been able to participate yet, I encourage you to have a look.

How should an enterprise get started with cloud computing?

I’m sure you’d agree that there are lots of different perspectives on cloud computing, especially now that adoption is gaining momentum. Consumers are regularly using cloud services. Organizations large and small are using cloud computing in different ways. Out of the gate, these different perspectives came to the surface. Here’s a recap of the first responses (with my take in parentheses). I don’t disagree with any of them; I think they’re all valid:

  1. “Ban new development that doesn’t use cloud … as a means to help development teams begin to learn the new paradigm” (maybe a little harsh, but I can see some policy and governance coming through in that point – after all, many corporate IT shops have a virtualization policy that kind of works this way, don’t they?)
  2. “Inventory applications, do some analysis, and find cloud candidates” (this is definitely one way to go, and maybe the most risk-averse; this perspective holds “the cloud” as a destination)
  3. “Use SaaS” (certainly the easiest and quickest way to start using cloud, if that’s a mandate from management)
  4. “Enterprises are already using cloud, next question” (I definitely agree with this one; enterprises are already using cloud for some things, no doubt about it)
  5. “Look at rogue IT, then enhance and wrap some governance around the best ideas” (again, I definitely agree with this one as a valid method; in fact, I did a recent blog post on the same concept)
  6. “Know what you need from a cloud provider and be prepared” (the Boy Scout model – be prepared! I definitely agree with this one as well; in fact, look here.)
  7. 7.       “Partner with the business to determine how cloud fits in the COMPANY strategy, not the IT strategy” (this was from me; and maybe it is obvious by now that cloud has huge business benefits, not just benefits for corporate IT)

 

There was lots of talk about the idea of identifying the “rogue IT” groups and embracing the unique things they have done in the cloud. All in all, these are ALL great ways to get started with cloud. In hindsight, I would add in another method of my own:

  1. Manage your internal infrastructure as if it were already deployed to the cloud. Some tools emerging now have this capability – to manage infrastructure through one interface whether it is deployed in your datacenter, with Rackspace, or even Amazon. This way, if and when you do decide to move some IT services to an external provider, the same tools and processes can be applied.
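In code terms, the idea is a single provider-agnostic interface with pluggable back ends. Here’s a minimal sketch (the class and method names are invented for illustration – real tools wrap vSphere, Rackspace, or Amazon APIs behind something similar):

```python
# Sketch of managing infrastructure through one interface regardless of
# where it runs -- class and method names invented for illustration.
from abc import ABC, abstractmethod

class InfrastructureProvider(ABC):
    """One management interface, many back ends."""
    @abstractmethod
    def provision(self, name: str, cpus: int, ram_gb: int) -> str: ...

class InternalDatacenter(InfrastructureProvider):
    def provision(self, name, cpus, ram_gb):
        return f"vm-internal-{name}"   # in reality: call vSphere, etc.

class PublicCloud(InfrastructureProvider):
    def provision(self, name, cpus, ram_gb):
        return f"vm-cloud-{name}"      # in reality: call Amazon/Rackspace APIs

def deploy(provider: InfrastructureProvider, name: str) -> str:
    # The process and tooling stay identical wherever the workload lands.
    return provider.provision(name, cpus=2, ram_gb=8)

print(deploy(InternalDatacenter(), "app01"))   # vm-internal-app01
print(deploy(PublicCloud(), "app02"))          # vm-cloud-app02
```

Because the deploy process never changes, moving a service to an external provider later becomes a back-end swap rather than a retooling exercise.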

 

Your organization may have some additional methods to get started with cloud (and I’d love to hear about them!). So, why not use all of these methods in one concerted effort to evaluate cloud computing technology?

 

Is security still the “just because” reason for not migrating to the cloud?

The short recap on this topic: yes, organizations do use security as a convenient way to avoid acting on something. The security justification is more prevalent in large organizations, for obvious reasons. I’d like to point out one of the first responses though:

“…or is security becoming a reason to move to the cloud? Are service providers able to hire better security experts?”

I think this is a fantastic, forward-looking response. History shows that specialization in markets does occur. Call it industrial specialization: eventually…

  • “The price of infrastructure services will be reduced as the market becomes more competitive. Providers will compete in the market by narrowing their focus on providing infrastructure in a secure and reliable way – they specialize or go out of business.” To compete, service providers will find/attract the best people who can help them design, build, and test infrastructure effectively
  • Thus, the best people in IT security (a.k.a., the people most interested in security) will be attracted to the best jobs with service providers

Who is responsible for setting the cloud strategy in the enterprise?

The common answer was C-level – either the CIO or even the CEO. Cloud computing should enable the strategy of the overall business, not only IT. I think that IT should own the cloud strategy, but that more business-oriented thinkers should be in IT!

 

What’s the best way for enterprises to measure cloud ROI?

Lots of perspectives popped up on this topic, and I don’t think the group stood behind a single answer. Here are some of the interesting responses for measuring the ROI of cloud:

  • IT staff reduction
  • Business revenue divided by IT operations expense
  • Improving time to market for new applications

Measuring the value of IT services is, excuse the pun, tricky business. I think cloud adoption will undoubtedly accelerate once there is a set of meaningful metrics that is applicable across industries. Measuring ROI of a virtualization effort was fairly easy math – reduction in servers, networking, datacenter floor space, etc. Measuring ROI of cloud is much more difficult, but the prize is up for grabs!
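To see the contrast, here’s the kind of easy virtualization-era math I mean – a back-of-the-envelope sketch with invented figures; cloud ROI has no equivalently crisp formula yet:

```python
# Virtualization-era ROI math -- all figures invented for illustration.
servers_before, servers_after = 100, 12      # physical hosts consolidated
cost_per_server_yr = 6_000                   # power, space, support ($/yr)
investment = 250_000                         # licenses, shared storage, etc.

annual_savings = (servers_before - servers_after) * cost_per_server_yr
roi_3yr = (annual_savings * 3 - investment) / investment
print(f"3-year savings ${annual_savings * 3:,}; ROI {roi_3yr:.0%}")
# -> 3-year savings $1,584,000; ROI 534%
```

Cloud’s benefits – agility, time to market, shifting capex to opex – resist being reduced to one subtraction like this, which is exactly why the metrics debate continues.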

 

What are the top 3 factors enterprises should consider before moving to a cloud model?

This goes back to the Boy Scout model of proper preparation, which I wrote about a few months ago. I saw a few responses that were especially interesting – and as yet unsolved:

  • Repatriation, or portability of applications
  • Organizational change (shouldn’t cloud be transparent?)
  • Investments in legacy technology, goes to timing and WHEN to consider cloud
  • Security, location of data, etc.

 

At GreenPages, we think of cloud computing more as a management paradigm, and as a New Supply Chain for IT. Considering that perspective, the points above are less of an issue. GreenPages’ Cloud Management as a Service (CMaaS) offering was designed specifically for this – to treat cloud computing as the New Supply Chain for IT. In a world of consumers (enterprises) and providers (the likes of Amazon, Rackspace, and Terremark), where competition drives prices down, cloud computing, like other supply chains, can be thought of as the way to take advantage of market forces to benefit the business.

Thanks to Cloud Commons for another great conversation…looking forward to the next one!

Colocation: 55+ living for your IT equipment

I recently sat on a planning call with an extremely smart and agreeable client. We had discussed a modest “data center” worth of equipment to host the environment he’s considering putting into production. I asked the simple enough question of “where are you going to deploy this gear?” I have to admit not being very surprised when he responded: “Well, I’ve cleaned out a corner of my office.” Having spent some early days of my IT career working in a server closet, I knew that if the hum of the equipment fans didn’t get to him quickly, the heat output would for sure. This is not an uncommon conversation. Clearly the capital expense of building out a “data center” onsite was not an appealing topic. So, if building isn’t an option, why not rent?

In a similar vein, not too far back I watched several “senior” members of my family move into 55+ communities after years of resisting. Basically, they did a “capacity planner” and realized the big house was no longer needed. They figured somebody else could worry about the landscaping, snow plowing and leaky roofs. The same driving forces should have many IT pros considering a move into a colocation facility.

The opportunities to move into a hosted data center (colo facility) are plentiful today. You simply don’t have as much gear any longer (assuming you’re mostly virtualized), and your desire to “do it all” yourself has waned (let someone else worry about keeping the lights on and the network connected). The added bonuses of redundant network paths, onsite security, and almost infinite room for expansion are driving many “rental” conversations today. Colos are purpose-built facilities that are ideal for core data center gear such as servers, storage (SANs), routers, and core switches, to name a few. Almost all of them have dual power feeds, backup battery systems, and generators. HVAC (heating, ventilation, and air-conditioning) units maintain appropriate environmental conditions for the operation of this critical equipment.

Many businesses don’t fully realize just how much power is required to operate a data center. The energy bills for the cooling component alone can leave many IT managers, well, frosted. Even so, the need to see those healthy, green, blinking status lights is like a digital comfort blanket. Speaking with many IT execs, we hear over and over, “This was the best move we could have made.” From our own experience, we’ve seen our internal IT team shift focus to strategic initiatives and end-user support.

While it is certainly not a one-size-fits-all endeavor, there is something for most organizations when it comes to colo. Smaller organizations with one rack of equipment have seen tremendous advantages, as have clients approaching “enterprise” size with dozens of cabinets of gear. Redundancy, security, cost control, predictable budgets, and 24x7x365 support are all equally attractive reasons to move into a “colo.” Call it a “colominium” if you will. Colo could be the right step toward a more efficient and effective IT existence.