Category archive: Cloud computing

How IT Operations is Like Auto Racing

By John Dixon, Consulting Architect, LogicsOne

 

If you’ve ever tried your hand at auto racing like I did recently at Road Atlanta, you’ll know that putting up a great lap time is all about technique. If you’ve ever been to a racing school, you’ll also remember that being proactive and planning your corners is absolutely critical to driving safely. Let’s compare IT operations to auto racing. Everyone knows how to, essentially, drive a car, just as every company, essentially, knows how to run IT. What separates a good driver from a great driver? Technique, preparation, and knowing the capabilities of the driver and the equipment.

 

The driver = your capabilities

The car = your technology

The track = your operations as the business changes

 

Preparation

Let’s spend a little bit of time on “preparation.” As we all know, preparation time is often a luxury. From what I have seen consulting over the past few years, preparation is simply not instilled in the culture of IT. But we’d all agree that more preparation leads to better outcomes (for almost everything, really). So, how do we get more preparation time? This is where the outsourcing trend gained momentum – outsource the small stuff to get time back for strategic projects. Well, this didn’t always work out very well, as typical outsourcing arrangements moved large chunks of IT to an outside provider. Why didn’t we move smaller chunks first? That’s what we do in auto racing – the reconnaissance lap! Now we have the technology and arrangements to do a reconnaissance lap of sorts. For example, our Cloud Management as a Service (CMaaS) offering has this philosophy built in – we can manage certain parts of the infrastructure that you select and leave others alone. Maybe you’d like to have your Exchange environment fully managed but not your SAP environment. We’ve built CMaaS with the flexible technology and arrangements to do just that.
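To make the idea of managing only selected parts of an environment concrete, here is a minimal, purely hypothetical sketch in Python (this is not actual CMaaS syntax; the environment names, fields, and functions are invented for illustration) of how a selective management scope might be described:

    # Hypothetical sketch of a selective management scope - not CMaaS syntax.
    # Each environment is either handed to the outside provider or kept in-house.
    management_scope = {
        "exchange": {"managed_by": "provider", "services": ["monitoring", "patching", "backup"]},
        "sap":      {"managed_by": "internal", "services": []},
        "vmware":   {"managed_by": "provider", "services": ["monitoring"]},
    }

    def outsourced_environments(scope):
        """Return the environments handed to the outside provider."""
        return [name for name, cfg in scope.items() if cfg["managed_by"] == "provider"]

    print("Outsourced:", outsourced_environments(management_scope))
    print("Kept in-house:", [n for n, c in management_scope.items() if c["managed_by"] == "internal"])

The point of the sketch is simply that the scope is granular: the “reconnaissance lap” can be a single environment, not the whole IT shop.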

Technique

 

Auto racing: Safety first! Check your equipment before heading out, and let the car warm up before increasing speed.
IT operations: Make sure your IT shop can perform as a partner with the business.

Auto racing: Know where to go slow! You can’t take every turn at full throttle. Even if you can, it’s worth it to “throw away” some corners in preparation for straight sections.
IT operations: Know where to allocate investment in IT – it’s all about producing results for the business.

Auto racing: First lap: reconnaissance (stay on the track).
IT operations: Avoid trying to tackle very complex problems with brand new technology (e.g., did you virtualize Exchange on your very first P2V?).

Auto racing: Last lap: cool down (stay on the track).
IT operations: An easy one – manage the lifecycle of your applications and middleware to avoid being caught by a surprise required upgrade.

Auto racing: Know where to go fast! You can be at full throttle without any brake or steering inputs (as in straight sections), so dig in!
IT operations: Recognize established techniques and technologies and use them to maximum advantage.

Auto racing: Smooth = fast. Never stab the throttle or the brakes! Sliding all over the track with abrupt steering and throttle inputs is not the fastest way (but it IS fun and looks cool).
IT operations: Build capabilities gradually and incrementally instead of looking to install a single technology to solve all problems today.

Auto racing: Know the capabilities of your car – brakes, tires, clutch, handling. Exceed the capabilities of your equipment and see what happens.
IT operations: Take the time to know your people, processes, and technology – which things work well and which could be improved? This depends greatly on your business, but there are some best practices for running a modern IT shop.

Auto racing: Improve your time with each lap.
IT operations: This is all about continuous improvement – many maneuvers in IT should be repeatable (like handling a trouble ticket), so do them better every time.

Auto racing: Take a deep breath, check your gauges, check your harnesses, check your helmet.
IT operations: Monitoring is important, but it is not an endgame for most of us. Be aware of things that could go wrong, how you could mitigate risk, which workarounds you could implement, etc.

Auto racing: Carry momentum around the track. A high-horsepower car with a novice driver will always lose to a great driver in a sedan.
IT operations: Technology doesn’t solve everything. You need proper technique and preparation.

Auto racing: Learn from your mistakes – they aren’t the end of the world.
IT operations: With well-instrumented monitoring, performance blips or mistakes are opportunities to improve.

 

Capabilities

A word on capabilities. Capabilities are not something you simply install with software or infrastructure, just as an aspiring racecar driver can’t obtain the capability required to win a professional F1 race from a weekend class. You need assets (e.g., infrastructure, applications, data) and resources (e.g., dollars) to build capabilities. What exactly is a capability? In racing, it’s the ability to get around a track – any track – quickly and safely. In IT, a basic example would be the ability to handle a helpdesk call and resolve the issue to completion. An advanced IT capability in a retail setting might be producing a report on how frequently shoppers from a particular zip code purchase a certain product. Or, perhaps, it’s an IT governance capability to understand the costs of providing a particular IT service. One thing I’ve seen in consulting with various shops is that organizations could do a better job of understanding their capabilities.

Now picture yourself in the driver’s seat (of your IT shop). Know your capabilities, but really think about your technique and continuously improving your “lap times.”

  1. Where are your straight sections – where you can just “floor it” and hang on? These might be well-established processes, projects, or tasks that pay obvious benefits. Can you take some time to create more straight sections?
  2. How much time do you have for preparation? How much time do you spend “studying the track” and “knowing your equipment?” Do you know your capabilities? Can you create time that you can use for preparation?
  3. Where are your slow sections – the processes that require careful attention to detail? For many of us, this is probably budget planning time. Hiring is probably another slow section.
  4. Do you understand your capabilities? Defining the IT services that you provide your customer is a great place to start. If you haven’t done this yet, you should — especially if you’re looking at cloud computing. GreenPages and our partners have some well-established techniques to help you do this successfully.

 

As always, feel free to reach out if you’d like to have a conversation just to toss around some ideas on this topic.

 

Now for the fun part, a video that a classmate of mine recorded of a hot lap around Road Atlanta. The video begins in turn 11 (under the bridge in this video).

  1. Turn 11 is important because it is a setup to the front straight section. BUT, it is pretty dangerous too as it leads downhill to turn 12 (the entrance to the straight). Position the car under the RED box on the bridge and give a small amount of right steering input. Build speed down the hill.
  2. Clip the apex of turn 11 and pull the car into turn 12. Be gentle with turn 12 – upset the car over the gators and you could easily lose control.
  3. Under the second bridge and onto the front straight section. Grab 5th gear if you can. Up to ~110mph. Position the car out to the extreme left side of the track for turn 1.
  4. Show no mercy to the brakes for turn 1! Engage ABS, downshift, then trail brake into the right-hander, pulling the car into the apex of the turn in 4th gear, carrying 70-80 mph.
  5. Uphill for turn 2. Aim the nose of the car at the telephone pole in the distance, as turn 2 is blind. Easy on the throttle!
  6. Collect the apex at turn 2 and head downhill for turn 3. Use a dab of brakes to adjust speed as you turn slightly right for turn 3.
  7. Turn slightly left for turn 4 and hug the inside.
  8. Track out and downhill for “the esses” – roll on the throttle easily, you’ve got to keep momentum for the uphill section at turn 5.
  9. The esses are a fast part of the track, but be careful not to upset the car.
  10. Brake slightly uphill for turn 5. It is the entrance to a short straight section where you can gain some speed.
  11. Stay in 4th gear for turn 6 and bring the car to the inside of the turn.
  12. Track way out to the left for the crucial turn 7 – a slow part of the track. Brake hard and downshift to third gear. Get this one right as it is the entrance to the back straight section.
  13. Build speed on the straight – now is the time to floor it!
  14. Grab 5th gear midway down the straight for 110+ mph. Take a deep breath! Check your gauges and harnesses.
  15. No mercy for the brakes at turn 10a! Downshift to 4th gear, then to 3rd gear, and trail brake as you turn left.
  16. Slight right turn for turn 10b and head back uphill to the bridge – position the car under the RED box and take another lap!

 

Moving Email to the Cloud, Part 1

By Chris Chesley, Solutions Architect

Many of our clients are choosing not to manage Exchange day to day or upgrade it every 3-5 years.  They do this by having Microsoft host their mail in Office 365.  Is this right for your business?  How do you tie this into your existing infrastructure and still have access to email regardless of the status of your onsite services?

The different plans for Microsoft Office 365 can be confusing. Regardless of what plan you get, the Exchange Online choices boil down to two options.  Exchange Plan 1 offers you 50GB mailboxes per user, ActiveSync, Outlook Web Access, Calendar, and all of the other features you are currently getting with an on-premises Exchange implementation.  You also get antivirus and antispam protection.  All of this for $4 per user per month.

Exchange Plan 2 offers the exact same features as Plan 1, with the additions of unlimited archiving, legal hold capabilities, compliance support tools, and advanced voice support.  This plan is $8 per user per month.

All of the other Office 365 plans that include Exchange are built on either Plan 1 or Plan 2.  For example, the E3 plan (Enterprise Plan 3) includes Exchange Plan 2, SharePoint Plan 2, Lync Plan 2, and Office Professional Plus for 5 devices per user.  You can take any plan, break it down into its component parts, and fully understand what you’re getting.
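To see how the per-user pricing above adds up, here is a minimal sketch (the plan prices come from the figures above; the headcounts are invented for the example) comparing the monthly and annual cost of a mix of Plan 1 and Plan 2 users:

    # Illustrative cost comparison for the Exchange Online plans described above.
    # Prices: $4/user/month for Plan 1, $8/user/month for Plan 2.
    # The headcounts are made-up numbers for the example.
    PLAN_PRICES = {"Exchange Plan 1": 4.00, "Exchange Plan 2": 8.00}

    users = {"Exchange Plan 1": 150,   # most staff only need the basic plan
             "Exchange Plan 2": 25}    # e.g., staff who need legal hold and archiving

    monthly_cost = sum(PLAN_PRICES[plan] * count for plan, count in users.items())
    print(f"Monthly: ${monthly_cost:,.2f}")       # Monthly: $800.00
    print(f"Annual:  ${monthly_cost * 12:,.2f}")  # Annual:  $9,600.00

The same arithmetic works for the bundled plans once you know which Exchange, SharePoint, and Lync plan each one includes.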

If you are looking to move email to the cloud and are currently using Exchange, who better to host your Exchange than Microsoft?  Office 365 is an even better choice if you are using, or plan on using, SharePoint or Lync.  All of these technologies are available in the current plans or individually through Office 365.

I’ve helped many clients make this transition so if you have any questions or if there’s any confusion around the Office 365 plans feel free to reach out.

My next blog will be on the 3 different authentication methods in Office 365.

Journey to the Cloud: An Insider’s Perspective

By Ben Stephenson, Journey to the Cloud

Our Journey to the Cloud blog has been live for a little over two years now, and I’ve had the privilege of running and managing it from the start. I wanted to touch base about the site, share my unique perspective from managing it, and hear from our readers about what we can do to make it even better.

Our goal from the very beginning was to establish ourselves as thought leaders in the industry by providing high quality content that was relevant and beneficial to IT decision makers. We wanted to make sure we let our authors keep their opinions and voice, while at the same time taking an unbiased, agnostic approach. The last thing we wanted to do was start blathering on about what a great company GreenPages is or bragging about the most recent award we won (it was being named to the Talkin’ Cloud 100 if you were wondering…).  Over the course of those two years, we’ve posted over 200 blogs and seen the number of page views and shares across various social media sites increase drastically. We’ve brought in some big-time guest bloggers such as ConnectEDU CTO Rick Blaisdell, CA’s Andi Mann, the Director, Advanced Analytics and Sr. Research Scientist at Gravitant, and more. We’ve incorporated a lot of video as well – in fact, for whatever strange reason, someone thought it was a good idea to let me host our Cloud Corner Series. We’ve covered topics including cloud, virtualization, end user computing, BYOD, network infrastructure, storage, disaster recovery, shadow IT, project management, and much more.

Have there been challenges along the way? Absolutely. Have I had to go after people and chase them down, scratching and clawing until I get a blog to post? Yes. Have tears been shed? Has blood been shed? We’ll keep that to ourselves as it’s generally frowned upon by HR. And, yes, I have had to give William Wallace-like speeches to attempt to rally the troops. While there have been some challenges, all in all there’s been a great amount of enthusiasm and support from our writers to produce a high quality publication. For me, being in the industry for two years now with no previous technological background, the amount I’ve learned is ridiculous. Before starting at GreenPages, I would have rather listened to a Ben Stein Lecture or Bill Lumbergh explaining TPS Reports than read an article on software defined networking and the impact it will have on businesses in the next 5-10 years. I can see why our customers get excited to work with our consultants because they truly love and believe in the technology they talk about. I completely buy into their enthusiasm and passion and it makes me genuinely interested in the topics we cover. I’m in my mid-twenties and have, sadly, found myself out drinking at a bar with my friends having a great time before somehow winding up in a heated debate over the pros and cons of moving to a hybrid cloud architecture.

 

So, in case you haven’t read all 200 of our posts (for whatever deranged reason), I’m going to list out my top ten from the past two years (in no particular order). Take a look and let me know what you think:

 

 

To close this out…I want to hear from you. What can we do to make Journey to the Cloud better? Are there any specific topics you’d like to hear more about? Any specific authors you’d like to hear more from? How about any features or functionality of the site you’d like added, changed or improved? What have you seen on other sites that you like that we don’t have? Leave a comment here or tweet us at @GreenPagesIT or @benstephenson1

Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday’s General Session. VMware’s COO Carl Eschenbach took the stage and informed the audience that there are 22,500 people in attendance, which is a new record for VMware. This makes it the single largest IT infrastructure event of the year. 33 of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday’s announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, Cloud Foundry, and vCHS. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full presentation here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started by showing the service catalogue & the options to deploy an app to a private or public cloud, along with the costs of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
    • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
    • Showed the auto-scale option
    • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
    • The demo showed how the product provided self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a conversation around the challenges of networking being the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo you may ask? L2 switch, L3 router, firewall, & load balancer. All of this was automated and deployed with no human intervention
  • Carl then went through the difference in physical provisioning vs. logical provisioning with NSX & abstracting the network off the physical devices.
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo to show you how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules which tend to be rigid and complex –> Answer is policy based automation via vCAC
  • Showed example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically due to automated proactive response to detected issues.  (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between private on prem vSphere environment up to vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)

 

Let me know if there are questions on any of these demos.

Rapid Fire Summary of Opening Keynote at VMworld 2013

By Chris Ward, CTO, LogicsOne

For those of you who aren’t out in San Francisco at the 10th annual VMworld event, here is a quick overview of what was covered in the opening keynote delivered by CEO Pat Gelsinger:

  • Social, Mobile, Cloud & Big Data are the 4 largest forces shaping IT today
  • Transitioned from Mainframe –>Client Server –>Mobile Cloud
  • Pat set the stage that the theme of this year’s event is networking – basically a lead-in to a ton of Nicira/NSX information. I think VMware sees the core of the software defined datacenter as networking-based, and they are in a very fast race to beat out the competition in that space
  • Pat also mentioned that his passion is to get every x86 application/workload 100% virtualized. He drew parallels to Bill Gates saying his dream was a PC on every desk in every home that runs Microsoft software.
  • Next came announcements around vSphere 5.5 & vCloud Suite 5.5…here are some of the highlights:
    • 2x CPU and Memory limits and 32x storage capacity per volume to support mission critical and big applications
    • Application Aware high availability
    • Big Data Extensions – multi-tenant Hadoop capability via Serengeti
    • vSAN officially announced as public beta and will be GA by 1st half of 2014
    • vVOL is now in tech preview
    • vSphere Flash Read Cache included in vSphere 5.5

Next, we heard from Martin Casado. Martin is the CTO – Networking at VMware and came over from the Nicira acquisition and was speaking about VMware NSX. NSX is a combination of vCloud Network and Security (vCNS) and Nicira. Essentially, NSX is a network hypervisor that abstracts the underlying networking hardware just like ESX abstracts underlying server hardware.

Other topics to note:

  • IDC names VMware #1 in Cloud Management
  • VMware hypervisor fully supported as part of OpenStack
  • Growing focus on hybrid cloud. VMware will have 4 datacenters soon (Las Vegas, Santa Clara, Sterling, & Dallas). Also announcing partnerships with Savvis in NYC & Chicago to provide vCHS services out of Savvis datacenters.
  • End User Computing
    • Desktop as a Service on vCHS is being announced (I have an EUC Summit Dinner later on tonight so I will be able to go into more detail after that).

So, all-in-all a good start to the event. Network virtualization/NSX is clearly the focus of this conference and vCHS is a not too distant 2nd. Something that was omitted from the keynote was the rewritten SSO engine for vCenter 5.5. The piece was weak for 5.1 and has been vastly improved with 5.5…this could be addressed tomorrow as most of the tech staff is in Tuesday’s general session.

If you’re at the event…I’ll actually be speaking on a panel tomorrow at 2:30 about balancing agility with service standardization. I’ll be joining Khalid Hakim and Kurt Milne of VMware, along with Dave Bartoletti of Forrester Research and Ian Clayton of Service Management 101. I will also be co-presenting on Wednesday with my colleague John Dixon at 2:30-3:30 in the Moscone West Room 2011 about deploying a private cloud service catalogue. Hopefully you can swing by.

More to come soon!

 

A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll give a summary of my cloud adoption advice, and cover some of the feedback that I heard from customers during my discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS).

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […]  from $10 to $2 per hour” – ZDNet,  July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but they are now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest cost provider? Well, the answer to that question is quite the debate in cloud computing (we’ll put that aside for now). Suffice to say that we think that internal corporate IT departments are now competing with outside providers to provide IaaS and other services to the business and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, storage becomes increasingly less costly, etc.).

Now here’s the punch line and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by becoming closer and closer to the requirements of the business – and may even be able to anticipate requirements of the business or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services to outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time – manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile – it just isn’t worth it for GM or Ford to produce those things. They source windshields, brake calipers, and other components from companies who specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling, and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it – eventually, cloud may make sense for your shop even if there is no fit for it today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

 

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint the picture of how well your department can make informed sourcing decisions. Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.
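As a minimal illustration of the scorecard such an assessment might produce (the process areas come from the list above; the 1-5 scale and the sample scores are invented for the example), here is a short sketch:

    # Illustrative maturity scorecard for the process areas listed above.
    # The 1-5 scale and the sample scores are invented for this example; a real
    # assessment would rate each area on repeatability, documentation,
    # measurement, and continuous improvement.
    scores = {
        "Configuration Management": 3,
        "Financial Management": 2,
        "Incident and Problem Management": 4,
        "Change Management": 3,
        "Service Level, Availability, and Service Catalog Management": 2,
        "Lifecycle Management": 2,
        "Capacity Management": 3,
        "Business Level Management": 1,
    }

    average = sum(scores.values()) / len(scores)
    weakest = sorted(scores, key=scores.get)[:3]  # candidates for remediation plans

    print(f"Overall maturity: {average:.1f} / 5")
    print("Start remediation with:", ", ".join(weakest))

The weakest areas become the remediation plans mentioned above – one cornerstone of a cloud strategy – whether the eventual sourcing decision points inside or outside the firewall.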

Case Study: Service Catalog and Private Cloud

By implementing a Service Catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!

 

 

Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.

 

Cloud Corner Series – Unified Communications in the New IT Paradigm

http://www.youtube.com/watch?v=XHp6Q5RMMR8

 

In this segment of Cloud Corner, former CEO of Qoncert and new GreenPages-LogicsOne employee Lou Rossi answers questions about how unified communications fits into the new IT paradigm moving forward.

We’ll be hosting a free webinar on 8/22: How to Securely Enable BYOD with VMware’s Next Gen EUC Platform. Register Now!

Predict Cloud Revenue With This One Simple Trick

We’ve all seen the ads touting “one simple/crazy trick” to lower your insurance (or weight, or electric bill, or whatever). Now GigaOm has its own variant for cloud vendors to predict annual revenue:

Just take your July revenue and multiply it by 12.  Or if you want to get even trickier, take your daily revenue on July 15 and multiply it by 365.

They’re both embarrassingly simple, but surprisingly accurate. For a subscription business with a consistent trajectory, it’ll get you extremely close to the ultimate answer – usually within a couple percentage points.
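For what it’s worth, the arithmetic really is that simple; a tiny sketch with invented revenue figures:

    # The "one simple trick" as arithmetic, with made-up revenue figures.
    july_revenue = 850_000        # hypothetical revenue for the month of July (USD)
    july_15_revenue = 27_500      # hypothetical revenue booked on July 15 (USD)

    annual_from_month = july_revenue * 12      # 10,200,000
    annual_from_day = july_15_revenue * 365    # 10,037,500

    print(f"Estimate from July revenue:    ${annual_from_month:,}")
    print(f"Estimate from July 15 revenue: ${annual_from_day:,}")

The two estimates land within a couple of percent of each other, which is exactly the property the GigaOm piece is pointing out for steady subscription businesses.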

There is more to it of course.

Seeking Better IT Mileage? Take a Hybrid Out for a Spin

Guest Post by Adam Weissmuller, Director of Cloud Solutions at Internap

As IT pros aim to make the most efficient use of their budgets, there is a rapidly increasing range of infrastructure options at their disposal. Gartner’s prediction that public cloud spending in North America will increase from $2 billion in 2011 to $14 billion in 2016, and 451 Research’s expectation that colocation demand will outpace supply in most of the top 10 North American markets through 2014 are just two examples of the growing need for all types of outsourced IT infrastructure.

While public cloud services in particular have exploded in popularity, especially for organizations without the resources to operate their own data centers, a “one size fits all” myth has also emerged, suggesting that this is the most efficient and cost-effective option for all scenarios.

In reality, the public cloud may be the sexy new sports car – coveted for its horsepower and handling – but sometimes a hybrid model can be the more sensible approach, burning less gas and still getting you where you need to go.  It all depends on what kind of trip you’re taking. Or, put in data center terminology, the most effective approach depends on the type of application or workload and is often a combination of infrastructure services – ranging from public, private and “bare metal” cloud to colocation and managed hosting, as well as in-house IT resources.

The myth of cloud fuel economy
Looking deeper into the myth of “cloud costs”: as part of its recent “Data Center Services Landscape” report, Internap surveyed 100 IT decision makers to gain a cross-sectional view of their current and future use of IT infrastructure. Almost 65 percent of respondents said they are considering public cloud services, and 41 percent reported they are doing so in order to reduce costs.

But when you compare the “all-in” costs of operating thousands of servers over several years in a well-run corporate data center or colocating in a multi-tenant data center against the cost of attaining that same capacity on a pay-as-you-go basis via public cloud, the cloud service will lose out nearly every time.

The fact that colocation can be more cost-efficient than cloud often comes as a surprise to organizations and is something of a dirty little secret within the IaaS industry. But for predictable workloads and core infrastructure that is “always on,” the public cloud is a more expensive option because the customer ultimately pays a premium for pay-as-you-go pricing and scalable capacity that they don’t need – similar to driving a gas-guzzling truck even when there’s nothing you need to tow.
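To make the “all-in” comparison concrete, here is a rough, purely illustrative sketch (every figure below is an invented placeholder, not vendor pricing; real comparisons hinge on hardware, power, staffing, and negotiated rates) of a three-year cost for an always-on fleet in colocation versus the same capacity rented pay-as-you-go:

    # Rough, illustrative three-year cost comparison for an always-on workload.
    # All figures are invented placeholders, not benchmarks or vendor pricing.
    servers = 100
    hours_per_year = 24 * 365

    # Colocation: buy the hardware up front, then pay for space, power, and support.
    capex_per_server = 6_000              # purchase price, amortized over the 3 years
    colo_opex_per_server_month = 150      # rack space, power, cooling, remote hands
    colo_3yr = servers * (capex_per_server + colo_opex_per_server_month * 36)

    # Public cloud: rent equivalent instances by the hour, 24x7.
    cloud_rate_per_hour = 0.50            # pay-as-you-go rate per instance-hour
    cloud_3yr = servers * cloud_rate_per_hour * hours_per_year * 3

    print(f"Colocation, 3 years:   ${colo_3yr:,}")        # $1,140,000
    print(f"Public cloud, 3 years: ${cloud_3yr:,.0f}")    # $1,314,000

Flip the utilization assumption – say the workload only runs a few hundred hours a month – and the pay-as-you-go column wins easily, which is exactly the workload-centric point made below.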

Balancing the racecar handling of cloud with the safety of a family hybrid
This is not to suggest that cloud is without its benefits. Public cloud makes a lot of sense for unpredictable workloads. Enterprises can leverage it to expand capacity on-demand without incurring capital expenditures on new servers. Workloads with variable demand and significant traffic peaks and valleys, such as holiday shopping spikes for online retailers or a software publisher rolling out a new product, are generally well-suited for public clouds because the customer doesn’t pay for compute capacity that they don’t need or use on a consistent basis.

One of the biggest benefits of cloud services is agility. This is where the cloud truly shines, providing accessibility and immediacy to the end-user.  However, the need for a hybrid approach also arises here, when agility comes at the expense of security and control. For example, the agility vs. control challenge is often played out in some version of the following use case:  A CIO becomes upset when she finds out that employees within most of the company’s business units are leveraging public cloud services – without her knowledge. This is especially unsettling, given that she has just spent millions of dollars building two new corporate data centers that were only half full. Something has gone wrong here, and it’s related to agility.

A major contributing factor to the surprise popularity of public cloud services is the perceived lack of agility of internal IT organizations. For example, it’s not uncommon for it to take IT executives quite some time to turn up new servers in corporate data centers. And this isn’t necessarily the fault of IT since there are a number of factors that can, and often do, present roadblocks, such as the need to seek budgetary approval, place orders, get various sign-offs, install the servers, and finally release the infrastructure to the appropriate business units – a process that can easily take several months. As a result, employees and business units often begin to side-step IT altogether and go straight to public cloud providers, corporate credit card in hand, in an effort to quickly address IT issues. The emergence of popular cloud-based applications made this scenario a common occurrence, and it illustrates perfectly how the promise of agility can end up pulling the business units toward the public cloud – at the risk of corporate security.

The CIO is then left scrambling to regain control, with users having bypassed many important processes that the enterprise spent years implementing. Unlike internal data centers or colocation environments, with a public cloud, enterprises have little to no insight into the servers, switches, and storage environment.

So while agility is clearly a big win for the cloud, security and control issues can complicate matters. Again, a hybrid, workload-centric approach can make sense. Use the cloud for workloads that aren’t high security, and consider the economics of the workload in the decision, too. Some hybrid cloud solutions even allow enterprises to reap the agility benefits of the cloud in their own data center – essentially an on-premise private cloud.

As businesses continue to evolve, it will be critical to go beyond the industry’s cloud hype and instead build flexible, centrally-managed architectures that take a workload-centric approach and apply the best infrastructure environment to the job at hand. Enterprises will find such a hybrid solution is usually of greater value than the sum of its individual parts.

Carpooling with “cloudy colo”
One area that has historically been left out of the hybridization picture is colocation. While organizations can already access hybridized public and private and even “bare metal” cloud services today, colocation has always existed in a siloed environment, without the same levels of visibility, automation and integration with other infrastructure that are often found in cloud and hosting services.

But these characteristics are likely to impact the way colocation services are managed and delivered in the future. Internap’s survey found strong interest in “cloudy colo” – colocation with cloud-like monitoring and management capabilities that provides remote visibility into the colocation environment and seamless hybridization with cloud and other infrastructure, such as dedicated and managed hosting.

Specifically, a majority of respondents (57 percent) cited interest in hybrid IT environments; and, combined with 72 percent of respondents expressing interest in hybridizing their colocation environment with other IT infrastructure services via an online portal, the results show strong emerging interest in data center environments that can support hybrid use cases as well as unified monitoring and management via a “single pane of glass.”

Driving toward a flexible future
A truly hybrid architecture – one that incorporates a full range of infrastructure types, from public and private cloud to dedicated and managed hosting, and even colocation – will provide organizations with valuable, holistic insight and streamlined monitoring and management of all of their resources within the data center, as well as consolidated billing.

For example, through a single dashboard, organizations could perform tasks, such as: remotely manage bandwidth, inventory, and power utilization for their colocation environment; rapidly move a maturing application from dedicated hosting to colocation; turn cloud services up and down as needed or move a cloud-based workload to custom hosting. Think of it as your hybrid’s in-car navigation system with touchscreen controls for everything from radio to air conditioning to your rear view camera.

The growing awareness of the potential benefits of hybridizing IT infrastructure services reflects the onset of a shift in how cloud, hosting and even colocation will be delivered in the future. The cloud model, with its self-service features, is one of the key drivers for this change, spurring interest among organizations in maximizing visibility and efficiency of their entire data center infrastructure ecosystem.


Adam Weissmuller is the Director of Cloud Solutions at Internap, where he led the recent launch of the Internap cloud solution suite. A 10-year veteran of the hosting industry, he recently presented on “Overcoming Latency: The Achilles Heel of Cloud Computing” at Cloud Expo West.

Deutsche Börse Launching Cloud Capacity Trading Exchange

Deutsche Börse says it will launch a trading venue for outsourced cloud storage and cloud computing capacity at the beginning of 2014. Deutsche Börse Cloud Exchange AG is a new joint venture formed together with Berlin-based Zimory GmbH to create the first “neutral, secure and transparent trading venue” for cloud computing resources.

The primary users of the new trading venue will be companies, public sector agencies, and organisations such as research institutes that need additional storage and computing resources, or have excess capacity that they want to offer on the market.

“With its great expertise in operating markets, Deutsche Börse is making it possible for the first time to standardise and trade fully electronically IT capacity in the same way as securities, energy and commodities,” said Michael Osterloh, Member of the Board of Deutsche Börse Cloud Exchange.