
Evolving to a Broker of Technology Services: Planning the Solution

By Trevor Williamson, Director, Solutions Architecture

A 3-Part Series:

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again

Part 2: Planning the Solution

Continuing with part 2 of this 3-part series, let's talk about how to plan the solution for automating IT services and service management within your organization so that you can develop, deliver, and support services in a more disciplined way, which means that your customers will trust you. Of course this doesn't mean they won't pursue outsourced, cloud, or other third-party services, but they will rely on you to get the most out of those services. And once you do go through this process, some of the major benefits of implementing an automated service management infrastructure are:

  • Improved staff productivity that allows your business to become more competitive. Your time is too valuable to be spent fighting fires and performing repetitive tasks. If you prevent the fires and automate the repetitive tasks, you can focus on new projects and innovation instead. When you apply automation tools to good processes, productivity skyrockets to a level unachievable by manual methods.
  • Heightened quality of service that improves business uptime and customer experience. Consistent execution according to a well-defined change management process, for example, can dramatically reduce errors, which in turn improves uptime and customer experience. In today's age of continuous operations and unrelenting customer demand, downtime can quickly erode your competitive edge. Sloppy change management can cause business downtime that prevents customers from buying online or reduces the productivity of your workforce.
  • Reduced operational costs to reinvest in new and innovative initiatives. It’s been said that keeping the lights on—the costs to maintain ongoing operations, systems, and equipment—eats up roughly 80% of the overall IT budget rather than going to new or innovative projects. With more standardized and automated processes, you can improve productivity and reduce operational costs allowing you the freedom to focus on more strategic initiatives.
  • Improved reputation with the business. Most self-aware IT organizations acknowledge that their reputation with business stakeholders isn’t always sterling. This is a critical problem, but you can’t fix it overnight—changing an organization’s culture, institutionalized behaviors, and stereotypes takes time and energy. If you can continue to drive higher productivity and quality through automated service management, your business stakeholders will take notice.

A very important aspect of planning this new infrastructure is to look toward, and in fact assume, a range of control that will necessarily span both internal and external resources: you will be stretching into public cloud spaces (not that you will always know you're there until after the fact), and you will be managing them, or at least monitoring them, with the same level of granularity as your traditional resources.

This includes integrating the native functionality of those off-premises services (reserving virtual machines and groups of machines, extending reservations, cloning aggregate applications, provisioning storage, and so on) and connecting them to an end-to-end value chain of IT services that can be assessed, monitored, and followed from where the data resides to where it is used by the end user.
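As a hedged sketch of what that integration layer might look like, the snippet below simulates a provider client exposing reserve, extend, and clone operations. Names like `CloudProvider` and `reserve_vm` are illustrative stand-ins, not any real vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: CloudProvider, reserve_vm, etc. are invented
# stand-ins, not a real off-premises provider's API.
@dataclass
class Reservation:
    vm_name: str
    expires: datetime

class CloudProvider:
    """Simulated off-premises provider wrapped by the automation layer."""
    def __init__(self):
        self.reservations = {}

    def reserve_vm(self, name, hours):
        """Reserve a virtual machine for a fixed window."""
        res = Reservation(name, datetime.now() + timedelta(hours=hours))
        self.reservations[name] = res
        return res

    def extend_reservation(self, name, hours):
        """Extend an existing reservation rather than rebuild it."""
        self.reservations[name].expires += timedelta(hours=hours)
        return self.reservations[name]

    def clone_group(self, names, suffix="-clone"):
        """Clone an aggregate application: one new VM per member."""
        return [self.reserve_vm(n + suffix, 24) for n in names]

provider = CloudProvider()
provider.reserve_vm("app-01", hours=48)
provider.reserve_vm("db-01", hours=48)
clones = provider.clone_group(["app-01", "db-01"])
print(len(provider.reservations))  # 4: originals plus clones, all tracked
```

The point is that every off-premises action lands in one tracked inventory, so the same monitoring granularity applies whether the resource is internal or external.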

It is through this holistic process—rationalized, deconstructed, optimized, reconstituted and ultimately automated—that the system as a whole can be seen as a fully automated IT services management infrastructure, but believe me when I say that this is not nor will it ever be an easy task.  When you are looking to plan how you automate your service management infrastructure, you need a comprehensive approach that follows a logical and tightly controlled progression.  By whatever name you call the methodology (and there are many out there) it needs to be concise, comprehensive, capable, and, above all else, controlled:

1. Identify the trends, justify the business case, and assess your maturity. Before investing in an automated service management infrastructure, you have to assess the opportunity, build the business case, and understand the current state. This phase will answer the following questions:

  • Why is automated service management important to my business?
  • What are the business and IT benefits?
  • How prepared is my organization to tackle this initiative?

2. Develop your strategic plan, staffing plan, and technology roadmaps. You translate what you learn from the prior phase into specific automated service management strategies. The goal of this phase is to help you answer these key questions:

  • Do I have the right long-term strategic vision for automated service management?
  • What are my stakeholders' expectations, and how can I deliver on them?
  • What technologies should I invest in and how should I prioritize them?

3. Invest in your skills and staff, policies and procedures, and technologies and services. This phase is designed to execute on your automated service management strategies. This phase will answer the following people, process, and technology questions:

  • What specific skills and staff will I need, and when?
  • What policies and procedures do I need to develop and enforce?
  • Should I build and manage my own technology capabilities or use external service providers?
  • What specific vendors and service providers should I consider?

4. Manage your performance, develop metrics, and communicate and train. Finally, to help you refine and improve your automated service management infrastructure, the goal in this phase is to help you answer these key questions:

  • How should I adjust my automated service management plans and budgets?
  • What metrics should I use to track my success?
  • How should I communicate and train stakeholders on new automated service management policies and technologies?

These phases and the associated questions are just a taste of what is required when you are thinking of moving toward an automated service management infrastructure, and of course GreenPages is here to help, especially when you are in the planning stages. The process is not painless and it is certainly not easy, but the end result, and indeed the journey itself, is well worth the time, effort, and investment.

Next…Part 3: Executing the Solution, again and again…

If you’re looking for more information, we will be holding free events in Boston, NYC, and Atlanta to discuss cloud computing, virtualization, VDI, clustered datacenters, and more. We’ll have a bunch of breakout sessions, and it will also be a great opportunity to network with peers.

 

Cloud Corner Series- Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and how it is critical in a cloud environment. I hope you enjoy it- even if the info stinks, at least you have 5 minutes of eye candy watching yours truly!

If you're looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register; space is limited and filling up quickly, so check it out!

The Evolution from a Provider of Technology Components to a Broker of Technology Services

A 3 Part Series from Trevor Williamson

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again…

Part 1: Understanding the Dilemma

IT teams are increasingly being challenged as bring-your-own-device (BYOD) policies and "as-a-service" software and infrastructure multiply in mainstream organizations. In this new reality, developers still need compute, network, and storage to keep up with growth, and workers still need some sort of PC or mobile device to get their jobs done, but they don't necessarily need corporate IT to give it to them. They can turn to a shadow IT organization using Amazon, Rackspace, or Savvis, or use SaaS applications or an unmanaged desktop, because when all is said and done, if you can't deliver on what your users and developers care about, they will use whatever and whoever they can to get their jobs done better, faster, and cheaper.

Much of this shift toward outside services comes down to customer experience, or how your customers—your users—perceive their every interaction with IT, from your staff in the helpdesk to corporate applications they access every day. If what you are delivering (or not delivering, as the case may be) is more burdensome, more complicated, or doesn't react as fast as other service providers (like Amazon, Office 365, or Salesforce), then they will turn, in droves, toward those providers.

Now the question hanging heavy in the air is what do those providers have, except of course scale, that your IT organization doesn’t have?  What is the special sauce for them to be able to deliver those high-value services, quicker and at a lower cost than you can?

In a few words: IT Service Management (ITSM)…but wait! I know your first reaction might be that ITSM has become a sour subject and that if you hear ITIL chanted one more time you're going to flip out. The type of ITSM I'm talking about is really the next generation and has only passing similarities to the service management initiatives of the past. While it is agreed that ITSM has the potential to deliver the experiences and outcomes your developers and users need and want, today's ITSM falls far short of that ideal. "Process for process's sake," you've probably heard…but whatever the label, we are still measuring success based on internal IT efficiencies, not customer or financial value, or even customer satisfaction. We still associate ITSM exclusively with ITIL best practices, and we continue to label ourselves as providers of technology components.

As it turns out, the adage “You cannot fix today’s problems with yesterday’s solutions” is as right as it ever was.  We need to turn ITSM on its head and create a new way forward based on customer centricity, services focus, and automated operations.  We have to rethink the role we play and how we engage with the business.  Among the most significant transformations of IT we need to complete is from a provider of technology components to a broker of technology services. We have relied on ITSM to drive this transformation, but ITSM needs to change in order to be truly effective in the future. Here’s why:

  • The roots of service management were focused on the customer: "Service management" originated within product marketing and management departments, and from the beginning it placed the customer at the center of all decision making within the service provider organization. It is the foundation for transforming product-oriented organizations into service providers where the customer experience and interaction are designed and managed to cost-effectively deliver customer results and satisfaction.

  • But when we applied service management to IT, we lost customer focus: Applying service management to information technology produced the well-known discipline of ITSM, but, unfortunately, IT professionals associated it exclusively with the IT Infrastructure Library (ITIL) best practices, which focus on processes for managing IT infrastructure to enable and support services. What's missing is the customer perspective.

  • In the age of the customer, we need to proactively manage services via automation: In the age of the customer, technology-led disruption (virtualization, automation, orchestration, operating at scale, etc.) erodes traditional competitive barriers, making it easier than ever for empowered employees and app developers to take advantage of new devices and cloud-based software. To truly function as a service provider, IT needs to first and foremost consider the customer and the customer's desired outcome in order to serve them faster, cheaper, and at higher quality. In today's world, this can only be accomplished via automation.

When customers don't trust a provider to deliver quality products or services, they seek alternatives. That's a pretty simple concept that everyone can understand, but what if the customer is a user of the IT services that you provide? Where do they go if they don't like the quality of your products or services? Yep, Amazon, Rackspace, Terremark, etc., and any other service provider who offers a solution that you can't…or that you can't in the required time or for the required price.

The reason why these service providers can do these seemingly amazing things and offer such diverse and, at times, sophisticated services, is because they have eliminated (mostly) the issues associated with humans doing “stuff” by automating commodity IT activities and then orchestrating those automated activities toward delivering aggregate IT services.  They have evolved from being providers of technology components to brokers of technology services.

If you're looking for more information on BYOD, register for our upcoming webinar "BYOD Webinar- Don't Fight It, Mitigate the Risk with Mobile Management."

Next…Part 2: Planning the Solution

 

Mind the Gap – Quality of Experience: Beyond the Green Light/Red Light Datacenter

By Geoff Smith, Senior Solutions Architect

If you have read my last three blogs on the changing landscape of IT management, you can probably guess by now where I’m leaning in terms of what should be a key metric in determining success:  the experience of the user.

As any industry progresses from its infancy to mainstream acceptance, the focus for success invariably transitions from being the "wizard-behind-the-curtain" toward transparency and accountability. Think of the automobile industry. Do you really buy a car anymore, or do you buy a driving experience? Auto manufacturers have had to add a slew of gizmos (some of which have absolutely nothing to do with driving) and services (no-cost maintenance plans, loaners, roadside assistance) that were once the responsibility of the consumer.

It is the same with IT today.  We can no longer just deliver a service to our consumers; we must endeavor to ensure the quality of the consumer’s experience using that service.  This pushes the boundaries for what we need to see, measure, and respond to beyond the obvious green light/red light blinking in the datacenter.  As IT professionals, we need to validate that the services we deliver are being consumed in a manner that enables the user to be productive for the business.

In other words, knowing you have 5 9s of availability for your ERP system is great, but does it really tell the whole story? If a system is up and available, but the user experience is poor enough to affect productivity and results in lower-than-expected output from that population, what is the net result?
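To make the availability arithmetic concrete, here is a small worked example; the figures follow directly from minutes-per-year math, with no assumptions beyond a 365-day year:

```python
# Worked example: translating "N nines" of availability into the
# downtime per year it actually allows.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(nines: int) -> float:
    """Minutes of downtime per year permitted at N nines of availability."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in (2, 3, 4, 5):
    print(f"{n} nines allows about {allowed_downtime_minutes(n):,.2f} minutes/year")
# 5 nines works out to roughly 5.26 minutes of downtime per year
```

Five 9s leaves barely five minutes a year, which is exactly why uptime alone says nothing about the hours a system spends technically "up" but painfully slow for its users.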

Moving our visibility out to this level is not easy. We have always relied upon the user to initiate the process and have responded reactively. With the right framework, we can expand our proactive capabilities, alerting us to potential efficiency issues before the user experience degrades to the point of being noticeable. In this way, we move our "cheese" from systems availability to service usability. The business can then see a direct correlation between what we provided and the actual business value it delivered.

Some of the management concepts here are not entirely new, but the way they are leveraged may be. Synthetic transactions, round-trip analytics, and bandwidth analysis are a few of the vectors to consider. But just as important is how we react to events in these streams, and how quickly we can return usability to a "normal state." Auto-discovery and redirection play key roles, and parallel-process troubleshooting tools can minimize the impact on experience.
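A minimal sketch of the synthetic-transaction idea: a scripted user action is replayed on a schedule and compared against a baseline. The stub probe and latency numbers below are invented for illustration; a real probe would drive the actual service:

```python
from statistics import mean

# Illustrative sketch only: the baseline samples and threshold are
# made-up numbers standing in for real measured round-trip times.
BASELINE_SAMPLES_MS = [210.0, 198.0, 225.0, 205.0]  # past round-trip times
baseline = mean(BASELINE_SAMPLES_MS)                # 209.5 ms
THRESHOLD = 1.5  # alert once a run exceeds 150% of baseline

def check(sample_ms: float) -> bool:
    """True when the measured experience has degraded past the threshold."""
    return sample_ms > baseline * THRESHOLD

print(check(230.0))  # False: within normal range
print(check(400.0))  # True: flag it before users start calling the help desk
```

The payoff is the proactive trigger: the alert fires on the trend, not on a user's complaint.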

As we move forward, we need to jettison the old concepts of inside-out monitoring and management and a datacenter focus, and move toward service-oriented metrics and measurement across infrastructure layers from delivery engine to consumption point.

Mind the Gap – Service-Oriented Management

IT management used to be about specialization.  We built skills in a swim-lane approach – deep and narrow channels of talent where you could go from point A to B and back in a pretty straight line, all the time being able to see the bottom of the pool.  In essence, we operated like a well-oiled Olympic swim team.  Each team member had a specialty in their specific discipline, and once in a while we’d all get together for a good ole’ medley event.

And because this was our talent base, we developed tools that would focus their skills in those specific areas.  It looked something like this:

"Mind the Gap"

But is this the way IT is actually consumed by the business?  Consumption is by the service, not by the individual layer.  Consumption looks more like this:

"Mind the Gap"

From a user perspective, the individual layers are irrelevant. It's about the results of all the layers combined, or to put a common term around it, it's about a service. Email is a service; so is Salesforce.com, but the two have very different implications from a management perspective.

A failure in any one of these underlying layers can dramatically affect user productivity. For example, if a user is consuming your email service and there is a storage layer issue, they may see reduced performance. The same "result" could be seen if there is a host, network layer, bandwidth, or local client issue. So when a user requests assistance, where do you start?

Most organizations will work from one side of the “pool” to the other using escalations between the lanes as specific layers are eliminated, starting with Help Desk services and ending up in the infrastructure team.  But is this the most efficient way to provide good service to our customers?  And what if the service was Salesforce.com and not something we fully manage internally? Is the same methodology still applicable?

Here is where we need to start looking at a service-level management approach.  Extract the individual layers and combine them into an operating unit that delivers the service in question.  The viewpoint should be from how the service is consumed, not what individually makes up that service.  Measurement, metrics, visibility and response should span the lanes in the same direction as consumption.  This will require us to alter the tools and processes we use to respond to events.
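One way to sketch that service-level view in code; the layer names, statuses, and service mappings below are hypothetical, purely to illustrate reporting health per consumed service rather than per infrastructure lane:

```python
# Hypothetical layers and statuses, invented for illustration.
LAYER_STATUS = {
    "storage": "degraded",
    "network": "ok",
    "compute": "ok",
    "client": "ok",
}

# Which layers combine to deliver each service the user actually consumes.
SERVICES = {
    "email": ["storage", "network", "compute", "client"],
    "salesforce.com": ["network", "client"],  # externally managed layers omitted
}

SEVERITY = {"ok": 0, "degraded": 1, "down": 2}

def service_health(service: str) -> str:
    """A service is only as healthy as its worst contributing layer."""
    return max((LAYER_STATUS[l] for l in SERVICES[service]), key=SEVERITY.get)

print(service_health("email"))           # degraded: the storage lane drags it down
print(service_health("salesforce.com"))  # ok: storage isn't in its path
```

Notice that the same storage issue matters for one service and not the other, which is exactly the consumption-direction view the escalate-between-lanes model misses.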

Some scary thoughts here, if you consider the number of “services” our customers consume, and the implications of a hybrid cloud world.  But the alternative is even more frightening.  As platforms that we do not fully manage (IaaS, PaaS, SaaS) become more integral to our environments, the blind spots in our vision will expand.  So, the question is more of a “when” do we move in this direction rather than an “if.”  We can continue to swim our lanes, and maybe we can shave off a tenth of a second here or there.  But, true achievement will come when we can look across all the lanes and see the world from the eyes of our consumers.

 

Cloud Isn’t Social, It’s Business

Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS.

Much like DevOps is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is about technology. Even as service providers (including cloud providers) need to look toward a business model based on revenue per application (as opposed to revenue per user), enterprise organizations need to look hard at their business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a “service oriented” approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the “tenant” of most enterprise (private) cloud implementations is an application, not a business entity. As a “provider of services”, IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization.

By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but also realize additional benefits for both business and operations.

A service subscription business model:

  • Makes it easier to project costs across entire infrastructure
    Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate the costs based on usage for not just application infrastructure, but network infrastructure, as well, providing management and executives with a clearer view of what actual operating costs are for given projects, and enabling them to essentially line item veto services based on projected value added to the business by the project.
  • Easier to justify cost of infrastructure
    Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by applications means being able to tie services to projects in earlier phases and clearly show value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that business can understand what’s working and what’s not.
  • Enables business to manage costs over time 
    Instituting a “fee per hour” enables business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time. IT that adopts such a business model will not only encourage business stakeholders to take advantage of such functionality, but will offer more awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what’s really needed versus what’s not.
  • Easier to start up a project/application and ramp up over time as associated revenue increases
    Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green-field start-up projects manage growth. If IT operations is service-based, then projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.
  • Enables consistent comparison with off-premise cloud computing
    A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice-versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on services required.
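A toy illustration of the fee-per-hour point above; the rate and hour counts are assumptions, not real pricing:

```python
# Toy numbers for the fee-per-hour model; the rate and hour counts
# are invented assumptions, not real pricing.
RATE_PER_HOUR = 0.40                  # assumed fee per service-hour
BUSINESS_HOURS_PER_MONTH = 22 * 10    # ~22 workdays x 10 active hours
ALL_HOURS_PER_MONTH = 30 * 24         # always-on alternative

def monthly_cost(active_hours: int, rate: float = RATE_PER_HOUR) -> float:
    """Pay only for the hours the service is actually active."""
    return active_hours * rate

business_only = monthly_cost(BUSINESS_HOURS_PER_MONTH)  # ~88.00
always_on = monthly_cost(ALL_HOURS_PER_MONTH)           # ~288.00
print(f"business hours: ${business_only:.2f} vs always on: ${always_on:.2f}")
```

Even with made-up rates, the gap shows why stakeholders who can see per-hour costs start asking which services really need to run around the clock.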

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand.

That's cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised benefits in terms of efficiency, it hasn't necessarily taken the next step and realized that the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to prove its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it's the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.



VMworld Recap: Day One

Day 1 at VMworld 2012 has been pretty action packed. The first order of business was the official handing over of the reins from Paul Maritz to Pat Gelsinger as CEO of VMware. Paul will remain involved, as he is taking the Chief Strategist role at EMC, which owns 80% of VMware, so I would not expect his influence to go away anytime soon. From conversations I've had with others both inside and outside of VMware, the primary reason for this move seems to be purely operational. Paul is an absolute visionary and has taken VMware to some fantastic heights over his four-year tenure; however, there have been some challenges on the operational side in executing on those great visions. This is where Pat comes into the picture, as he has historically been a pure operations guy, so I envision the team of Paul and Pat doing some great things for VMware going forward.

Some other key highlights from the Keynote are as follows:

  1. It is estimated that 60% of all x86 server workloads in the world are now virtualized and 80% of that 60% are virtualized on ESX/vSphere.
  2. There are now 125,000 VCP certified engineers worldwide, almost a 5-fold increase from 4 years ago.
  3. The dreaded vRAM allocation licensing model for vSphere 5 is now officially dead with the release of vSphere 5.1. VMware is going back to per-socket licensing, and neither RAM nor cores matter. Personally, I am not sure this was a great move, as I think most people were over the headache of vRAM, and in reality I never saw a single customer who was adversely affected by it. When Pat announced this, I think he thought the entire auditorium would roar in appreciation, but that was not the case. Yes, there was some cheering, but even Pat made mention of the fact that it wasn't the full-on reaction he expected.
  4. There are a lot of new certifications and certification tracks that were announced to better align with VMware’s definition of the new “stack.”  These tracks include the pre-existing datacenter infrastructure certs plus new ones around Cloud (think vCloud Director here), Desktop (View and Wanova/Mirage), and Apps (SpringSource).  I’ll be taking the new VCP-IaaS exam tomorrow so wish me luck!
  5. There was a light touch on both the Dynamic Ops and Nicira acquisitions.  Both of these have huge implications for VMware but really not much was announced at the show.  Both of these are very recent acquisitions so it will take some time for VMware to get them integrated but I am very excited about the possibilities of each.
  6. There was an announcement of the vCloud Suite, which essentially is a bundling of existing VMware products under a singular license model.  There are the typical Standard, Enterprise, and Enterprise Plus editions of the suite which include different pieces and parts, but the Enterprise Plus edition throws in about everything and the kitchen sink, including:
    1. vSphere 5.1 Enterprise Plus
    2. vCenter Operations Enterprise
    3. vCloud Director
    4. vCloud networking/security (I assume this will eventually include Nicira networking virtualization and the vShield product family)
    5. Site Recovery Manager
    6. vFabric Application Director
  7. Lots of focus on virtualization of business-critical applications and not just the usual suspects of SQL, Oracle, Exchange, etc.  There was a cool demo of Hadoop via Project Serengeti which automates the spinning up/down of various Hadoop VMs and this is delivered as a single virtual appliance.  GreenPages has done a lot in the business critical app virtualization space over the past couple of years and we remain excited about the possibilities that virtualization brings to these beefy apps.
  8. One of the big geeky announcements is around the concept of shared nothing vMotion.  This means that you can now move a live running VM between two host servers but without any requirement for shared storage, basically vMotion without a SAN.  This has massive implications in the SMB and branch office spaces where the cost of shared storage was very prohibitive.  Now you can get some of the cool benefits of virtualization using only very cheap direct attached storage!
  9. The final piece of the keynote showed VMware’s vision for virtualization of “everything” including compute, storage, and networking.  Look for some very cool stuff coming over the next 6 months or so in relation to new ways of thinking about networking and storage within a virtual environment.  These are two elements that really have not fundamentally changed how they work since the advent of x86 virtualization and we are now running into limitations due to this.  VMware is leading the charge in changing the way we think about these two critical elements and looking at very interesting ways to attack design and in the end making it much simpler to work with networking and storage technologies within virtualized environments.

Have to jump back over for Day 2 activities now, but be on the lookout for some upcoming GreenPages events where we’ll dive deeper into the announcements from the show!

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring. I was a huge fan of "Car Toons," which was sort of the Mad Magazine of cars, and basically lusted after hot rods, dragsters, and sports cars. I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks. I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer. In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea what ever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships). The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought. It was sleek, low to the ground, and looked as if it were going 100 miles an hour just sitting there. It was a supercar before the word was coined, and I was deeply, madly, and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot: they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable—more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera.  Even if I could have afforded to buy it, I had no knowledge or experience that would have enabled me to effectively (or legally) drive it.  That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was not anywhere near being functional.  The essence and nature of the car never changed, but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned.  In the same way, a cloud infrastructure—fully built out with well-architected components, tested and running—would be non-functional to customers who do not know how to operate that type of infrastructure.

In short: cloud capable versus cloud functional.

The way that a cloud infrastructure should be operated is based on the idea of delivering IT services—not the traditional idea of servers, storage and networks being individually built, configured and connected by people doing physical stuff.  Cloud infrastructures are automated and orchestrated to deliver specific functionality aggregated into specific services, quickly and efficiently, without the need for people doing “stuff.”  In fact, people doing stuff is too slow and just gets in the way, and if you don’t change the operation of the system to reflect that, you end up with a very capable yet non-functional system.
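To make that contrast concrete, here is a toy sketch in Python. It is purely illustrative—the service name, the spec fields, and the provisioning functions are all invented for this example, not any vendor’s API—but it shows the core idea: the consumer requests a named service from a catalog, and the platform orchestrates every provisioning step itself, with no person doing “stuff” in between.

```python
# Hypothetical provisioning steps; in a real platform these would call
# the compute, storage, and network layers of the infrastructure.
def provision_vm(spec):
    # Allocate compute according to the service definition.
    return {"vm": spec["size"], "state": "running"}

def provision_storage(spec):
    # Attach storage according to the service definition.
    return {"disk_gb": spec["disk_gb"], "state": "attached"}

def provision_network(spec):
    # Connect the workload to the right network segment.
    return {"vlan": spec["vlan"], "state": "connected"}

# The service catalog: specific functionality aggregated into a named service.
CATALOG = {
    "web-server-small": {
        "size": "small", "disk_gb": 50, "vlan": 10,
        "steps": [provision_vm, provision_storage, provision_network],
    }
}

def request_service(name):
    """Orchestrate every provisioning step for a cataloged service."""
    spec = CATALOG[name]
    return [step(spec) for step in spec["steps"]]

result = request_service("web-server-small")
print(result)
```

The point of the sketch is that the operator’s job shifts from executing each step by hand to defining and maintaining the catalog and the automation behind it—which is exactly the operational transformation being described.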

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed, or it will be very much the same sort of difference as between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera.  It’s just dreaming until the required knowledge and experience are obtained…none of which is easy or quick…but tell that to a 12-year-old lost in his imagination, staring at sparkling red freedom and adventure…

Mind the Gap – Consumerization of Innovation

The landscape of IT innovation is changing. “Back in the day” (said in my gravelly old-man voice from my Barcalounger, wearing my NetWare red t-shirt), companies that were developing new technology solutions brought them to the enterprise and marketed them to the IT management stack. CIOs, CTOs and IT directors were the injection point for technology acceptance into the business. Now that injection point has been turned into a fire hose.

Think about many of the technologies we have to consider as we develop our enterprise architectures:  tablets, smartphones, cloud computing, application stores, and file synchronization. Because our users and clients are consuming these technologies today outside of IT, we need to be aware of what they are using, how they are using it, and what bunker-buster is likely to be dropped into our lap next.

Sure, you can argue that “tablets” had been around for a number of years prior to the release of the iPad in 2010.  Apple’s own Newton MessagePad, released in 1993, is often cited as the first computing tablet. HP, IBM and others developed “tablets” going back to 2000 based on the Microsoft Tablet PC specification, and these did gain some traction in certain industries (construction/architecture, medical).  However, they were primarily converted laptops with minimally innovative capabilities that failed to gain mass adoption. With the iPad, Apple demonstrated the consumerization of innovation by developing the platform for the needs of the consumer market first, addressing the reasons why people would use a computing tablet instead of just pounding current corporate technology into a new shape.

Now, IT has to deal with mass iPad usage by their users and customers.

Similarly, cloud services have been used in the consumer market for over a decade. Many of the services users consume outside of the enterprise are cloud services (iTunes, Dropbox, Skype, Pandora, social networking, etc.). As consumers of these services, users gain functionality that is not always available from the enterprises they work for. They can select, download and install applications that address their specific needs (self-service, anyone?). They can share files with others around the globe. They can select the type of content they consume and how they communicate with others via streaming audio, video and news feeds. And don’t get me started on Twitter.

And this is the Gap IT needs to close.

We have tried to show our user population and our business owners the deficiencies in these technologies in terms of security, availability, service levels, management and other great IT industry “talk to the hand” terminology.  We’ve turned blue in the face and stamped our feet like a 2-year-old in the candy aisle.  But has that stopped the pressure to adopt and enable these technologies within the enterprise? Remember, our business owners are consumers too.

IT needs to give a little here to maintain a modicum of control over the consumption of these technologies. The tech companies will continue to market to the masses (wouldn’t you?) as long as that mass market continues to consume.  And we, as IT people, will continue to face that mounting pressure and have to answer the question: “Why can’t we do that?” The net is that the pendulum of innovation is now swinging to the consumer side of the fulcrum. IT is reacting to technology instead of introducing it.

To close this Gap, we need to develop ways of saying “yes” without compromising our policies and standards, and do it efficiently. Is there a magic bullet here? No. But we have to recognize the inevitable and start moving toward the light. 

My best advice today is to be open-minded about what users are asking for. Expand your acceptance of user-initiated technology requests (many of them may be great ways to solve long-term issues). Become an enabler instead of a CI-“no.” Adjust your perspectives to allow for flexibility in your control processes, tools and metrics.  And, most important of all, become a consumer of the consumer innovations. Knowledge is power, and experience is the best teacher we have.


Cloud Corner Series -The Networking & Storage Challenges Around Clustered Datacenters



www.youtube.com/watch?v=fRl-KDveZQg

In this new episode of Cloud Corner, Director of Solutions Architecture Randy Weis and Solutions Architect Nick Phelps sit down to talk about clustered datacenters from both a networking and storage perspective. They discuss the challenges, provide some expert advice, and talk about what they think will be in store for the future. Check it out and enjoy!