Tag Archives: cloud

Mind the Gap – Quality of Experience: Beyond the Green Light/Red Light Datacenter

By Geoff Smith, Senior Solutions Architect

If you have read my last three blogs on the changing landscape of IT management, you can probably guess by now where I’m leaning in terms of what should be a key metric in determining success:  the experience of the user.

As any industry progresses from infancy to mainstream acceptance, the focus for success invariably transitions from “wizard-behind-the-curtain” mystique towards transparency and accountability.  Think of the automobile industry.  Do you really buy a car anymore, or do you buy a driving experience?  Auto manufacturers have had to add a slew of gizmos (some of which have absolutely nothing to do with driving) and services (no-cost maintenance plans, loaners, roadside assistance) that were once the consumer’s own responsibility.

It is the same with IT today.  We can no longer just deliver a service to our consumers; we must endeavor to ensure the quality of the consumer’s experience using that service.  This pushes the boundaries for what we need to see, measure, and respond to beyond the obvious green light/red light blinking in the datacenter.  As IT professionals, we need to validate that the services we deliver are being consumed in a manner that enables the user to be productive for the business.

In other words, knowing you have 5 9s of availability for your ERP system is great, but does it really tell the whole story?  If a system is up and available, but the user experience is poor enough to hurt productivity and drag output from that population below expectations, what is the net result?
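
As a quick back-of-the-envelope illustration (straight arithmetic, not any vendor’s SLA terms), here is what each level of “nines” actually allows:

```python
# Allowed downtime per year for common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime_min:.1f} minutes of downtime per year")
```

Five 9s permits barely five minutes of downtime a year, yet it says nothing about the hours users may lose to a slow but technically “available” system.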

Moving our visibility out to this level is not easy.  We have always relied upon the user to initiate the process and have responded reactively.  With the right framework, we can expand our proactive capabilities, alerting us to potential efficiency issues before the user experience degrades to the point of visibility.  In this way, we move our “cheese” from systems availability to service usability.  The business can then see a direct correlation between what we provide and the actual business value it delivers.

Some of the management concepts here are not entirely new, but the way they are leveraged may be.  Synthetic transactions, round-trip analytics, and bandwidth analysis are a few of the vectors to consider.  Just as important, though, is how we react to events in these streams, and how quickly we can return usability to “Normal State.”  Auto-discovery and redirection play key roles, and parallel-process troubleshooting tools can minimize the impact on the user experience.
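
As a minimal sketch of the synthetic-transaction idea (the endpoint URL and baseline below are hypothetical placeholders), a probe times a scripted round trip the way a user would experience it and raises an alert when it drifts past the agreed baseline:

```python
import time
import urllib.request

SERVICE_URL = "https://erp.example.com/health"  # hypothetical probe endpoint
BASELINE_SECONDS = 2.0                          # agreed "Normal State" response time

def probe(url: str) -> float:
    """Time one synthetic round trip, end to end, as a user would see it."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # pull the full payload, not just the headers
    return time.monotonic() - start

elapsed = probe(SERVICE_URL)
if elapsed > BASELINE_SECONDS:
    print(f"ALERT: round trip took {elapsed:.2f}s; baseline is {BASELINE_SECONDS:.1f}s")
```

Run probes like this continuously from the locations where users actually sit, and the trend line flags degradation before the phone rings.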

As we move forward, we need to jettison the old concepts of inside-out monitoring and management and a datacenter focus, and move toward service-oriented metrics and measurement across infrastructure layers from delivery engine to consumption point.

Mind the Gap – Service-Oriented Management

IT management used to be about specialization.  We built skills in a swim-lane approach – deep and narrow channels of talent where you could go from point A to B and back in a pretty straight line, all the time being able to see the bottom of the pool.  In essence, we operated like a well-oiled Olympic swim team.  Each team member had a specialty in their specific discipline, and once in a while we’d all get together for a good ole’ medley event.

And because this was our talent base, we developed tools that would focus their skills in those specific areas.  It looked something like this:

"Mind the Gap"

But is this the way IT is actually consumed by the business?  Consumption is by the service, not by the individual layer.  Consumption looks more like this:

"Mind the Gap"

From a user perspective, the individual layers are irrelevant.  It’s about the results of all the layers combined, or to put a common term around it, it’s about a service.  Email is a service; so is Salesforce.com, but the two have very different implications from a management perspective.

A failure in any one of these underlying layers can dramatically affect user productivity.  For example, if a user is consuming your email service and there is a storage-layer issue, they may see reduced performance.  The same “result” could be seen if there is a host, network-layer, bandwidth, or local-client issue.  So when a user requests assistance, where do you start?

Most organizations will work from one side of the “pool” to the other using escalations between the lanes as specific layers are eliminated, starting with Help Desk services and ending up in the infrastructure team.  But is this the most efficient way to provide good service to our customers?  And what if the service was Salesforce.com and not something we fully manage internally? Is the same methodology still applicable?

Here is where we need to start looking at a service-level management approach.  Extract the individual layers and combine them into an operating unit that delivers the service in question.  The viewpoint should be from how the service is consumed, not what individually makes up that service.  Measurement, metrics, visibility and response should span the lanes in the same direction as consumption.  This will require us to alter the tools and processes we use to respond to events.
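
One minimal way to model this in code (the layer names and fields are illustrative, not any product’s API): a service owns the layers it spans, and health is judged from the consumption side rather than lane by lane.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str      # e.g. "client", "network", "host", "storage"
    healthy: bool

@dataclass
class Service:
    """An operating unit judged as the user consumes it, not lane by lane."""
    name: str
    layers: list = field(default_factory=list)

    def degraded_layers(self):
        return [layer.name for layer in self.layers if not layer.healthy]

    def usable(self) -> bool:
        # The consumer's view: one bad layer degrades the whole service.
        return not self.degraded_layers()

email = Service("email", [
    Layer("client", True),
    Layer("network", True),
    Layer("host", True),
    Layer("storage", False),  # the storage-layer issue from the example above
])
print(email.usable())           # False
print(email.degraded_layers())  # ['storage']
```

Troubleshooting then starts from the failing service and fans out across its layers in parallel, instead of escalating lane by lane.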

Some scary thoughts here, if you consider the number of “services” our customers consume, and the implications of a hybrid cloud world.  But the alternative is even more frightening.  As platforms that we do not fully manage (IaaS, PaaS, SaaS) become more integral to our environments, the blind spots in our vision will expand.  So, the question is more of a “when” do we move in this direction rather than an “if.”  We can continue to swim our lanes, and maybe we can shave off a tenth of a second here or there.  But, true achievement will come when we can look across all the lanes and see the world from the eyes of our consumers.


Cloud Isn’t Social, It’s Business

Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS.

Much like devops is more about a culture shift than about the technology enabling it, cloud is as much about shifts in business models as it is about technology. Even as service providers (including cloud providers) need to look toward a business model based on revenue per application (as opposed to revenue per user), enterprise organizations need to look hard at their own business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a “service oriented” approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the “tenant” of most enterprise (private) cloud implementations is an application, not a business entity. As a “provider of services”, IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization.

By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

  • Makes it easier to project costs across the entire infrastructure
    Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate costs based on usage, for network infrastructure as well as application infrastructure, giving management and executives a clearer view of the actual operating costs of a given project and enabling them to essentially line-item veto services based on the value they are projected to add to the business.
  • Makes it easier to justify the cost of infrastructure
    Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by applications means being able to tie services to projects in earlier phases and clearly show value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that business can understand what’s working and what’s not.
  • Enables the business to manage costs over time
    Instituting a “fee per hour” gives business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time (see the cost sketch after this list). IT organizations that adopt such a business model will not only encourage business stakeholders to take advantage of that flexibility, but will also make the costs associated with infrastructure services more visible, enabling stakeholders to be more critical of what’s really needed versus what’s not.
  • Makes it easier to start up a project/application and ramp up over time as associated revenue increases
    Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green-field start-up projects manage growth. If IT operations is service-based, projects can rely on IT for agile service deployment, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.
  • Enables consistent comparison with off-premise cloud computing
    A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice-versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on services required.
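
As a rough illustration of the subscription math (the rates and hours below are invented for this example), per-service hourly fees make the business-hours-versus-always-on tradeoff visible:

```python
# Hypothetical catalog: hourly rates for infrastructure services.
RATES_PER_HOUR = {"caching": 0.12, "rate_shaping": 0.08, "optimization": 0.15}

def monthly_cost(hours_by_service):
    """Map of service -> hours subscribed this month, priced from the catalog."""
    return sum(RATES_PER_HOUR[svc] * hrs for svc, hrs in hours_by_service.items())

# An app that only needs services during business hours (~22 days x 10 hours)
# versus one that keeps every service active around the clock (~720 hours).
business_hours_app = {"caching": 220, "optimization": 220}
always_on_app = {"caching": 720, "rate_shaping": 720, "optimization": 720}

print(f"business-hours app: ${monthly_cost(business_hours_app):.2f}")  # $59.40
print(f"always-on app:      ${monthly_cost(always_on_app):.2f}")       # $252.00
```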

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand.

That’s cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised efficiency benefits, it hasn’t necessarily taken the next step and realized that the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to demonstrate its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it’s the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.



VMworld Recap: Day One

Day 1 at VMworld 2012 has been pretty action-packed.  The first order of business was the official handing of the reins from Paul Maritz to Pat Gelsinger as CEO of VMware.  Paul will remain involved, as he is taking the Chief Strategist role at EMC, which owns 80% of VMware, so I would not expect his influence to go away anytime soon.  From conversations I’ve had with others both inside and outside of VMware, the primary reason for this move seems to be purely operational.  Paul is an absolute visionary and has taken VMware to some fantastic heights over his four-year tenure; however, there have been some challenges on the operational side in executing on those great visions.  This is where Pat comes into the picture: he has historically been a pure operations guy, so I expect the team of Paul and Pat to do some great things for VMware going forward.

Some other key highlights from the Keynote are as follows:

  1. It is estimated that 60% of all x86 server workloads in the world are now virtualized, and 80% of that 60% are virtualized on ESX/vSphere (in other words, roughly 48% of all x86 workloads run on VMware).
  2. There are now 125,000 VCP-certified engineers worldwide, almost a five-fold increase from four years ago.
  3. The dreaded vRAM allocation licensing model for vSphere 5 is now officially dead with the release of vSphere 5.1.  VMware is going back to per-socket licensing, and neither RAM nor cores matter.  Personally, I am not sure this was a great move, as I think most people were over the headache of vRAM, and in reality I never saw a single customer who was adversely affected by it.  When Pat announced this, I think he expected the entire auditorium to roar in appreciation, but that was not the case.  Yes, there was some cheering, but even Pat made mention of the fact that it wasn’t the full-on reaction he expected.
  4. There are a lot of new certifications and certification tracks that were announced to better align with VMware’s definition of the new “stack.”  These tracks include the pre-existing datacenter infrastructure certs plus new ones around Cloud (think vCloud Director here), Desktop (View and Wanova/Mirage), and Apps (SpringSource).  I’ll be taking the new VCP-IaaS exam tomorrow so wish me luck!
  5. There was a light touch on both the DynamicOps and Nicira acquisitions.  Both of these have huge implications for VMware, but not much was really announced at the show.  Both are very recent acquisitions, so it will take some time for VMware to get them integrated, but I am very excited about the possibilities of each.
  6. There was an announcement of the vCloud Suite, which is essentially a bundling of existing VMware products under a single license model.  There are the typical Standard, Enterprise, and Enterprise Plus editions of the suite, which include different pieces and parts, but the Enterprise Plus edition throws in about everything and the kitchen sink, including:
    1. vSphere 5.1 Enterprise Plus
    2. vCenter Operations Enterprise
    3. vCloud Director
    4. vCloud networking/security (I assume this will eventually include Nicira network virtualization and the vShield product family)
    5. Site Recovery Manager
    6. vFabric Application Director
  7. There was a lot of focus on virtualization of business-critical applications, not just the usual suspects of SQL, Oracle, Exchange, etc.  There was a cool demo of Hadoop via Project Serengeti, which automates the spinning up/down of various Hadoop VMs and is delivered as a single virtual appliance.  GreenPages has done a lot in the business-critical app virtualization space over the past couple of years, and we remain excited about the possibilities that virtualization brings to these beefy apps.
  8. One of the big geeky announcements is the concept of shared-nothing vMotion.  This means you can now move a live, running VM between two host servers without any requirement for shared storage, basically vMotion without a SAN (a conceptual sketch of the general technique follows this list).  This has massive implications in the SMB and branch-office spaces, where the cost of shared storage has been prohibitive.  Now you can get some of the cool benefits of virtualization using only very cheap direct-attached storage!
  9. The final piece of the keynote showed VMware’s vision for virtualization of “everything,” including compute, storage, and networking.  Look for some very cool stuff over the next six months or so on new ways of thinking about networking and storage within a virtual environment.  These are two elements that have not fundamentally changed since the advent of x86 virtualization, and we are now running into limitations because of it.  VMware is leading the charge in rethinking these two critical elements, attacking their design in ways that should ultimately make networking and storage much simpler to work with in virtualized environments.
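
For the curious, here is a toy sketch of the general iterative pre-copy idea behind live migration without shared storage: copy state while the VM keeps running (and keeps dirtying pages), re-copy only what changed, and switch over once the remaining dirty set is small. This illustrates the general technique, not VMware’s actual implementation.

```python
def live_migrate(pages, dirty_rounds, stun_threshold=8):
    """Toy iterative pre-copy. `pages` maps page number -> bytes of VM state;
    `dirty_rounds` yields the pages the running VM dirties each round."""
    dest = {}
    to_copy = set(pages)                 # first pass copies everything
    for dirtied in dirty_rounds:
        for page in to_copy:
            dest[page] = pages[page]     # copy while the VM keeps running
        to_copy = set(dirtied)           # next pass: only what got re-dirtied
        if len(to_copy) <= stun_threshold:
            break                        # small enough: briefly stun the VM
    for page in to_copy:                 # final copy during the stun window
        dest[page] = pages[page]
    return dest

# Example: 100 pages of state, with the dirty set shrinking each round.
pages = {i: b"x" for i in range(100)}
migrated = live_migrate(pages, [set(range(40)), set(range(12)), set(range(4))])
print(len(migrated) == len(pages))  # True
```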

Have to jump back over for Day 2 activities now, but be on the lookout for some upcoming GreenPages events where we’ll dive deeper into the announcements from the show!

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking: yet another car analogy. But bear with me; I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then, I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring.  I was a huge fan of “Car Toons,” which was sort of the Mad Magazine of cars, and basically lusted after hot rods, dragsters and sports cars.  I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks.  I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer.  In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company.  I have no idea what ever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships).  The car sat on half-filled radial tires (relatively new to the US at the time) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought.  It was sleek, low to the ground, and looked as if it were going 100 miles an hour just sitting there.  It was a supercar before the word was coined, and I was deeply, madly and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot; they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable—more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera.  Even if I could have afforded to buy it, I had no knowledge or experience that would have enabled me to effectively (or legally) drive it.  That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was not anywhere near being functional.  The essence and nature of the car never changed but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned.  The same way a cloud infrastructure—fully built out with well architected components, tested and running—would be non-functional to customers who did not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way a cloud infrastructure should be operated is based on the idea of delivering IT services, not the traditional idea of servers, storage and networks being individually built, configured and connected by people doing physical stuff.  Cloud infrastructures are automated and orchestrated to deliver specific functionality, aggregated into specific services, quickly and efficiently, without the need for people doing “stuff.”  In fact, people doing stuff is too slow and just gets in the way, and if you don’t change the operation of the systems to reflect that, you end up with a very capable yet non-functional system.
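
One way to picture the operational difference (the step names below are invented, not any product’s API): in a cloud model, a single request triggers an orchestrated chain of automated tasks, where a traditional model would queue a ticket for each swim lane.

```python
# Each step is an automated task; in a traditional model each would be a
# ticket waiting on a different specialist team.
PROVISIONING_WORKFLOW = [
    "allocate_compute",
    "attach_storage",
    "configure_network",
    "apply_security_policy",
    "register_monitoring",
]

def deliver_service(service_name: str) -> dict:
    """Run every layer's task for one request, end to end, unattended."""
    completed = []
    for step in PROVISIONING_WORKFLOW:
        # a real orchestrator would call out to the layer's automation here
        completed.append(step)
    return {"service": service_name, "steps": completed, "status": "delivered"}

print(deliver_service("dev-web-tier"))
```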

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed or it will be very much the same sort of difference between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera.  It’s just dreaming until the required knowledge and experience is obtained…none of which is easy or quick…but tell that to a 12 year old lost in his imagination staring at sparkling red freedom and adventure…

Mind the Gap – Consumerization of Innovation

The landscape of IT innovation is changing. “Back in the day” (said in my gravelly old-man voice from my Barcalounger wearing my Netware red t-shirt) companies who were developing new technology solutions brought them to the enterprise and marketed them to the IT management stack. CIOs, CTOs and IT directors were the injection point for technology acceptance into the business. Now, that injection point has been turned into a fire hose.

Think about many of the technologies we have to consider as we develop our enterprise architectures:  tablets, smartphones, cloud computing, application stores, and file synchronization. Because our users and clients are consuming these technologies today outside of IT, we need to be aware of what they are using, how they are using it, and what bunker-buster is likely to be dropped into our lap next.

Sure, you can argue that “tablets” had been around for a number of years prior to the release of the iPad in 2010.  Apple’s own Newton MessagePad, from 1993, is often cited as the first device defined as a computing tablet. HP, IBM and others developed “tablets” going back to 2000 based on the Microsoft Tablet PC specification. These did gain some traction in certain industries (construction/architecture, medical).  However, they were primarily converted laptops with minimally innovative capabilities that failed to gain mass adoption. With the iPad, Apple demonstrated the consumerization of innovation by developing the platform to the needs of the consumer market first, addressing the reasons why people would use a computing tablet instead of just pounding current corporate technology into a new shape.

Now, IT has to deal with mass iPad usage by their users and customers.

Similarly, cloud services have been used in the consumer market for over a decade. Many of the services users consume outside of the enterprise are cloud services (iTunes, Dropbox, Skype, Pandora, social networking, etc.). As a consumer of these services, the user gains functionality that is not always available from the enterprise they work for. They can select, download and install applications that address their specific needs (self-service, anyone?). They can share files with others around the globe. They can select the type of content they consume and how they communicate with others via streaming audio, video and news feeds. And don’t get me started on Twitter.

And this is the Gap IT needs to close.

We have tried to show our user population and our business owners the deficiencies in these technologies in terms of security, availability, service levels, management and other great IT-industry “talk to the hand” terminology.  We’ve turned blue in the face and stamped our feet like a two-year-old in the candy aisle.  But has that stopped the pressure to adopt and enable these technologies within the enterprise? Remember, our business owners are consumers too.

IT needs to give a little here to maintain a modicum of control over the consumption of these technologies. The tech companies will continue to market to the masses (wouldn’t you?) as long as that mass market continues to consume.  And we, as IT people, will continue to face that mounting pressure and have to answer the question: “Why can’t we do that?” The net is that the pendulum of innovation is now swinging to the consumer side of the fulcrum. IT is reacting to technology instead of introducing it.

To close this Gap, we need to develop ways of saying “yes” without compromising our policies and standards, and do it efficiently. Is there a magic bullet here? No. But we have to recognize the inevitable and start moving toward the light. 

My best advice today is to be open-minded about what users are asking for. Expand your acceptance of user-initiated technology requests (many of them may be great ways to solve long-term issues). Become an enabler instead of a CI-“No.” Adjust your perspectives to allow for flexibility in your control processes, tools and metrics.  And, most important of all, become a consumer of the consumer innovations. Knowledge is power, and experience is the best teacher we have.


Cloud Corner Series – The Networking & Storage Challenges Around Clustered Datacenters



www.youtube.com/watch?v=fRl-KDveZQg

In this new episode of Cloud Corner, Director of Solutions Architecture Randy Weis and Solutions Architect Nick Phelps sit down to talk about clustered datacenters from both a networking and storage perspective. They discuss the challenges, provide some expert advice, and talk about what they think will be in store for the future. Check it out and enjoy!

Mind the Gap – Transitioning Your IT Management Methodology

At the recent GreenPages Summit, I presented on a topic that I believe will be key to the success of those of us in IT management as we redefine IT in the “cloud” era.  In the past, I have tried to define the term “cloud,” describing it as anything from “an ecosystem of compute capabilities that can be delivered upon demand from anywhere to anywhere” to “IT in 3D.”  In truth, its definition is not really that important, but how we enable the appropriate use of it in our architectures is.

One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  In theory, cloud services are beyond our direct control.  But are they beyond our ability to evaluate and influence?

IT is about enablement.  Enabling our customers or end users to complete the tasks that drive our businesses forward is our true calling.  Enabling the business to gain intelligence from its data is our craft.    So, we must strive to enable, where appropriate and effective, the use of cloud services as part of our mission.  What then is the impact to IT management?

There are the obvious challenges.  Cloud services are provided by, and managed by, those whom we consume them from.  Users utilizing cloud services may do so outside of IT control.  And, what happens when data and services step into that void where we cannot see?

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  Sure, we still have to manage and maintain our own datacenters (unless you go 100% service provider).  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage external resources as part of your overall architecture, you step outside of the hardened physical/virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT resources.  We need to expand our scope and visibility to include both the cloud services that are part of our delivery and connectivity mechanisms, and the endpoints used to consume our data and services.  This leads to a fundamental shift in daily operations and management.  Going forward, we need to be able to measure our service effectiveness end to end, even when, in between, services travel through systems that are not our own to devices we did not provision.

This is a transition, not a light-switch event.  Over the next few blogs I hope to focus some attention on several of the gaps that will exist as we move forward.  As a sneak peek, consider these statements:

  • Consumerization of technical innovation
  • Service-oriented management focus
  • Quality of Experience
  • “Greatest Generation” of users

Come on by and bring your imagination.  There is not one right or wrong answer here, but a framework for us to discuss what changes are coming like a speeding train, and how we need to mind the gap if we don’t want to be run over.

Cloudscape 2012: WhatsUp at GreenPages? Journey to success!

Guest Post from Caitlin Buxton, Director of North American Channel Sales, WhatsUp Gold Network Management Division of Ipswitch, Inc.

The WhatsUp Gold team attended the GreenPages Annual Technology Summit this week on the scenic New Hampshire/Maine Seacoast. This event was one of the most valuable technology summits we have participated in this year. The three-day event showcased all of GreenPages’ exemplary talent, skill, and professionalism that the organization brings to the IT community for both clients and vendor partners.

During the Partner Pavilion, we exhibited WhatsUp Gold’s suite of network management and log management solutions and showed attendees how they install, discover, and map network-connected assets in minutes. We also showcased the powerful SNMP, WMI and SSH monitoring, alerting and notification capabilities, and the web-based management that gives organizations a complete picture of an entire network infrastructure in real time.

The entire GreenPages staff worked very closely with our team, both in pre-event planning and during the event, to make sure our investment and time were well spent: engaging with their clients, learning their challenges, and understanding how our solutions can make life easier. The GreenPages Account Managers were fantastic in providing insight into their clients’ needs and facilitating productive conversations.

I was also impressed by how many clients raved about the incredible value they receive from GreenPages. Repeatedly, I was told how hard the GreenPages team works to understand their individual business needs and helps to deliver solutions and information specific to their needs. They are always looking out for their customers’ best interests.

This is not surprising given that 100 IT Executives with limited time and budgets would not have travelled from all over the country for this event if they did not get significant value from it. However, it was refreshing to hear directly from the customers. It validates the pride I have in our GreenPages partnership knowing such a quality organization is on our team representing the WhatsUp Gold family of solutions.

Well done GreenPages! Thank you!