By Geoff Smith, Senior Solutions Architect
If you have read my last three blogs on the changing landscape of IT management, you can probably guess where I’m leaning on the key metric for success: the experience of the user.
As any industry progresses from infancy to mainstream acceptance, the focus for success invariably shifts from “wizard-behind-the-curtain” mystique toward transparency and accountability. Think of the automobile industry. Do you really buy a car anymore, or do you buy a driving experience? Auto manufacturers have had to add a slew of gizmos (some of which have absolutely nothing to do with driving) and services (no-cost maintenance plans, loaners, roadside assistance) that were once the consumer’s responsibility.
It is the same with IT today. We can no longer just deliver a service to our consumers; we must endeavor to ensure the quality of the consumer’s experience using that service. This pushes the boundaries for what we need to see, measure, and respond to beyond the obvious green light/red light blinking in the datacenter. As IT professionals, we need to validate that the services we deliver are being consumed in a manner that enables the user to be productive for the business.
In other words, knowing you have five nines (99.999%) of availability for your ERP system is great, but does it really tell the whole story? If a system is up and available, but the user experience is poor enough to hurt productivity and drag down the output of that user population, what has the business actually gained?
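For perspective, the arithmetic behind “the nines” is worth doing once, because it shows how little the number actually promises. A quick sketch (plain Python, assuming nothing beyond the percentages themselves):

```python
# Back-of-the-envelope: how much annual downtime each
# availability target actually permits.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three 9s", 0.999),
                            ("four 9s", 0.9999),
                            ("five 9s", 0.99999)]:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.3%}): "
          f"~{downtime:.1f} minutes of downtime per year")
```

Five nines buys you roughly five minutes of outage a year, but it says nothing about the twenty-second screen refresh the user suffers through the rest of the year.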
Moving our visibility out to this level is not easy. We have always relied on the user to initiate the process, and we have responded reactively. With the right framework, we can expand our proactive capabilities, alerting us to potential efficiency issues before the user experience degrades to the point where users notice. In this way, we move our “cheese” from systems availability to service usability. The business can then see a direct correlation between what we provided and the business value it actually delivered.
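To make that concrete, here is a minimal sketch of one proactive approach: track a rolling baseline of response times and flag drift before it becomes a hard failure. The class name, window size, and drift threshold are all assumptions of this sketch, not a reference to any particular monitoring product:

```python
import random
from collections import deque

class LatencyBaseline:
    """Rolling baseline of response times that flags drift early.

    Window size and drift factor are illustrative defaults.
    """

    def __init__(self, window=500, drift_factor=1.5):
        self.samples = deque(maxlen=window)  # recent response times, seconds
        self.drift_factor = drift_factor     # how far above normal counts as degraded

    def record(self, response_time):
        self.samples.append(response_time)

    def is_degrading(self, response_time):
        if len(self.samples) < 50:
            return False  # too little history; assume normal rather than false-alarm
        median = sorted(self.samples)[len(self.samples) // 2]
        return response_time > median * self.drift_factor

# Simulated feed: steady ~0.2 s responses, then a creeping slowdown.
baseline = LatencyBaseline()
for i in range(600):
    rt = random.gauss(0.20, 0.02) if i < 400 else random.gauss(0.40, 0.04)
    if baseline.is_degrading(rt):
        print(f"sample {i}: {rt:.2f}s is well above baseline; raise an alert")
        break
    baseline.record(rt)
```

Note what the sketch catches: the service never goes “red,” yet the alert fires anyway, because usability, not availability, is the signal being watched.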
Some of the management concepts here are not entirely new, but the way they are leveraged may be. Synthetic transactions, round-trip analytics, and bandwidth analysis are a few of the vectors to consider. Just as important, though, is how we react to events in these streams, and how quickly we can return usability to its normal state. Auto-discovery and redirection play key roles, and parallel-process troubleshooting tools can minimize the impact on the user’s experience.
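As an illustration of the synthetic-transaction idea, here is a minimal probe that measures round-trip time from the consumption point. The endpoint URL and the two-second usability budget are hypothetical; a real probe would script a representative business transaction (login, lookup, report) rather than a single GET:

```python
import time
import urllib.request

def probe(url, timeout=5.0):
    """Fire one synthetic transaction; return (ok, round_trip_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # pull the full body: measures delivery, not just connect
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

# Hypothetical endpoint and usability budget for this sketch.
ok, rtt = probe("https://erp.example.com/health")
status = "usable" if ok and rtt < 2.0 else "degraded"
print(f"synthetic probe: ok={ok} rtt={rtt:.3f}s -> {status}")
```

The point is the vantage: the measurement runs where the user sits, not in the datacenter, so it sees what the user sees.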
As we move forward, we need to jettison the old concepts of inside-out monitoring and management with a datacenter focus, and move toward service-oriented metrics measured across every infrastructure layer, from the delivery engine to the point of consumption.