All entries by Geoff Smith

Putting the “Converged” in Hyperconverged Support

Today’s hyperconverged technologies are here to stay, it seems.  I mean, who wouldn’t want to employ a novel technology approach that “consolidates all required functionality” into a single infrastructure appliance, providing an “efficient, elastic pool of x86” resources controlled by a “software-centric” architecture?  And outside of the x86 component, it’s not like we haven’t seen this type of platform before (hello, mainframe anyone?).

But this post is not about the technology behind HCI, nor about whether this technology is the right choice for your IT demands – it’s more about what you need to consider on day two, after your new platform is happily spinning away in your datacenter.  Assuming you have determined that the hyperconverged path will deliver technology and business value for your organization, why wouldn’t you extend that belief system to how you plan on operating it?

Today’s hyperconverged vendors offer very comprehensive packages that include some advanced support offerings.  They have spent much time and energy (and VC dollars) creating monitoring and analytics platforms that are a definite advance over traditional technology support packages.  While technology vendors such as HP, Dell/EMC, Cisco and others have for years provided phone-home monitoring and utilization/performance reporting capabilities, hyperconverged vendors have pushed these capabilities further with real-time analytics and automation workflows (e.g., Nutanix Prism, SimpliVity OmniWatch and OmniView).  Additionally, these vendors have aligned support plans to business outcomes such as “mission critical”, “production”, “basic”, etc.

Now you are asking, Mr. Know-It-All, didn’t you just debunk your own argument? Au contraire, I say; I have just reinforced it…

Each hyperconverged vendor technology requires its own SEPARATE platform for monitoring and analytics.  And these tools are RESTRICTED to just what is happening INTERNALLY within the converged platform.  Sure, that covers quite a bit of your operational needs, but is it the COMPLETE story?

Let’s say you deploy SimpliVity for your main datacenter.  You adopt the “Mission Critical” support plan, which comes with OmniWatch and OmniView.  You now have great insight into how your OmniCube architecture is operating, and you can delve into the analytics to understand how your SimpliVity resources are being utilized.  In addition, you get software support with 1-, 2-, or 4-hour response (depending on the channel you use – phone, email, web ticket).  You also get software updates and RCA reports.  It sounds like a comprehensive, “converged” set of required support services.

And it is, for your selected hyperconverged vendor.  What these services do not provide is a holistic view of how the hyperconverged platforms are operating WITHIN the totality of your environment.  How effective is the networking that connects it to the rest of the datacenter?  What about non-hyperconverged workloads, either on traditional server platforms or in the cloud?  And how do you measure end user experience if your view is limited to hyperconverged data points?  Not to mention, what happens if your selected hyperconverged vendor is gobbled up by one of the major technology companies or, worse, closes when funding runs dry?

Adopting hyperconverged as your next-generation technology play is certainly something to consider carefully, and it has the potential to positively impact your overall operational maturity.  You can reduce the number of vendor technologies and management interfaces, get more proactive, and make decisions based on real data analytics. But your operations teams will still need to determine whether the source of an impact is within the scope of the hyperconverged stack and covered by the vendor support plan, or whether it’s symptomatic of an external influence.

Beyond the awareness of health and optimized operations, there will be service interruptions.  If there weren’t, we would all be in the unemployment line.  Will a 1-hour response be sufficient in a major outage?  Is your operational team able to respond 24×7 with hyperconverged skills?  And how will you consolidate governance and compliance reporting between the hyperconverged platform and the rest of your infrastructure?

Hyperconverged platforms can certainly enhance and help mature your IT operations, but they do provide only part of the story.  Consider carefully if their operational and support offerings are sufficient for overall IT operational effectiveness.  Look for ways to consolidate the operational information and data provided by hyperconverged platforms with the rest of your management interfaces into a single control plane, where your operations team can work more efficiently.  If you’re looking for help, GreenPages can provide this support via its Cloud Management as a Service (CMaaS) offering.
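What might that consolidation look like in practice? Here is a minimal sketch, assuming hypothetical adapter functions for each monitoring source (none of these names are real vendor APIs), that normalizes alerts from a hyperconverged platform and the surrounding environment into one stream:

    from dataclasses import dataclass
    from datetime import datetime

    # Normalized event format for the single control plane.
    @dataclass
    class Alert:
        source: str       # which monitoring system raised it
        resource: str     # node, VM, switch port, SaaS service...
        severity: str     # normalized: "info" | "warning" | "critical"
        message: str
        raised_at: datetime

    # Hypothetical adapters -- in practice each would call its platform's
    # real API and map that platform's severity scheme onto the normalized one.
    def poll_hci_platform():
        return [Alert("hci", "node-03", "warning",
                      "disk rebuild in progress", datetime.now())]

    def poll_network_monitor():
        return [Alert("network", "core-sw-01/port-12", "critical",
                      "uplink flapping", datetime.now())]

    def unified_alert_stream():
        """One time-ordered view across the HCI stack and everything around it."""
        alerts = poll_hci_platform() + poll_network_monitor()
        return sorted(alerts, key=lambda a: a.raised_at)

    for a in unified_alert_stream():
        print(f"[{a.severity.upper()}] {a.source}: {a.resource} - {a.message}")

The normalization step is the point: once HCI and non-HCI alerts share one schema, your operations team can correlate them and answer the “internal to the stack, or external influence?” question without pivoting between consoles.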

Convergence at this level is even more critical to ensure maximum support of your business objectives.

If you are interested in learning how GreenPages’ CMaaS platform can help you manage hyper-converged offerings, reach out!

 

By Geoff Smith, Senior Manager, Managed Services Business Development

Are Your Users Happy? Tips for Running a Successful IT Help Desk

What do you think of when you hear the term Help Desk?  Is it a room full of technicians with noise-cancelling headsets, logged into an IT Service Management (ITSM) system, talking with their hands and guzzling Red Bulls?  In your vision, do they appear haggard, glassy-eyed and stressed?  Do they participate in the corporate culture, or languish in that basement call center the rest of the company thinks is some super-secret laboratory?

That may seem a little outrageous, but consider this: Google and Bing searches on “help desk” don’t show a real human representative until almost 20 images in.  And even then, the images are stereotypical and generic.  So you have to ask yourself, is that how the rest of the organization sees your help desk team?  Are they relegated to anonymity?

Back when I started my career in the IT industry as a service technician in the 1980s, I was a pretty popular guy when I strolled in the door to solve someone’s computer issue.  I would come in with my bag of tools, some floppy disks, and my trusty degausser.  I was that guy who could perform the voodoo ritual that would breathe life back into their systems while they went off and filed something or made some sales calls, and, because what I did was largely a mystery to them, they were (generally) pleasant and patient.

{Register for Geoff’s upcoming webinar, “IT Help Desk for the Holidays: The Strategic Gift That Keeps on Giving”}

There is a new reality today, one born out of the following facts:

  1. Little productive work can be done without a functioning system
  2. Users are more sophisticated with the basics of computer functionality
  3. Systems are more integrated and inter-dependent
  4. Remote support capabilities and call center technologies have matured greatly

These are not unique to IT; there are many parallels to other industries.  Are you more patient today when visiting the doctor, getting your car serviced, or when your Internet goes out?  Or do you find yourself self-diagnosing, visiting the forums, or fiddling with the cables first, and then when you do call or visit the specialist, you’re frustrated and impatient?

With these new realities, IT help desks have to mature to maintain their value and keep clients satisfied.  Our customer base is better educated, more dependent, and less patient than in the past.  However, we have new technologies to reduce wait times and improve resolution times, and we can leverage data and analytics to identify trends and predict usage demands. I’m actually hosting a webinar next week to go over strategies for reducing wait times and improving resolution times if you’re interested in learning more.

The development of help desk services has traditionally been based on user counts and request quantities: X users placing Y requests equals the number of people I need to staff my service with. But there are other factors that can complicate that seemingly simple calculation, such as the ebbs and flows of requests by time of day, day of the week or month of the year, usage spikes due to new platform and application rollouts, and the geographic dispersion of the users to be supported.  Other factors include the types of requests and the length of the typical resolution cycle, the technologies being consumed, potential complications from BYOD and mobile workforce requirements, and the quality of support artifacts.  And that doesn’t even consider the impact of staff burnout, attrition, and career advancement on delivery capabilities.
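To see how quickly those factors compound, here is a minimal back-of-the-envelope staffing sketch in Python; every number in it is an illustrative placeholder, not a benchmark.

    import math

    def agents_needed(users, tickets_per_user_per_month, avg_handle_minutes,
                      minutes_per_agent_month=160 * 60,  # ~160 working hours
                      utilization=0.70,   # agents aren't on tickets 100% of the time
                      peak_factor=1.3,    # headroom for time-of-day and rollout spikes
                      shrinkage=0.15):    # vacation, training, attrition, burnout
        """Rough help desk headcount from workload; all defaults are assumptions."""
        workload_minutes = users * tickets_per_user_per_month * avg_handle_minutes
        effective_minutes = minutes_per_agent_month * utilization * (1 - shrinkage)
        return math.ceil((workload_minutes / effective_minutes) * peak_factor)

    # Example: 2,000 users, 1.5 tickets each per month, 18 minutes per ticket
    print(agents_needed(2000, 1.5, 18))  # -> 13 with these placeholder values

Even this toy model makes the point: the naive arithmetic (workload divided by available agent minutes) suggests roughly eight agents here, but the peak and shrinkage assumptions alone push the answer to thirteen.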

So, as the person responsible for the delivery of a seemingly basic, vanilla and anonymous service, how do you create something that is world-class, aligned to your specific business outcomes, and able to serve as the face of all IT support to a sophisticated, diverse and impatient workforce that needs to work anytime, from anywhere, on any type of device?  It seems a pretty daunting challenge.  Here are some tactics you can use:

  • Consider starting at the end. What is the desired business outcome from your help desk service? Ask the question of the line of business owners, determine their individual needs, and correlate that into a prioritized list of requirements. Is speed-to-answer the most valuable? Or is it resolution time? How much can you rely on their users to self-service?
  • Think like a services provider, not as a member of the organization. If you had to craft a solution that provides a consistent and predictable outcome, but that can flex non-linearly as demand changes without impacting your SLAs, how would you do that? What challenges may impact your service delivery capability, and how can those be dealt with proactively?
  • Determine what information is critical in self-evaluation of your service delivery, and in demonstration of value. How would you share that with the rest of the organization, and how can that be leveraged for continual improvement? Your constituents should feel empowered to opine and provide feedback, but equally as important is how you assess yourself.
  • What can I achieve within the given budget? Do I have to downgrade service levels, or can I save money by utilizing lower cost resources and enabling them with better documentation and support artifacts? What are the trade-offs?
  • Think creatively. If I move some services to a provider, can I improve the overall experience by re-dedicating the internal team to more complex issue resolutions or provide a more robust, hands-on response? Will providing end user training based on issue type allocation reduce the help desk need and create a more sustainable service?

Above all else, set the proper expectations, be realistic, and don’t over-commit.  At the root of it, the help desk is a human experience, and since all humans are bound to be imperfect, so is your help desk.  While the prevailing perspective may be that they are automatons toiling away in some deep dark lair, we know that they are the face of all we deliver to our constituents.

Interested in hearing more from Geoff on how to run a world-class help desk? Register for our December 10th webinar!

 

By Geoff Smith, Senior Manager, Managed Services Business Development

Build-Operate-Transfer Model: Creating a Valuable Framework for IT

The build-operate-transfer model takes the concept of a long-term outsourced service, traditional in the Managed Services space, and addresses it in a way that allows the customer to get value out of the services at the end of the engagement. It’s also a way to address the concerns of IT operational teams that feel their roles are being replaced by outside services.

With a build-operate-transfer model, you really need to start with the end-game in mind. Where are you going to be in 5 years? 7 years? 10 years? Are the services you’re consuming today going to be the same services you need then? How could your future plans be altered (mergers, acquisitions, etc.)? You need a way to transfer those services back in-house while retaining the value of what you consumed during the previous term. That’s what the build-operate-transfer model is all about.

 

 

The corporate IT department has evolved. Has yours kept pace?

 

By Geoff Smith, Director, Managed Services Business Development

How to “Houdini” from the Risks of Deferred Maintenance

I recently gave a webinar on deferred maintenance and how you can learn to “escape” the risks of postponing routine maintenance activities from one of the great masters of escape, Harry Houdini. You can listen to the webinar on-demand here.

First, a little Houdini background. Harry started his career working the local nightclub and circus circuits, where he developed both his act and his showmanship skills. He then went to Europe, where he utilized his mastery to get longer bookings and build his reputation as an escape artist. Once he established mastery over one type of escape (for example handcuffs), he would add elements to that trick to keep his material fresh and extend his reputation. He moved on from handcuffs to chains and straightjackets, then to jailbreaks, underwater escapes, etc. Each time he re-invented his routine, mastering each of the individual aspects of the entire performance. Now you may be asking how a magician from the early 1900s can offer any insight into how to keep your modern, 21st century IT platforms healthy and available. Well, through my own version of creative magic, let me show you…


Let’s define deferred maintenance to start. Simply put, it’s the delay or suspension of the execution of the routine tasks required to retain the full functionality of a system, platform or application. Maintenance is not repair, and the difference is important to this conversation. Repair is to return a system, platform or application to its previous state of functionality. This makes the assumption that the device has moved from its desired state to a lesser state.

Here are some examples of deferred maintenance in IT, at least IMHO. You can certainly argue otherwise with some of these…

  • Updating firmware on a hard drive (versus replacing a failed one)
  • Performing patch management
  • Removing temporary files and defragmenting disks
  • Renewing warranty or vendor support coverage before it expires

If you wait for coverage to expire, you can argue that while the device is not in a lesser state of functionality, it may take longer to acquire parts or get a technician on the phone for assistance, and that would impact repair time.
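To make the “renew before it expires” example concrete, here is a minimal sketch in Python; the inventory data and the 90-day threshold are assumptions for illustration, not output from any vendor tool.

    from datetime import date, timedelta

    # Hypothetical inventory: device name -> support contract expiry date.
    # In practice this would come from your asset or CMDB system.
    inventory = {
        "san-array-01": date(2026, 3, 1),
        "core-switch-02": date(2026, 1, 15),
        "esx-host-07": date(2027, 6, 30),
    }

    RENEWAL_WINDOW = timedelta(days=90)  # flag contracts expiring within a quarter

    def contracts_needing_renewal(today=None):
        """Return (device, expiry) pairs due for renewal, soonest first."""
        today = today or date.today()
        due = [(device, expiry) for device, expiry in inventory.items()
               if expiry - today <= RENEWAL_WINDOW]
        return sorted(due, key=lambda pair: pair[1])

    for device, expiry in contracts_needing_renewal():
        print(f"Renew support for {device} (expires {expiry})")

Run weekly from a scheduler, even something this small turns “we let the contract lapse” from a surprise into a routine task.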

OK, so let’s agree the lines can be a little blurred between what is and what is not considered maintenance.  But I don’t think there is a lot of room for debate on what the outcomes of deferring that maintenance can be.  When you defer, you can impact system availability, hamper your ability to update other systems, extend the length of critical event management, and even delay your support of the business’s time-to-market goals.  You can also state that deferring maintenance increases your risks and can increase your maintenance costs when you do catch up.

So, how does the story of Houdini provide a guiding hand in how to escape from this reality? If you look at how Houdini created his act, built his reputation, and maintained his status over a long career, the secrets are there to be discovered.

The first element he employed is that of research. Houdini spent much of his time researching all methods of escape, both before and while incorporating them into his performances. It made him more effective over time and allowed for improvements to process and execution.  In this way, Houdini provided a roadmap for all similar artists to follow. He would spend hundreds of hours in this mode in order to perform a trick that might take minutes to execute.  Why so much time?  It was what allowed him to discover singular ways to solve different challenges. Following his example, we can also say that the more research you do on the best and most efficient ways to perform maintenance activities, the more successful and cost effective they become.  If they are successful and cost effective, then they are much more likely to be repeated and not deferred.

The second way Houdini became a master was through advanced planning and preparation. He often visited jails prior to his jailbreaks to map out the layout, identify the locking mechanisms, and decide where best to conceal his “tools.”  Likewise, for his underwater escapes, he installed a large bathtub in his home to allow himself to practice holding his breath. This allowed him to perform feats others saw as impossible. You can do the same with maintenance. Planning and preparation will enable you or your team to feel comfortable with the process, deliver it with consistency, and feel good about the outcome.  With maintenance, that is half the battle.

The third aspect of Houdini’s success was repetition. Houdini would perform the same act hundreds of times in order to reduce his escape time.  That allowed him to add elements of danger, like doing it underwater or while “buried alive.”  Now, I would not recommend that you perform your maintenance tasks while handcuffed underwater, but the repetition of the tasks can lead to interesting outcomes: efficiency for one, and consistency for another.  When you reach this level, you can start looking at ways to extend your maintenance into other areas, further improving your systems’ availability, health and stability.

Lastly, Houdini applied the concept of continual improvement to his performances. He was not satisfied when he mastered a particular escape. Part of his genius was in recognizing that he could push the envelope further, be more daring and dangerous. And this is how he became an international star.  Often his “new” escapes were nothing more than a combination of things he had already mastered, with just a new wrinkle added or a different angle explored. Again, this is similar to your maintenance plans. Once they are mastered, you can re-evaluate them on a regular basis to see if there are new methods, technologies, or partnerships out there that could provide further economies or better results.

This all sounds good, but what if you are already behind? Houdini had an answer for that one too.  Occasionally, he would be put on the spot and asked to escape from something without prior knowledge or an ability to research or plan. Yet, most often he was still successful.  How?  Well, because he was never truly surprised. Even if he did not know the request was coming, he was prepared if it did.  And that also applies to maintenance. Create a mindset where all aspects of IT change include a maintenance component. Consider maintenance as critical as the initial implementation or upgrade. What will be the impact?  Can I leverage my current process and tools?  Will it extend my maintenance windows?  In this way you can stay ahead of any challenges to your maintenance procedures.

Proactive maintenance isn’t fun, and it won’t make your career.  But it may help you avoid being “handcuffed” in supporting your organization’s objectives and from being “buried alive” by a backlog of deferred tasks and operational impacts.

 

To hear more on this topic from Geoff, download his recent webinar where he goes into more detail around deferred maintenance.

By Geoff Smith, Senior Manager, Managed Services Business Development

 

Balancing Control and Agility in Today’s IT Operational Reality

How can IT departments balance control and agility in today’s IT operational reality? For decades, IT Operations has viewed itself as the controlling influence on the “wild west” of business influences. We have had to create our own culture of control in order to extend our influence beyond the four hardened walls of the datacenter, and now the diaphanous boundaries of the Cloud. Control was synonymous with good IT hygiene, and we prided ourselves on this. It’s not by accident that outside of IT circles, we were viewed as gatekeepers and traffic cops, regulating the use (and, hopefully, preventing the abuse) of valuable IT resources and critical data sets. Many of us built our careers on a foundation of saying “no,” or, for those of us with less tact, “are you crazy?”

That was then, when we were the all-seeing, god-like nocturnal creatures operating in the dark of server rooms and wiring closets. Our IT worlds have changed dramatically since those heady days of power and ultimate dominion over our domain(s). I mean, really, we actually created something called Domains so the non-IT peasant-class could work in our world more easily, and we even have our own Internet Hall of Fame!

Now, life is a little different. IT awareness has become more mainstream, and innovation is actually happening at a faster pace in the consumer market.  We are continually being challenged by the business, but in a different and more informed manner than in our old glory days. We need to adapt our approach, and adjust our perspective in order to stay valued by the business. My colleague John Dixon has a quality ebook around the evolution of the corporate IT department that I would highly recommend taking a look at.

This is where Agility comes into play. Think of what it takes to become agile.  It takes both a measure of control and a measure of flexibility. They seem to be odd roommates, but in actuality they feed off each other and balance one another. Control is how you keep chaos out of agility, and agility is how you keep control from becoming too restraining.

Mario Andretti has a great quote about control: “If everything seems under control, you’re just not going fast enough.” And this is where the rub is in today’s business climate. We are operating at faster speeds and shorter times-to-market than ever before. Competition is global and not always above-board or out in the open. The raw number of influences on our customer base has increased exponentially.  We have less “control” over our markets now, and by nature have to become more “agile” in our progress.

IT operations must become more agile to support this new reality. Gone are the days of saying “not on my platform”, or calling the CIO the CI-NO. To become more agile, we need to enable our teams to spend more time on innovation than on maintenance.

So what needs to change? Well, first, we need to give our teams back some of the time and energy they are spending on maintenance and management functions. To do this, we need to drive innovations in that space and think about the lowest cost of delivery for routine IT functions. To some this means outsourcing; to others it’s about better automation and collaboration. If we can offload 50-70% of the current maintenance workload from our teams, they can turn their attention away from the rear-view mirror and start looking for the next strategic challenge. A few months back I did a webinar on how IT departments can modernize their IT operations by killing the transactional treadmill.

Once we have accomplished this, we then need to re-focus their attention to innovating for the business.  This could be in the form of completing strategic projects or enhancing applications and services that drive revenue. Beyond the obvious benefits for the business, this re-focus on innovation will create a more valuable IT organization, and generally more invested team members.

With more time and energy focused on innovation, we now need to create a new culture within IT around sharing and educating. IT teams can no longer operate effectively in silos if they are truly to innovate.  We have to remove the boundaries between the IT layers and share the knowledge our teams gather with the business overall.  Only then can the business truly see and appreciate the advances IT is making in supporting its initiatives.

To keep this going long term you need to adjust your alignment towards shared success, both within IT and between IT and the rest of the organization. And don’t forget your partners, those that are now assisting with your foundational operations and management functions. By tying all of them together to a single set of success criteria and metrics, you will enforce good behavior and focus on the ultimate objective – delivery of world class IT applications and services that enable business growth and profitability.

Or, you could just stay in your proverbial server room, scanning error logs and scheduling patch updates.  You probably will survive.  But is survival all you want?

 

By Geoff Smith, Senior Manager, Managed Services Business Development

Modernizing IT by Killing the Transactional Treadmill

By Geoff Smith, Senior Manager, Managed Services, GreenPages-LogicsOne

Many IT departments today are unable to get off the transactional treadmill. You may have some serious talent in your IT department, but valuable, strategic IT assets are becoming bogged down with tactical actions. When this happens, IT cannot fulfill its true purpose: applying technology to enable business success. As an IT decision maker, you need to be providing IT with an effective, efficient, and modern way of addressing every day responsibilities so that internal focus can shift back to supporting crucial business objectives. I consistently see this issue when I’m out in the field speaking with customers. For this reason, I’m hosting a webinar on May 8th to go over some strategies your IT department can implement.

In this webinar you will learn ways to modernize IT operations and combine advanced management tools, mature operating procedures, and a skilled workforce to:

  • Build an Enterprise Command Center to effectively address and monitor the health and status of critical infrastructure systems
  • Leverage run books and Standard Operating Procedures to complete required actions and create consistency in approach
  • Establish a transparent co-sourced operational structure that promotes a culture of collaboration and joint responsibility for success
  • Create visibility and analytics that maximize availability and functionality of technology investments

If you’re interested in learning more, register here & bring your questions May 8th at 11 am EST.

 

 

Shadow IT Management – Which Pill, Morpheus?

By Geoff Smith, Sr. Solutions Architect

 

The term “Shadow IT” has gotten more and more people thinking about the challenges we all face as we try to rein in our IT management and operations.  Recently, I caught a few minutes of the movie The Matrix…now, that movie is a bit of a visual trip, but once you get past the effects, the underlying dilemma it presents is intriguing.

It seems to me that if you accept the notion that people will gravitate towards the easiest ways to get their jobs done, then you have to wonder if the tools and procedures you have in place are likely to encourage compliance, or force rebellion.  As in the Matrix movies, what appears to be happening under the surface may actually be something completely different once you have peeled back the false construct you assume is reality.

It has long been known that IT people are an innovative and, well, curious lot.  We will try just about anything once, and if we find something that allows us to “better” manage our environments then we may cross over from the fringe into the shadowy world of the truly obscure in search of the truly arcane.  It’s almost a badge of honor to demonstrate how to solve IT challenges without relying on the industry best practices or accepted solutions.

The real question is, is this really a bad thing?  If you think back to The Matrix, the false construct did have its advantages.  Sure, you were effectively enslaved by machines, but at least they gave you a good fantasy to operate within.  You had juicy steak and cool clothes and the slickest cars (BTW that is a 1965 Lincoln Continental with the “suicide doors” in the movie).  And as far as anyone else in that reality was concerned you were as legitimate as they were.  So what’s wrong with that, especially considering everyone else is in the same boat?

Shadow IT, especially as it applies to IT Management, may have its benefits, but it also carries a lot of risk.  For every off-the-grid tool that performs a function within IT, or for every service you rely on that may not be fully vetted, you may have exposed your organization to potential abuses, both internal and external.  Where do these tools come from?  How reputable was the organization that developed them?  Does their use create security vulnerabilities?  Do they violate standing policies or put compliance at risk?  And is the information you’re getting reliable?  How critical are they to the underlying functionality of your business systems?  Who on your team really understands their purpose and use?

So if we have accepted the fact that these tools and services exist, and that in all likelihood their use is prevalent in our industry, what do we do about it?  To blunt their use is to shut the door on creative innovation within our teams.  And frankly it’s not that easy to stop. To lower our standards and policies and embrace their use could lead us into situations where our lack of control and enforcement results in bad things happening.

Red pill or blue pill?  Do we accept the risks, and tell ourselves that those bad things are so unlikely to happen that the benefits outweigh the risks (or – hey I might just be the equivalent of a Duracell battery but since I don’t know it I’m happy)?  Or do we drop into a harsh reality where getting things accomplished might be more difficult and frankly less visibly rewarding (or – I’ve traded steak for Tastee Wheat but at least I know what I’m really eating).  What if there were a “purple” pill available?  An alternative to the options of pure fantasy or brutal reality?

There is a purple pill, and it’s not an answer but a question.  That question is: why?  Why does my team feel they need to “jack in” in order to accomplish anything in our environment?  Why can’t they get done what they need to with the approved tools and services already at their disposal?  Why do these policies and restrictions exist in the first place, and are those reasons still legitimate?

It’s about structured enablement and inclusive decision-making.  Gather your teams and work from the inside out.  Start with what they feel needs to be accomplished to meet the organizational needs.  Understand the gaps between how they work and the policies and procedures that are in place today.  Are there areas of consolidation or elimination of steps that can be taken to improve efficiencies and render some of the shadow services useless?

As you re-architect your approaches, also look for ways to improve the working environment for your teams.  Are there tasks they are required to perform that have become so rote and uninteresting that they have fallen into the shadows?  If so, rather than re-populate your teams with these tasks, look to move them into a more tightly controlled environment.  This may be accomplished by automation or even by out-tasking to a provider (under a strictly defined and controlled contract with full auditing and reporting).  And don’t forget that these “basic” functions are the foundation of a well-oiled IT machine.

In all transparency, I have watched The Matrix a number of times, and while my attempt to tie this concept of Shadow IT Management into the movie may have fallen short, I do think it’s not whether you choose the red pill or the blue one, but it’s the fact that you have the ability to make that choice at all.  There is a difference, after all, in knowing the path and walking the path.  Fate, it seems, is not without a sense of irony.

 

Breaking Down the Management Barriers to Adopting Hybrid Cloud Technologies

By Geoff Smith, Sr. Solutions Architect

It is inarguable that change is sweeping the IT industry.  Over the last five years a number of new technologies that provide huge technological advantages (and create management headaches) have been developed.  We have attempted to leverage these advances to the benefit of our organizations, while at the same time struggling with how to incorporate them into our established IT management methodologies.  Do we need to throw out our mature management protocols in order to partake in the advantages provided by these new technologies, or can we modify our core management approaches and leverage similar advances in management methodologies to provide a more extensible platform that enables adoption of advanced computing architectures?

Cloud computing is one such advance.  One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  Technically, cloud services are beyond our direct control because we do not “own” the underlying infrastructure and have limited say in how those services are designed and deployed.  But are they beyond our ability to evaluate and influence?

There are the obvious challenges in enabling these technologies within our organizations.  Cloud services are provided by and managed by those whom we consume them from, not within our four-walled datacenter.  Users utilizing cloud services may do so outside of IT control.  And, what happens when data and service consumption crosses that void beyond our current management capabilities?

{Download this free whitepaper to learn more about GreenPages Cloud Management as a Service offering; a revolutionary way organizations can manage hybrid cloud environments}

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  We still have to manage and maintain our own datacenters as they exist today.  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage externally consumed resources as part of your overall architecture, you step outside of the physical and virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT services.  We need to expand our scope and visibility to include both the cloud services that are part of our delivery and connectivity mechanisms, and the end points used to consume our data and services.  This leads to a fundamental shift in daily operations and management.  Going forward, we need to be able to measure our service effectiveness end to end, even if in between they travel through systems not our own.

So the root question is, how do we accomplish this?  There are four distinct areas of change that we need to consider:

  • Tools – the toolsets we utilize to perform our management processes need to both understand these new technologies, and expand our end-to-end visibility and evaluation capabilities
  • Techniques – we need to modify the way we perform our daily IT functions and apply our organizational policies in order to consider the new computing platforms we will be consuming.  Our ability to validate, influence and directly control IT consumption will vary, however our underlying responsibilities to deliver effective and efficient services to our organizations should not
  • Talent – we are faced with adopting not only new technologies, but also new sets of responsibilities within our IT support organizations.  The entire lifecycle of IT is moving under the responsibility of the support organization.  We can develop the appropriate internal talent or we can extend our teams with external support organizations, but in either case the talent needed will expand in proportion to the capabilities of the platforms we are enabling
  • Transparency – the success of enabling new technologies will be gauged on how well those technologies meet business needs.  Through comprehensive analysis, reporting and auditing, IT will be able to demonstrate the value of both the technology decisions and the management structures

First and foremost, we must modify our concepts of what is critical to monitor and manage.  We need to be able to move our viewpoints from individual silos of technology to a higher level of awareness.  No longer can we isolate what is happening at the network layer from what is transpiring within our storage facilities.  The scope of what we are responsible for is expanding, and the key metrics are changing.  No longer is availability the key success factor.  Usability is how our teams will be judged.

In the past, a successful IT team may have strived for five 9s of availability.  In this new paradigm, availability is now a foundational expectation.  The ability of our delivered services to be used in a manner that enables the business to meet its objectives will become the new measuring stick.  Business units will define what the acceptable usability metrics are, basing them on how they leverage these services to complete their tasks.  IT will in fact be driven to meet these service level agreements.

Secondly, we have to enable our support teams to work effectively with these new technologies.  This is a multifaceted issue, consisting of providing the right tools, processes and talent.   Tools will need to expand our ability to view, interface and influence systems and services beyond our traditional reach.  Where possible, the tools should provide an essential level of management across all platforms regardless of where those services are delivered from (internal, SaaS, PaaS, IaaS).  Likewise, our processes for responding to, managing, and remediating events will need to change.  Tighter enforcement of service level commitments and the ability to validate them will be key.  Our staff will need to be authorized to take appropriate actions to resolve issues directly, limiting escalations and handoffs.  And we will need to provide the talent (internally or via partners) necessary to deliver on the entire IT lifecycle, including provisioning, de-provisioning and procurement.

Last, IT will be required to prove the effectiveness not only of its support teams, but also of its selection of cloud-based service providers.  The fact that we consume external services does not release us from our service delivery obligations to our organizations.  Our focus will need to shift toward demonstrating that service usability requirements have been met.  This will require transparency between our internally delivered systems and our externally consumed services.

This is a transition, not a light-switch event.  And as such, our approach to management change must mirror that pace.  Our priorities and focus will need to shift in concert with our shift from delivered services toward consumed services.

Would you like to learn more about our Cloud Management as a Service offering? Fill out this form and we will get in touch with you shortly!

Mind the Gap – Greatest Generation Users

By Geoff Smith, Senior Solutions Architect

As this is the last entry in the Mind the Gap blog series, I wanted to tie up all of the loose ends from the previous posts. In those, I’ve asked all of us in IT to break out of our comfy IT management “snuggies” and look at how our world is changing. In the past, IT has been the gatekeeper to technology for the business, mainly because we were the only people who lived it every day. That is no longer true.

In nature, each generation of a species evolves in some way, adapting to changes in its environment, habitat, or position in the food chain. The same can be said for IT users. As each new workforce generation rolls into the business world, they bring with them a greater understanding of what technology is and what it can do for them. I’ve been supporting users since the early 80s, when I got the cross-eyed, “I don’t need that thing” look as I dropped a new computer on someone’s desk and took away their Rolodex. Today, if users don’t have a new laptop every 2 years, you are “inhibiting my ability to function.”

This shift, in a relatively short timeframe, is what I call the Greatest Generation Users, or GGUs. The workforce today is filled with GGUs. They come out of high schools and colleges with more IT awareness than many of us had when we finished our degrees in computer science. It may not be true IT knowledge, but that makes it even more difficult to support them adequately. GGUs function in a completely different way than businesses typically do today, and in order to enable a business to take full advantage of the people it hires, IT is often the one saying “no, you can’t.”

Many in IT still firmly believe that if technology ideas or capabilities are not borne from IT, they must be inherently suspect. But all you have to do is look at where your innovation “cheese” has been moved to (see Mind the Gap – Consumerization of Innovation) and you will quickly realize that to keep up with the GGUs you have to shrug off the corporate technology chains and find solutions that enable the GGUs to work in the ways they want. Remember—your bosses are often GGUs as well.

Beyond these users and their knowledge and expectations lies the grey world of “usability.” Uptime alone is a thing of the past. GGUs expect little to no latency in their technology solutions, and watch out if they have to refresh a page just to get updated data. Usability equals efficiency in the mind of the business, and efficiency equals profit.

And it’s not just about mobile devices, remote access, work from home or other entitlements, it’s also about how you support these use cases, ensuring the same high standards you provide to traditional corporate users. Technology and work freedoms are rapidly becoming “perks” to hiring desirable candidates. People are now more than ever the intellectual property of most organizations, and if IT is the blocking force for enablement, you may soon be waving to the GGU who takes your spot on the roster.

Mind the Gap – Quality of Experience: Beyond the Green Light/Red Light Datacenter

By Geoff Smith, Senior Solutions Architect

If you have read my last three blogs on the changing landscape of IT management, you can probably guess by now where I’m leaning in terms of what should be a key metric in determining success:  the experience of the user.

As any industry progresses from its infancy to mainstream acceptance, the focus for success invariably transitions from being the “wizard-behind-the-curtain” towards transparency and accountability.  Think of the automobile industry.  Do you really buy a car anymore, or do you buy a driving experience?  Auto manufacturers have had to add a slew of gizmos (some of which have absolutely nothing to do with driving) and services (no-cost maintenance plans, loaners, roadside assistance) that were always the responsibility of the consumer before.

It is the same with IT today.  We can no longer just deliver a service to our consumers; we must endeavor to ensure the quality of the consumer’s experience using that service.  This pushes the boundaries for what we need to see, measure, and respond to beyond the obvious green light/red light blinking in the datacenter.  As IT professionals, we need to validate that the services we deliver are being consumed in a manner that enables the user to be productive for the business.

In other words, knowing you have 5 9s of availability for your ERP system is great, but does it really tell the whole story?  If a system is up and available, but the user experience is poor enough to affect productivity and lower the output of that user population, what is the net result?

Moving our visibility out to this level is not easy.  We have always relied upon the user to initiate the process and have responded reactively.  With the right framework, we can expand our proactive capabilities, alerting us to potential efficiency issues before the user experience degrades to the point of visibility.  In this way, we move our “cheese” from systems availability to service usability.  The business can then see a direct correlation between what we provided and the actual business value what we provided has delivered.

Some of the management concepts here are not entirely new, but the way they are leveraged may be. Synthetic transactions, round-trip analytics, and bandwidth analysis are a few of the vectors to consider.  But just as important is how we react to events in these streams, and how quickly we can return usability to its “normal state.” Auto-discovery and re-direction play key roles, and parallel-process troubleshooting tools can minimize the impact on the experience.
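As a minimal sketch of the synthetic transaction idea, here is an assumed Python probe that times a round trip against a service endpoint and flags degradation before users feel it; the URL and thresholds are placeholders, not recommendations.

    import time
    import urllib.request

    # Hypothetical endpoint and thresholds -- placeholders, not a product config.
    ENDPOINT = "https://erp.example.com/health"
    WARN_SECONDS = 2.0   # degraded experience, worth a proactive look
    FAIL_SECONDS = 5.0   # effectively unusable, even if technically "available"

    def probe(url):
        """Run one synthetic transaction; return round-trip time in seconds."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=FAIL_SECONDS) as resp:
            resp.read()  # include payload transfer in the measurement
        return time.monotonic() - start

    try:
        rtt = probe(ENDPOINT)
        if rtt > WARN_SECONDS:
            print(f"USABILITY WARNING: round trip {rtt:.2f}s exceeds {WARN_SECONDS}s")
        else:
            print(f"OK: round trip {rtt:.2f}s")
    except Exception as exc:  # timeout or connection failure
        print(f"USABILITY FAILURE: {exc}")

Run from the consumption points users actually sit at (branch offices, VPN, home), a probe like this measures usability where it is experienced, not just availability where it is hosted.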

As we move forward, we need to jettison the old concepts of inside-out monitoring and a datacenter-only focus, and move toward service-oriented metrics measured across infrastructure layers, from delivery engine to consumption point.