All posts by Geoff Smith

Mind the Gap – Service-Oriented Management

IT management used to be about specialization.  We built skills in a swim-lane approach – deep and narrow channels of talent where you could go from point A to B and back in a pretty straight line, all the while being able to see the bottom of the pool.  In essence, we operated like a well-oiled Olympic swim team.  Each team member had a specialty in their specific discipline, and once in a while we’d all get together for a good ol’ medley event.

And because this was our talent base, we developed tools that would focus their skills in those specific areas.  It looked something like this:

"Mind the Gap"

But is this the way IT is actually consumed by the business?  Consumption is by the service, not by the individual layer.  Consumption looks more like this:

"Mind the Gap"

From a user perspective, the individual layers are irrelevant.  It’s about the results of all the layers combined, or to put a common term around it, it’s about a service.  Email is a service; so is Salesforce.com.  But those two have very different implications from a management perspective.

A failure in any one of these underlying layers can dramatically affect user productivity.  For example, if a user is consuming your email service and there is a storage layer issue, they may see reduced performance.  The same “result” could be seen if there is a host, network layer, bandwidth or local client issue.  So when a user requests assistance, where do you start?

Most organizations will work from one side of the “pool” to the other using escalations between the lanes as specific layers are eliminated, starting with Help Desk services and ending up in the infrastructure team.  But is this the most efficient way to provide good service to our customers?  And what if the service was Salesforce.com and not something we fully manage internally? Is the same methodology still applicable?

Here is where we need to start looking at a service-level management approach.  Extract the individual layers and combine them into an operating unit that delivers the service in question.  The viewpoint should be from how the service is consumed, not what individually makes up that service.  Measurement, metrics, visibility and response should span the lanes in the same direction as consumption.  This will require us to alter the tools and processes we use to respond to events.
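
To make that shift concrete, here is a minimal sketch of what lane-spanning measurement might look like. Everything in it is hypothetical (the layer names, the probe results, the 500ms threshold), but it shows the idea: health is reported per service, end to end, and the suspect lane falls out as a by-product instead of being the starting point.

```python
# Hypothetical sketch of service-level health: the layers, checks and
# thresholds are illustrative, not a real monitoring API.
from dataclasses import dataclass

@dataclass
class LayerCheck:
    name: str          # e.g. "storage", "network", "host", "client"
    healthy: bool      # result of whatever probe that lane already uses
    latency_ms: float  # this layer's contribution to response time

def service_health(service: str, checks: list[LayerCheck]) -> str:
    """Report health the way the user consumes it: one service, end to end."""
    total_latency = sum(c.latency_ms for c in checks)
    failed = [c.name for c in checks if not c.healthy]
    if failed:
        return f"{service}: DEGRADED (suspect lanes: {', '.join(failed)})"
    if total_latency > 500:  # illustrative end-to-end threshold
        return f"{service}: SLOW ({total_latency:.0f} ms end to end)"
    return f"{service}: OK"

# Triage for "email is slow" starts from the service, not from a swim lane:
print(service_health("email", [
    LayerCheck("client", True, 40.0),
    LayerCheck("network", True, 120.0),
    LayerCheck("host", True, 60.0),
    LayerCheck("storage", False, 900.0),  # the storage lane is the culprit
]))
```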

Some scary thoughts here, if you consider the number of “services” our customers consume, and the implications of a hybrid cloud world.  But the alternative is even more frightening.  As platforms that we do not fully manage (IaaS, PaaS, SaaS) become more integral to our environments, the blind spots in our vision will expand.  So, the question is more of a “when” do we move in this direction rather than an “if.”  We can continue to swim our lanes, and maybe we can shave off a tenth of a second here or there.  But, true achievement will come when we can look across all the lanes and see the world from the eyes of our consumers.

 

Mind the Gap – Consumerization of Innovation

The landscape of IT innovation is changing. “Back in the day” (said in my gravelly old-man voice from my Barcalounger wearing my Netware red t-shirt) companies who were developing new technology solutions brought them to the enterprise and marketed them to the IT management stack. CIOs, CTOs and IT directors were the injection point for technology acceptance into the business. Now, that injection point has been turned into a fire hose.

Think about many of the technologies we have to consider as we develop our enterprise architectures:  tablets, smartphones, cloud computing, application stores, and file synchronization. Because our users and clients are consuming these technologies today outside of IT, we need to be aware of what they are using, how they are using it, and what bunker-buster is likely to be dropped into our lap next.

Sure, you can argue that “tablets” had been around for a number of years prior to the release of the iPad in 2010.  Apple’s own Newton MessagePad, released in 1993, is often cited as the first computing tablet. HP, IBM and others developed “tablets” going back to 2000 based on the Microsoft Tablet PC specification. These did gain some traction in certain industries (construction/architecture, medical).  However, these were primarily converted laptops with minimally innovative capabilities that failed to gain mass adoption. With the iPad, Apple demonstrated the concept of consumerization of innovation by developing the platform to the needs of the consumer market first, addressing the reasons why people would use a computing tablet instead of just pounding current corporate technology into a new shape.

Now, IT has to deal with mass iPad usage by their users and customers.

Similarly, cloud services have been used in the consumer market for over a decade. Many of the services users consume outside of the enterprise are cloud services (iTunes, Dropbox, Skype, Pandora, social networking, etc.). As consumers of these services, users gain functionality that is not always available from the enterprises they work for. They can select, download and install applications that address their specific needs (self-service, anyone?). They can share files with others around the globe. They can select the type of content they consume and how they communicate with others via streaming audio, video and news feeds. And don’t get me started on Twitter.

And this is the Gap IT needs to close.

We have tried to show our user population and our business owners the deficiencies in these technologies in terms of security, availability, service levels, management and other great IT industry “talk to the hand” terminology.  We’ve turned blue in the face and stamped our feet like a 2-year-old in the candy aisle.  But has that stopped the pressure to adopt and enable these technologies within the enterprise? Remember, our business owners are consumers too.

IT needs to give a little here to maintain a modicum of control over the consumption of these technologies. The tech companies will continue to market to the masses (wouldn’t you?) as long as that mass market continues to consume.  And we, as IT people, will continue to face that mounting pressure and have to answer the question: “Why can’t we do that?” The net result is that the pendulum of innovation is now swinging to the consumer side of the fulcrum. IT is reacting to technology instead of introducing it.

To close this Gap, we need to develop ways of saying “yes” without compromising our policies and standards, and do it efficiently. Is there a magic bullet here? No. But we have to recognize the inevitable and start moving toward the light. 

My best advice today is to be open-minded to what users are asking for. Expand your acceptance of user-initiated technology requests (many of them may be great ways to solve long-term issues). Become an enabler instead of a CI-“No.” Adjust your perspectives to allow for flexibility in your control processes, tools and metrics.  And, most important of all, become a consumer of the consumer innovations. Knowledge is power, and experience is the best teacher we have.

 

Mind the Gap – Transitioning Your IT Management Methodology

At the recent GreenPages Summit, I presented on a topic that I believe will be key to our success (for those of us in IT management) as we re-define IT in the “cloud” era.  In the past, I have tried to define the term “cloud,” describing it as anything from “an ecosystem of compute capabilities that can be delivered upon demand from anywhere to anywhere” to “IT in 3D.”  In truth, its definition is not really that important, but how we enable the appropriate use of it in our architectures is.

One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  In theory, cloud services are beyond our direct control.  But are they beyond our ability to evaluate and influence?

IT is about enablement.  Enabling our customers or end users to complete the tasks that drive our businesses forward is our true calling.  Enabling the business to gain intelligence from its data is our craft.  So, we must strive to enable, where appropriate and effective, the use of cloud services as part of our mission.  What, then, is the impact to IT management?

There are the obvious challenges.  Cloud services are provided by, and managed by, those from whom we consume them.  Users utilizing cloud services may do so outside of IT control.  And what happens when data and services step into that void where we cannot see?

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  Sure, we still have to manage and maintain our own datacenters (unless you go 100% service provider).  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage external resources as part of your overall architecture, you step outside of the hardened physical/virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT resources.  We need to expand our scope and visibility to include both the cloud services that are part of our delivery and connectivity mechanisms, and the end points used to consume our data and services.  This leads to a fundamental shift in daily operations and management.  Going forward, we need to be able to measure our service effectiveness end to end, even if, in between, our data and services travel through systems that are not our own to devices we did not provision.
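
One practical way to begin, sketched below as a minimal assumption-laden example: time the service from the points where it is actually consumed, regardless of whose systems sit in between. The URL and timeout are placeholders, not a real endpoint.

```python
# Hypothetical end-to-end probe: measure the service the way a consumer
# sees it, even when the systems in between are not our own.
import time
import urllib.request

def probe(url: str, timeout: float = 10.0) -> float:
    """Seconds to fetch the resource from this consumption point."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.monotonic() - start

# Run the same probe from each consumption point (office, branch, home)
# and trend the results; the placeholder URL stands in for a real service.
print(f"{probe('https://example.com'):.2f}s end to end")
```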

This is a transition, not a light-switch event.  Over the next few blogs I hope to focus some attention on several of the gaps that will exist as we move forward.  As a sneak peek, consider these topics:

Consumerization of technical innovation

Service-oriented management focus

Quality of Experience

“Greatest Generation” of users

Come on by and bring your imagination.  There is not one right or wrong answer here, but a framework for us to discuss what changes are coming like a speeding train, and how we need to mind the gap if we don’t want to be run over.

Fun with Neologism in the Cloud Era

Having spent the last several blog posts on more serious considerations about cloud computing and the new IT era, I decided to lighten things up a bit.  The term “cloud” has bothered me from the first time I heard it uttered, as the concept and definition are as nebulous as, well, a cloud.  In the intervening years, when thoroughly boring my wife and friends with shop talk about the “cloud,” I came to realize that in order for cloud computing to become mainstream, “it” needs to have some way to translate to the masses.

Neologism is the process of creating new words, often from existing words or combinations of them, to form a more descriptive term.  In our industry neologisms have been used extensively, although many of us do not realize how these terms got coined.  For example, the word “blog” is a combination of “web” and “log.”  It began with a new form of communicating across the Internet, known as a web log.  “Web log” became “we blog” simply by moving the space one position to the left, and the shortened “blog” took hold as the lexicon was adopted.  Now, regardless of who you talk to, the term “blog” is a pretty much fully formed concept.  Similarly, the term “Internet” is a combination of “inter” (between) and “network,” hence meaning between networks.

Today, the term “cloud” has become so overused that confusion reigns (get it?) over everyone.  So, in the interest of clearing the air, here are a few neologisms of my own:

Cloudable – something that is conducive to leveraging cloud.  As in: “My CRM application is cloudable” or “We want to leverage data protection that includes cloudable capabilities.”

Cloudiac – someone who is a huge proponent of cloud services.  A combination of “cloud” and “maniac,” as in: “There were cloudiacs everywhere at Interop.”  In the not-too-distant future, we very well may see parallels to the “Trekkie” phenomenon.  Imagine a bunch of middle-aged IT professionals running around in costumes made of giant cotton balls and cardboard lightning bolts.

Cloudologist – an expert in cloud solutions.  Different from a Cloudiac, the Cloudologist actually has experience in developing and utilizing cloud-based services.  This will lead to master’s degree programs in Cloudology.

Cloutonomous – maintaining your autonomy over your systems and data in the cloud.  “I may be in the cloud, but I make sure I’m cloutonomous.”  Could refer to a consumer of cloud services not being tied into long-term service commitments that inhibit their ability to move services when a vendor fails to hit SLAs.

Cloud crawl – actions related to monitoring or reviewing your various cloud services.  “I went cloud crawling today and everything was sweet.”  A take-off on the common “pub crawl,” just not as fun and with no lingering after-effects.

Counter-cloud – a reference to the concept of “counterculture,” which dates back to the hippie days of the ’60s and ’70s.  In this application, it would describe a person or business that is against utilizing cloud services mainly because it is the new trend, or because they feel it’s the latest government conspiracy to control the world.

Global Clouding – IT’s version of Global Warming, except in this case the world isn’t becoming uninhabitable, IT is just becoming a bit fuzzy around the edges.  What will IT be like with the advent of Global Clouding?

Clackers – a combination of “cloud” and “hacker.”  Clackers are those nefarious, shadowy figures who focus on disruption of cloud services.  This “new” form of hacker will concentrate on capturing data in transit, traffic disruption/re-direction (DNS Changer, anyone?), and platform incursion.

Because IT is so lexicon-heavy, building up a stable of cloud-based terminology is inevitable, and potentially beneficial in sharpening how we talk about it.  Besides, as Cloudiacs will be fond of saying… “resistance is futile.”

Do you have any Neologisms of your own? I’d love to hear some!

The Likelihood Theorem

When deciding where and how to spend your IT dollars, one question that comes up consistently is how far down the path of redundancy and resiliency you should build your solution, and where it crosses the threshold from a necessity to a nice-to-have-because-it’s-cool.  Defining your relative position on this path has impacts in all areas of IT, including technology selection, implementation design, policies and procedures definition, and management requirements.  Therefore, I’ve developed the Likelihood (LH) Theorem to assist with identifying where that position is relative to your specific situation.  The LH is not a financial criterion, nor is it directly an ROI metric.  However, it can be used to assist with determining the impact of making certain decisions in the design process.

Prior to establishing the components that make up your LH ratio, consider that at the start, with a completely blank slate, we all have the same LH.  True, you could argue that someone establishing a system in Kansas doesn’t have to worry about a tsunami, but they do have to consider tornadoes.  Besides, the preparation for such a level of regional, long-term impact would be very similar regardless of the root cause.

The Likelihood Theorem starts with the concept of an Event (E).  Each E has its own unique LH.  So initially:

LH = E

Next, apply any minimum standards that you define for systems included in your environment.  Call this the Foundation Factor (FF).  If you define an FF, then you can reduce LH by some factor, eliminating certain events from consideration.  For example, your FF for server hardware may be redundant power supplies, NICs, and RAID.  When it comes to network connectivity, it may be redundant paths.  If using SaaS for business-critical functions, it may be ISP redundancy via multi-homing and link load balancing.  Therefore:

LH = E - FF

Any of us who have been in this industry (or been a consumer of IT) for more than 5 minutes knows that even with a baseline established, things happen.  This is known as the Wild Card Effect (WCE).  One key note here is that all WCEs are in some form potentially controllable by the business.  For hardware, this may be the difference between purchasing from Tier 1 and Tier 2 vendors (e.g., lower component quality or shorter mean time to failure).  Another WCE may be the available budget for the solution.  There may be multiple WCEs in any scenario, and all WCEs add back to the LH ratio:

WCE1 + WCE2 + WCE3 + … = WCEn

And so:

LH = E - FF + WCEn

At this point, we have accounted for the event in question, reduced our risk profile by our minimum standards, and adjusted for wild cards that are beyond our minimum standards but that we could address should we have the authority to make certain decisions.  Now, we need to begin considering the impacts associated with the event in question.  Is the event we are considering singular in nature, or is it potentially repetitive?  The LH related to a regional disaster would be singular; however, if we are considering telecommunication outages, then repetitive is more reasonable.  So, we need to take the equation and multiply it by the potential frequency (FQ):

LH = (E - FF + WCEn) * FQ

The last factor in LH is determining the length of time that the event in question could impact the environment.  This may come into play if the system in question is transitory, an interim step to a new solution, or has an expected limited lifecycle.  The length of time that the event is possible can impact our thoughts around how much we invest in preventing it:

LH = ((E - FF + WCEn) * FQ) / Time
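
For those who want the whole theorem strung together, here is a small illustrative sketch.  The inputs should be read as relative weights rather than hard measurements (a point made just below), and the example values are entirely made up.

```python
# Illustrative only: inputs are relative weights, not precise measurements.

def likelihood(event: float, foundation: float, wild_cards: list[float],
               frequency: float, time_horizon: float) -> float:
    """LH = ((E - FF + WCEn) * FQ) / Time, per the derivation above."""
    wce_n = sum(wild_cards)  # WCE1 + WCE2 + ... = WCEn
    return ((event - foundation + wce_n) * frequency) / time_horizon

# Example: an event mostly covered by minimum standards (E nearly equals FF),
# with a budget squeeze and a Tier 2 vendor adding wild cards back in.
lh = likelihood(event=10.0, foundation=9.0,
                wild_cards=[1.5, 0.5],   # budget cut, weaker hardware
                frequency=4.0,           # could recur quarterly
                time_horizon=3.0)        # three-year lifecycle
print(f"Relative likelihood score: {lh:.1f}")
```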

So, in thinking about how to approach your design, consider these factors:  What event are you trying to avoid?  Do your minimum specifications eliminate the possibility of the event occurring (E = FF)?  What if you had to reduce your specifications to meet a lower budget (WCE1), or use a solution with an inherently higher ratio of failures or lackluster support (WCE2 and WCE3)?  Can you reduce those wild cards if the Event is not fully covered by your minimum standards (lower total WCEn)?  Will the event be a one-time thing, or could it happen repeatedly over the lifecycle of the solution?

I’m not suggesting that you can associate specific numerical values with these factors, but in terms of elevating or reducing the likelihood of an event happening, these criteria are key indicators.  Using this formula is a way to ensure that, working within the known constraints placed on us by the business, we have maximized our ability to avoid specific events and reduced the likelihood of those we can realistically address.

The Taxonomy of IT Part 5 – Genus and Species

As the last (do I hear applause?) installment in this five-part series on the Taxonomy of IT, we have a bit of cleanup to do.  There are two remaining “levels” of classification (Genus and Species), and a need to wrap the whole extravaganza up in some meaningful summary.

Genus classifications allow us to subdivide the Family of IT into subcultures based on their commonality to each other, while the Species definition enables us to highlight sometimes subtle differences, such as color, range or specific habits.   Therefore, in order to round out our Taxonomy, Genus will refer to how IT is physically architected, while Species will expose what that architecture may hide.

The physical architecture of an IT environment used to fall into only a couple of categories.  Most organizations built their platforms to address immediate needs, distributing systems based on the location of their primary users.  An inventory database would be housed at the warehouse facility, while financials would sit on systems at corporate.  This required the building and maintenance of complex links, both at the physical transport layer and also at the data level.  Because of the limits of access technology, people traveled to where the “data” was kept.

Twenty years ago, we began the transition to moving the data to where the users could consume it, and a new Genus evolved around that model.  It’s vastly more efficient to ship a 5MB spreadsheet halfway across the country than it is to ship a 170lb accountant.  In this architecture, the enablers were increases in available bandwidth, more efficient protocols, and better end-node processing power.

As we move forward in time, we are continuing to push the efficiency envelope.  Now, we don’t even have to move that spreadsheet, we just have to move an image of that spreadsheet.  And we don’t care where it needs to go, or even what route it takes to get there.  We are all about lowest cost routing of information from storage to consumption and back.

So, Genus is a way for us to gauge how far down that arc of advancement our customers have traveled.  Think in terms of a timeline of alignment with industry trends and capabilities.

Species, on the other hand, can be used to uncover the “gaps” between where in the timeline the customer is and what they have missed (intentionally or not) in terms of best practices.  Did they advance their security in line with their technology?  Have they established usage policies?  Can their storage sustain the transition?  What have they sacrificed to get where they are today, and what lies beneath the surface?

Using Genus and Species classifications, we can round out the taxonomy of any particular IT environment.  The combination of factors from each of the seven layers completes a picture that will allow us to guide our customers through the turbulent waters of today’s IT world.

To recap the seven layers:

Kingdom: How IT is viewed by the business

Phylum: IT’s general operating philosophy

Class: How IT is managed on a daily basis

Order: How IT is consumed, and why

Family: The structure of data flow within IT systems

Genus: How IT is physically architected

Species: What that architecture may hide

It would be quite the undertaking to build out individual groupings in each of these categories.  That is not what is important (although I did enjoy creating the pseudo-Latin neologisms in earlier parts of the series).  What is key is that we consider all of these categories when creating an overall approach for our customers.  It’s not all about the physical architecture, nor all about management.  It’s about how the collection of characteristics that flow from the top level all the way down to the bottom converge into a single picture.

In my opinion, it is a fatal mistake to apply technology and solutions indiscriminately across any of these levels.  Assuming that because a customer fits into a specific category they “have” to leverage a specific technology or solution is to blind yourself (and ultimately your customer) to what may be more appropriate to their specific situation.

Each environment is as unique as our own strands of DNA, and as such even those that share a common Species will branch onto different future paths.  Perhaps there should be an eighth level, one that trumps all above it.  It could be called “Individuality.”

The Taxonomy of IT – Part 4: Order and Family

The Order level of IT classification builds upon the previous Kingdom, Phylum and Class levels. In biology, Order is used to further group like organisms by traits that define their nature or character. In the Mammalia Class, Orders include Primates, Carnivora, Insectivora, and Cetacea. Carnivora is pretty self-explanatory and includes a wide range of animal species. However, Cetacea is restricted to whales, dolphins and porpoises and indicates more of an evolutionary development path that is consistent between them.

In IT, the concept of what we consume and how we got to that consumption model correlates to the concept of Order. So, Order focuses on how IT is consumed and why it’s consumed that way.

Business needs drive IT models, and as business needs change so does the way we leverage IT. An organization may have started out with a traditional on-premise solution that met all needs, and over time has morphed into a hybrid solution of internal and external resources. Likewise, the way users consume IT changes over time. This may be due to underlying business change, or possibly due to “generational” changes in the workforce. In either case, where IT is today does not always reflect its true nature.

Using consumption as a metric, we can group IT environments to bring to light how they have evolved, and expose their future needs. Some examples of different Orders might be:

Contra-Private – IT is mostly a private resource and is not specifically consumption-driven. The IT organization uses its own internalized set of standards to set the technical direction of its platforms. Shunning industry standards and trends, it often takes a less-is-more approach to the tools and services it provides to the business. Ironically, its platforms tend to be oversized and underutilized.

Mandatorily-Mixed – Here IT leverages a mix of internal, external, hard-built and truly consumed resources because the business demands it. IT may have less power to make foundational decisions or affect policy, but it typically will be better funded and encouraged to work with outside groups. Often the internal/external moat is drawn around the LOB application stack, and these stacks tend to be overly scaled.

Scale-Sourced – In this Order, IT is incented to make efficiency and flexibility its guiding principles for decision-making. The business allows IT to determine use of and integration with outside services and solutions, and relies on IT to make intelligent decisions. This Order is also user-driven, with the ability to adopt new services and policies that drive user effectiveness.

The Family classification is the first real grouping of organisms where their external appearance is the primary factor. Oddly, what is probably the most visually apparent comes this deep in the classification model. Similarly within IT, we can now start grouping environments by their IT “appearance,” or more fundamentally, their core framework.

If you dissect a honey badger, it would probably be evident that it’s very much like other animals in the weasel family. Its overall shape and proportions are similar to other weasels, from the smallest least weasel to the largest wolverine. So size is not the factor here; what is more important is the structure, and what type of lifestyle that structure has evolved to support. Therefore, in IT, Family refers to the core structure of data flow within IT systems.

Here are some examples:

Linear – IT is built along a pathway that conforms to a linear workflow. Systems are built to address specific point functions such as marketing, financials, manufacturing, etc. Each system has a definitive start and stop point, with end-to-end integration only. Input/output is translated between them, often by duplicated entry, scripted processes, or third-party translation. One function cannot begin until another has completed, thus creating a chain of potential break-points and inefficiencies.

Parallel – Workstreams can be completed concurrently, with some form of data-mashing at the end of each function. While this structure allows users to work without waiting on others to complete their functions, it does require additional effort to combine the streams at the end (both forms are sketched in code after this list).

Linked – Here, systems are linked at key intersections of workflow. Data crosses these intersections in a controlled and orderly fashion. Often, the data conversions are transparent or at least simplified. The efficiency level is increased, as dynamic information can be utilized by more than one person; however, this approach is often fraught with underlying dangers and support challenges.

Mobius – If you know the form of a Möbius strip, you get the idea here. In this form, it doesn’t matter what side of the workflow you are on; everything flows without interruption or collision. If this is delivered by more than one integrated system, then the integration is well tested and supported by all parties involved. More likely, this form is enabled by a singular system that receives, correlates, and forwards the data along its merry way.
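
As promised above, here is a tiny hypothetical sketch contrasting the Linear and Parallel Families. The “systems” are stand-in functions, but the shape of the data flow is the point.

```python
# Hypothetical stand-ins for point systems in a workflow.
from concurrent.futures import ThreadPoolExecutor

def marketing(data):  return data + ["campaign plan"]
def financials(data): return data + ["ledger entry"]

# Linear Family: one function cannot begin until another has completed,
# so break-points chain end to end.
linear_result = financials(marketing(["order"]))

# Parallel Family: workstreams run concurrently, then are "data-mashed"
# into one result at the end.
with ThreadPoolExecutor() as pool:
    streams = [pool.submit(fn, ["order"]) for fn in (marketing, financials)]
    mashed = [item for s in streams for item in s.result()]

print(linear_result, mashed, sep="\n")
```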

Both the Order and Family are where we start to see the benefits of a Cloud IT architecture. Built to specification, consumed in a flexible, on-demand way, and enabling the true flow of information across all required systems may sound like nirvana. But, consider that our limiting factor in achieving this goal is not technology per se, but our ability to visualize and accept it.