All posts by Trevor Williamson

Evolving to a Broker of Technology Services: Planning the Solution

By Trevor Williamson, Director, Solutions Architecture

A 3-Part Series:

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again

Part 2: Planning the Solution

As I wrote before, and continuing with part 2 of this 3-part series, let’s talk about how to plan the solution for automating IT services and service management within your organization so that you can develop, deliver, and support services in a more disciplined way—which means that your customers will trust you. Of course this doesn’t mean they won’t pursue outsourced, cloud, or other third-party services—but they will rely on you to get the most out of those services. And once you do go through this process, some of the major benefits of implementing an automated service management infrastructure are:

  • Improved staff productivity that allows your business to become more competitive. Your time is too valuable to be spent fighting fires and performing repetitive tasks. If you prevent the fires and automate the repetitive tasks, you can focus on new projects and innovation instead. When you apply automation tools to good processes, productivity skyrockets to a level unachievable by manual methods.
  • Heightened quality of service that improves business uptime and customer experience. Consistent execution according to a well-defined change management process, for example, can dramatically reduce errors, which in turn improves uptime and customer experience; in today’s age of continuous operations and unrelenting customer demand, downtime can quickly erode your competitive edge. Sloppy change management can cause business downtime that prevents customers from buying online or reduces the productivity of your workforce.
  • Reduced operational costs to reinvest in new and innovative initiatives. It’s been said that keeping the lights on—the cost of maintaining ongoing operations, systems, and equipment—eats up roughly 80% of the overall IT budget, leaving little for new or innovative projects. With more standardized and automated processes, you can improve productivity and reduce operational costs, giving you the freedom to focus on more strategic initiatives.
  • Improved reputation with the business. Most self-aware IT organizations acknowledge that their reputation with business stakeholders isn’t always sterling. This is a critical problem, but you can’t fix it overnight—changing an organization’s culture, institutionalized behaviors, and stereotypes takes time and energy. If you can continue to drive higher productivity and quality through automated service management, your business stakeholders will take notice.

A very important aspect of planning this new infrastructure is to look toward—in fact, assume—a range of control that will necessarily span both internal and external resources: you will be stretching into public cloud spaces (and you won’t always know you’re there until after the fact), and you will be managing them, or at least monitoring them, with the same level of granularity that you apply to your traditional resources.

This includes integrating the native functionality of those off-premises services—reserving virtual machines and groups of machines, extending reservations, cloning aggregate applications, provisioning storage, and so on—and connecting them to an end-to-end value chain of IT services that can be assessed, monitored, and followed from where the data resides to where the end user consumes it.
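To make that idea concrete, here is a minimal Python sketch of a uniform “broker” interface over internal and external providers, assuming a hypothetical Provider abstraction; the class and method names are illustrative stand-ins, not any vendor’s actual SDK.

```python
# Minimal sketch: a uniform "broker" interface over internal and external
# providers. The provider classes and method names are hypothetical; in
# practice each would wrap a real vendor SDK or REST API.
from abc import ABC, abstractmethod
from datetime import datetime, timedelta


class Provider(ABC):
    """Common operations the broker expects from any resource provider."""

    @abstractmethod
    def reserve_vm(self, name: str, cpus: int, memory_gb: int) -> str: ...

    @abstractmethod
    def extend_reservation(self, reservation_id: str, days: int) -> datetime: ...


class InternalVirtualization(Provider):
    """Stand-in for an on-premises virtualization platform."""

    def reserve_vm(self, name, cpus, memory_gb):
        return f"onprem-{name}"  # would call the local hypervisor API here

    def extend_reservation(self, reservation_id, days):
        return datetime.now() + timedelta(days=days)


class PublicCloud(Provider):
    """Stand-in for a public cloud provider's API."""

    def reserve_vm(self, name, cpus, memory_gb):
        return f"cloud-{name}"  # would call the provider's REST API here

    def extend_reservation(self, reservation_id, days):
        return datetime.now() + timedelta(days=days)


def provision(provider: Provider, name: str) -> str:
    """The same request works no matter where the resource actually lives."""
    reservation = provider.reserve_vm(name, cpus=2, memory_gb=8)
    provider.extend_reservation(reservation, days=30)
    return reservation


if __name__ == "__main__":
    for p in (InternalVirtualization(), PublicCloud()):
        print(provision(p, "web01"))
```

The point of the sketch is that the same provision() request works regardless of where the resource actually lives, which is what lets you manage (or at least monitor) external resources with the same granularity as internal ones.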

It is through this holistic process—rationalized, deconstructed, optimized, reconstituted, and ultimately automated—that the system as a whole can become a fully automated IT service management infrastructure, but believe me when I say that this is not, nor will it ever be, an easy task. When you plan how to automate your service management infrastructure, you need a comprehensive approach that follows a logical and tightly controlled progression. Whatever name you give the methodology (and there are many out there), it needs to be concise, comprehensive, capable, and, above all else, controlled:

1. Identify the trends, justify the business case, and assess your maturity. Before investing in an automated service management infrastructure, you have to assess the opportunity, build the business case, and understand the current state. This phase will answer the following questions:

  • Why is automated service management important to my business?
  • What are the business and IT benefits?
  • How prepared is my organization to tackle this initiative?

2.  Develop your strategic plan, staffing plan, and technology roadmaps. You translate what you learn from the prior phase into specific automated service management strategies. The goal of this phase is to help you answer these key questions:

  • Do I have the right long-term strategic vision for automated service management?
  • What are my stakeholders’ expectations, and how can I deliver on them?
  • What technologies should I invest in, and how should I prioritize them?

3. Invest in your skills and staff, policies and procedures, and technologies and services. This phase is designed to execute on your automated service management strategies and will answer the following people, process, and technology questions:

  • What specific skills and staff will I need, and when?
  • What policies and procedures do I need to develop and enforce?
  • Should I build and manage my own technology capabilities or use external service providers?
  • What specific vendors and service providers should I consider?

4.  Manage your performance, develop metrics, and communicate and train. Finally, to help you refine and improve your automated service management infrastructure, the goal in this phase is to help you answer these key questions:

  • How should I adjust my automated service management plans and budgets?
  • What metrics should I use to track my success? (see the sketch after this list)
  • How should I communicate and train stakeholders on new automated service management policies and technologies?
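As one illustration of the metrics question above, here is a small, hypothetical Python sketch of how a handful of service management metrics might be defined as data so they can be tracked and reported consistently; the metric names and target values are invented examples, not recommendations.

```python
# Illustrative only: hypothetical service management metrics expressed as
# data so they can be tracked and reported consistently across phases.
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    unit: str
    target: float
    current: float
    higher_is_better: bool = True

    @property
    def on_target(self) -> bool:
        if self.higher_is_better:
            return self.current >= self.target
        return self.current <= self.target


metrics = [
    Metric("Change success rate", "%", target=98.0, current=95.5),
    Metric("Mean time to provision a VM", "min", target=30.0, current=25.0,
           higher_is_better=False),
    Metric("Requests fulfilled via automation", "%", target=80.0, current=62.0),
]

for m in metrics:
    status = "on target" if m.on_target else "needs attention"
    print(f"{m.name}: {m.current}{m.unit} (target {m.target}{m.unit}) -> {status}")
```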

These phases and their associated questions are just a taste of what is required when you are thinking of moving toward an automated service management infrastructure—and of course GreenPages is here to help, especially when you are in the planning stages. The process is not painless, and it is certainly not easy, but the end result—the journey, in fact—is well worth the time, effort, and investment.

Next…Part 3: Executing the Solution, again and again…

If you’re looking for more information, we will be holding free events in Boston, NYC, and Atlanta to discuss cloud computing, virtualization, VDI, clustered datacenters, and more. We’ll have a bunch of breakout sessions, and it will also be a great opportunity to network with peers.

 

The Evolution from a Provider of Technology Components to a Broker of Technology Services

A 3 Part Series from Trevor Williamson

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again…

Part 1: Understanding the Dilemma

IT teams are increasingly being challenged as bring-your-own-device (BYOD) policies and “as-a-service” software and infrastructure multiply in mainstream organizations. In this new reality, developers still need compute, network, and storage to keep up with growth…and workers still need some sort of PC or mobile device to get their jobs done…but they don’t necessarily need corporate IT to provide it. They can turn to a shadow IT organization using Amazon, Rackspace, and Savvis, or use SaaS applications or an unmanaged desktop, because when all is said and done, if you can’t deliver on what your users and developers care about, they will use whatever and whomever they need to get their jobs done better, faster, and cheaper.

Much of this shift toward outside services comes down to customer experience, or how your customers—your users—perceive their every interaction with IT, from the staff at the help desk to the corporate applications they access every day. If what you are delivering (or not delivering, as the case may be) is more burdensome, more complicated, or slower to react than other service providers (like Amazon, Office 365, or Salesforce), then they will turn, in droves, toward those providers.

Now the question hanging heavy in the air is: what do those providers have, except of course scale, that your IT organization doesn’t? What is the special sauce that lets them deliver those high-value services faster and at a lower cost than you can?

In a few words: IT Service Management (ITSM)…but wait! I know your first reaction might be that ITSM has become a sour subject and that if you hear ITIL chanted one more time you’re going to flip out. The type of ITSM I’m talking about is really the next generation and has only passing similarities to the service management initiatives of the past. While it is agreed that ITSM has the potential to deliver the experiences and outcomes your developers and users need and want, today’s ITSM falls far short of that idea. “Process for process’s sake,” you’ve probably heard…but whatever; we are still measuring success based on internal IT efficiencies, not customer or financial value, or even customer satisfaction. We still associate ITSM exclusively with ITIL best practices, and we continue to label ourselves as providers of technology components.

As it turns out, the adage “You cannot fix today’s problems with yesterday’s solutions” is as right as it ever was. We need to turn ITSM on its head and create a new way forward based on customer centricity, services focus, and automated operations. We have to rethink the role we play and how we engage with the business. Among the most significant transformations IT needs to complete is the move from provider of technology components to broker of technology services. We have relied on ITSM to drive this transformation, but ITSM needs to change in order to be truly effective in the future. Here’s why:

  • The roots of service management focused on the customer: “Service management” originated within product marketing and management departments, and from the beginning it placed the customer at the center of all decision making within the service provider organization. It is the foundation for transforming product-oriented organizations into service providers, where the customer experience and interaction are designed and managed to cost-effectively deliver customer results and satisfaction.

  • But when we applied service management to IT, we lost customer focus: Applying service management to information technology produced the well-known discipline of ITSM but, unfortunately, IT professionals associated it exclusively with the IT Infrastructure Library (ITIL) best practices, which focus on processes for managing IT infrastructure to enable and support services. What’s missing is the customer perspective.

  • In the age of the customer, we need to proactively manage services via automation: In the age of the customer, technology-led disruption (virtualization, automation, orchestration, operating at scale, etc.) erodes traditional competitive barriers, making it easier than ever for empowered employees and app developers to take advantage of new devices and cloud-based software. To truly function as a service provider, IT needs to first and foremost consider the customer and the customer’s desired outcome in order to serve them faster, cheaper, and at a higher quality. In today’s world, this can only be accomplished via automation.

When customers don’t trust a provider to deliver quality products or services, they seek alternatives. That’s a pretty simple concept that everyone can understand, but what if the customer is a user of the IT services that you provide? Where do they go if they don’t like the quality of your products or services? Yep: Amazon, Rackspace, Terremark, and any other service provider who offers a solution that you can’t…or can’t in the required time or for the required price.

The reason these service providers can do these seemingly amazing things and offer such diverse and, at times, sophisticated services is that they have (mostly) eliminated the issues associated with humans doing “stuff” by automating commodity IT activities and then orchestrating those automated activities into aggregate IT services. They have evolved from being providers of technology components to brokers of technology services.

If you’re looking for more information on BYOD, register for our upcoming webinar, “BYOD: Don’t Fight It, Mitigate the Risk with Mobile Management.”

Next…Part 2: Planning the Solution

 

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring. I was a huge fan of “Car Toons,” which was sort of the Mad Magazine of cars, and I basically lusted after hot rods, dragsters, and sports cars. I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks. I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer. In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea whatever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera, which was designed and built in Italy, powered by a Ford engine, and eventually imported to the US to be sold in Lincoln/Mercury dealerships. The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it had just come off the showroom floor…haa ha, or so my feverish, car-obsessed, pre-teen brain thought it sparkled. It was sleek, low to the ground, and looked as if it were going 100 miles an hour just sitting there. It was a supercar before the word was coined, and I was deeply, madly, and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot: they are looking at shiny new ways of doing things—being faster, highly flexible, elastic, personal, serviceable, more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera. Even if I could have afforded to buy it, I had no knowledge or experience that would have enabled me to effectively (or legally) drive it. That is the difference between being Functional and Capable.

The Pantera was certainly capable, but *in relation to me* it was not anywhere near being functional. The essence and nature of the car never changed, but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned. In the same way, a cloud infrastructure—fully built out with well-architected components, tested and running—would be non-functional to customers who do not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way a cloud infrastructure should be operated is based on the idea of delivering IT services, not on the traditional idea of servers, storage, and networks being individually built, configured, and connected by people doing physical stuff. Cloud infrastructures are automated and orchestrated to deliver specific functionality aggregated into specific services, quickly and efficiently, without the need for people doing “stuff.” In fact, people doing stuff is too slow and just gets in the way, and if you don’t change how you operate the system to reflect that, you end up with a very capable yet non-functional system.

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed, or it will be very much the same sort of difference as between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera. It’s just dreaming until the required knowledge and experience are obtained…none of which is easy or quick…but tell that to a 12-year-old lost in his imagination, staring at sparkling red freedom and adventure…

Optimize Your Infrastructure: From Hand-Built to Mass Production

If you’ve been reading this blog, you know that I write a lot about cloud and cloud technologies, specifically around optimizing IT infrastructures and transitioning them from traditional management methodologies and ideals toward dynamic, cloud-based methodologies. Recently, in conversations with customers as well as my colleagues and peers within the industry, it has become increasingly clear that the public, at least the subset I deal with, is simply fed up with the massive amount of hype surrounding cloud. Everyone is using it as a selling point and has attached so many different meanings to it that it has become meaningless…white noise that just hums in the background and adds no value to the conversation. In order to cut through that background noise, I’m going to cast the conversation in a way that is a lot less buzzy and a little more specific to what people know and are familiar with. Let’s talk about cars (haa ha, again)…and how Henry Ford revolutionized the automobile industry.

First, let’s be clear that Henry Ford did not invent the automobile; he invented a way to make automobiles affordable to the common man or, as he put it, the “great multitude.” After the Model A, he realized he’d need a more efficient way to mass produce cars in order to lower the price while keeping them at the same level of quality they were known for. He looked at other industries and found four principles that would further his goal: interchangeable parts, continuous flow, division of labor, and reducing wasted effort. Ford put these principles into play gradually over five years, fine-tuning and testing as he went along. In 1913, they came together in the first moving assembly line ever used for large-scale manufacturing. Ford produced cars at a record-breaking rate…and each one that rolled off the production line was virtually identical to the one before and after it.

Now let’s see how the same principles of mass production can revolutionize the IT infrastructure as they did the automobile industry…and let’s also be clear that I am not calling this cloud, or dynamic datacenter, or whatever the buzz-du-jour is. I am simply calling it an Optimized Infrastructure, because that is what it is: an IT infrastructure that produces the highest-quality IT products and services in the most efficient manner and at the lowest cost.

Interchangeable Parts

Henry Ford discovered significant efficiency by using interchangeable parts, which meant making the individual pieces of the car the same every time. That way any valve would fit any engine, and any steering wheel would fit any chassis. The efficiencies to be gained had been proven in the assembly of standardized photography equipment pioneered by George Eastman in 1892. Achieving this meant improving the machinery and cutting tools used to make the parts, but once the machines were adjusted, a low-skilled laborer could operate them, replacing the skilled craftsperson who formerly made the parts by hand.

In a traditional “hand-built” IT infrastructure, skilled engineers are basically building servers—physical and virtual—and other IT assets from scratch, and are typically reusing very little with each build. They may have a “golden image” for the OS, but they then build multiple images based on the purpose of the server, its language, or the geographic location of the division or department it is meant to serve. They might layer on different software stacks with particularly configured applications, or install each application one after another. These assets are then configured by hand using run books, build lists, etc., and then tested by hand, which means it takes time and skilled effort, and there are still unacceptable numbers of errors, failures, and expensive rework.

By significantly updating and improving the tools used (i.e., virtualization, configuration and change management, software distribution, etc.), the final state of IT assets can be standardized, the way they are built can be standardized, and the processes used to build them can be standardized, such that building any asset becomes a clear and repeatable process of connecting different parts together. These interchangeable parts can be used over and over again to produce virtually identical copies of the assets at much lower cost.
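As a rough illustration of interchangeable parts in an IT context, here is a hypothetical Python sketch in which standardized build definitions are composed into different servers from the same reusable pieces; the golden image, part names, and compose() helper are invented for the example.

```python
# Sketch of "interchangeable parts" for server builds: standard, reusable
# definitions that are composed rather than hand-crafted each time.
GOLDEN_IMAGE = {"os": "linux-base", "version": "1.0"}

PARTS = {
    "hardening": ["disable unused services", "apply security baseline"],
    "monitoring_agent": ["install agent", "register with monitoring server"],
    "web_stack": ["install web server", "deploy standard vhost config"],
    "db_stack": ["install database engine", "apply standard tuning profile"],
}


def compose(name: str, *part_names: str) -> dict:
    """Build a server definition by combining the golden image with parts."""
    build = {"name": name, "base": dict(GOLDEN_IMAGE), "steps": []}
    for part in part_names:
        build["steps"].extend(PARTS[part])
    return build


# Two different servers assembled from the same standardized parts.
web = compose("web01", "hardening", "monitoring_agent", "web_stack")
db = compose("db01", "hardening", "monitoring_agent", "db_stack")
print(web["steps"])
print(db["steps"])
```

The design point is that every build draws from the same small catalog of parts, so the output is predictable and repeatable rather than a one-off craft project.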

Division of Labor

Once Ford standardized his parts and tools, he needed to divide up how things were done in order to be more efficient. He needed to figure out which process should be done first so he divided the labor by breaking the assembly of the Model T into 84 distinct steps. Each worker was trained to do just one of these steps but always in the exact same order.

The Optimized Infrastructure relies on the same principle of dividing up the effort (of defining, creating, managing, and ultimately retiring each IT asset) so that only the most relevant technology, tool, or, sometimes, yes, human does the work. As can be seen in later sections, these “tools” (people, process, or technology components) are then aligned in the most efficient manner, which dramatically lowers the cost of running the system and guarantees that each specific work effort can be optimized individually, irrespective of the system as a whole.
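Here is a minimal, hypothetical sketch of that division of labor: each step in the build is owned by exactly one tool and always runs in the same order. The step and tool names are illustrative stand-ins, not specific products.

```python
# Sketch of division of labor: each step is owned by exactly one tool (or
# team) and is always executed in the same order.
ASSEMBLY_LINE = [
    ("allocate capacity", "capacity_manager"),
    ("create virtual machine", "hypervisor"),
    ("apply configuration baseline", "config_management"),
    ("deploy application stack", "software_distribution"),
    ("register in monitoring", "monitoring_system"),
    ("hand over to requester", "service_catalog"),
]


def run_assembly(asset_name: str) -> None:
    """Walk the line in order; in a real system each owner is an API call."""
    for order, (step, owner) in enumerate(ASSEMBLY_LINE, start=1):
        print(f"{order}. [{owner}] {step} for {asset_name}")


run_assembly("web01")
```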

Continuous Flow

To improve efficiency even more, and lower the cost even further, Ford needed the assembly line to be arranged so that as one task was finished, another began, with minimum time spent in set-up (set-up is always a negative production value). Ford was inspired by the meat-packing houses of Chicago and a grain mill conveyor belt he had seen. If he brought the work to the workers, they spent less time moving about. He adapted the Chicago meat-packers’ overhead trolley to auto production by installing the first automatic conveyor belt.

In an Optimized Infrastructure, this conveyor belt (assembly line) consists of individual process steps (automation) that are “brought to the worker” (each specific technological component responsible for that process step; see Division of Labor) in a well-defined pattern (workflow), with each workflow then arranged in a well-controlled manner (orchestration), because it is no longer human workers doing those commodity IT activities (well, in 99.99% of cases) but the system itself, leveraging virtualization, fungible resource pools, and high levels of standardization, among other things. This is the infrastructure assembly line, and it is how IT assets are mass produced…each identical and of the same high quality at the same low cost.
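A minimal sketch of that continuous flow, assuming hypothetical step functions: each automated step hands its result straight to the next, with no manual set-up in between.

```python
# Sketch of continuous flow: steps form a "conveyor belt" where the output
# of one step is the input of the next, with no manual set-up in between.
def allocate(request):
    request["capacity"] = "reserved"
    return request


def build_vm(request):
    request["vm"] = f"vm-{request['name']}"
    return request


def configure(request):
    request["configured"] = True
    return request


WORKFLOW = [allocate, build_vm, configure]  # the conveyor belt


def orchestrate(request):
    """Run every step in order, passing the work product straight along."""
    for step in WORKFLOW:
        request = step(request)
    return request


print(orchestrate({"name": "web01"}))
```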

Reducing Wasted Effort

As a final principle, Ford called in Frederick Winslow Taylor, the creator of “scientific management,” to do time and motion studies to determine the exact speed at which the work should proceed and the exact motions workers should use to accomplish their tasks, thereby reducing wasted effort. In an Optimized Infrastructure, this is done through continuous process improvement (CPI), but CPI cannot be done correctly unless you are monitoring the performance details of all the processes and the performance of the system as a whole, and documenting the results on a constant basis. This requires an infrastructure-wide management and monitoring strategy which, as you’ve probably guessed, is what Frederick Taylor was doing in the Ford plant in the early 1900s.
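To show how the time-and-motion idea translates to automation, here is a small Python sketch, with placeholder steps, that records how long each process step takes so continuous process improvement has data to work from.

```python
# Sketch of "time and motion" for automated steps: measure every step so CPI
# has data to work from. The steps here are stand-ins for real work.
import time

MEASUREMENTS = []


def timed(step):
    """Wrap a step so its duration is recorded for later analysis."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = step(*args, **kwargs)
        MEASUREMENTS.append((step.__name__, time.perf_counter() - start))
        return result
    return wrapper


@timed
def provision_vm():
    time.sleep(0.05)  # stand-in for the real provisioning work


@timed
def configure_vm():
    time.sleep(0.02)  # stand-in for the real configuration work


provision_vm()
configure_vm()
for name, seconds in MEASUREMENTS:
    print(f"{name}: {seconds:.3f}s")
```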

Whatever You Call It…

From the start, the Model T was less expensive than most other hand-built cars because of expert engineering practices, but it was still not attainable for the “great multitude” as Ford had promised the world. He realized he’d need a more efficient way to produce the car in order to lower the price, and by using the four principles of interchangeable parts, continuous flow, division of labor, and reducing wasted effort, in 1915 he was able to drop the price of the Model T from $850 to $290 and, in that year, he sold 1 million cars.

Whether you prefer to call it cloud, or dynamic datacenter, or the Great Spedini’s Presto-Chango Cave of Magic Data doesn’t really matter…the fact is that those four principles can be used, along with the tools, technologies, and operational methodologies that exist today—which are not rocket science or bleeding edge—to revolutionize your IT infrastructure: stop hand-building your IT assets (and employing your smartest and best workers to do so) and start mass producing those assets to lower your cost, increase your quality, and, ultimately, significantly increase the value of your infrastructure.

With an Optimized Infrastructure of automated tools and processes where standardized/interchangeable parts are constantly reused based on a well-designed and efficiently orchestrated workflow that is monitored end-to-end, you too can make IT affordable for the “great multitude” in your organization.

The Private Cloud Strikes Back

Having read JP Rangaswami’s argument against private clouds (and his obvious promotion of his own version of cloud), I have only to say that he’s looking for oranges in an apple tree. His entire premise is based on the idea that enterprises are wholly concerned with cost and sharing risk, when that couldn’t be farther from the truth. Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility…something that the monolithic, leviathan-like enterprise IT systems of today definitely are not. He then jumps from cost to social enterprise as if there were a causal relationship there when, in fact, they are two separate discussions. I don’t doubt that if you are a consumer-facing (not just customer-facing) organization, it’s best to get on that social enterprise bandwagon, but if your main concern is how to better equip your organization and provide the environment and tools necessary to innovate, the whole social thing is a red herring for selling you things that you don’t need.

The traditional status quo within IT is deeply encumbered by mostly manual processes—optimized for people carrying out commodity IT tasks such as provisioning servers and OSes—that cannot be optimized any further, so a different, much better way had to be found. That way is the private cloud, which takes those commodity IT tasks and elevates them into automated, orchestrated, well-defined workflows, and then utilizes a policy-driven system to carry them out. Whether these workflows are initiated by a human or by a specific set of monitored criteria, the system dynamically creates and recreates itself based on actual business and performance need—something that is almost impossible to translate into the public cloud scenario.
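Here is a minimal sketch of what such a policy-driven trigger could look like, assuming hypothetical policies, metrics, and a scale_out() workflow: the same workflow can be started by monitored criteria crossing a threshold or by a person making the request directly.

```python
# Sketch of a policy-driven trigger: the same workflow can be started by a
# person or by monitored criteria. Policies, metrics, and the scale_out()
# workflow are hypothetical.
POLICIES = [
    {"name": "web tier load", "metric": "cpu_percent", "threshold": 80},
    {"name": "app response time", "metric": "latency_ms", "threshold": 500},
]


def scale_out(trigger: str) -> None:
    """Stand-in for an orchestrated scale-out workflow."""
    print(f"Running scale-out workflow (triggered by: {trigger})")


def evaluate(observations: dict) -> None:
    """Run the workflow for any policy whose criteria are currently met."""
    for policy in POLICIES:
        if observations.get(policy["metric"], 0) > policy["threshold"]:
            scale_out(policy["name"])


# Triggered by monitoring...
evaluate({"cpu_percent": 92, "latency_ms": 120})
# ...or by a human making the same request directly.
scale_out("manual request")
```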

Not that public cloud cannot be leveraged where appropriate, but the enterprise’s requirements are much more granular and specific than any public cloud can or should allow, precisely because of JP’s point that providers must share the risk among many players, and that risk is generic by definition within the public cloud. Once you start creating one-off, specific environments, the commonality is lost and the cost benefit disappears, because now you are simply utilizing a private cloud whose assets are owned by someone else…sound like co-lo?

Finally, I wouldn’t expect someone whose main revenue source is based on the idea that a public cloud is better than a private cloud to say anything different than what JP has said, but I did expect some semblance of clarity as to where his loyalties lie…and it looks like it’s not with the best interests of the enterprise customer.

Where Is the Cloud Going? Try Thinking “Minority Report”

I read a news release (here) recently in which NVIDIA proposes partitioning processing between on-device and cloud-located graphics hardware…here’s an excerpt:

“Kepler cloud GPU technologies shifts cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realizing their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”

Along with the split processing handled by the Silk browser on the Kindle Fire (see here), that “processing partitioning” strategy got me thinking about other aspects of computing, and cloud computing in particular. My thinking is that, over the next five to seven years (by 2020 at most), there will be several very important seismic shifts in computing dealing with at least four separate events: 1) user data becomes a centralized commodity that’s brokered by a few major players, 2) a new cloud-specific programming language is developed, 3) processing becomes “completely” decoupled from hardware and location, and 4) end-user computing becomes based almost completely on SoC technologies (see here). The end result will be a world of data and processing independence never seen before, one that will allow us to live in that Minority Report world. I’ll describe the events and then describe how all of them will come together to create what I call “pervasive personal processing,” or P3.

User Data

Data about you, your reading preferences, what you buy, what you watch on TV, where you shop, etc., exists in literally thousands of different locations, and that’s a problem…not for you…but for merchants and the companies that support them. It’s information that must be stored, maintained, and regularly refreshed for it to remain valuable; basically, it is what is being called “big data.” The extent of this data almost cannot be measured because it is so pervasive and relevant to everyday life. It is contained within so many services we access day in and day out, and businesses are struggling to manage it. Now the argument goes that they do this, at great cost, because it is a competitive advantage to hoard that information (information is power, right?) and, eventually, profits will arise from it. Um, maybe yes and maybe no, but it’s extremely difficult to actually measure that “eventual” profit…so I’ll go along with “no.” Even though big-data-focused hardware and software manufacturers are attempting to alleviate these problems of scale, the businesses that house these growing petabytes…and yes, even exabytes…of data are not seeing the expected benefits relative to their profits, because it costs money, lots of it. This is money that is taken off the top line and definitely affects the bottom line.

Because of these imaginary profits (and the real losses), more and more companies will start outsourcing the “hoarding” of this data until the eventual state is that there are two or three big players who act as brokers. I personally think it will be either the credit card companies or the credit rating agencies; both groups have the basic frameworks for delivering consumer profiles as a service (CPaaS) and charging for access rights. A big step toward this will be when Microsoft unleashes IDaaS (Identity as a Service) as part of integrating Active Directory into its Azure cloud. It’ll be a hurdle for them to convince the public to trust them, but I think they will eventually prevail.

These profile brokers will start using IDaaS because then they don’t have to maintain separate internal identity management systems (for separate repositories of user data) for other businesses to access their CPaaS offerings. Once this starts to gain traction, you can bet that the real data mining will begin on your online, and offline, habits, because your loyalty card at the grocery store will be part of your profile…as will your credit history and your public driving record and the books you get from your local library and…well, you get the picture. Once your consumer profile is centralized, all kinds of data feeds will appear, because the profile brokers will pay for them. Your local government, always strapped for cash, will sell you out in an instant for some recurring monthly revenue.

Cloud-specific Programming

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely but, to date, they have been entirely encapsulated within the local machine (or, in some cases, the nodes of a supercomputer or HPC cluster which, for our purposes, really is just a large single machine). What this means is that the programs written for those systems need to know precisely where the functions will be run, what subsystems will run them, the exact syntax and context, and so on. One slight error or a small lag in response time and the whole thing could crash or, at best, run slowly or produce additional errors.

But what if you had a computer language that understood the cloud and took into account latency, data errors, and even missing data? A language that was able to partition processing among all kinds of different processing locations, knowing that the next time, those locations may have moved? A language that could guess at the best place to process (i.e., lowest latency, highest cache hit rate, etc.) but then change its mind as conditions change?

That language would allow you to specify a type of processing and then actively seek the best place for that processing to happen based on many different details…processing intensity, floating point, entire algorithm or proportional, subset or superset…and fully understand that, in some cases, it will have to make educated guesses about what the returned data will be (in case of unexpected latency). It will also have to know that the data to be processed may exist in a thousand different locations, such as the CPaaS providers, government feeds, or other providers of specific data types. It will also be able to adapt its processing to the available processing locations such that functionality degrades gracefully…maybe based on a probability factor included in the language that records variables over time and uses them to guess where it will be next and line up the needed processing beforehand. The possibilities are endless, but not impossible…which leads to…
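No such language exists yet, but the placement idea at its core can be sketched today. Here is a hypothetical Python illustration that scores candidate processing locations on simulated latency and cache hit rate and picks the best one each time it is asked, so the choice can change as conditions change; the locations and the measure() telemetry are invented for the example.

```python
# Sketch of adaptive placement: choose where to run a unit of work based on
# current (here, simulated) conditions, and allow the choice to change.
import random


def measure(location: str) -> dict:
    """Stand-in for live telemetry (latency, cache hit rate) per location."""
    return {"latency_ms": random.uniform(5, 200), "cache_hit_rate": random.random()}


def choose_location(candidates: list) -> str:
    """Score each candidate right now and pick the best one."""
    def score(stats: dict) -> float:
        return stats["cache_hit_rate"] * 100 - stats["latency_ms"]
    return max(candidates, key=lambda loc: score(measure(loc)))


locations = ["on_device", "edge_node", "regional_cloud", "central_cloud"]
for _ in range(3):
    # Conditions change between calls, so the chosen location can change too.
    print("process at:", choose_location(locations))
```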

Decoupled Processing and SoC

As can be seen from the efforts NVIDIA is making in this area, the processing of data will soon become completely decoupled from where that data lives or is used. What this is and how it will be done will rely on other events (see the previous section), but the bottom line is that once processing is decoupled, a whole new class of device will appear, in both static and mobile versions, based on System on a Chip (SoC) designs that allow deep processing density with very, very low power consumption. These devices will support multiple code sets across hundreds of cores and be able to intelligently communicate their capabilities in real time to distributed processing services that request their local processing services…whether over Wi-Fi, Bluetooth, IrDA, GSM, CDMA, or whatever comes next, the devices themselves will make the choice based on best use of bandwidth, processing request, location, etc. These devices will take full advantage of the cloud-specific computing languages to distribute processing across dozens and possibly hundreds of processing locations, and they will hold almost no data because they don’t have to: everything exists someplace else in the cloud. In some cases these devices will be very small, the size of a thin watch for example, but they will be able to process the equivalent of what a supercomputer can do, because they don’t do all of the processing, only what makes sense for their location and capabilities.

These decoupled processing units, Pervasive Personal Processing or P3 units, will allow you to walk up to any workstation or monitor or TV set…anywhere in the world…and basically conduct your business as if you were sitting in front of your home computer. All of your data, your photos, your documents, and your personal files will be instantly available in whatever way you prefer. All of your history for whatever services you use, online and offline, will be directly accessible. The memo you left off writing that morning in the Houston office will be right where you left it, on that screen you just walked up to in the hotel lobby in Tokyo the next day, with the cursor blinking in the middle of the word you stopped on.

Welcome to Minority Report.

Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most—if not all—of the commodity IT activities in your data center need to be automated (i.e., translated into a workflow) and then those singular workflows strung together (i.e., orchestrated) into a value chain of events that delivers a business benefit. An example of the orchestration of a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets—virtual machines—that represent web, application, and database servers, as well as the OSes, software stacks, and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit: a developer can now use those assets to create an application for producing revenue, decreasing costs, or managing existing infrastructure better (the holy trinity of business benefits).
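To give a feel for what such an orchestration strings together, here is a hedged Python sketch of commissioning a composite application; the tier definitions and the provision() helper are hypothetical placeholders for the real automated workflows.

```python
# Sketch of commissioning a composite application: one orchestration that
# strings together the per-tier workflows (web, application, database).
COMPOSITE_APP = {
    "web": {"count": 2, "stack": ["os", "web server"]},
    "app": {"count": 2, "stack": ["os", "runtime", "application code"]},
    "db": {"count": 1, "stack": ["os", "database engine"]},
}


def provision(tier: str, index: int, stack: list) -> str:
    """One automated workflow: build a single VM and lay down its stack."""
    vm = f"{tier}-{index:02d}"
    for layer in stack:
        pass  # each layer would be an automated install/configure step
    return vm


def commission(app: dict) -> list:
    """The orchestration: run each tier's workflow in order, return the assets."""
    assets = []
    for tier, spec in app.items():
        for i in range(1, spec["count"] + 1):
            assets.append(provision(tier, i, spec["stack"]))
    return assets


print(commission(COMPOSITE_APP))
```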

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory, so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and, in turn, an orchestration; then I think you’ll better see what I mean. We’ll use the example from above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and depending on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (and anything in between). A reasonably encapsulated process is somewhere toward the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ), you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I could dive into the physical process steps (hundreds, by the way) that you’re well aware of, but since you already know those, it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, run books, and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information that resides in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the information required for building out the assets you use, in our example the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not only because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked can seldom answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer, because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. But if you give someone a listing or scenario they can relate to (when do you commission this type of composite application, based on this list of system activities and tools?), they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model. But what happens when (and this actually applies in most cases) there is no ready context in which to frame the question? It then comes down to observation, either self-observation or external, where all process steps are documented and compiled. Obviously this is labor intensive and time inefficient, but unfortunately it is the reality, because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed, and operated; instead they rely on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about—the ones that you can’t get from a system search of your repositories—are the ones that will take the most time to document, which is my point (“what you think you’re doing is less than half of what you’re really doing”) and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.