Tag Archive: automation

Is the Cloud Right for You?

I recently presented a session entitled “Is the Cloud Right for You?” with Randy Weis and wanted to provide a recap of what I covered in the presentation. In this video, I discuss some of the advantages of the cloud, including access to enterprise-class hardware that you might not normally be able to afford, load balancers, multiple data centers, redundancy, automation and more. I also cover some of the risks associated with the cloud. Enjoy, and as always, reach out with any questions!

Download eBook: The Evolution of the Corporate IT Department

By Chris Chesley, Solutions Architect

Amazon buys ClusterK to reduce AWS deployment costs

Amazon has acquired ClusterK, which offers software that optimises deployments on AWS spot instances

Amazon has acquired ClusterK, a provider of software that optimises deployment on AWS spot instances for cost and availability.

Amazon confirmed the acquisition to BCN but declined to offer any details about how the technology would be integrated in AWS, or the financial terms of the acquisition.

One of the challenges with EC2 spot instances is that cost and availability can vary dramatically depending on overall demand.

At the same time, when these instances are used for long jobs (say, running batch jobs on large databases), they can disappear from right under you mid-job – unless failovers to reserved instances or similar techniques are deployed.

Those are some of the things ClusterK aims to solve. It offers an orchestration and scaling service that uses the AWS spot market in conjunction with on-demand or reserved instances to optimise workload deployments for cost and availability – an automated way of keeping workload cost and availability in check (the company claims it can reduce cloud costs by up to 90 per cent).
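
To make that trade-off concrete, here is a minimal sketch of the kind of fallback logic such tools automate. This is not ClusterK’s implementation (Amazon has not disclosed it); the AMI ID, instance type and bid price are placeholders, and a production version would also poll the spot request’s status and handle later interruptions.

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values for illustration only.
AMI_ID = "ami-12345678"
INSTANCE_TYPE = "m4.large"

def launch_with_spot_fallback(max_spot_price="0.05"):
    """Try a one-time spot request first; fall back to on-demand if it is rejected."""
    try:
        resp = ec2.request_spot_instances(
            InstanceCount=1,
            Type="one-time",
            SpotPrice=max_spot_price,  # bid ceiling; the actual price varies with demand
            LaunchSpecification={"ImageId": AMI_ID, "InstanceType": INSTANCE_TYPE},
        )
        return ("spot", resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
    except ClientError as err:
        # Request rejected outright (quota, parameters, etc.): pay on-demand rates instead.
        print(f"Spot request failed ({err}); falling back to on-demand")
        resp = ec2.run_instances(
            ImageId=AMI_ID, InstanceType=INSTANCE_TYPE, MinCount=1, MaxCount=1
        )
        return ("on-demand", resp["Instances"][0]["InstanceId"])
```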

While it’s not clear exactly how Amazon intends to integrate the technology, it is clear the company is keen to do what it takes to keep the price of its services dropping, which is where ClusterK could certainly add value. When disclosing its cloud revenues for the first time last week, the company said it had dropped the prices of its services about 50 times since AWS launched ten years ago.

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect back on the past year.  We’ve been talking a lot about that evil word ‘cloud’ for the past 3 to 4 years, but this year put a couple of other terms up in lights including Software Defined X (Datacenter, Networking, Storage, etc.) and Big Data.  Like ‘cloud,’ these two newer terms can easily mean different things to different people, but put in simple terms, in my opinion, there are some generic definitions which apply in almost all cases.  Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central point for configuration, again vendor agnostic, except of course for the vendor providing the Software Defined solution :) .  I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order with Big Data technologies already well on the way. 

As for our friend ‘the cloud,’ 2013 did see a good amount of growth in consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS).  IT has adopted a ‘virtualization first’ strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter.  I anticipate we’ll begin to see a ‘SaaS first’ approach being adopted in short order, if it is not out there already.  However, I can’t necessarily say the same on the IaaS side so far as ‘IaaS first’ goes.  While IaaS is a great solution for elastic computing, I still see most usage confined to application development or super-large scale-out applications (think Netflix).  The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud simply hasn’t happened.  Why?  My opinion is that, for traditional applications, neither the cost model nor the operational model makes sense, yet.

In relation to ‘cloud,’ I did see a lot of adoption of advanced automation, orchestration, and management tools, and thus an uptick in ‘private clouds.’  There are some fantastic tools now available, both commercially and open source, and I absolutely expect this adoption trend to continue, especially in the Enterprise space.  Datacenters, which have a vast amount of change occurring whether in production or test/dev, can greatly benefit from these solutions.  However, this comes with a word of caution – just because you can doesn’t mean you should.  I say this because I have seen several instances where customers have wanted to automate literally everything in their environments.  While that may sound good on the surface, I don’t believe it’s always the right thing to do.  There are times still where a human touch remains the best way to go.

As always, there were some big-time announcements from major players in the industry. Here are some posts we did with news and updates summaries from VMworld, VMware Partner Exchange, EMC World, Cisco Live and Citrix Synergy. Here’s an additional video from September where Lou Rossi, our VP, Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware’s Horizon Suite, as well as a webinar on our own Cloud Management as a Service offering.

The past few years have seen various predictions relating to the unsustainability of Moore’s Law, which states that processors will double in computing power every 18-24 months, and 2013 was no exception.  The latest prediction is that by 2020 we’ll reach the 7nm mark and the exponential scaling Moore’s Law describes will no longer hold.  The interesting part is that this prediction is not based on technical limitations but rather economic ones, in that getting below that 7nm mark will be extremely expensive from a manufacturing perspective and, hey, 640K of RAM is all anyone will ever need, right?  :)

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive program and seemed to be capturing every packet of data coming in or out of the US across the Internet.  I won’t get into any political discussion here, but suffice it to say this is probably the largest example of ‘big data’ that exists currently.  This also has large potential ramifications for public cloud adoption, as security and data integrity have been 2 of the major roadblocks to adoption, so it certainly doesn’t help that customers may now be concerned about the NSA eavesdropping on everything going on within the public datacenters.  It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as a result of customers slowing adoption due to this.  The only good news in this, at least for now, is that it’s very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data they are capturing.  However, like anything else, it’s probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).

Why Automate? What to Automate? How to Automate?

By John Dixon, Consulting Architect

Automation is extremely beneficial to organizations. However, the questions often come up around why to automate, what to automate, and how to automate.

Why automate?

There are several key benefits surrounding automation. They include:

  • Saving time
  • Employees can be retrained to focus on other (hopefully more strategic) tasks
  • Removing human intervention reduces errors
  • Troubleshooting and support are improved when everything is deployed the same way

What to automate?

Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and what the end user expects to improve their experience. If you can’t trace back something you’re automating to an improved customer experience, that’s usually a good warning sign that you should not be automating it. In addition, you need to be able to track back to how automation has provided a benefit to the organization. The benefit should always be measurable and always financial.

What are companies automating?

Request management is the hot one because that’s a major component of cloud computing. This includes service catalogues and self-service portals. Providing a self-service portal, sending the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general purpose orchestrator tool (such as CA Process Automation or similar tools) so that automated jobs can be managed from a single console. This is preferable to stitching together disparate systems that call each other in a “rat’s nest” of automation. The general purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.
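
To make that pattern concrete, here is a minimal, hypothetical sketch of a request workflow: approval routed by dollar amount, then fulfillment driven from one place. The threshold, function names and downstream systems are invented stand-ins, not CA Process Automation or any specific portal.

```python
# Hypothetical sketch of a self-service request workflow: approval is routed
# based on the dollar amount requested, and fulfillment is driven from one place.

APPROVAL_THRESHOLD = 5000  # assumed policy: requests above this need manager approval

def get_manager_approval(requester: str, item: str, cost: float) -> bool:
    # Stand-in for an approval step (email, ticketing system, portal task).
    print(f"Approval requested for {requester}: {item} (${cost})")
    return True

def provision_vm(item: str) -> str:
    # Stand-in for the call into the provisioning system.
    return f"vm-{abs(hash(item)) % 10000}"

def record_in_cmdb(requester: str, vm_id: str, cost: float) -> None:
    # Stand-in for updating the CMDB / billing system.
    print(f"CMDB updated: {vm_id} owned by {requester}, cost ${cost}")

def handle_request(requester: str, item: str, cost: float) -> str:
    """Route for approval when needed, then fulfill through downstream systems."""
    if cost > APPROVAL_THRESHOLD and not get_manager_approval(requester, item, cost):
        return "rejected"
    vm_id = provision_vm(item)
    record_in_cmdb(requester, vm_id, cost)
    return f"fulfilled: {vm_id}"

print(handle_request("jdixon", "Linux web server", 1200.0))
```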

How to automate?

There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:

  1. Start with the VoC or Voice of the Customer, and work backwards to identify the systems that are needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. The automation may have to set up user accounts, order a new cell phone, order a new laptop, and schedule time on the manager’s calendar for the new employee’s first day of work. Map out the systems that are required to accomplish this, and integrate those – and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take full advantage of these components.
  2. Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
    1. “As a [role],
    2. I want to [get something done]
    3. So that I can [benefit in the following way]”
  3. Start small and work upwards to automate more and more complex tasks. Remember the HR onboarding procedure in point #1? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from a larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, but perhaps other stories as well.
  4. Use a general purpose orchestrator instead of stitching together different systems. As in point #3, using an orchestrator will allow you to build reusable components that are useful for automating different tasks (see the sketch after this list). A general purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
  5. You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience that a developer brings to the table is an absolute must have when automating complex tasks like the HR onboarding example in point #1.
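
Here is a minimal, hypothetical sketch of points #3 and #4: small reusable components composed into the larger HR onboarding workflow. The directory, calendar, and equipment functions are invented stand-ins, not any particular orchestrator’s API.

```python
# Hypothetical sketch: small, reusable automation components composed into a
# larger onboarding workflow by a single orchestration function.

def create_directory_account(name: str) -> str:
    # Stand-in for creating the user in Active Directory or a similar directory.
    username = name.lower().replace(" ", ".")
    print(f"Directory account created: {username}")
    return username

def schedule_first_day_meeting(username: str, manager: str) -> None:
    # Stand-in for booking the first-day meeting on the manager's calendar.
    print(f"Meeting scheduled: {manager} with {username} on day one")

def order_equipment(username: str, items: list[str]) -> None:
    # Stand-in for programmatic requests to phone/laptop suppliers.
    for item in items:
        print(f"Ordered {item} for {username}")

def onboard_employee(name: str, manager: str) -> None:
    """Orchestrates the reusable components; each is also usable in other stories."""
    username = create_directory_account(name)
    order_equipment(username, ["cell phone", "laptop"])
    schedule_first_day_meeting(username, manager)

onboard_employee("Pat Example", "Chris Manager")
```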

What has your organization automated? How have the results been?

IT Multi-Tasking: I Was Told There’d Be No Math

By Ben Sawyer, Solutions Engineer

The term “multi-tasking” basically means doing more than one thing at once.  I am writing this blog while playing Legos w/ my son & helping my daughter find New Hampshire on the map.  But I am by no means doing more than one thing at once; I’m just quickly switching back & forth between the three, which is referred to as “context switching.”  Context switching in most cases is very costly.  There is a toll to be paid in terms of productivity when ramping up on a task before you can actually tackle that task.  In an ideal world (where I also have a 2 handicap) one has the luxury to do a task from start to finish before starting a new task.  My son just refuses to let me have 15 minutes to write this blog because apparently building a steam roller right now is extremely important.  There is a sense of momentum when you work on a task after a short while because you begin to really concentrate on the task at hand.

Since we know it’s nearly impossible to put ourselves in a vacuum & work on one thing only, the best we can hope for is to do “similar” things (i.e., in the same context) at the same time.  Let’s pretend I have to email my co-worker that I’m late writing a blog, shovel my driveway, buy more Legos at Amazon.com, & get the mail (okay, I’m not pretending).  Since emailing & buying stuff online both require me to be in front of my laptop, and shoveling & going to my mailbox require me to be outside my house (my physical location), it would be far more efficient to do the tasks in the same “context” at the same time.  Think of the time it takes to get all bundled up & the time it takes to power on your laptop to get online.

Doing a few things at once usually means that you will not do each task as well (its quality) as you would have had you done it uninterrupted.  The more closely together, time-wise, you can do the pieces of a task, the better you will do that task, since it will be “fresher” in your mind.  So…

  • Entire Task A + Entire Task B = Great Task A & Great Task B.
  • 1/2 Task A + Entire Task B + 1/2 Task A = Okay Task A & Excellent Task B.
  • 1/2 Task A + 1/2 Task B + 1/2 Task A + 1/2 Task B = Good Task A & Good Task B

Why does this matter?  Well, because the same exact concept applies to computers & the software we write.  A single processor can do one thing at a time only (let’s forget threads), but it can context switch extremely fast which gives the illusion of multi-tasking.  But, like a human, context switching has a cost for a computer.  So, when you write code try to do many “similar” things at the same time.  If you have a bunch of SQL queries to execute then you should open a connection to the database first, execute them, & close the connection.  If you need to call some VMware APIs then you should connect to vCenter first, do them, & close the connection.  Opening & closing connections to any system is often slow so group your actions by context which, in this case, are systems.  This also makes the code easier to read.  Speaking of reading, here’s a great example of the cost of context switching.  The author Tom Clancy loves to switch characters & plot lines every chapter.  This makes following the story very hard & whenever you put the book down & start reading again it’s nearly impossible to remember where you left off b/c there’s never, ever a good stopping point.  Tom Clancy’s writing is one of the best examples of how costly context switching is.
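
Here is a minimal sketch of that grouping-by-context idea in code; sqlite3 simply stands in for whatever database (or vCenter connection) you actually use, and the table and queries are invented for illustration.

```python
import sqlite3

queries = [
    "CREATE TABLE IF NOT EXISTS vms (name TEXT, state TEXT)",
    "INSERT INTO vms VALUES ('web01', 'running')",
    "INSERT INTO vms VALUES ('db01', 'stopped')",
]

# Costly: a context switch (open/close) around every single query.
for q in queries:
    conn = sqlite3.connect("inventory.db")
    conn.execute(q)
    conn.commit()
    conn.close()

# Better: stay in one context -- open once, run everything, close once.
conn = sqlite3.connect("inventory.db")
for q in queries:
    conn.execute(q)
conn.commit()
conn.close()
```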

So, what does this have to do with cloud computing?  Well, it ties in directly with automation & orchestration.  Automation is doing the work & orchestration is determining the order in which work is done.  Things can get complicated quickly when numerous tasks need to be executed & it’s not immediately apparent which need to run first & which are dependent on other tasks.  And, once that is all figured out, what happens when a task fails?  While software executes linearly, an orchestration engine provides the ability to run multiple pieces of software concurrently.  And that’s where things get complicated real fast.  Sometimes it may make sense to execute things serially (one at a time) vs. in parallel (more than one at a time) simply b/c it becomes very hard to manage more than one task at the same time.
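
As a small, hypothetical illustration of serial versus parallel execution (the task names and timings are invented), independent tasks can run concurrently while a dependent task waits:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def provision_vm(name: str) -> str:
    time.sleep(1)  # stand-in for real provisioning work
    return f"{name} provisioned"

def configure_load_balancer(vm_names: list[str]) -> str:
    # Depends on the VMs existing, so it must run after them (serially).
    return f"load balancer pointing at {', '.join(vm_names)}"

vms = ["web01", "web02", "web03"]

# Independent tasks: run in parallel.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(provision_vm, vms))
print(results)

# Dependent task: run serially, only after the parallel work finishes.
print(configure_load_balancer(vms))
```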

We live in a world in which there are 10 different devices from which we can check our email and, if we want, we can talk to our smartphone & ask it to read our email to us.  Technology has made it easy for us to get information virtually any time & in any format we want.  However, it is because of this information overload that our brains have trouble separating all the useful information from the white noise.  So we try to be more productive and we multi-task but that usually means we’re becoming more busy than productive.  In blogs to follow, I will provide some best practices for determining when it makes sense to run more than one task at a time.  Now, if you don’t mind, I need to help my daughter find Maine…

Cloud Corner Series: Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and how critical they are in a cloud environment. I hope you enjoy it – even if the info stinks, at least you have 5 minutes of eye candy watching yours truly!

If you’re looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register – space is limited and filling up quickly, so check it out!

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then I do remember that I was seriously on the path toward full-on car craziness as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring.  I was a huge fan of “Car Toons” which was sort of the Mad Magazine of cars and basically lusted after hot rods, dragsters and sports cars.  I was endlessly scribbling car doodles on my note books and in the margins of text books.  I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer.  In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea what ever happened to those designs but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds was a candy-apple red Ford Pantera or, more accurately; the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships).  The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it just came off the showroom floor…haa ha, or so my feverish car-obsessed, pre-teen brain thought it sparkled.  It was sleek, low to the ground and looked as if it were going 100 miles an hour just sitting there.  It was a supercar before the word was coined and I was deeply, madly and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desires for building a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot; they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable—more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera.  Even if I could afford to buy it, I had no knowledge or experience that would enable me to effectively (or legally) drive it.  That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was not anywhere near being functional.  The essence and nature of the car never changed but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned.  The same way a cloud infrastructure—fully built out with well architected components, tested and running—would be non-functional to customers who did not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way that a cloud infrastructure should be operated is based on the idea of delivering IT services and not the traditional ideas of servers and storage and networks being individually built, configured and connected by people doing physical stuff.  Cloud infrastructures are automated and orchestrated to deliver specific functionality aggregated into specific services; fast and efficiently, without the need for people doing “stuff.”  In fact, people doing stuff is too slow and just gets in the way and if you don’t change the operations of the systems to reflect that, you end up with a very capable yet non-functional system.

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed or it will be very much the same sort of difference between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera.  It’s just dreaming until the required knowledge and experience is obtained…none of which is easy or quick…but tell that to a 12 year old lost in his imagination staring at sparkling red freedom and adventure…

The Operational Consistency Proxy

#devops #management #webperf Cloud makes more urgent the need to consistently manage infrastructure and its policies regardless of where that infrastructure might reside

While the potential for operational policy (performance, security, reliability, access, etc.) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.

F5 has long offered a single-pane of glass management solution for F5 systems with Enterprise Manager (EM) and recently introduced significant updates that increase its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.

AUTOMATE COMMON TASKS

The latest release of F5 EM includes enhancements to its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling/disabling resources which assists in automating provisioning and de-provisioning processes as well as automating what many might consider mundane – and yet critical – maintenance window operations.

Updating policies, too, assists in maintaining operational consistency across all F5 solutions – whether in the data center or in the cloud. This is particularly important in the realm of security, where control over access to applications is often far less under the control of IT than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with the ability for F5 EM to manage such distributed instances in conjunction with data center deployed instances provides for consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across enterprise concerns.

The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.

INTEGRATED PERFORMANCE METRICS

F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5’s visibility into application performance based on its strategic location in the architecture – residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a plethora of application performance-related statistics, which are then aggregated and reported on by application in EM’s Centralized Analytics Module.

These metrics enable capacity planning, troubleshooting and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and its related impact whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand. 

Monitoring includes:

  • Device Level Visibility & Monitoring
  • Capacity Planning
  • Virtual Level & Pool Member Statistics
  • Object Level Visibility
  • Near Real-Time Graphics
  • Reporting

In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.

Alerts include:

  • Device status change
  • SSL certificate expiration
  • Software install complete
  • Software copy failure
  • Statistics data threshold
  • Configuration synchronization
  • Attack signature update
  • Clock skew

When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
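
As a generic illustration of that threshold-and-alert pattern (this is not F5 EM’s implementation or the iControl API; the metric, threshold, and syslog address are assumptions), the logic looks roughly like this:

```python
import logging
import logging.handlers

# Send alerts to a syslog collector; the address/port are assumptions for illustration.
logger = logging.getLogger("em-alerts")
logger.setLevel(logging.WARNING)
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

CONNECTION_THRESHOLD = 10000  # hypothetical statistics-data threshold

def check_device(device: str, current_connections: int) -> None:
    """Raise a syslog alert when a monitored statistic crosses its threshold."""
    if current_connections > CONNECTION_THRESHOLD:
        logger.warning(
            "device=%s connections=%d exceeded threshold=%d",
            device, current_connections, CONNECTION_THRESHOLD,
        )

check_device("bigip-01", 12500)
```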

OPERATIONAL CONSISTENCY PROXY

By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents whether performance or security related.

F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.

Happy Managing!

Automation & Orchestration Part 1: What’s In A Name? That Which We Call a “Service”…

The phrases “service,” “abstraction,” & “automation & orchestration” are used a lot these days. Over the course of the next few blogs, I am going to describe what I think each phrase means and in the final blog I will describe how they all tie in together.

Let’s look at “service.” To me, when you trim off all the fat that word means, “Something (from whom) that provides a benefit to something (to whom).” The first thing that comes to mind when I think of who provides me a service is a bartender. I like wine. They have wine behind the bar. I will pay them the price of a glass + 20% for them to fill that glass & move it from behind the bar to in front of me. It’s all about services these days. Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service. Professional services. Service level agreement. No shirts, no shoes, no service.

Within a company, there are many people working together to deliver a service. Some to external people & some to internal people. I want to examine an internal service because those tend to be much more loosely defined & documented. If a company sells an external service to a customer, chances are that service is very well defined b/c that company needs to describe in very clear terms to the customer exactly what they are getting when the customer shells out money. If that service changes, careful consideration needs to be paid to what ways that service can add more benefit (i.e., make the company more money) and in what ways parts of that service will change or be removed. Think about how many “Terms of Service & Conditions” pamphlets you get from a credit card company and how many pages each one is.

It can take many, many hours as a consultant in order to understand a service as it exists in a company today. Typically, the “something” that provides a benefit are the many people who work together to deliver that service. In order to define the service and its scope, you need to break it down into manageable pieces…let’s call them “tasks.” And those tasks can be complex so you can break those down into “steps.” You will find that each task, with its one or more steps, which is part of a service, is usually performed by the same person over and over again. Or, if the task is performed a lot (many times per day) then that task can usually be executed by a member of a team and not just a single person. Having the capability internally for more than one person to perform a task also protects the company from when Bob in accounting takes a sick day or when Bob in accounting takes home a pink slip. I’ll throw in a teaser for when I cover automation and orchestration…it would be ideal that not only can Bob do a task, but a computer as well (automation). That also may play into Bob getting a pink slip…but, again, more on that later. For now Bob doesn’t need to update his resume.
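
A hedged way to picture that decomposition is a simple service/task/step structure; the service, tasks, owners, and steps below are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str

@dataclass
class Task:
    name: str
    owner: str                 # who performs it today (a person or a team)
    steps: list[Step] = field(default_factory=list)
    automatable: bool = False  # could a computer do this instead of Bob?

@dataclass
class Service:
    name: str
    tasks: list[Task] = field(default_factory=list)

# Hypothetical internal service broken down into tasks and steps.
onboarding = Service(
    name="New hire onboarding",
    tasks=[
        Task(
            name="Create user accounts",
            owner="IT service desk",
            steps=[Step("Create directory account"), Step("Assign email license")],
            automatable=True,
        ),
        Task(
            name="Set up payroll",
            owner="Bob in accounting",
            steps=[Step("Enter employee in payroll system")],
        ),
    ],
)
```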

A lot of companies have not documented many, if any, of the internal services they deliver. I’m sure there is someone who knows the service from soup to nuts, but it’s likely they don’t know how to (can’t) do every task—or—may not have the authority/permission (shouldn’t) to do the task. Determining who in a company performs what task(s) can be a big undertaking in and of itself. And then, once you find Bob (sorry to pick on you Bob), it takes a lot of time for him to describe all the steps he does to complete a task. And once you put it on paper & show Bob, he remembers that he missed a step. And once you’ve pieced it all together and Bob says, “Yup, that about covers it,” you ask Bob what happens when something goes wrong and he looks at you and says, “Oh man, where do I begin?”

That last part is key. When things go well I call it the “Happy Day Scenario.” But things don’t always go well (ask the Yankees after the 2004 season) and just as, if not more, important in understanding a service is to know what to do when the Bob hits the fan. This part is almost never documented. Documentation is boring to lots of people and it’s hard enough for people to capture what the service *should* do let alone what it *could* do if something goes awry. So it’s a challenge to get people to recall and also predict what could go wrong. Documenting and regurgitating the steps of a business service “back” to the company is a big undertaking and very valuable to that company. Without knowing what Bob does today, it’s extremely hard to tell him how he can do it better.

Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most—if not all—of the commodity IT activities in your data center need to be automated (i.e. translated into a workflow) and then those singular workflows strung together (i.e. orchestrated) into a value chain of events that delivers a business benefit. An example of the orchestration of a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets—virtual machines—that represent web, application and database servers as well as the OSes and software stacks and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit whereas a developer can now use those assets to create an application for either producing revenue, decreasing costs or for managing existing infrastructure better (the holy trinity of business benefits).
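
As a hedged, highly simplified sketch of that idea (the tier names, counts, and software stacks are invented and do not correspond to any specific orchestration product), individual automated workflows get strung together by one orchestration function:

```python
# Hypothetical sketch: singular automated workflows strung together
# (orchestrated) to commission a composite application.

def provision_vms(tier: str, count: int) -> list[str]:
    # Stand-in for an automated VM-provisioning workflow.
    return [f"{tier}-{i:02d}" for i in range(1, count + 1)]

def install_stack(vm: str, stack: str) -> None:
    # Stand-in for an automated software-stack deployment workflow.
    print(f"{vm}: installed {stack}")

def commission_composite_app() -> dict:
    """Orchestration: each call below is itself an automated workflow."""
    app = {
        "web": provision_vms("web", 2),
        "app": provision_vms("app", 2),
        "db": provision_vms("db", 1),
    }
    for vm in app["web"]:
        install_stack(vm, "web server")
    for vm in app["app"]:
        install_stack(vm, "application runtime")
    for vm in app["db"]:
        install_stack(vm, "database engine")
    return app

print(commission_composite_app())
```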

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory, so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and, in turn, an orchestration, and then I think you’ll better see what I mean. We’ll use the example from above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is to pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now and, based on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (and all letters in between). A reasonably encapsulated process is somewhere on the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ) you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I can dive into the physical process steps (hundreds, by the way) that you’re well aware of, but you already know those so it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, the run books and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information that resides in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the required information for building out the assets you use, in our example, the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked seldom can answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. Now if you give someone a listing or scenario to which they can relate (when do you commission this type of composite application, based on this list of system activities and tools?) they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model – but what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? Well, it is then called observation, either self-observation or external, where all process steps are documented and compiled. Obviously this is labor intensive and time inefficient, but unfortunately it is the reality, because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed and operated…instead relying on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about—the ones that you can’t get from a system search of your repositories—are the ones that will take most of the time to document, which is my point (“what you think you’re doing is less than half of what you’re really doing”) and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.