Tag Archives: orchestration

CenturyLink launches automation offering for hybrid and multi-cloud

CenturyLink has launched Runner, its new configuration management and orchestration service designed for hybrid and multi-cloud environments.

The new offering is built with openness in mind, enabling automation in any cloud or data centre, including CenturyLink’s own cloud platform, third-party cloud providers, and on-premises infrastructure and devices. Runner is, in essence, an open source automation and orchestration engine delivered as a service.

“Runner is a new product from CenturyLink Cloud that enables fast, easy automation and orchestration on the CenturyLink Cloud Platform, as well as third-party cloud providers and on-premises infrastructure and devices,” said Chris Kent, Runner Product Owner at CenturyLink. “Runner provides the ability to quickly provision and modify resources on any environment, and gives users a true Hybrid IT solution, regardless of where their resources are.

“Runner, at its core, is an Ansible engine. On top of that engine exists several other custom services and APIs we’ve created, many of which were created in tandem with the Runner job service to enhance the job execution capabilities.”

The new offering is built on the assumption that customers do not have the time or resources to effectively manage a hybrid or multi-cloud environment, and also addresses cases where customers need better distribution in case of failures. The team seems to be focusing on execution speed and a reduction in human error as the prominent features to differentiate the product in an already competitive market. CenturyLink has also differentiated itself by focusing the technology on managing and automating the infrastructure itself, rather than on the connections between environments, as competitors do.

Amazon buys ClusterK to reduce AWS deployment costs

Amazon has acquired ClusterK, which offers software that optimises deployments on AWS spot instances

Amazon has acquired ClusterK, a provider of software that optimises deployment on AWS spot instances for cost and availability.

Amazon confirmed the acquisition to BCN but declined to offer any details about how the technology would be integrated in AWS, or the financial terms of the acquisition.

One of the challenges with EC2 spot instances is that cost and availability can vary dramatically depending on overall demand.

At the same time, when these instances are used for long jobs (say, batch jobs on large databases) and those jobs are interrupted, the instances can actually disappear from right under you, unless failovers to reserved instances or similar techniques are deployed.

Those are some of the things ClusterK aims to solve. It offers an orchestration and scaling service that uses the AWS spot market in conjunction with on-demand or reserved instances to optimise workload deployments for cost and availability – an automated way of keeping workload cost and availability in check (the company claims it can reduce cloud costs by up to 90 per cent).
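
ClusterK has not published its algorithm, but to make the general idea concrete, here is a minimal sketch of the kind of spot-first, on-demand-fallback decision such a service automates. The AMI ID, prices, and bid threshold are placeholder assumptions; the calls are the standard boto3 EC2 client APIs:

```python
import boto3

# Placeholder values -- substitute your own AMI, instance type, and pricing.
AMI_ID = "ami-12345678"
INSTANCE_TYPE = "m4.large"
ON_DEMAND_PRICE = 0.10   # assumed on-demand $/hour, for comparison only

ec2 = boto3.client("ec2", region_name="us-east-1")

def current_spot_price():
    """Fetch the most recent spot price for our instance type."""
    history = ec2.describe_spot_price_history(
        InstanceTypes=[INSTANCE_TYPE],
        ProductDescriptions=["Linux/UNIX"],
        MaxResults=1,
    )
    return float(history["SpotPriceHistory"][0]["SpotPrice"])

def launch(bid_fraction=0.5):
    """Use spot capacity when it is cheap; otherwise fall back to on-demand."""
    if current_spot_price() <= ON_DEMAND_PRICE * bid_fraction:
        ec2.request_spot_instances(
            SpotPrice=str(ON_DEMAND_PRICE * bid_fraction),
            InstanceCount=1,
            LaunchSpecification={"ImageId": AMI_ID,
                                 "InstanceType": INSTANCE_TYPE},
        )
        return "spot"
    # Spot market too expensive or volatile: pay more for guaranteed capacity.
    ec2.run_instances(ImageId=AMI_ID, InstanceType=INSTANCE_TYPE,
                      MinCount=1, MaxCount=1)
    return "on-demand"
```

A real scheduler would also watch for termination notices and re-bid across instance types and availability zones, which is the harder problem ClusterK claims to solve.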

While it’s not clear exactly how Amazon intends to integrate the technology, it is clear the company is keen to do what it takes to keep the price of its services dropping, which is where ClusterK could certainly add value. When disclosing its cloud revenues for the first time last week, the company said it has dropped the prices of its services about 50 times since AWS launched ten years ago.

IT Multi-Tasking: I Was Told There’d Be No Math

By Ben Sawyer, Solutions Engineer

 

The term “multi-tasking” basically means doing more than one thing at once. I am writing this blog while playing Legos w/ my son & helping my daughter find New Hampshire on the map. But I am by no means doing more than one thing at once; I’m just quickly switching back & forth between the three, which is referred to as “context switching.”

Context switching in most cases is very costly. There is a toll to be paid in terms of productivity when ramping up on a task before you can actually tackle it. In an ideal world (where I also have a 2 handicap) one has the luxury of doing a task from start to finish before starting a new task. My son just refuses to let me have 15 minutes to write this blog because apparently building a steam roller right now is extremely important. There is also a sense of inertia once you’ve worked on a task for a short while, because you begin to really concentrate on the task at hand.

Since we know it’s nearly impossible to put ourselves in a vacuum & work on one thing only, the best we can hope for is to do “similar” things (i.e., in the same context) at the same time. Let’s pretend I have to email my co-worker that I’m late writing a blog, shovel my driveway, buy more Legos at Amazon.com, & get the mail (okay, I’m not pretending). Since emailing & buying stuff online both require me to be in front of my laptop, and shoveling & going to my mailbox require me to be outside my house (my physical location), it would be far more efficient to do the tasks in the same “context” at the same time. Think of the time it takes to get all bundled up, & the time it takes to power on your laptop to get online.

Doing a few things at once usually means that you will not do each task as well (its quality) as you would have had you done it uninterrupted. And the more closely together, time-wise, you can do the parts of a task, the better you will usually do it, since it will be “fresher” in your mind. So…

  • Entire Task A + Entire Task B = Great Task A & Great Task B.
  • 1/2 Task A + Entire Task B + 1/2 Task A = Okay Task A & Excellent Task B.
  • 1/2 Task A + 1/2 Task B + 1/2 Task A + 1/2 Task B = Good Task A & Good Task B.

Why does this matter?  Well, because the same exact concept applies to computers & the software we write.  A single processor can do one thing at a time only (let’s forget threads), but it can context switch extremely fast, which gives the illusion of multi-tasking.  But, like a human, context switching has a cost for a computer.  So, when you write code, try to do many “similar” things at the same time.  If you have a bunch of SQL queries to execute, then you should open a connection to the database first, execute them, & close the connection (there’s a small sketch of this below).  If you need to call some VMware APIs, then you should connect to vCenter first, make the calls, & close the connection.  Opening & closing connections to any system is often slow, so group your actions by context which, in this case, is systems.  This also makes the code easier to read.

Speaking of reading, here’s a great example of the cost of context switching.  The author Tom Clancy loves to switch characters & plot lines every chapter.  This makes following the story very hard, & whenever you put the book down & start reading again it’s nearly impossible to remember where you left off b/c there’s never, ever a good stopping point.  Tom Clancy’s writing is one of the best examples of how costly context switching is.
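
Here’s that tiny sketch of the idea, using Python’s built-in sqlite3 module (the queries are just filler):

```python
import sqlite3

queries = [
    "CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, name TEXT)",
    "INSERT INTO jobs (name) VALUES ('backup')",
    "INSERT INTO jobs (name) VALUES ('report')",
]

# Wasteful: a context switch (connect/disconnect) around every single query.
for q in queries:
    conn = sqlite3.connect("example.db")
    conn.execute(q)
    conn.commit()
    conn.close()

# Better: pay the connection cost once, do all the similar work, then leave.
conn = sqlite3.connect("example.db")
for q in queries:
    conn.execute(q)
conn.commit()
conn.close()
```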

So, what does this have to do with cloud computing?  Well, it ties in directly with automation & orchestration.  Automation is doing the work & orchestration is determining the order in which work is done.  Things can get complicated quickly when numerous tasks need to be executed & it’s not immediately apparent which need to run first & which are dependent on other tasks.  And, once that is all figured out, what happens when a task fails?  While software executes linearly, an orchestration engine provides the ability to run multiple pieces of software concurrently.  And that’s where things get complicated real fast.  Sometimes it may make sense to execute things serially (one at a time) vs. in parallel (more than one at a time) simply b/c it becomes very hard to manage more than one task at the same time.
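
For illustration, here’s a small Python sketch of serial vs. parallel execution with one dependent task (the task names are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def provision(name):
    """Stand-in for a unit of automation work."""
    time.sleep(1)          # simulate a slow operation
    return f"{name} done"

independent = ["web server", "app server", "db server"]

# Serial: simple to reason about; total time is roughly 3 seconds.
for task in independent:
    print(provision(task))

# Parallel: the orchestration engine's advantage when tasks don't depend
# on each other -- roughly 1 second total, but failures & ordering are
# now harder to manage.
with ThreadPoolExecutor() as pool:
    for result in pool.map(provision, independent):
        print(result)

# A dependent task still has to wait: run it only after the others finish.
print(provision("load balancer config"))
```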

We live in a world in which there are 10 different devices from which we can check our email and, if we want, we can talk to our smartphone & ask it to read our email to us.  Technology has made it easy for us to get information virtually any time & in any format we want.  However, it is because of this information overload that our brains have trouble separating all the useful information from the white noise.  So we try to be more productive and we multi-task but that usually means we’re becoming more busy than productive.  In blogs to follow, I will provide some best practices for determining when it makes sense to run more than one task at a time.  Now, if you don’t mind, I need to help my daughter find Maine…

 

Cloud Corner Series- Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and why that is critical in a cloud environment. I hope you enjoy it; even if the info stinks, at least you get 5 minutes of eye candy watching yours truly!

If you’re looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register: space is limited and filling up quickly, so check it out!

Automation & Orchestration Part 1: What’s In A Name? That Which We Call a “Service”…

The phrases “service,” “abstraction,” & “automation & orchestration” are used a lot these days. Over the course of the next few blogs, I am going to describe what I think each phrase means and in the final blog I will describe how they all tie in together.

Let’s look at “service.” To me, when you trim off all the fat, that word means, “Something (from whom) that provides a benefit to something (to whom).” The first thing that comes to mind when I think of who provides me a service is a bartender. I like wine. They have wine behind the bar. I will pay them the price of a glass + 20% for them to fill that glass & move it from behind the bar to in front of me. It’s all about services these days. Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service. Professional services. Service level agreement. No shirts, no shoes, no service.

Within a company, there are many people working together to deliver a service. Some to external people & some to internal people. I want to examine an internal service because those tend to be much more loosely defined & documented. If a company sells an external service to a customer, chances are that service is very well defined b/c that company needs to describe in very clear terms to the customer exactly what they are getting when the customer shells out money. If that service changes, careful consideration needs to be paid to the ways that service can add more benefit (i.e., make the company more money) and the ways in which parts of that service will change or be removed. Think about how many “Terms of Service & Conditions” pamphlets you get from a credit card company and how many pages each one is.

It can take many, many hours as a consultant to understand a service as it exists in a company today. Typically, the “something” that provides a benefit is the many people who work together to deliver that service. In order to define the service and its scope, you need to break it down into manageable pieces…let’s call them “tasks.” And those tasks can be complex, so you can break those down into “steps.” You will find that each task (with its one or more steps) that is part of a service is usually performed by the same person over and over again. Or, if the task is performed a lot (many times per day), then that task can usually be executed by a member of a team and not just a single person. Having the capability internally for more than one person to perform a task also protects the company from when Bob in accounting takes a sick day or when Bob in accounting takes home a pink slip. I’ll throw in a teaser for when I cover automation and orchestration…it would be ideal that not only can Bob do a task, but a computer as well (automation). That also may play into Bob getting a pink slip…but, again, more on that later. For now Bob doesn’t need to update his resume.
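
For the programmers following along, here’s one hypothetical way to picture the service → task → step decomposition in code; this is purely an illustration, not any particular tool’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    automated: bool = False   # can a computer do it, or only Bob?

@dataclass
class Task:
    name: str
    owners: list               # everyone who can perform it, not just Bob
    steps: list = field(default_factory=list)

@dataclass
class Service:
    name: str
    tasks: list = field(default_factory=list)

onboarding = Service(
    name="New employee onboarding",
    tasks=[
        Task(
            name="Create accounts",
            owners=["Bob", "Alice"],   # two owners: sick days happen
            steps=[
                Step("Create directory entry"),
                Step("Assign mailbox", automated=True),
            ],
        )
    ],
)
```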

A lot of companies have not documented many, if any, of the internal services they deliver. I’m sure there is someone who knows the service from soup to nuts, but it’s likely they don’t know how to do every task (can’t), or may not have the authority/permission to do every task (shouldn’t). Determining who in a company performs what task(s) can be a big undertaking in and of itself. And then, once you find Bob (sorry to pick on you, Bob), it takes a lot of time for him to describe all the steps he does to complete a task. And once you put it on paper & show Bob, he remembers that he missed a step. And once you’ve pieced it all together and Bob says, “Yup, that about covers it,” you ask Bob what happens when something goes wrong and he looks at you and says, “Oh man, where do I begin?”

That last part is key. When things go well I call it the “Happy Day Scenario.” But things don’t always go well (ask the Yankees after the 2004 season), and just as important, if not more so, in understanding a service is knowing what to do when the Bob hits the fan. This part is almost never documented. Documentation is boring to lots of people, and it’s hard enough for people to capture what the service *should* do, let alone what it *could* do if something goes awry. So it’s a challenge to get people to recall and also predict what could go wrong. Documenting and regurgitating the steps of a business service “back” to the company is a big undertaking and very valuable to that company. Without knowing what Bob does today, it’s extremely hard to tell him how he can do it better.

Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most—if not all—of the commodity IT activities in your data center need to be automated (i.e. translated into a workflow) and then those singular workflows strung together (i.e. orchestrated) into a value chain of events that delivers a business benefit. An example of the orchestration of a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets—virtual machines—that represent web, application and database servers, as well as the OSes, software stacks and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit: a developer can now use those assets to create an application for producing revenue, decreasing costs or managing existing infrastructure better (the holy trinity of business benefits).
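
To make “singular workflows strung together” concrete, here’s a toy sketch in Python; every function name is hypothetical and stands in for what would really be hundreds of process steps:

```python
def provision_vms(spec):
    """Workflow 1: create the web, app, and database VMs."""
    return [f"vm-{role}" for role in spec["roles"]]

def install_stack(vms):
    """Workflow 2: lay down the OSes and software stacks."""
    return {vm: "configured" for vm in vms}

def register_with_lb(vms):
    """Workflow 3: wire the assets into the infrastructure."""
    return f"{len(vms)} nodes behind load balancer"

def commission_composite_app(spec):
    """The orchestration: singular workflows chained into a value chain.

    If any step raises, everything after it never runs -- which is why
    knowing the failure paths matters as much as the happy path.
    """
    vms = provision_vms(spec)
    install_stack(vms)
    return register_with_lb(vms)

print(commission_composite_app({"roles": ["web", "app", "db"]}))
```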

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory, so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and, in turn, an orchestration; then I think you’ll better see what I mean. We’ll use the example from above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and depending on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (and all letters in between). A reasonably encapsulated process is somewhere on the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ), you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I could dive into the physical process steps (hundreds, by the way) involved, but you already know those, so it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, the run books and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information that resides in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the required information for building out the assets you use, in our example, the composite application. Automating the process steps that are manifested only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked can seldom answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. Now if you give someone a listing or scenario to which they can relate (when do you commission this type of composite application, based on this list of system activities and tools?) they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model. But what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? It is then called observation, either self or external, where all process steps are documented and compiled. Obviously this is labor intensive and time inefficient, but unfortunately it is the reality, because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed and operated…instead relying on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about—the ones that you can’t get from a system search of your repositories—are the ones that will take most of the time to document, which is my point (“what you think you’re doing is less than half of what you’re really doing”), and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.

BIG-IP Solutions for Microsoft Private Cloud

Five of the top six services critical to cloud are application delivery services, and they are available with F5 BIG-IP.


The big news at MMS 2012 was focused on private cloud and Microsoft’s latest solutions in the space with System Center 2012. Microsoft’s news comes on the heels of IBM’s latest foray with its PureSystems launch at its premier conference, IBM Pulse.

As has become common, while System Center 2012 addresses the resource most commonly associated with cloud of any kind (compute) and the means by which operational tasks can be codified, automated, and integrated, it does not delve too deeply into the network, leaving that task to its strategic partners.

One of its long-term partners is F5, and we take the task seriously. The benefits of private cloud are rooted in greater economies of scale through broader aggregation and provisioning of resources, as well as its ability to provide flexible and reliable applications that are always available and rely on many of these critical services. Applications are not islands of business functionality, after all; they rely upon a multitude of network-hosted services such as load balancing, identity and access management, and security services to ensure a consistent, secure end-user experience from anywhere, from any device. Indeed, five of the top six services seen as most critical to cloud implementations in a 2012 Network World Cloud survey are infrastructure services, all of which are supported by the application delivery tier.

The ability to consistently apply policies governing these aspects of every successful application deployment is critical to keeping the network aligned with the allocation of compute and storage resources. Without the network, applications cannot scale, reliability is variable, and security is compromised through fragmentation and complexity. The lack of a unified infrastructure architecture reduces the performance, scale, security and flexibility of cloud computing environments, both private and public. Thus, just as we ensured the elasticity and operational benefits associated with a more automated and integrated application delivery strategy for IBM, so have we done with respect to a Microsoft private cloud solution.

BIG-IP Solutions for Microsoft Private Cloud

BIG-IP solutions for Microsoft private cloud take advantage of key features and technologies in BIG-IP version 11.1, including F5’s virtual Clustered Multiprocessing™ (vCMP™) technology, iControl®, F5’s web services-enabled open application programming interface (API), administrative partitioning and server name indication (SNI). Together, these features help reduce the cost and complexity of managing cloud infrastructures in multi-tenant environments. With BIG-IP v11.1, organizations reap the maximum benefits of conducting IT operations and application delivery services in the private cloud. Although these technologies are generally applicable to all cloud implementations – private, public or hybrid – we also announced Microsoft-specific integration and support that enables organizations to extend automation and orchestration into the application delivery tier for maximum return on investment.

F5 Monitoring Pack for System Center
Provides two-way communication between BIG-IP devices and the System Center management console. Health monitoring, failover, and configuration synchronization of BIG-IP devices, along with customized alerting, Maintenance Mode, and Live Migration, occur within the Operations Manager component of System Center.

F5 Load Balancing Provider for System Center
Enables one-step, automated deployment of load balancing services through direct interoperability between the Virtual Machine Manager component of System Center 2012 and BIG-IP devices. BIG-IP devices are managed through the System Center user interface, and administrators can custom-define load balancing services.

Orchestrator component of System Center 2012
Provides F5 traffic management capabilities and takes advantage of workflows designed using the Orchestrator Runbook Designer. These custom workflows can then be published directly into System Center 2012 service catalogs and presented as a standard offering to the organization. This is made possible using the F5 iControl SDK, which gives customers the flexibility to choose a familiar development environment such as the Microsoft .NET Framework programming model or Windows PowerShell scripting.

 

[Diagram: F5 BIG-IP solution for Microsoft private cloud]

Private cloud – as an approach to IT operations – calls for a transformation of datacenters, leveraging a few specific strategic points of control to aggregate and continuously re-allocate IT resources as needed, in such a way as to make software applications more like services that are always on and secured across users and devices. Private cloud itself is not a single, tangible solution today. Today it is a solution comprised of several key components, including power/cooling, compute, storage and network, management and monitoring tools, and the software applications/databases that end users need.

We’ve moved past the hype of private cloud and its potential benefits. Now organizations need a path, clearly marked, to help them build and deploy private clouds.

That’s part of F5’s goal – to provide the blueprints necessary to build out the application delivery tier to ensure a flexible, reliable and scalable foundation for the infrastructure services required to build and deploy private clouds.

Availability

The F5 Monitoring Pack for System Center and the F5 PRO-enabled Monitoring Pack for System Center are now available. The F5 Load Balancing Provider for System Center is available as a free download from the F5 DevCentral website. The F5 integration with the Orchestrator component of System Center 2012 is based on F5 iControl and Windows PowerShell, and is also free.
