Tag Archives: automation

Lenovo ushers in new era of edge automation at scale

Lenovo has unveiled the next generation of ThinkEdge remote automation and orchestration with the introduction of new software solutions to accelerate the deployment of edge solutions. Lenovo’s new Open Cloud Automation (LOC-A) 2.6 software delivers secure automated setup, enabling customers to complete global edge deployments for any number of locations in a matter of…


Chef boosts application IQ with Habitat launch

Chef has launched a new open source project called Habitat, which it claims introduces a new approach to application automation.

The team claims Habitat is a unique piece of software that frees applications from dependency on a company’s infrastructure. When applications are wrapped in Habitat, the runtime environment is no longer the focus and no longer constrains the application itself. Because of this, the company claims, applications can run across numerous environments such as containers, PaaS, cloud infrastructure and on-premises data centres, and also have the intelligence to self-organize and self-configure.

“We must free the application from its dependency on infrastructure to truly achieve the promise of DevOps,” said Adam Jacob, CTO at Chef. “There is so much open source software to be written in the world and we’re very excited to release Habitat into the wild. We believe application-centric automation can give modern development teams what they really want — to build new apps, not muck around in the plumbing.”

Chef, founded in 2008, would generally be considered a challenger to the technology industry’s giants, though the company has made positive strides in recent years by specializing in DevOps and containers, two of the more prominent growth areas. Although both areas feature prominently in marketing campaigns and conference presentations, real-world adoption has proven more difficult.

Habitat is built on the observation that infrastructure has traditionally dictated the design of an application. Chef claims that by making the application and its automation the unit of deployment, developers can focus on business value and on planning features that will make their products stand out, rather than on the constraints of infrastructure and particular runtime environments.

“The launch of Habitat is a significant moment for both Chef and the entire DevOps community in the UK and EMEA,” said Joe Pynadath, GM of EMEA for Chef Software. “It marks our next evolution and will provide an absolutely transformative paradigm shift in how our community and customers can approach application management and automation. An approach that puts the application first and makes it independent of the underlying infrastructure. I am extremely excited to see the positive impact that our Chef community and customers throughout Europe will gain from this revolutionary technology.”

Is the Cloud Right for You?

I recently presented a session entitled “Is the Cloud Right for You?” with Randy Weis and wanted to provide a recap of what I covered in the presentation. In this video, I discuss some of the advantages of the cloud, including access to enterprise-class hardware that you might not otherwise be able to afford, load balancers, multiple data centers, redundancy, automation and more. I also cover some of the risks associated with the cloud. Enjoy, and as always, reach out with any questions!

Download eBook: The Evolution of the Corporate IT Department

By Chris Chesley, Solutions Architect


Amazon buys ClusterK to reduce AWS deployment costs

Amazon has acquired ClusterK, a provider of software that optimises deployment on AWS spot instances for cost and availability.

Amazon confirmed the acquisition to BCN but declined to offer any details about how the technology would be integrated in AWS, or the financial terms of the acquisition.

One of the challenges with EC2 spot instances is that cost and availability can vary dramatically depending on overall demand.

At the same time, when these instances are used for long jobs (say, running batch jobs on large databases) and those jobs are interrupted, those instances can actually disappear from right under you – unless failover to reserved instances or similar techniques are deployed.

Those are some of the things ClusterK aims to solve. It offers an orchestration and scaling service that uses the AWS spot market in conjunction with on-demand or reserved instances to optimise workload deployments for cost and availability – an automated way of keeping workload cost and availability in check (the company claims it can reduce cloud costs by up to 90 per cent).
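To make those mechanics concrete, here is a minimal, hypothetical sketch of spot-with-fallback logic in Python using boto3. The AMI ID, instance type and price ceiling are placeholder assumptions for illustration; ClusterK’s actual bid-optimisation logic is proprietary and considerably more sophisticated.

```python
import boto3

# Placeholder values for illustration only
AMI_ID = "ami-12345678"      # hypothetical AMI
INSTANCE_TYPE = "m3.large"
MAX_SPOT_PRICE = 0.05        # our price ceiling, in USD/hour (hypothetical)

ec2 = boto3.client("ec2", region_name="us-east-1")

def current_spot_price() -> float:
    """Look up the most recent spot price for our instance type."""
    history = ec2.describe_spot_price_history(
        InstanceTypes=[INSTANCE_TYPE],
        ProductDescriptions=["Linux/UNIX"],
        MaxResults=1,
    )
    return float(history["SpotPriceHistory"][0]["SpotPrice"])

def launch():
    """Prefer a spot instance while it is cheap; fall back to on-demand."""
    if current_spot_price() < MAX_SPOT_PRICE:
        # Bid on the spot market, up to our price ceiling
        ec2.request_spot_instances(
            SpotPrice=str(MAX_SPOT_PRICE),
            InstanceCount=1,
            LaunchSpecification={
                "ImageId": AMI_ID,
                "InstanceType": INSTANCE_TYPE,
            },
        )
    else:
        # Spot market too expensive or volatile: pay on-demand rates
        ec2.run_instances(
            ImageId=AMI_ID,
            InstanceType=INSTANCE_TYPE,
            MinCount=1,
            MaxCount=1,
        )
```

A real scheduler would also watch for spot interruption notices and checkpoint or migrate work before an instance is reclaimed.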

While it’s not clear exactly how Amazon intends to integrate the technology, it is clear the company is keen to do what it takes to keep the price of its services dropping, which is where ClusterK could certainly add value. While disclosing its cloud revenues for the first time last week, the company said it has dropped the prices of its services about 50 times since AWS launched ten years ago.

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect back on the past year. We’ve been talking a lot about that evil word ‘cloud’ for the past 3 to 4 years, but this year put a couple of other terms up in lights, including Software Defined X (Datacenter, Networking, Storage, etc.) and Big Data. Like ‘cloud,’ these two newer terms can easily mean different things to different people, but put in simple terms, in my opinion, there are some generic definitions which apply in almost all cases. Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central, vendor-agnostic point for configuration – except of course for the vendor providing the Software Defined solution :). I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order, with Big Data technologies already well on the way.

As for our friend ‘the cloud,’ 2013 did see a good amount of growth in consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS). IT has adopted a ‘virtualization first’ strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter. I anticipate we’ll begin to see a ‘SaaS first’ approach being adopted in short order, if it is not out there already. However, I can’t necessarily say the same on the IaaS side so far as ‘IaaS first’ goes. While IaaS is a great solution for elastic computing, I still see most usage confined to application development or super-large scale-out application (think Netflix) use cases. The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud simply hasn’t happened. Why? My opinion is that for traditional applications, neither the cost model nor the operational model makes sense, yet.

In relation to ‘cloud,’ I did see a lot of adoption of advanced automation, orchestration, and management tools, and thus an uptick in ‘private clouds.’ There are some fantastic tools now available both commercially and open source, and I absolutely expect this adoption trend to continue, especially in the Enterprise space. Datacenters, which have a vast amount of change occurring whether in production or test/dev, can greatly benefit from these solutions. However, this comes with a word of caution – just because you can doesn’t mean you should. I say this because I have seen several instances where customers have wanted to automate literally everything in their environments. While that may sound good on the surface, I don’t believe it’s always the right thing to do. There are times when a human touch remains the best way to go.

As always, there were some big time announcements from major players in the industry. Here are some posts we did with news and updates summaries from VMworld, VMware Partner Exchange, EMC World, Cisco Live and Citrix Synergy. Here’s an additional video from September where Lou Rossi, our VP of Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware’s Horizon Suite, as well as a webinar on our own Cloud Management as a Service offering.

The past few years have seen various predictions about the unsustainability of Moore’s Law – which observes that processors roughly double in computing power every 18 to 24 months – and 2013 was no exception. The latest prediction is that by 2020 we’ll reach the 7nm mark and Moore’s Law will no longer hold. The interesting part is that this prediction is based not on technical limitations but on economic ones: getting below the 7nm mark will be extremely expensive from a manufacturing perspective and, hey, 64k of RAM is all anyone will ever need, right? :)

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive surveillance program, seemingly capturing every packet of data coming into or out of the US across the Internet. I won’t get into any political discussion here, but suffice it to say this is probably the largest example of ‘big data’ that currently exists. This also has large potential ramifications for public cloud adoption: security and data integrity have been 2 of the major roadblocks to adoption, so it certainly doesn’t help that customers may now be concerned about the NSA eavesdropping on everything going on within public datacenters. It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as a result of customers slowing adoption because of this. The only good news, at least for now, is that it’s very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data they are capturing. However, like anything else, it’s probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).

Why Automate? What to Automate? How to Automate?

By John Dixon, Consulting Architect

Automation is extremely beneficial to organizations. However, questions often come up around why to automate, what to automate, and how to automate.

Why automate?

There are several key benefits surrounding automation. They include:

  • Saving time
  • Freeing employees to focus on other (hopefully more strategic) tasks
  • Reducing errors by removing human intervention
  • Improving troubleshooting and support, since everything is deployed the same way

What to automate?

Organizations should always start with the voice of the customer (VoC). IT departments need to factor in what the end user wants and expects in order to improve their experience. If you can’t trace something you’re automating back to an improved customer experience, that’s usually a good warning sign that you should not be automating it. In addition, you need to be able to trace back how automation has benefited the organization. The benefit should always be measurable, and it should always be financial: for example, hours of manual effort saved multiplied by a loaded hourly rate.

What are companies automating?

Request management is the hot one, because it is a major component of cloud computing. This includes service catalogues and self-service portals. Providing a self-service portal, sending the request for approval based on the dollar amount requested, and fulfilling the order through one or more systems is something that is commonly automated today. My advice here is to automate tasks through a general purpose orchestrator tool (such as CA Process Automation or similar) so that automated jobs can be managed from a single console, instead of stitching together disparate systems that call each other in a “rat’s nest” of automation. The general purpose orchestrator also allows for easier troubleshooting when an automated task does not complete successfully.
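As a rough illustration of that request flow, here is a minimal, hypothetical Python sketch: a request is routed for approval based on the dollar amount and then fulfilled from a single point of control. The threshold and fulfilment step are invented for the example; in practice each step would be a component in the orchestrator, not a standalone script.

```python
from dataclasses import dataclass

AUTO_APPROVE_LIMIT = 500.00  # hypothetical dollar threshold

@dataclass
class ServiceRequest:
    requester: str
    description: str
    amount: float

def needs_manager_approval(request: ServiceRequest) -> bool:
    """Route the request based on the dollar amount requested."""
    return request.amount > AUTO_APPROVE_LIMIT

def fulfill(request: ServiceRequest) -> None:
    """Fulfil the order through one or more downstream systems."""
    print(f"Provisioning '{request.description}' for {request.requester}")

# A small request flows straight through; a large one is routed for approval
req = ServiceRequest("jsmith", "small dev VM", 150.00)
if needs_manager_approval(req):
    print("Sent to manager for approval")
else:
    fulfill(req)
```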

How to automate?

There are some things to consider when sitting down to automate a task, or even determining the best things to automate. Here are a few key points:

  1. Start with the VoC, or Voice of the Customer, and work backwards to identify the systems that are needed to automate a particular task. For example, maybe the customer is the Human Resources department, and they want to automate the onboarding of a new employee. The workflow may have to set up user accounts, order a new cell phone, order a new laptop, and schedule the new employee on their manager’s calendar for their first day of work. Map out the systems that are required to accomplish this, and integrate those – and no more. You may find that some parts of the procedure are already automated; perhaps your phone provider already has an interface to programmatically request new equipment. Take full advantage of such components.
  2. Don’t automate things that you can’t trace back to a benefit for the organization. Just because you can automate something doesn’t mean that you should. Again, use the voice of the customer and user stories here. A common user story is structured as follows:
    1. “As a [role],
    2. I want to [get something done]
    3. So that I can [benefit in the following way]”
  3. Start small and work upwards to automate more and more complex tasks. Remember the HR onboarding procedure in point #1? I wouldn’t suggest beginning your automation journey there. Pick out one thing to automate from a larger story, and get it working properly. Maybe you begin by automating the scheduling of an appointment in Outlook or your calendaring system, or creating a user in Active Directory. Those pieces become components in the HR onboarding story, but perhaps in other stories as well.
  4. Use a general purpose orchestrator instead of stitching together different systems. As in point #3, using an orchestrator will allow you to build reusable components that are useful for automating different tasks (see the sketch after this list). A general purpose orchestrator also allows for easier troubleshooting when things go wrong, tracking of automation jobs in the environment, and more advanced conditional logic. Troubleshooting automation any other way can be very difficult.
  5. You’ll need someone with software development experience. Some automation packages claim that even non-developers can build robust automation with “no coding required.” In some cases, that may be true. However, the experience that a developer brings to the table is an absolute must-have when automating complex tasks like the HR onboarding example in point #1.
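To ground points #3 and #4, here is a minimal, hypothetical sketch of what a general purpose orchestrator does: small reusable components are composed into a larger workflow, and every step is recorded centrally so a failed job is easy to find. Products like CA Process Automation do far more than this, and the task functions below are stubs invented for the example.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

class Orchestrator:
    """Runs named tasks in order and records the outcome of each step."""

    def __init__(self):
        self.job_log = []  # central record of every step, for troubleshooting

    def run(self, workflow, tasks, context):
        for task in tasks:
            try:
                task(context)
                self.job_log.append((workflow, task.__name__, "ok"))
            except Exception as exc:
                self.job_log.append((workflow, task.__name__, f"failed: {exc}"))
                logging.error("%s stopped at %s: %s", workflow, task.__name__, exc)
                return False
        return True

# Reusable components: stubs standing in for real system integrations
def create_user_account(ctx):
    logging.info("Creating account for %s", ctx["employee"])

def schedule_first_day_meeting(ctx):
    logging.info("Scheduling day-one meeting with %s", ctx["manager"])

# Compose small, individually tested pieces into the larger onboarding story
orch = Orchestrator()
orch.run("hr_onboarding",
         [create_user_account, schedule_first_day_meeting],
         {"employee": "new.hire", "manager": "their.manager"})
```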

What has your organization automated? How have the results been?

IT Multi-Tasking: I Was Told There’d Be No Math

By Ben Sawyer, Solutions Engineer

The term “multi-tasking” basically means doing more than one thing at once. I am writing this blog while playing Legos w/ my son & helping my daughter find New Hampshire on the map. But I am by no means doing more than one thing at once; I’m just quickly switching back & forth between the three, which is referred to as “context switching.” Context switching in most cases is very costly. There is a toll to be paid in terms of productivity when ramping up on a task before you can actually tackle that task. In an ideal world (where I also have a 2 handicap) one has the luxury of doing a task from start to finish before starting a new task. My son just refuses to let me have 15 minutes to write this blog because apparently building a steam roller right now is extremely important. There is a sense of momentum when you work on a task for a short while because you begin to really concentrate on the task at hand. Since we know it’s nearly impossible to put ourselves in a vacuum & work on one thing only, the best we can hope for is to do “similar” things (i.e., in the same context) at the same time. Let’s pretend I have to email my co-worker that I’m late writing a blog, shovel my driveway, buy more Legos at Amazon.com, & get the mail (okay, I’m not pretending). Since emailing & buying stuff online both require me to be in front of my laptop, and shoveling & going to my mailbox require me to be outside my house (my physical location), it would be far more efficient to do the tasks in the same “context” at the same time. Think of the time it takes to get all bundled up, or the time it takes to power on your laptop to get online. Doing a few things at once usually means that you will not do each task as well (its quality suffers) as you would have had you done it uninterrupted. The more closely in time you can do the pieces of a task, the better you will do that task, since it will be “fresher” in your mind. So…

  • Entire Task A + Entire Task B = Great Task A & Great Task B.
  • 1/2 Task A + Entire Task B + 1/2 Task A = Okay Task A & Excellent Task B.
  • 1/2 Task A + 1/2 Task B + 1/2 Task A + 1/2 Task B = Good Task A & Good Task B

Why does this matter? Well, because the same exact concept applies to computers & the software we write. A single processor can do only one thing at a time (let’s forget threads), but it can context switch extremely fast, which gives the illusion of multi-tasking. But, like a human, a computer pays a cost for context switching. So, when you write code, try to do many “similar” things at the same time. If you have a bunch of SQL queries to execute, then you should open a connection to the database first, execute them, & close the connection. If you need to call some VMware APIs, then you should connect to vCenter first, do them, & close the connection. Opening & closing connections to any system is often slow, so group your actions by context, which, in this case, means systems. This also makes the code easier to read. Speaking of reading, here’s a great example of the cost of context switching. The author Tom Clancy loves to switch characters & plot lines every chapter. This makes following the story very hard, & whenever you put the book down & start reading again it’s nearly impossible to remember where you left off b/c there’s never, ever a good stopping point. Tom Clancy’s writing is one of the best examples of how costly context switching is.
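To make the grouping idea concrete, here is a small sketch using Python’s built-in sqlite3 module. The queries are invented for the example; the point is that the connection is opened once, all the work in that context is done, and then the connection is closed, rather than paying the open/close toll per query.

```python
import sqlite3

queries = [
    "CREATE TABLE IF NOT EXISTS chores (name TEXT)",
    "INSERT INTO chores VALUES ('shovel driveway')",
    "INSERT INTO chores VALUES ('buy more Legos')",
]

# Open the connection once, run every query, then close it. Opening a
# fresh connection for each query would pay the slow setup cost each time.
conn = sqlite3.connect("example.db")
try:
    for sql in queries:
        conn.execute(sql)
    conn.commit()
finally:
    conn.close()
```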

So, what does this have to do with cloud computing? Well, it ties in directly with automation & orchestration. Automation is doing the work & orchestration is determining the order in which work is done. Things can get complicated quickly when numerous tasks need to be executed & it’s not immediately apparent which need to run first & which are dependent on other tasks. And, once that is all figured out, what happens when a task fails? While a single piece of software executes linearly, an orchestration engine provides the ability to run multiple pieces of software concurrently. And that’s where things get complicated real fast. Sometimes it may make sense to execute things serially (one at a time) vs. in parallel (more than one at a time) simply b/c it becomes very hard to manage more than one task at the same time.
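Here is a minimal sketch of that serial-versus-parallel trade-off using Python’s standard concurrent.futures module; the tasks are placeholders that just sleep to simulate slow work.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(name):
    """Placeholder for a real unit of automation work."""
    time.sleep(1)  # simulate a slow operation, e.g. an API call
    return f"{name} done"

names = ["task_a", "task_b", "task_c"]

# Serially: total time is roughly the sum of the parts (~3s here)
start = time.time()
results = [task(n) for n in names]
print(f"serial: {time.time() - start:.1f}s")

# In parallel: independent tasks overlap (~1s here), but failure handling,
# ordering and shared state all become harder to reason about
start = time.time()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(task, names))
print(f"parallel: {time.time() - start:.1f}s")
```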

We live in a world in which there are 10 different devices from which we can check our email and, if we want, we can talk to our smartphone & ask it to read our email to us. Technology has made it easy for us to get information virtually any time & in any format we want. However, it is because of this information overload that our brains have trouble separating the useful information from the white noise. So we try to be more productive and we multi-task, but that usually means we’re becoming busier rather than more productive. In blogs to follow, I will provide some best practices for determining when it makes sense to run more than one task at a time. Now, if you don’t mind, I need to help my daughter find Maine…

Cloud Corner Series – Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and why that is critical in a cloud environment. I hope you enjoy it – even if the info stinks, at least you have 5 minutes of eye candy watching yours truly!

If you’re looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register – space is limited and filling up quickly, so check it out!

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking: yet another car analogy. But bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring. I was a huge fan of “Car Toons,” which was sort of the Mad Magazine of cars, and basically lusted after hot rods, dragsters and sports cars. I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks. I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer. In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea what ever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships). The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought it sparkled. It was sleek, low to the ground and looked as if it were going 100 miles an hour just sitting there. It was a supercar before the word was coined and I was deeply, madly and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot; they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable – more innovative – and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera. Even if I could have afforded to buy it, I had no knowledge or experience that would enable me to effectively (or legally) drive it. That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was nowhere near being functional. The essence and nature of the car never changed, but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned. In the same way, a cloud infrastructure – fully built out with well-architected components, tested and running – is non-functional to customers who do not know how to operate that type of infrastructure.

In short: cloud capable versus cloud functional.

A cloud infrastructure should be operated around the idea of delivering IT services, not around the traditional idea of servers, storage and networks being individually built, configured and connected by people doing physical stuff. Cloud infrastructures are automated and orchestrated to deliver specific functionality, aggregated into specific services, quickly and efficiently, without the need for people doing “stuff.” In fact, people doing stuff is too slow and just gets in the way, and if you don’t change the operation of the system to reflect that, you end up with a very capable yet non-functional system.

Literally, you have to transform how you operate the system – from a traditional to a cloud infrastructure – in lock-step with how that system is materially changed, or it will be very much the same sort of difference as between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera. It’s just dreaming until the required knowledge and experience are obtained…none of which is easy or quick…but tell that to a 12 year old lost in his imagination, staring at sparkling red freedom and adventure…