Archivo de la categoría: Cloud computing

Cloud Corner Series- Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and why that is critical in a cloud environment. I hope you enjoy it- even if the info stinks, at least you have 5 minutes of eye candy watching yours truly!

If you’re looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register- space is limited and filling up quickly, so check it out!

The Evolution from a Provider of Technology Components to a Broker of Technology Services

A 3 Part Series from Trevor Williamson

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again…

Part 1: Understanding the Dilemma

IT teams are increasingly being challenged as bring-your-own-device (BYOD) policies and “as-a-service” software and infrastructure multiply in mainstream organizations.  In this new reality, developers still need compute, network and storage to keep up with growth…and workers still need some sort of PC or mobile device to get their jobs done…but they don’t necessarily need corporate IT to give it to them.  They can turn to a shadow IT organization using Amazon, Rackspace or Savvis, or use SaaS applications or an unmanaged desktop, because when all is said and done, if you can’t deliver on what your users and developers care about, they will use whatever and whoever they can to get their jobs done better, faster and cheaper.

Much of this shift toward outside services comes down to customer experience, or how your customers—your users—perceive their every interaction with IT, from your staff in the helpdesk to corporate applications they access every day.  If what you are delivering (or not delivering as the case may be) is more burdensome, more complicated or doesn’t react as fast as other service providers (like Amazon, Office 365, or Salesforce, etc.), then they will turn (in droves) toward those providers.

Now the question hanging heavy in the air is what do those providers have, except of course scale, that your IT organization doesn’t have?  What is the special sauce for them to be able to deliver those high-value services, quicker and at a lower cost than you can?

In a few words: IT Service Management (ITSM)…but wait!…I know the first reaction you might have is that ITSM has become a sour subject and that if you hear ITIL chanted one more time you’re going to flip out.  The type of ITSM I’m talking about is really the next generation, and it has only passing similarities to the service management initiatives of the past.  While it is agreed that ITSM has the potential to deliver the experiences and outcomes your developers and users need and want, today’s ITSM falls far short of that ideal.  “Process for process’s sake,” you’ve probably heard…but whatever; we are still measuring success based on internal IT efficiencies, not customer or financial value, or even customer satisfaction. We still associate ITSM exclusively with ITIL best practices and we continue to label ourselves as providers of technology components.

As it turns out, the adage “You cannot fix today’s problems with yesterday’s solutions” is as right as it ever was.  We need to turn ITSM on its head and create a new way forward based on customer centricity, services focus, and automated operations.  We have to rethink the role we play and how we engage with the business.  Among the most significant transformations of IT we need to complete is from a provider of technology components to a broker of technology services. We have relied on ITSM to drive this transformation, but ITSM needs to change in order to be truly effective in the future. Here’s why:

   Service management’s roots were focused on the customer:  “Service management” originated within the product marketing and management departments and, from the beginning, it placed the customer at the center of all decision making within the service provider organization. It’s the foundation for transforming product-oriented organizations into service providers where the customer experience and interaction are designed and managed to cost-effectively deliver customer results and satisfaction.

  But when we applied service management to IT, we lost customer focus: Applying service management to information technology produced the well-known discipline of ITSM but, unfortunately, IT professionals associated it exclusively with the IT Infrastructure Library (ITIL) best practices, which focus on processes for managing IT infrastructure to enable and support services. What’s missing is the customer perspective.

   In the age of the customer, we need to proactively manage services via automation: In the age of the customer, technology-led disruption (virtualization, automation, orchestration, operating at scale, etc.) erodes traditional competitive barriers making it easier than ever for empowered employees and app developers to take advantage of new devices and cloud-based software. To truly function as a service provider, IT needs to first and foremost consider the customer and the customer’s desired outcome in order to serve them faster, cheaper, and at a higher quality.  In today’s world, this can only be accomplished via automation.

When customers don’t trust a provider to deliver quality products or services, they seek alternatives. That’s a pretty simple concept that everyone can understand but, what if the customer is a user of IT services that you provide?  Where do they go if they don’t like the quality of your products or services?  Yep, Amazon, Rackspace, Terremark, etc., and any other service provider who offers a solution that you can’t…or that you can’t in the required time or for the required price.

The reason why these service providers can do these seemingly amazing things and offer such diverse and, at times, sophisticated services, is because they have eliminated (mostly) the issues associated with humans doing “stuff” by automating commodity IT activities and then orchestrating those automated activities toward delivering aggregate IT services.  They have evolved from being providers of technology components to brokers of technology services.

If you’re looking for more information on BYOD, register for our upcoming webinar, “BYOD Webinar- Don’t Fight It, Mitigate the Risk with Mobile Management.”

Next…Part 2: Planning the Solution

 

Stay Safe in the Cloud With Two-Factor Authentication

The use of two-factor authentication has been around for years, but the recent addition of this security feature in cloud services from Google and Dropbox has drawn widespread attention.  The Dropbox offering came just two months after a well-publicized security breach at their online file sharing service.

Exactly What Is Two-Factor Authentication?

Of course, most online applications require a user name and password in order to log on.  Much has been written about the importance of managing your passwords carefully.  However, simple password protection only goes so far.

Two-factor authentication involves not only the use of something the user knows such as a password, but also something that only the user has.  An intruder can no longer gain access to the system simply by illicitly obtaining your password.

Authentication Tools

  • ATM Cards:  These are perhaps the most widely used two-factor authentication device.  The user must both insert the card and enter a password in order to access the ATM.
  • Tokens:  The use of tokens has increased substantially in recent years.  Most are time-based tokens: a key-sized plastic device with a screen that displays a continually changing security code.  The user must enter not only their password, but also the current code from the token.  Tokens have been popular with sensitive applications such as online banking and brokerage sites.
  • Smart Cards:  These function similarly to ATM cards, but are used in a wider variety of applications.  Unlike most ATM cards, smart cards have an embedded microprocessor for added security.
  • Smart Phones:  The proliferation of smart phones has provided the perfect impetus to expand two-factor authentication to widely used internet applications in the cloud.  In these cases, users must enter not only a password, but also a security code from their phone or other mobile device.  This code can be sent to a phone by the service provider as an SMS text message or generated on a smartphone using a mobile authenticator app.  Both Google and Dropbox now use this method.
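The continually changing codes that hardware tokens and mobile authenticator apps display are typically generated with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. A minimal Python sketch of the idea (the base32 secret below is the RFC’s published test value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Time-based one-time password (RFC 6238) from a base32 shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian time step
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at T=59 seconds
# the 8-digit code is 94287082, so the common 6-digit form is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

Both sides share the secret once (usually via a QR code) and then derive matching codes independently; the server simply recomputes the code for the current time step and compares.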

Yahoo! Mail and Facebook are also introducing two-factor authentication using smart phones.  However, their methodology only prompts the user to enter the security code if a security breach is suspected or a new device is used.

So What’s Next?

Cloud security is a hot topic and two-factor authentication is one way to mitigate users’ well-founded concerns.  As a result, development and adoption of two-factor authentication systems is proceeding at a rapid pace, and it should be available for most cloud applications within just a few short years.

The shift from token-based authentication to SMS-based authentication is also likely to accelerate along with smartphone use.

Two-factor and even three-factor authentication using biometrics will become more popular.  Fingerprint readers are already quite common on laptop computers.  Use of facial recognition, voice recognition, hand geometry, retina scans, etc. will become more common as the technology develops and the price drops.  The obvious advantage of these biometric systems is that the authentication factor cannot be stolen or borrowed by a third party to gain access to the system.

As with any security system, two-factor authentication is not 100% secure.  Even token systems have been hacked and there is no doubt that there will be breaches in SMS authentication tools as well.  However, two-factor authentication still provides the best way to stay safe in the cloud and it’s advisable to use it whenever possible.

This post is by Rackspace blogger Thomas Parent. Rackspace Hosting is a service leader in cloud computing, and a founder of OpenStack, an open source cloud operating system. The San Antonio-based company provides Fanatical Support to its customers and partners, across a portfolio of IT services, including Managed Hosting and Cloud Computing.

A More Practical View of Cloud Brokers

#cloud The conventional view of cloud brokers misses the need to enforce policies and ensure compliance

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat about cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature (business process management), for IT this is a focus on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations.

Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs do you really need IT? Even in cases where you’re deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary.

While from a technical difficulty perspective this may be true (and if we say it is, it is for only the smallest of organizations) there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide, not the least of which is being able to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX – all of which have some technical aspect to them and need to be enforced, well, with technology.

See, choosing a cloud deployment environment is not just about «will this workload run in cloud X». It’s far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker. But what we can’t know is whether or not a particular cloud will be able to enforce other policies – those handed down by governments around the globe and those put into writing by the organization itself.

Imagine the horror of a CxO upon discovering an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse – jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It’s one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account, it’s quite another to violate some of the government regulations that govern not only data at rest but in flight.

A PRACTICAL VIEW of CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary: a view that enables such solutions to consider not only performance and price, but also the ability to adhere to and enforce corporate and regulatory policies. Such a data center-hosted cloud broker would be able to take these very important factors into consideration when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud, it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability for IT to ensure that the technical aspects of policies are included in the decision-making process.
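As a rough sketch of what such a broker’s core decision might look like (the provider names, prices, and capability tags below are all invented for illustration), it treats policy compliance as a hard filter and only then ranks the survivors on cost:

```python
# Each candidate environment advertises the policy controls it can enforce.
# All names, prices, and capability tags are hypothetical.
providers = [
    {"name": "public-cloud-a", "cost_per_hour": 0.08,
     "capabilities": {"waf", "encryption-at-rest"}},
    {"name": "public-cloud-b", "cost_per_hour": 0.05,
     "capabilities": {"encryption-at-rest"}},
    {"name": "private-cloud", "cost_per_hour": 0.12,
     "capabilities": {"waf", "encryption-at-rest", "data-residency-us"}},
]

def select_target(required_policies, candidates):
    """Cheapest environment that can enforce every required policy, else None."""
    compliant = [c for c in candidates
                 if required_policies <= c["capabilities"]]
    return min(compliant, key=lambda c: c["cost_per_hour"]) if compliant else None

# A PCI-DSS-ish workload that needs a web application firewall skips the
# cheapest cloud because that cloud cannot enforce the policy.
print(select_target({"waf", "encryption-at-rest"}, providers)["name"])
```

The point is the ordering: compliance filters first, price breaks ties second, which is exactly the consideration the conventional price/performance view of brokers leaves out.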

And it must be IT that codifies those requirements into a policy that can be leveraged by the broker, and ultimately the end-user, to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or for ensuring a default «deny all» policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for their ability to meet such requirements. That’s the role of IT, and even wearing rainbow-colored cloud glasses can’t eliminate the very real and important role IT has to play here.

The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today’s environments and threat landscape, the importance of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were.

So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more businessy details such as costs.

The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.



Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5’s entire product suite.

Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.

She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

 

F5 Networks




Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then, I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring.  I was a huge fan of “Car Toons,” which was sort of the Mad Magazine of cars, and basically lusted after hot rods, dragsters and sports cars.  I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks.  I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer.  In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea whatever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships).  The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought it sparkled.  It was sleek, low to the ground, and looked as if it were going 100 miles an hour just sitting there.  It was a supercar before the word was coined and I was deeply, madly and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot; they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable—more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera.  Even if I could have afforded to buy it, I had no knowledge or experience that would enable me to effectively (or legally) drive it.  That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was not anywhere near being functional.  The essence and nature of the car never changed but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned.  The same way a cloud infrastructure—fully built out with well architected components, tested and running—would be non-functional to customers who did not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way that a cloud infrastructure should be operated is based on the idea of delivering IT services, not the traditional idea of servers, storage and networks being individually built, configured and connected by people doing physical stuff.  Cloud infrastructures are automated and orchestrated to deliver specific functionality aggregated into specific services, quickly and efficiently, without the need for people doing “stuff.”  In fact, people doing stuff is too slow and just gets in the way, and if you don’t change the operation of the system to reflect that, you end up with a very capable yet non-functional system.
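That operational shift can be caricatured in a few lines of Python (the task functions and spec fields here are invented for illustration): each commodity activity is automated on its own, and orchestration chains them into one aggregate service with no person doing “stuff” in between:

```python
# Hypothetical automated tasks; in a real cloud each would call a
# provisioning API rather than build a dict.
def provision_vm(spec):
    return {"vm_size": spec["vm_size"]}

def attach_storage(resource, spec):
    resource["disk_gb"] = spec["disk_gb"]
    return resource

def configure_network(resource, spec):
    resource["vlan"] = spec["vlan"]
    return resource

def deliver_service(spec):
    """Orchestration: compose automated tasks into one aggregate IT service."""
    resource = provision_vm(spec)
    resource = attach_storage(resource, spec)
    return configure_network(resource, spec)

print(deliver_service({"vm_size": "medium", "disk_gb": 100, "vlan": 42}))
```

Operating this well is the knowledge-and-experience part of the analogy: the individual tasks are the capable car, and the orchestration discipline is knowing how to drive it.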

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed or it will be very much the same sort of difference between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera.  It’s just dreaming until the required knowledge and experience is obtained…none of which is easy or quick…but tell that to a 12 year old lost in his imagination staring at sparkling red freedom and adventure…

Automation & Orchestration Part 1: What’s In A Name? That Which We Call a “Service”…

The terms “service,” “abstraction,” and “automation & orchestration” are used a lot these days. Over the course of the next few blogs, I am going to describe what I think each one means, and in the final blog I will describe how they all tie together.

Let’s look at “service.” To me, when you trim off all the fat, that word means “something (from whom) that provides a benefit to something (to whom).” The first thing that comes to mind when I think of who provides me a service is a bartender. I like wine. They have wine behind the bar. I will pay them the price of a glass + 20% for them to fill that glass & move it from behind the bar to in front of me. It’s all about services these days. Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service. Professional services. Service level agreements. No shirts, no shoes, no service.

Within a company, there are many people working together to deliver a service. Some deliver to external people & some to internal people. I want to examine an internal service because those tend to be much more loosely defined & documented. If a company sells an external service to a customer, chances are that service is very well defined, because that company needs to describe in very clear terms to the customer exactly what they are getting when the customer shells out money. If that service changes, careful consideration needs to be paid to the ways that service can add more benefit (i.e., make the company more money) and the ways parts of that service will change or be removed. Think about how many “Terms of Service & Conditions” pamphlets you get from a credit card company and how many pages each one is.

It can take many, many hours as a consultant to understand a service as it exists in a company today. Typically, the “something” that provides a benefit is the many people who work together to deliver that service. In order to define the service and its scope, you need to break it down into manageable pieces…let’s call them “tasks.” And those tasks can be complex, so you can break those down into “steps.” You will find that each task, with its one or more steps, is usually performed by the same person over and over again. Or, if the task is performed a lot (many times per day), then that task can usually be executed by any member of a team and not just a single person. Having the internal capability for more than one person to perform a task also protects the company when Bob in accounting takes a sick day, or when Bob in accounting takes home a pink slip. I’ll throw in a teaser for when I cover automation and orchestration…it would be ideal that not only Bob can do a task, but a computer as well (automation). That also may play into Bob getting a pink slip…but, again, more on that later. For now, Bob doesn’t need to update his resume.
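That service → task → step breakdown maps naturally onto a tiny data model. Here is a Python sketch (the class names, the example service, and the risk check are all my own illustration, not a prescribed tool):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    steps: list          # the ordered steps that make up the task
    performers: set      # the people (or "automation") able to execute it

@dataclass
class Service:
    name: str
    tasks: list = field(default_factory=list)

    def single_points_of_failure(self):
        """Tasks only one performer can do: the 'Bob takes a sick day' risk."""
        return [t.name for t in self.tasks if len(t.performers) == 1]

# A hypothetical internal service where only Bob can reconcile accounts.
close = Service("monthly-close", [
    Task("reconcile-accounts", ["export", "match", "post"], {"Bob"}),
    Task("publish-report", ["compile", "email"], {"Bob", "Alice"}),
])
print(close.single_points_of_failure())
```

Even a model this small makes the scoping exercise concrete: once tasks and performers are written down, the single-performer tasks (and the candidates for automation) fall straight out of the data.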

A lot of companies have not documented many, if any, of the internal services they deliver. I’m sure there is someone who knows the service from soup to nuts, but it’s likely they don’t know how to do every task (can’t), or don’t have the authority/permission to do every task (shouldn’t). Determining who in a company performs what task(s) can be a big undertaking in and of itself. And then, once you find Bob (sorry to pick on you, Bob), it takes a lot of time for him to describe all the steps he takes to complete a task. And once you put it on paper & show Bob, he remembers that he missed a step. And once you’ve pieced it all together and Bob says, “Yup, that about covers it,” you ask Bob what happens when something goes wrong and he looks at you and says, “Oh man, where do I begin?”

That last part is key. When things go well I call it the “Happy Day Scenario.” But things don’t always go well (ask the Yankees after the 2004 season) and just as, if not more, important in understanding a service is to know what to do when the Bob hits the fan. This part is almost never documented. Documentation is boring to lots of people and it’s hard enough for people to capture what the service *should* do let alone what it *could* do if something goes awry. So it’s a challenge to get people to recall and also predict what could go wrong. Documenting and regurgitating the steps of a business service “back” to the company is a big undertaking and very valuable to that company. Without knowing what Bob does today, it’s extremely hard to tell him how he can do it better.

Fun with Neologism in the Cloud Era

Having spent the last several blog posts on more serious considerations about cloud computing and the new IT era, I decided to lighten things up a bit.  The term “cloud” has bothered me from the first time I heard it uttered, as the concept and definition are as nebulous as, well, a cloud.  In the intervening years, when thoroughly boring my wife and friends with shop talk about the “cloud,” I came to realize that in order for cloud computing to become mainstream, “it” needs to have some way to translate to the masses.

Neologism is the process of creating new words, using existing words or combinations of existing words to form a more descriptive term.  In our industry neologisms have been used extensively, although many of us do not realize how these terms got coined.  For example, the word “blog” is a combination of “web” and “log.”  “Blog” was formed over time as the lexicon was adopted.  It began with a new form of communicating across the Internet, known as a web log.  “Web log” became “we blog” simply by moving the space between words one to the left.  Now, regardless of who you talk to, the term “blog” is pretty much a fully formed concept.  Similarly, the term “Internet” is a combination of “inter” (between) and “network,” hence meaning between networks.

Today, the term “cloud” has become so overused that confusion reigns (get it?) over everyone.  So, in the spirit of clearing the air (or thickening it), here are a few cloud-era neologisms:

Cloudable – meaning something that is conducive to leveraging cloud.  As in:  “My CRM application is cloudable” or “We want to leverage data protection that includes cloudable capabilities.”

Cloudiac – someone who is a huge proponent of cloud services.  A combination of “cloud” and “maniac,” as in:  “There were cloudiacs everywhere at Interop.”  In the not too distant future, we very well may see parallels to the “Trekkie” phenomenon.  Imagine a bunch of middle-aged IT professionals running around in costumes made of giant cotton balls and cardboard lightning bolts.

Cloudologist – an expert in cloud solutions.  Different from a Cloudiac, the Cloudologist actually has experience in developing and utilizing cloud-based services.  This will lead to master’s degree programs in Cloudology.

Cloutonomous – maintaining your autonomy over your systems and data in the cloud.  “I may be in the cloud, but I make sure I’m cloutonomous.”  Could refer to a consumer of cloud services not being tied into long-term service commitments that may inhibit their ability to move services in the event of a vendor failing to hit SLAs.

Cloud crawl – actions related to monitoring or reviewing your various cloud services.  “I went cloud crawling today and everything was sweet.”  A take-off on the common “pub crawl,” just not as fun and with no lingering after-effects.

Counter-cloud – a reference to the concept of “counter culture,” which dates back to hippie days of the 60s and 70s.  In this application, it would describe a person or business that is against utilizing cloud services mainly because it is the new trend, or because they feel that it’s the latest government conspiracy to control the world.

Global Clouding – IT’s version of Global Warming, except in this case the world isn’t becoming uninhabitable, IT is just becoming a bit fuzzy around the edges.  What will IT be like with the advent of Global Clouding?

Clackers – Cloud and Hacker.  Clackers are those nefarious, shadowy figures that focus on disruption of cloud services.  This “new” form of hacker will concentrate on capturing data in transit, traffic disruption/re-direction (i.e. DNS Changer anyone?), and platform incursion.

Because IT is so lexicon heavy, building up a stable of Cloud-based terminology is inevitable, and potentially beneficial in focusing the terminology further.  Besides, as Cloudiacs will be fond of saying… “resistance is futile.”

Do you have any Neologisms of your own? I’d love to hear some!