Category Archives: Cloud computing

IT Multi-Tasking: I Was Told There’d Be No Math

By Ben Sawyer, Solutions Engineer

 

The term “multi-tasking” basically means doing more than one thing at once.  I am writing this blog while playing Legos with my son and helping my daughter find New Hampshire on the map.  But I am by no means doing more than one thing at once; I’m just quickly switching back and forth between the three, which is referred to as “context switching.”  Context switching is, in most cases, very costly.  There is a toll to be paid in productivity while you ramp up on a task before you can actually tackle it.  In an ideal world (where I also have a 2 handicap), one has the luxury of doing a task from start to finish before starting a new one.  My son just refuses to let me have 15 minutes to write this blog because apparently building a steam roller right now is extremely important.  After a short while working on a task you build up a sense of inertia, because you begin to really concentrate on the task at hand.  Since we know it’s nearly impossible to put ourselves in a vacuum and work on one thing only, the best we can hope for is to do “similar” things (i.e., things in the same context) at the same time.

Let’s pretend I have to email my co-worker that I’m late writing a blog, shovel my driveway, buy more Legos on Amazon.com, and get the mail (okay, I’m not pretending).  Since emailing and buying things online both require me to be in front of my laptop, and shoveling and going to my mailbox require me to be outside my house (my physical location), it is far more efficient to do the tasks that share a “context” at the same time.  Think of the time it takes to get all bundled up, or the time it takes to power on your laptop and get online.  Doing a few things at once usually means you will not do each task as well (its quality suffers) as you would have had you done it uninterrupted.  And the more closely together, time-wise, you can do the pieces of a task, the better you will usually do it, since it will be “fresher” in your mind.  So…

  • Entire Task A + Entire Task B = Great Task A & Great Task B.
  • 1/2 Task A + Entire Task B + 1/2 Task A = Okay Task A & Excellent Task B.
  • 1/2 Task A + 1/2 Task B + 1/2 Task A + 1/2 Task B = Good Task A & Good Task B.

Why does this matter?  Because the same exact concept applies to computers and the software we write.  A single processor can only do one thing at a time (let’s forget threads), but it can context switch extremely fast, which gives the illusion of multi-tasking.  Like a human, though, a computer pays a cost for each context switch.  So when you write code, try to do many “similar” things at the same time.  If you have a bunch of SQL queries to execute, open a connection to the database first, execute all of them, and then close the connection.  If you need to call some VMware APIs, connect to vCenter first, make all the calls, and then close the connection.  Opening and closing connections to any system is often slow, so group your actions by context, which in this case means by system.  This also makes the code easier to read.  Speaking of reading, here’s a great example of the cost of context switching: the author Tom Clancy loves to switch characters and plot lines every chapter.  This makes following the story very hard, and whenever you put the book down and start reading again it’s nearly impossible to remember where you left off because there is never, ever a good stopping point.  Tom Clancy’s writing is one of the best illustrations of how costly context switching is.
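The database advice above can be sketched in a few lines of Python. This is a minimal illustration using the standard library’s sqlite3 as a stand-in for any database; the function names and queries are invented for the example:

```python
import sqlite3

def run_queries_batched(queries):
    """Open one connection, run every query, close once.
    The work is grouped by 'context' (the connection), so the
    slow open/close toll is paid a single time."""
    conn = sqlite3.connect(":memory:")
    try:
        results = [conn.execute(q).fetchall() for q in queries]
    finally:
        conn.close()
    return results

def run_queries_unbatched(queries):
    """Anti-pattern: pays the connect/close toll once per query --
    a context switch for every single task."""
    results = []
    for q in queries:
        conn = sqlite3.connect(":memory:")
        results.append(conn.execute(q).fetchall())
        conn.close()
    return results

print(run_queries_batched(["SELECT 1", "SELECT 2"]))  # [[(1,)], [(2,)]]
```

The same shape applies to the vCenter example: authenticate once, make all the API calls, then log out.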

So, what does this have to do with cloud computing?  It ties in directly with automation and orchestration.  Automation is doing the work; orchestration is determining the order in which work is done.  Things can get complicated quickly when numerous tasks need to be executed and it’s not immediately apparent which need to run first and which depend on other tasks.  And once that is all figured out, what happens when a task fails?  While a single piece of software executes linearly, an orchestration engine provides the ability to run multiple pieces of software concurrently.  That’s where things get complicated really fast.  Sometimes it may make sense to execute things serially (one at a time) rather than in parallel (more than one at a time) simply because it becomes very hard to manage more than one task at the same time.
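The serial-versus-parallel trade-off looks something like this in code. The task names and the provision_vm stand-in below are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def provision_vm(name):
    # Stand-in for a real automation task (e.g., an API call to a
    # hypervisor). In real life this could fail partway through.
    return f"{name}: provisioned"

tasks = ["web01", "web02", "db01"]

# Serial: one at a time. Slower, but failures are easy to reason
# about -- you always know exactly which task was running.
serial = [provision_vm(t) for t in tasks]

# Parallel: an orchestration engine runs independent tasks
# concurrently. Faster, but a failure can now happen in any of
# several in-flight tasks at once.
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = list(pool.map(provision_vm, tasks))

print(serial == parallel)  # True: same work, different management cost
```

Note that pool.map preserves the task order in its results, which keeps the two approaches directly comparable.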

We live in a world in which there are 10 different devices from which we can check our email and, if we want, we can talk to our smartphone and ask it to read our email to us.  Technology has made it easy for us to get information at virtually any time and in any format we want.  However, it is because of this information overload that our brains have trouble separating the useful information from the white noise.  So we try to be more productive and we multi-task, but that usually means we’re becoming busier rather than more productive.  In blogs to follow, I will provide some best practices for determining when it makes sense to run more than one task at a time.  Now, if you don’t mind, I need to help my daughter find Maine…

 

Research and Markets: Potential of Cloud Computing

Research and Markets has announced the addition of the “Potential of Cloud Computing” report to their offering.

First there was the advent of the Internet that changed the manner in which we do business forever. Now, with the advent of cloud computing, the world is ready to undergo another major shift in terms of technology.

Cloud computing is an Internet-based process that makes it possible to share information, software, and even resources from computers to other devices, all through the Internet. The concept of cloud computing brings forth a new delivery model for IT services conducted over the Internet. The process generally involves the provision of scalable and virtualized resources over the Internet. Not only does the process provide ease of access, but the speed and overall reliability of cloud computing are changing the IT industry rapidly.

Taiyou Research presents an analysis of the Potential of Cloud Computing.

Key Topics Covered:

1. Executive Summary

2. Overview of Cloud Computing

3. Market Profile

4. Benefits of Deploying the Cloud

5. Cost Benefits to Organizations from Cloud Systems

6. Cloud Computing Delivery Modes

7. Cloud Computing Deployment Models

8. Understanding the Concept behind Cloud Computing

9. Application Programming Interfaces

10. Cloud Computing Taxonomy

11. Deployment Process of the Cloud System

12. Technical Features of Cloud Systems

13. Understanding Cloud Clients

14. Regulatory Landscape & Investment

15. Commercializing of Cloud Computing

16. Concepts Related to Cloud Computing

17. Cloud Computing versus Other Computing Paradigms

18. Cloud Exchanges and Markets Worldwide

19. Research Projects on Cloud Computing

20. Cloud Computing Case Studies

21. Future of Cloud Computing

22. Market Leaders

23. Appendix

24. Glossary


Kids on Work Devices, Bubble Wrap, and Why Every IT Organization Should Support BYOD.

 

http://www.youtube.com/watch?v=TPgT4UxuGRo

Francis Czekalski, GreenPages Enterprise Consultant, talks about the challenges that IT professionals face today when dealing with BYOD—from supporting devices to dealing with employee behavior—and offers some coping strategies for living in the BYOD Era.

 

If you’re looking for more information, we will be holding a free event in Atlanta on November 28th to discuss cloud management, virtualization, VDI, datacenter clusters, and more. Click for more information and to register; space is limited and filling up quickly!

Learn More About BYOD

To learn more about BYOD policy and strategy, please fill out this form and we will get in touch with you shortly.

CyrusOne, Dell, R Systems Partner for Oil & Gas Cloud-Based Solution

CyrusOne, a wholly owned subsidiary of Cincinnati Bell, announced today that its Houston West colocation facility is housing and enabling the first-ever enterprise high performance computing (HPC) Cloud solution from Dell and R Systems. The two companies have teamed together to establish a working “project partner” alliance, offering customized HPC solutions for clients.

Leveraging dedicated, secure, and powerful computing resources for periods between one day and one year, the enterprise HPC Cloud solution enables companies to align compute performance directly to project periods and technology refresh cycles, to optimize resources and take advantage of the fastest compute technology available. HPC cloud solutions are an alternative to legacy IT infrastructures because they are faster to deploy, scale easily with usage and business cycles, and require less capital and operating investment.

“We see the combination of HPC and cloud technologies as an incredibly powerful solution with tremendous customer benefit,” says Nnamdi Orakwue, vice president, Dell Cloud. “Customers who need immediate, high-performing computing solutions for shorter time frames can quickly realize revenue opportunities. Dell continues to invest in cloud enabling solutions to help our customers achieve faster business results.”

The enterprise HPC Cloud solution frees companies from having to manage HPC environments and resources so that they can focus on running their businesses, not data centers. Oil and Gas companies use HPC to more rapidly analyze large amounts of geological data enabling these organizations to make wiser operational decisions and get to market faster, which amounts to improved financial performance.

To mitigate any performance risks, Dell and R Systems chose to launch the cloud-based solution in CyrusOne’s highly reliable enterprise data center colocation facility in Houston. The facility offers the highest power redundancy (2N architecture) and power-density infrastructure required to deliver excellent availability.

“It was a natural progression in our support of the oil and gas industry to move from supporting traditional hardware and processing for the data intensive industry to enabling a cloud-based solution,” said Kevin Timmons, chief technology officer, CyrusOne. “Sky for the Cloud creates an ecosystem to efficiently facilitate the generation, analysis, and sharing of all the geophysical data locally and statewide.”

CyrusOne’s Sky for the Cloud™ peering and interconnection platform enables cloud applications in a customized data hall designed to maximize power usage effectiveness (PUE). It encompasses peering within a single location, to more quickly and affordably pull content from the edge of the Internet to the heart of the data center. Later this year, the company expects to launch the first statewide Internet exchange in the country, which will connect all CyrusOne facilities in Texas—including Austin, Dallas, Houston, and San Antonio. The platform gives customers freedom of choice in how they build out capacity: CyrusOne’s bandwidth marketplace, its Internet exchange platform, or a cross-connect to cloud services.

CyrusOne has designed data center locations across the United States, Europe, and Asia that give customers the flexibility and scale to perfectly match their specific growth needs. In August 2012, the company announced plans to expand its Houston West site such that once fully complete, the facility will have more than 300,000 square feet of data center space, making it the oil and gas industry’s largest digital energy campus and a true geophysical center of excellence for seismic exploration computing.

The HPC Cloud solution from Dell and R Systems can support any industry requiring complex computing, including: oil and gas, finance, healthcare/life sciences, manufacturing and media.


Evolving to a Broker of Technology Services: Planning the Solution

By Trevor Williamson, Director, Solutions Architecture

A 3-Part Series:

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again

Part 2: Planning the Solution

As I wrote before and continuing with part 2 of this 3-part series, let’s talk about how we plan the solution for automating IT services and service management within your organization so that you can develop, deliver, and support services in a more disciplined way—which means that your customers will trust you. Of course this doesn’t mean that they won’t pursue outsourced, cloud, or other third-party services—but they will rely on you to get the most out of those services.  And once you do go through this process, some of the major benefits for implementing an automated service management infrastructure are:

  • Improved staff productivity that allows your business to become more competitive. Your time is too valuable to be spent fighting fires and performing repetitive tasks. If you prevent the fires and automate the repetitive tasks, you can focus on new projects and innovation instead. When you apply automation tools to good processes, productivity skyrockets to a level unachievable by manual methods.
  • Heightened quality of service that improves business uptime and customer experience. Consistent execution according to a well-defined change management process, for example, can dramatically reduce errors, which in turn improves uptime and customer experience. In today’s age of continuous operations and unrelenting customer demand, downtime can quickly erode your competitive edge, and sloppy change management can cause business downtime that prevents customers from buying online or reduces the productivity of your workforce.
  • Reduced operational costs to reinvest in new and innovative initiatives. It’s been said that keeping the lights on—the costs to maintain ongoing operations, systems, and equipment—eats up roughly 80% of the overall IT budget rather than going to new or innovative projects. With more standardized and automated processes, you can improve productivity and reduce operational costs allowing you the freedom to focus on more strategic initiatives.
  • Improved reputation with the business. Most self-aware IT organizations acknowledge that their reputation with business stakeholders isn’t always sterling. This is a critical problem, but you can’t fix it overnight—changing an organization’s culture, institutionalized behaviors, and stereotypes takes time and energy. If you can continue to drive higher productivity and quality through automated service management, your business stakeholders will take notice.

A very important aspect of planning this new infrastructure is to anticipate—in fact, assume—that the range of control will necessarily span both internal and external resources; that you will be stretching into public cloud spaces (not that you will always know you’re there until after the fact); and that you will be managing them (or at least monitoring them) with the same level of granularity as your traditional resources.

This includes integrating the native functionality of those off-premises services—reserving virtual machines and groups of machines, extending reservations, cloning aggregate applications, provisioning storage, etc.—and connecting them to an end-to-end value chain of IT services that can be assessed, monitored, and followed from where the data resides to where it is used by the end user.

It is through this holistic process—rationalized, deconstructed, optimized, reconstituted and ultimately automated—that the system as a whole can be seen as a fully automated IT services management infrastructure, but believe me when I say that this is not nor will it ever be an easy task.  When you are looking to plan how you automate your service management infrastructure, you need a comprehensive approach that follows a logical and tightly controlled progression.  By whatever name you call the methodology (and there are many out there) it needs to be concise, comprehensive, capable, and, above all else, controlled:

1. Identify the trends, justify the business case, and assess your maturity. Before investing in an automated service management infrastructure, you have to assess the opportunity, build the business case, and understand the current state. This phase will answer the following questions:

  • Why is automated service management important to my business?
  • What are the business and IT benefits?
  • How prepared is my organization to tackle this initiative?

2.  Develop your strategic plan, staffing plan, and technology roadmaps. You translate what you learn from the prior phase into specific automated service management strategies. The goal of this phase is to help you answer these key questions:

  • Do I have the right long-term strategic vision for automated service management?
  • What are my stakeholders’ expectations, and how can I deliver on them?
  • What technologies should I invest in, and how should I prioritize them?

3.  Invest in your skills and staff, policies and procedures, and technologies and services. This phase is designed to execute on your automated service management strategies. This phase will answer the following people, process, and technology questions:

  • What specific skills and staff will I need, and when?
  • What policies and procedures do I need to develop and enforce?
  • Should I build and manage my own technology capabilities or use external service providers?
  • What specific vendors and service providers should I consider?

4.  Manage your performance, develop metrics, and communicate and train. Finally, to help you refine and improve your automated service management infrastructure, the goal in this phase is to help you answer these key questions:

  • How should I adjust my automated service management plans and budgets?
  • What metrics should I use to track my success?
  • How should I communicate and train stakeholders on new automated service management policies and technologies?

These phases and the associated questions to be answered are just a taste of what is required when you are thinking of moving toward an automated service management infrastructure—and of course GreenPages is here to help—especially when you are in the planning stages.  The process is not painless and it is certainly not easy but the end result, the journey in fact, is well worth the time, effort and investment to accomplish it.

Next…Part 3: Executing the Solution, again and again…

If you’re looking for more information, we will be holding free events in Boston, NYC, and Atlanta to discuss cloud computing, virtualization, VDI, clustered datacenters, and more. We’ll have a bunch of breakout sessions, and it will also be a great opportunity to network with peers.

 

Cloud Corner Series- Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and how it is critical in a cloud environment. I hope you enjoy it- even if the info stinks, at least you have 5 minutes of eye candy watching yours truly!

If you’re looking for more information on cloud management, GreenPages has two free events coming up (one in Boston and one in NYC). Click for more information and to register; space is limited and filling up quickly, so check it out!

The Evolution from a Provider of Technology Components to a Broker of Technology Services

A 3 Part Series from Trevor Williamson

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again…

Part 1: Understanding the Dilemma

IT teams are increasingly being challenged as bring-your-own-device (BYOD) policies and “as-a-service” software and infrastructure multiply in mainstream organizations.  In this new reality, developers still need compute, network, and storage to keep up with growth…and workers still need some sort of PC or mobile device to get their jobs done…but they don’t necessarily need corporate IT to give it to them.  They can turn to a shadow IT organization using Amazon, Rackspace, or Savvis, using SaaS applications, or using an unmanaged desktop, because when all is said and done, if you can’t deliver on what your users and developers care about, they will use whatever and whomever they can to get their jobs done better, faster, and cheaper.

Much of this shift toward outside services comes down to customer experience, or how your customers—your users—perceive their every interaction with IT, from your staff in the helpdesk to corporate applications they access every day.  If what you are delivering (or not delivering as the case may be) is more burdensome, more complicated or doesn’t react as fast as other service providers (like Amazon, Office 365, or Salesforce, etc.), then they will turn (in droves) toward those providers.

Now the question hanging heavy in the air is what do those providers have, except of course scale, that your IT organization doesn’t have?  What is the special sauce for them to be able to deliver those high-value services, quicker and at a lower cost than you can?

In a few words: IT Service Management (ITSM)…but wait!…I know the first reaction you might have is that ITSM has become a sour subject, and that if you hear ITIL chanted one more time you’re going to flip out.  The type of ITSM I’m talking about is really the next generation, and it has only passing similarities to the service management initiatives of the past.  While it is agreed that ITSM has the potential to deliver the experiences and outcomes your developers and users need and want, today’s ITSM falls far short of that ideal.  “Process for process’s sake,” you’ve probably heard…but whatever the label, we are still measuring success based on internal IT efficiencies, not customer or financial value, or even customer satisfaction. We still associate ITSM exclusively with ITIL best practices, and we continue to label ourselves as providers of technology components.

As it turns out, the adage “You cannot fix today’s problems with yesterday’s solutions” is as right as it ever was.  We need to turn ITSM on its head and create a new way forward based on customer centricity, services focus, and automated operations.  We have to rethink the role we play and how we engage with the business.  Among the most significant transformations of IT we need to complete is from a provider of technology components to a broker of technology services. We have relied on ITSM to drive this transformation, but ITSM needs to change in order to be truly effective in the future. Here’s why:

  • The roots of service management were a focus on the customer:  “Service management” originated within the product marketing and management departments, and from the beginning it placed the customer at the center of all decision making within the service provider organization. It is the foundation for transforming product-oriented organizations into service providers in which the customer experience and interaction are designed and managed to cost-effectively deliver customer results and satisfaction.
  • But when we applied service management to IT, we lost the customer focus: Applying service management to information technology produced the well-known discipline of ITSM but, unfortunately, IT professionals associated it exclusively with the IT Infrastructure Library (ITIL) best practices, which focus on processes for managing IT infrastructure to enable and support services. What’s missing is the customer perspective.
  • In the age of the customer, we need to proactively manage services via automation: In the age of the customer, technology-led disruption (virtualization, automation, orchestration, operating at scale, etc.) erodes traditional competitive barriers, making it easier than ever for empowered employees and app developers to take advantage of new devices and cloud-based software. To truly function as a service provider, IT needs to first and foremost consider the customer and the customer’s desired outcome in order to serve them faster, cheaper, and at higher quality.  In today’s world, this can only be accomplished via automation.

When customers don’t trust a provider to deliver quality products or services, they seek alternatives. That’s a pretty simple concept that everyone can understand but, what if the customer is a user of IT services that you provide?  Where do they go if they don’t like the quality of your products or services?  Yep, Amazon, Rackspace, Terremark, etc., and any other service provider who offers a solution that you can’t…or that you can’t in the required time or for the required price.

The reason why these service providers can do these seemingly amazing things and offer such diverse and, at times, sophisticated services, is because they have eliminated (mostly) the issues associated with humans doing “stuff” by automating commodity IT activities and then orchestrating those automated activities toward delivering aggregate IT services.  They have evolved from being providers of technology components to brokers of technology services.

If you’re looking for more information on BYOD, register for our upcoming webinar “BYOD: Don’t Fight It, Mitigate the Risk with Mobile Management.”

Next…Part 2: Planning the Solution

 

Stay Safe in the Cloud With Two-Factor Authentication

The use of two-factor authentication has been around for years, but the recent addition of this security feature in cloud services from Google and Dropbox has drawn widespread attention.  The Dropbox offering came just two months after a well-publicized security breach at their online file sharing service.

Exactly What Is Two-Factor Authentication?

Of course, most online applications require a user name and password in order to log on.  Much has been written about the importance of managing your passwords carefully.  However, simple password protection only goes so far.

Two-factor authentication involves not only the use of something the user knows such as a password, but also something that only the user has.  An intruder can no longer gain access to the system simply by illicitly obtaining your password.

Authentication Tools

  • ATM Cards:  These are perhaps the most widely used two-factor authentication device.  The user must both insert the card and enter a password in order to access the ATM.
  • Tokens:  The use of tokens has increased substantially in recent years.  Most are time-based tokens: a key-sized plastic device with a screen that displays a continually changing security code.  The user must enter not only their password, but also the security code from the token.  Tokens have been popular for sensitive applications such as online banking and brokerage sites.
  • Smart Cards:  These function similarly to ATM cards, but are used in a wider variety of applications.  Unlike most ATM cards, smart cards have an embedded microprocessor for added security.
  • Smart Phones:  The proliferation of smart phones has provided the perfect impetus to expand two-factor authentication to widely used internet applications in the cloud.  In these cases, users must enter not only a password, but also a security code from their phone or other mobile device.  This code can be sent to a phone by the service provider as an SMS text message or generated on a smartphone using a mobile authenticator app.  Both Google and Dropbox now use this method.
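The code a mobile authenticator app runs to generate that security code is surprisingly small. Here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238, using only the Python standard library; the secret shown is the RFC’s published test key, not a real one:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, period=30):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if at_time is None else at_time
    counter = int(now // period)                 # changes every 30 seconds
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890", time = 59 seconds
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59, digits=8))  # 94287082
```

The server and the phone share the secret once (the QR code you scan when enrolling); after that, each side computes the code independently from the current time, so no network connection is needed to generate it.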

Yahoo! Mail and Facebook are also introducing two-factor authentication using smart phones.  However, their methodology only prompts the user to enter the security code if a security breach is suspected or a new device is used.

So What’s Next?

Cloud security is a hot topic, and two-factor authentication is one way to mitigate users’ well-founded concerns.  As a result, development and adoption of two-factor authentication systems are proceeding at a rapid pace, and two-factor authentication should be available for most cloud applications within just a few short years.

The shift from token based authentication to SMS based authentication is also likely to accelerate along with smart phone use.

Two-factor and even three-factor authentication using biometrics will become more popular.  Fingerprint readers are already quite common on laptop computers.  Use of facial recognition, voice recognition, hand geometry, retina scans, etc. will become more common as the technology develops and the price drops.  The obvious advantage of these biometric systems is that, unlike a physical device, the biometric factor cannot be stolen or otherwise used by a third party to gain access to the system.

As with any security system, two-factor authentication is not 100% secure.  Even token systems have been hacked and there is no doubt that there will be breaches in SMS authentication tools as well.  However, two-factor authentication still provides the best way to stay safe in the cloud and it’s advisable to use it whenever possible.

This post is by Rackspace blogger Thomas Parent. Rackspace Hosting is a service leader in cloud computing, and a founder of OpenStack, an open source cloud operating system. The San Antonio-based company provides Fanatical Support to its customers and partners, across a portfolio of IT services, including Managed Hosting and Cloud Computing.

A More Practical View of Cloud Brokers

#cloud The conventional view of cloud brokers misses the need to enforce policies and ensure compliance

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat about cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature (business process management), for IT this is a focus on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations.

Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs do you really need IT? Even in cases where you’re deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary.

While from a technical-difficulty perspective this may be true (and if we say it is, it is only for the smallest of organizations), there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide. Not the least of these is the ability to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX, all of which have a technical aspect and need to be enforced, well, with technology.

See, choosing a cloud deployment environment is not just about “will this workload run in cloud X”. It’s far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker. But what we can’t know is whether or not a particular cloud will be able to enforce other policies – those handed down by governments around the globe and those put into writing by the organization itself.

Imagine the horror of a CxO upon discovering an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse – jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It’s one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account, it’s quite another to violate some of the government regulations that govern not only data at rest but in flight.

A PRACTICAL VIEW of CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary; a view that enables such solutions to consider not only performance and price, but also the ability to adhere to and enforce corporate and regulatory policies. Such a data-center-hosted cloud broker would be able to take these very important factors into consideration when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud – it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability of IT to ensure that the technical aspects of policies are included in the decision-making process.

And it must be IT that codifies those requirements into a policy that can be leveraged by the broker and ultimately the end user to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or for ensuring a default "deny all" policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for their ability to meet such requirements. That's the role of IT, and even wearing rainbow-colored cloud glasses can't eliminate the very real and important role IT has to play here.
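The kind of policy codification described above can be sketched in a few lines. This is a hypothetical illustration, not any real broker's API: the provider capability flags, policy names, and cost figures are all invented for the example, but the shape is the point, IT encodes regulatory requirements once, and the broker filters candidate clouds before ranking them on cost.

```python
# Hypothetical sketch of a broker-side policy check: candidate clouds are
# screened against IT-codified regulatory requirements before any
# cost/performance ranking. All names and numbers here are illustrative.

# Capabilities each candidate deployment environment advertises.
PROVIDERS = {
    "public-cloud-a": {"waf": False, "default_deny": True,  "cost_per_hour": 0.08},
    "public-cloud-b": {"waf": True,  "default_deny": True,  "cost_per_hour": 0.12},
    "private-cloud":  {"waf": True,  "default_deny": True,  "cost_per_hour": 0.20},
}

# IT codifies regulations as required capabilities, e.g. PCI-DSS calls for
# a web application firewall and a default "deny all" network posture.
POLICIES = {
    "pci-dss": {"waf", "default_deny"},
}

def eligible_providers(workload_policies, providers=PROVIDERS):
    """Return only the providers that satisfy every policy, cheapest first."""
    required = set().union(*(POLICIES[p] for p in workload_policies))
    ok = [name for name, caps in providers.items()
          if all(caps.get(req) for req in required)]
    return sorted(ok, key=lambda name: providers[name]["cost_per_hour"])

print(eligible_providers(["pci-dss"]))
# → ['public-cloud-b', 'private-cloud']
```

Note that the cheapest option is rejected outright for lacking a WAF: policy compliance gates the decision before price ever enters it, which is exactly the evaluation a business user with a credit card is in no position to perform.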

The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today's environments and threat landscape, the role of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were.

So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more business-oriented details such as cost.

The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.



Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5’s entire product suite.

Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.

She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

 

F5 Networks




Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring.  I was a huge fan of “Car Toons,” which was sort of the Mad Magazine of cars, and basically lusted after hot rods, dragsters and sports cars.  I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks.  I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer.  In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea whatever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships).  The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought it sparkled.  It was sleek, low to the ground and looked as if it were going 100 miles an hour just sitting there.  It was a supercar before the word was coined, and I was deeply, madly and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot; they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable—more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera.  Even if I could have afforded to buy it, I had no knowledge or experience that would have enabled me to effectively (or legally) drive it.  That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was not anywhere near being functional.  The essence and nature of the car never changed but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned.  The same way a cloud infrastructure—fully built out with well architected components, tested and running—would be non-functional to customers who did not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way that a cloud infrastructure should be operated is based on the idea of delivering IT services, not on the traditional idea of servers, storage and networks being individually built, configured and connected by people doing physical stuff.  Cloud infrastructures are automated and orchestrated to deliver specific functionality aggregated into specific services, quickly and efficiently, without the need for people doing “stuff.”  In fact, people doing stuff is too slow and just gets in the way, and if you don’t change the operations of the system to reflect that, you end up with a very capable yet non-functional system.
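That shift from people doing "stuff" to aggregated service delivery can be made concrete with a toy sketch. This is purely illustrative, assuming a made-up service catalog and action names; real orchestrators (vCloud, OpenStack Heat, and the like) have their own formats, but the principle is the same: the service definition captures what people used to do by hand, and the machine replays it on demand.

```python
# Hypothetical sketch of orchestration: a catalog entry aggregates the
# server, storage, and network steps that were traditionally performed
# by hand into one repeatable, automated workflow. Names are invented.
SERVICE_CATALOG = {
    "web-tier": [
        ("provision_vm",      {"cpus": 2, "ram_gb": 4}),
        ("attach_storage",    {"size_gb": 50}),
        ("configure_network", {"vlan": 101, "default_deny": True}),
    ],
}

def deploy(service_name):
    """Execute each orchestration step in order; no humans in the loop."""
    log = []
    for action, params in SERVICE_CATALOG[service_name]:
        # A real orchestrator would call out to infrastructure APIs here;
        # this sketch just records what would be done, in order.
        args = ", ".join(f"{k}={v}" for k, v in params.items())
        log.append(f"{action}({args})")
    return log

for step in deploy("web-tier"):
    print(step)
```

The point of the sketch is operational, not technical: once the service is defined this way, requesting "web-tier" a hundred times produces a hundred identical results, which is exactly what manual build-out cannot promise.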

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed, or it will be very much the same sort of difference as between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera.  It’s just dreaming until the required knowledge and experience are obtained…none of which is easy or quick…but tell that to a 12-year-old lost in his imagination, staring at sparkling red freedom and adventure…