Category archive: Cloud computing

Is Cloud Computing Ready for Prime Time?

By John Dixon, Senior Solutions Architect

 

A few weeks ago, I took part in another engaging tweetchat on Cloud Computing. The topic: is cloud computing ready for enterprise adoption? You can find the transcript here.

 

As usual with tweetchats hosted by CloudCommons, five questions are presented a few days in advance of the event. This time around, the questions were:

  1. Is Public Cloud mature enough for enterprise adoption?
  2. Should Public Cloud be a part of every business’s IT strategy?
  3. How big of a barrier are legacy applications and hardware to public cloud adoption?
  4. What’s the best way to deal with cloud security?
  5. What’s the best way to get started with public cloud?

 

As for Question #1, the position of most people in the chat session this time was that Public Cloud is mature enough for certain applications in enterprises today. The technology certainly exists to run applications “in the cloud,” but regulations and policies may not be ready to handle an application’s cloud deployment. Another interesting observation from the tweetchat was that most enterprises are indeed running applications “in the cloud” right now. GreenPages considers applications such as Concur and Salesforce.com to be running “in the cloud,” and many organizations, large and small, run these applications successfully. I’d also consider ADP a cloud application, and many organizations use ADP for payroll processing.

Are enterprises mature enough for cloud computing?

Much of the discussion during question #1 turned the question on its head – the technology is there, but enterprises are not ready to deploy applications there. GreenPages’ position is that, even if we assume that cloud computing is not yet ready for prime time, it certainly will be soon. Organizations should prepare for this eventuality by gaining a deep understanding of the IT services they provide and how much each IT service costs. When one or more of your IT services can be replaced by one that runs (reliably and inexpensively) in the cloud, will your company be able to make the right decision and take advantage of that condition? Another interesting observation: some public cloud offerings may be enterprise-ready, but not all public cloud vendors are enterprise-grade. We agree.

Should every business have a public cloud strategy?

Most of the discussion here pointed to a “yes” answer – or that an organization’s strategy will eventually, by default, include consideration for public cloud. We think of cloud computing as a sourcing strategy in and of itself – especially when thinking of IaaS and PaaS. Even now, IaaS vendors are essentially providers of commodity IT services. Most commonly, IaaS vendors can provide you with an operating system instance: Windows or Linux. For IaaS, the degree of abstraction is very high, as an operating system instance can be deployed on a wide range of systems – physical, virtual, paravirtual, etc. The consumer of these services doesn’t care where the OS instance is running, as long as it is performing to the agreed SLA. Think of Amazon Web Services here: depending on the application I’m deploying, there is little difference whether I’m using infrastructure that is running physically in Northern Virginia or in Southern California. At GreenPages, we think that this degree of abstraction will move into the enterprise as corporate IT departments evolve to behave more like service providers… and probably evolve into brokers of IT services – supported by a public cloud strategy.

Security and legacy applications

Two questions revolved around legacy applications and security as barriers to adoption. Every organization has a particular application that will not be considered for cloud computing. The arguments are similar to the reasons why we have never virtualized (or are only just beginning to virtualize) certain legacy applications. Sometimes, virtualizing specialized hardware is, well, really hard and just not worth the effort.

What’s the best way to get started with public cloud?

“Just go out and use Amazon,” was a common response to this question, both in this particular tweetchat and in other discussions. Indeed, trying Amazon for some development activities is not a bad way to evaluate the features of public cloud. In our view, the best way to get started with cloud is to begin managing your datacenter as if it were a cloud environment, with some tool that can manage traditional and cloud environments the same way. Even legacy applications. Even applications with specialized hardware. Virtual, physical, paravirtual, etc. Begin to monitor and measure your applications in a consistent manner. This way, when an application is deployed to a cloud provider, your organization can continue to monitor, measure, and manage that application using the same method. For those of us who are risk-averse, this is the easiest way to get started with cloud! How is this done? We think you’ll see that Cloud Management as a Service (CMaaS) is the best way.
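As one illustration of that idea, here is a minimal sketch (in Python, using the requests library) of a provider-agnostic health check. The endpoint URLs are hypothetical placeholders; the point is simply that the same measurement, collected the same way, applies whether the application runs in your datacenter or at a cloud provider.

    # A minimal sketch: one health check that treats an on-premises application
    # and a cloud-hosted one identically. The URLs are hypothetical examples.
    import time
    import requests

    ENDPOINTS = {
        "payroll (on-premises)": "https://payroll.internal.example.com/health",
        "crm (public cloud)": "https://crm.example-saas.com/health",
    }

    def check(name: str, url: str) -> None:
        start = time.time()
        try:
            status = requests.get(url, timeout=5).status_code
        except requests.RequestException as exc:
            status = f"error: {exc.__class__.__name__}"
        # Same metric, same method, wherever the app happens to run.
        print(f"{name}: status={status}, latency={time.time() - start:.2f}s")

    for name, url in ENDPOINTS.items():
        check(name, url)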

Would you like to learn more about our new CMaaS offering? Click here to receive some more information.

Getting Out of the IT Business

Randy Weis, Director of Solutions Architecture

Strange title for a blog from an IT solutions architect? Not really.

Some of our clients—a lumber mill, a consulting firm, a hospital—are starting to ask us how to get out of “doing IT.” What do these organizations all have in common? They all have a history of challenges in effective technology implementations and application projects leading to the CIO/CTO/CFO asking, “Why are we in the IT business? What can we do to offload the work, eliminate the capital expenses, keep operating expenses down, and focus our IT efforts on making our business more responsive to shifting demands and reaching more customers with a higher satisfaction rate?”

True stories.

If you are in the business of reselling compute, network, or storage gear, this might not be the kind of question you want to hear.

If you are in the business of consulting on technology solutions to meet business requirements, this is exactly the kind of question you should be preparing to answer. If you don’t start working on those answers, your business will suffer for it.

Technology has evolved to the point where the failed marketing terms of grid or utility computing are starting to come back to life—and we are not talking about zombie technology. Cloud computing used to be about as real as grid or utility computing, but “cloud” is no longer just a marketing term. We now have new, proven, and emerging technologies that actually can support a utility model for information technology. Corporate IT executives now are starting to accept that the new cloud computing infrastructure-as-a-service is reliable (recent AWS outages notwithstanding), predictable, and useful to a corporate strategy. Corporate applications still need to be evaluated for requirements that restrict deployment and implementation strategies–latency, performance, concerns over satisfying legal/privacy/regulatory issues, and so on. However, the need to have elastic, scalable, on-demand IT services that are accessible anywhere is starting to force even the most conservative executives to look at the cloud for offloading non-mission-critical workloads and associated costs (staff, equipment, licensing, training and so on). Mission-critical applications can still benefit from cloud technology, perhaps only as internal or private cloud, but the same factors still apply—reduce time to deploy or provision, automate workflow, scale up or down as dictated by business cycles, and push provisioning back out into the business (while holding those same units accountable for the resources they “deploy”).

Infrastructure as a service is really just the latest iteration of self-service IT. Software as a service has been with us for some time now, and in some cases is the default mode—CRM is the best example (e.g. Salesforce). Web-based businesses have been virtualizing workloads and automating deployment of capacity for some time now as well. Development and testing have also been the “low hanging fruit” of both virtualization and cloud computing. However, when the technology of virtualization reached a certain critical mass, primarily driven by VMware and Microsoft (at least at the datacenter level), then everyone started taking a second look at this new type of managed hosting. Make no mistake—IaaS is managed hosting, but New and Improved. Anyone who had to deal with provisioning and deployment at AT&T or other large colocation data centers (and no offense meant) knew that there was no “self-service” involved at all. Deployments were major projects with timelines that rivaled the internal glacial pace of most IT projects—a pace that led to the historic frustration levels that drove business units to run around their own IT and start buying IT services with a credit card at Amazon and Rackspace.

If you or your executives are starting to ask yourselves if you can get out of the day-to-day business of running an internal datacenter, you are in good company. Virtualization of compute, network and storage has led to ever-greater efficiency, helping you get more out of every dollar spent on hardware and staff. But it has also led to ever-greater complexity and a need to retrain your internal staff more frequently. Information Technology services are essential to a successful business, but they can no longer just be a cost center. They need to be a profit center; a cost of doing business for sure, but also a way to drive revenues and shorten time-to-market.

Where do you go for answers? What service providers have a good track record for uptime, customer satisfaction, support excellence and innovation? What technologies will help you integrate your internal IT with your “external” IT? Where can you turn for management and monitoring tools? What managed services can help you gain visibility into all parts of your IT infrastructure, deal with a hybrid and distributed datacenter model, and address everything from firewalls to backups? Who can you ask?

There is an emerging cadre of thought leaders and technologists who have been preparing for this day, laying the foundation, developing the expertise, building partner relationships with service providers, and watching to see who is successful and growing…and who is not. GreenPages is in the very front line of this new cadre. We have been out in front with virtualization of servers. We have been out in front with storage and networking support for virtual datacenters. We have been out in front with private cloud implementations. We are absolutely out in front of everyone in developing Cloud Management as a Service.

We have been waiting for you. Welcome. Now let’s get to work. For more information on our Cloud Management as a Service offering, click here.

IT Multi-Tasking: I Was Told There’d Be No Math

By Ben Sawyer, Solutions Engineer

 

The term “multi-tasking” basically means doing more than one thing at once.  I am writing this blog while playing Legos w/ my son & helping my daughter find New Hampshire on the map.  But I am by no means doing more than one thing at once; I’m just quickly switching back & forth between the three, which is referred to as “context switching.”  Context switching in most cases is very costly.  There is a toll to be paid in terms of productivity when ramping up on a task before you can actually tackle that task.  In an ideal world (where I also have a 2 handicap) one has the luxury to do a task from start to finish before starting a new task.  My son just refuses to let me have 15 minutes to write this blog because apparently building a steam roller right now is extremely important.  There is a sense of inertia once you have worked on a task for a short while, because you begin to really concentrate on the task at hand.  Since we know it’s nearly impossible to put ourselves in a vacuum & work on one thing only, the best we can hope for is to do “similar” things (i.e., in the same context) at the same time.  Let’s pretend I have to email my co-worker that I’m late writing a blog, shovel my driveway, buy more Legos at Amazon.com, & get the mail (okay, I’m not pretending).  Since emailing & buying stuff online both require me to be in front of my laptop, and shoveling & going to my mailbox require me to be outside my house (my physical location), it would be far more efficient to do the tasks in the same “context” at the same time.  Think of the time it takes to get all bundled up & the time it takes to power on your laptop to get online.  Doing a few things at once usually means that you will not do each task as well (its quality suffers) as you would have had you done it uninterrupted.  And the closer together in time you can do the pieces of a task, the better you will do that task, since it will be “fresher” in your mind.  So…

  • Entire Task A + Entire Task B = Great Task A & Great Task B.
  • 1/2 Task A + Entire Task B + 1/2 Task A = Okay Task A & Excellent Task B.
  • 1/2 Task A + 1/2 Task B + 1/2 Task A + 1/2 Task B = Good Task A & Good Task B

Why does this matter?  Well, because the exact same concept applies to computers & the software we write.  A single processor can do only one thing at a time (let’s forget threads), but it can context switch extremely fast, which gives the illusion of multi-tasking.  But, like a human, context switching has a cost for a computer.  So, when you write code, try to do many “similar” things at the same time.  If you have a bunch of SQL queries to execute, then you should open a connection to the database first, execute them, & close the connection.  If you need to call some VMware APIs, then you should connect to vCenter first, do them, & close the connection.  Opening & closing connections to any system is often slow, so group your actions by context, which, in this case, means systems.  This also makes the code easier to read.  Speaking of reading, here’s a great example of the cost of context switching.  The author Tom Clancy loves to switch characters & plot lines every chapter.  This makes following the story very hard, & whenever you put the book down & start reading again it’s nearly impossible to remember where you left off b/c there’s never, ever a good stopping point.  Tom Clancy’s writing is one of the best examples of how costly context switching is.
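To make the grouping idea concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table and queries are made-up examples): open the connection once, run all the queries, then close it, rather than paying the connect/disconnect cost for every statement.

    # Group work by context: one connection for the whole batch of queries.
    import sqlite3

    queries = [
        "CREATE TABLE IF NOT EXISTS tasks (name TEXT, done INTEGER)",
        "INSERT INTO tasks VALUES ('write blog', 0)",
        "INSERT INTO tasks VALUES ('build steamroller', 1)",
    ]

    conn = sqlite3.connect("example.db")   # pay the setup cost once
    try:
        cur = conn.cursor()
        for q in queries:
            cur.execute(q)                 # do all the "similar" work while connected
        conn.commit()
    finally:
        conn.close()                       # pay the teardown cost once, not per query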

So, what does this have to do with cloud computing?  Well, it ties in directly with automation & orchestration.  Automation is doing the work & orchestration is determining the order in which work is done.  Things can get complicated quickly when numerous tasks need to be executed & it’s not immediately apparent which need to run first & which are dependent on other tasks.  And, once that is all figured out, what happens when a task fails?  While software executes linearly, an orchestration engine provides the ability to run multiple pieces of software concurrently.  And that’s where things get complicated real fast.  Sometimes it may make sense to execute things serially (one at a time) vs. in parallel (more than one at a time) simply b/c it becomes very hard to manage more than one task at the same time.
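To make the serial-versus-parallel trade-off concrete, here is a minimal sketch using Python’s standard concurrent.futures module. The steps are placeholders that just sleep; in a real orchestration they would be provisioning calls, scripts, and so on.

    # Serial vs. parallel execution of independent orchestration steps.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def step(name: str, seconds: float) -> str:
        time.sleep(seconds)        # stand-in for real work (an API call, a script, ...)
        return name

    steps = [("clone VM", 1.0), ("configure network", 1.0), ("register monitoring", 1.0)]

    # Serial: one at a time -- slower, but easy to reason about and to handle failures.
    start = time.time()
    for name, secs in steps:
        step(name, secs)
    print(f"serial:   {time.time() - start:.1f}s")

    # Parallel: independent steps run concurrently -- faster, but ordering and failure
    # handling get harder, which is exactly what an orchestration engine manages.
    start = time.time()
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda s: step(*s), steps))
    print(f"parallel: {time.time() - start:.1f}s")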

We live in a world in which there are 10 different devices from which we can check our email and, if we want, we can talk to our smartphone & ask it to read our email to us.  Technology has made it easy for us to get information virtually any time & in any format we want.  However, it is because of this information overload that our brains have trouble separating all the useful information from the white noise.  So we try to be more productive and we multi-task but that usually means we’re becoming more busy than productive.  In blogs to follow, I will provide some best practices for determining when it makes sense to run more than one task at a time.  Now, if you don’t mind, I need to help my daughter find Maine…

 

Research and Markets: Potential of Cloud Computing

Research and Markets has announced the addition of the “Potential of Cloud Computing” report to their offering.

First came the Internet, which changed the manner in which we do business forever. Now, with the advent of cloud computing, the world is ready to undergo another major technological shift.

Cloud computing is an Internet-based model that makes it possible to share information, software, and even computing resources across devices over the Internet. It brings forth a new delivery model for IT services conducted over the Internet, generally involving the provision of scalable and virtualized resources. Not only does this model provide ease of access, but its speed and overall reliability are changing the IT industry rapidly.

Taiyou Research presents an analysis of the Potential of Cloud Computing.

Key Topics Covered:

  1. Executive Summary
  2. Overview of Cloud Computing
  3. Market Profile
  4. Benefits of Deploying the Cloud
  5. Cost Benefits to Organizations from Cloud Systems
  6. Cloud Computing Delivery Modes
  7. Cloud Computing Deployment Models
  8. Understanding the Concept behind Cloud Computing
  9. Application Programming Interfaces
  10. Cloud Computing Taxonomy
  11. Deployment Process of the Cloud System
  12. Technical Features of Cloud Systems
  13. Understanding Cloud Clients
  14. Regulatory Landscape & Investment
  15. Commercializing of Cloud Computing
  16. Concepts Related to Cloud Computing
  17. Cloud Computing versus Other Computing Paradigms
  18. Cloud Exchanges and Markets Worldwide
  19. Research Projects on Cloud Computing
  20. Cloud Computing Case Studies
  21. Future of Cloud Computing
  22. Market Leaders
  23. Appendix
  24. Glossary


Kids on Work Devices, Bubble Wrap, and Why Every IT Organization Should Support BYOD.

 

http://www.youtube.com/watch?v=TPgT4UxuGRo

Francis Czekalski, GreenPages Enterprise Consultant, talks about the challenges that IT professionals face today when dealing with BYOD—from supporting devices to dealing with employee behavior—and offers some coping strategies for living in the BYOD Era.

 

If you’re looking for more information, we will be holding a free event in Atlanta on November 28th to discuss cloud management, virtualization, VDI, datacenter clusters, and more. Click for more information and to register – space is limited and filling up quickly!

Learn More About BYOD

To learn more about BYOD policy and strategy, please fill out this form and we will get in touch with you shortly.

CyrusOne, Dell, R Systems Partner for Oil & Gas Cloud-Based Solution

CyrusOne, a wholly owned subsidiary of Cincinnati Bell, announced today that its Houston West colocation facility is housing and enabling the first-ever enterprise high performance computing (HPC) Cloud solution from Dell and R Systems. The two companies have teamed together to establish a working “project partner” alliance, offering customized HPC solutions for clients.

Leveraging dedicated, secure, and powerful computing resources for periods between one day and one year, the enterprise HPC Cloud solution enables companies to align high-performance compute directly to project periods and technology refresh cycles, optimizing resources and taking advantage of the fastest compute technology available. HPC cloud solutions are an alternative to legacy IT infrastructures because they are faster to deploy, easily scale to usage and business cycles, and require less capital and operating investment.

“We see the combination of HPC and cloud technologies as an incredibly powerful solution with tremendous customer benefit,” says Nnamdi Orakwue, vice president, Dell Cloud. “Customers who need immediate, high-performing computing solutions for shorter time frames can quickly realize revenue opportunities. Dell continues to invest in cloud enabling solutions to help our customers achieve faster business results.”

The enterprise HPC Cloud solution frees companies from having to manage HPC environments and resources so that they can focus on running their businesses, not data centers. Oil and gas companies use HPC to more rapidly analyze large amounts of geological data, enabling these organizations to make wiser operational decisions and get to market faster, which amounts to improved financial performance.

To mitigate any performance risks, Dell and R Systems chose to launch the cloud-based solution in CyrusOne’s highly reliable enterprise data center colocation facility in Houston. The facility offers the highest power redundancy (2N architecture) and power-density infrastructure required to deliver excellent availability.

“It was a natural progression in our support of the oil and gas industry to move from supporting traditional hardware and processing for the data intensive industry to enabling a cloud-based solution,” said Kevin Timmons, chief technology officer, CyrusOne. “Sky for the Cloud creates an ecosystem to efficiently facilitate the generation, analysis, and sharing of all the geophysical data locally and statewide.”

CyrusOne’s Sky for the Cloud™ peering and interconnection platform enables cloud applications in a customized data hall designed to maximize power usage effectiveness (PUE). It encompasses peering within a single location to more quickly and affordably pull content from the edge of the Internet to the heart of the data center. Later this year, the company is expected to launch the first statewide Internet exchange in the country, connecting all CyrusOne facilities in Texas—including Austin, Dallas, Houston, and San Antonio. The platform gives customers freedom of choice in how to build out capacity, choosing either CyrusOne’s bandwidth marketplace, its Internet exchange platform, or a cross-connect to cloud services.

CyrusOne has designed data center locations across the United States, Europe, and Asia that give customers the flexibility and scale to perfectly match their specific growth needs. In August 2012, the company announced plans to expand its Houston West site such that once fully complete, the facility will have more than 300,000 square feet of data center space, making it the oil and gas industry’s largest digital energy campus and a true geophysical center of excellence for seismic exploration computing.

The HPC Cloud solution from Dell and R Systems can support any industry requiring complex computing, including: oil and gas, finance, healthcare/life sciences, manufacturing and media.


Evolving to a Broker of Technology Services: Planning the Solution

By Trevor Williamson, Director, Solutions Architecture

A 3-Part Series:

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again

Part 2: Planning the Solution

As I wrote before, and continuing with part 2 of this 3-part series, let’s talk about how we plan the solution for automating IT services and service management within your organization so that you can develop, deliver, and support services in a more disciplined way—which means that your customers will trust you. Of course this doesn’t mean that they won’t pursue outsourced, cloud, or other third-party services—but they will rely on you to get the most out of those services.  And once you do go through this process, some of the major benefits of implementing an automated service management infrastructure are:

  • Improved staff productivity that allows your business to become more competitive. Your time is too valuable to be spent fighting fires and performing repetitive tasks. If you prevent the fires and automate the repetitive tasks, you can focus on new projects and innovation instead. When you apply automation tools to good processes, productivity skyrockets to a level unachievable by manual methods.
  • Heightened quality of service that improves business uptime and customer experience. Consistent execution according to a well-defined change management process, for example, can dramatically reduce errors, which in turn improves uptime and customer experience; in today’s age of continuous operations and unrelenting customer demand, downtime can quickly erode your competitive edge. Sloppy change management can cause business downtime that prevents customers from buying online or reduces the productivity of your workforce.
  • Reduced operational costs to reinvest in new and innovative initiatives. It’s been said that keeping the lights on—the costs to maintain ongoing operations, systems, and equipment—eats up roughly 80% of the overall IT budget rather than going to new or innovative projects. With more standardized and automated processes, you can improve productivity and reduce operational costs allowing you the freedom to focus on more strategic initiatives.
  • Improved reputation with the business. Most self-aware IT organizations acknowledge that their reputation with business stakeholders isn’t always sterling. This is a critical problem, but you can’t fix it overnight—changing an organization’s culture, institutionalized behaviors, and stereotypes takes time and energy. If you can continue to drive higher productivity and quality through automated service management, your business stakeholders will take notice.

A very important aspect of planning this new infrastructure is to expect, in fact assume, that the range of control will necessarily span both internal and external resources…that you will be stretching into public cloud spaces (not that you will always know you are there until after the fact) and that you will be managing them, or at least monitoring them, with the same level of granularity that you apply to your traditional resources.

This includes integrating the native functionality of those off-premises services—reserving virtual machines and groups of machines, extending reservations, cloning aggregate applications, provisioning storage, etc., and connecting them to an end-to-end value chain of IT services that can be assessed, monitored, and followed from where the data resides to where it is used by the end user.
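As a rough illustration of what that integration can look like, here is a minimal sketch that reserves a virtual machine through a provider’s API and then registers it with an internal monitoring tool so it joins the same value chain as on-premises systems. The endpoints and payloads are hypothetical placeholders, not a real vendor API.

    # Hypothetical example: provision off-premises, then hook into existing management.
    import requests

    PROVIDER_API = "https://cloud.example.com/api/v1"     # hypothetical IaaS endpoint
    MONITORING_API = "https://monitor.example.com/api"    # hypothetical internal tool

    def reserve_vm(name: str, cpu: int, ram_gb: int) -> dict:
        """Reserve a virtual machine with the (hypothetical) provider."""
        resp = requests.post(f"{PROVIDER_API}/vms",
                             json={"name": name, "cpu": cpu, "ram_gb": ram_gb},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()

    def register_with_monitoring(vm: dict) -> None:
        """Attach the new VM to the same monitoring used for internal resources."""
        requests.post(f"{MONITORING_API}/hosts",
                      json={"host": vm["name"], "source": "public-cloud"},
                      timeout=30).raise_for_status()

    if __name__ == "__main__":
        vm = reserve_vm("app-web-01", cpu=2, ram_gb=8)
        register_with_monitoring(vm)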

It is through this holistic process—rationalized, deconstructed, optimized, reconstituted and ultimately automated—that the system as a whole can be seen as a fully automated IT services management infrastructure, but believe me when I say that this is not, nor will it ever be, an easy task.  When you are looking to plan how you automate your service management infrastructure, you need a comprehensive approach that follows a logical and tightly controlled progression.  By whatever name you call the methodology (and there are many out there), it needs to be concise, comprehensive, capable, and, above all else, controlled:

1. Identify the trends, justify the business case, and assess your maturity. Before investing in an automated service management infrastructure, you have to assess the opportunity, build the business case, and understand the current state. This phase will answer the following questions:

  • Why is automated service management important to my business?
  • What are the business and IT benefits?
  • How prepared is my organization to tackle this initiative?

2.  Develop your strategic plan, staffing plan, and technology roadmaps. You translate what you learn from the prior phase into specific automated service management strategies. The goal of this phase is to help you answer these key questions:

  • Do I have the right long-term strategic vision for automated service management?
  • What are my stakeholders’ expectations, and how can I deliver on them?
  • What technologies should I invest in and how should I prioritize them?

3.  Invest in your skills and staff, policies and procedures, and technologies and services. This phase is designed to execute on your automated service management strategies. This phase will answer the following people, process, and technology questions:

  • What specific skills and staff will I need, and when?
  • What policies and procedures do I need to develop and enforce?
  • Should I build and manage my own technology capabilities or use external service providers?
  • What specific vendors and service providers should I consider?

4.  Manage your performance, develop metrics, and communicate and train. Finally, to help you refine and improve your automated service management infrastructure, the goal in this phase is to help you answer these key questions:

  • How should I adjust my automated service management plans and budgets?
  • What metrics should I use to track my success?
  • How should I communicate and train stakeholders on new automated service management policies and technologies?

These phases and the associated questions to be answered are just a taste of what is required when you are thinking of moving toward an automated service management infrastructure—and of course GreenPages is here to help—especially when you are in the planning stages.  The process is not painless and it is certainly not easy but the end result, the journey in fact, is well worth the time, effort and investment to accomplish it.

Next…Part 3: Executing the Solution, again and again…

If you’re looking for more information, we will be holding free events in Boston, NYC, and Atlanta to discuss cloud computing, virtualization, VDI, clustered datacenters, and more. We’ll have a bunch of breakout sessions, and it will also be a great opportunity to network with peers.

 

Cloud Corner Series: Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and how it is critical in a cloud environment. I hope you enjoy it- even if the info stinks, at least you have 5 minutes of eye candy watching yours truly!

If you’re looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register – space is limited and filling up quickly, so check it out!

The Evolution from a Provider of Technology Components to a Broker of Technology Services

A 3-Part Series from Trevor Williamson

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again…

Part 1: Understanding the Dilemma

IT teams are increasingly being challenged as bring-your-own-device (BYOD) policies and “as-a-service” software and infrastructure multiply in mainstream organizations.  In this new reality, developers still need compute, network, and storage to keep up with growth…and workers still need some sort of PC or mobile device to get their jobs done…but they don’t necessarily need corporate IT to give it to them.  They can turn to a shadow IT organization using Amazon, Rackspace, and Savvis, or use SaaS applications or an unmanaged desktop, because when all is said and done, if you can’t deliver on what your users and developers care about, they will use whatever and whoever they can to get their jobs done better, faster, and cheaper.

Much of this shift toward outside services comes down to customer experience, or how your customers—your users—perceive their every interaction with IT, from your helpdesk staff to the corporate applications they access every day.  If what you are delivering (or not delivering, as the case may be) is more burdensome, more complicated, or doesn’t react as fast as other service providers (like Amazon, Office 365, or Salesforce), then they will turn (in droves) toward those providers.

Now the question hanging heavy in the air is what do those providers have, except of course scale, that your IT organization doesn’t have?  What is the special sauce for them to be able to deliver those high-value services, quicker and at a lower cost than you can?

In a few words: IT Service Management (ITSM)…but wait! I know the first reaction you might have is that ITSM has become a sour subject and that if you hear ITIL chanted one more time you’re going to flip out.  The type of ITSM I’m talking about is really the next generation and has only passing similarities to the service management initiatives of the past.  While it is agreed that ITSM has the potential to deliver the experiences and outcomes your developers and users need and want, today’s ITSM falls far short of that idea.  Process for process’s sake, you’ve probably heard…but whatever; we are still measuring success based on internal IT efficiencies, not customer or financial value, or even customer satisfaction. We still associate ITSM exclusively with ITIL best practices and we continue to label ourselves as providers of technology components.

As it turns out, the adage “You cannot fix today’s problems with yesterday’s solutions” is as right as it ever was.  We need to turn ITSM on its head and create a new way forward based on customer centricity, services focus, and automated operations.  We have to rethink the role we play and how we engage with the business.  Among the most significant transformations of IT we need to complete is from a provider of technology components to a broker of technology services. We have relied on ITSM to drive this transformation, but ITSM needs to change in order to be truly effective in the future. Here’s why:

  • The roots of service management were focused on the customer: “Service management” originated within product marketing and management departments, and from the beginning it placed the customer at the center of all decision making within the service provider organization. It is the foundation for transforming product-oriented organizations into service providers, where the customer experience and interaction are designed and managed to cost-effectively deliver customer results and satisfaction.
  • But when we applied service management to IT, we lost customer focus: Applying service management to information technology produced the well-known discipline of ITSM but, unfortunately, IT professionals associated it exclusively with the IT Infrastructure Library (ITIL) best practices, which focus on processes for managing IT infrastructure to enable and support services. What’s missing is the customer perspective.
  • In the age of the customer, we need to proactively manage services via automation: Technology-led disruption (virtualization, automation, orchestration, operating at scale, etc.) erodes traditional competitive barriers, making it easier than ever for empowered employees and app developers to take advantage of new devices and cloud-based software. To truly function as a service provider, IT needs to first and foremost consider the customer and the customer’s desired outcome in order to serve them faster, cheaper, and at a higher quality. In today’s world, this can only be accomplished via automation.

When customers don’t trust a provider to deliver quality products or services, they seek alternatives. That’s a pretty simple concept that everyone can understand, but what if the customer is a user of IT services that you provide?  Where do they go if they don’t like the quality of your products or services?  Yep: Amazon, Rackspace, Terremark, etc., or any other service provider who offers a solution that you can’t…or that you can’t in the required time or for the required price.

The reason why these service providers can do these seemingly amazing things and offer such diverse and, at times, sophisticated services, is because they have eliminated (mostly) the issues associated with humans doing “stuff” by automating commodity IT activities and then orchestrating those automated activities toward delivering aggregate IT services.  They have evolved from being providers of technology components to brokers of technology services.

If you’re looking for more information on BYOD, register for our upcoming webinar, “BYOD Webinar- Don’t Fight It, Mitigate the Risk with Mobile Management.”

Next…Part 2: Planning the Solution

 

Stay Safe in the Cloud With Two-Factor Authentication

The use of two-factor authentication has been around for years, but the recent addition of this security feature in cloud services from Google and Dropbox has drawn widespread attention.  The Dropbox offering came just two months after a well-publicized security breach at their online file sharing service.

Exactly What Is Two-Factor Authentication?

Of course, most online applications require a user name and password in order to log on.  Much has been written about the importance of managing your passwords carefully.  However, simple password protection only goes so far.

Two-factor authentication involves not only the use of something the user knows such as a password, but also something that only the user has.  An intruder can no longer gain access to the system simply by illicitly obtaining your password.

Authentication Tools

  • ATM Cards:  These are perhaps the most widely used two-factor authentication device.  The user must both insert the card and enter a password in order to access the ATM.
  • Tokens:  The use of tokens has increased substantially in recent years.  Most of these are time-based tokens that involve the use of a key-sized plastic device with a screen that displays a security code that continually changes.  The user must enter not only their password, but also the security code from the token.  Tokens have been popular with sensitive applications such as online bank and brokerage sites.
  • Smart Cards:  These function similarly to ATM cards, but are used in a wider variety of applications.  Unlike most ATM cards, smart cards have an embedded microprocessor for added security.
  • Smart Phones:  The proliferation of smart phones has provided the perfect impetus to expand two-factor authentication to widely used internet applications in the cloud.  In these cases, users must enter not only a password, but also a security code from their phone or other mobile device.  This code can be sent to a phone by the service provider as an SMS text message or generated on a smartphone using a mobile authenticator app.  Both Google and Dropbox now use this method (a minimal sketch of how such a code is generated follows this list).
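To illustrate, below is a minimal sketch of time-based one-time password (TOTP) generation, the scheme behind most mobile authenticator apps (RFC 6238), using only Python’s standard library. The shared secret shown is a made-up example, not a real credential.

    # TOTP: both the server and the user's device derive the same short-lived code
    # from a shared secret and the current time.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval               # 30-second time step
        msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per the RFC
        offset = digest[-1] & 0x0F                            # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example secret (base32-encoded); the user enters this code along with a password.
    print(totp("JBSWY3DPEHPK3PXP"))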

Yahoo! Mail and Facebook are also introducing two-factor authentication using smart phones.  However, their methodology only prompts the user to enter the security code if a security breach is suspected or a new device is used.

So What’s Next?

Cloud security is a hot topic and two-factor authentication is one way to mitigate users’ well-founded concerns.  As a result, development and adoption of two-factor authentication systems is proceeding at a rapid pace, and two-factor authentication should be available for most cloud applications within just a few short years.

The shift from token based authentication to SMS based authentication is also likely to accelerate along with smart phone use.

Two-factor and even three-factor authentication using biometrics will become more popular.  Fingerprint readers are already quite common on laptop computers.  Use of facial recognition, voice recognition, hand geometry, retina scans, etc. will become more common as the technology develops and the price drops.  The obvious advantage of these biometric systems is that there is no physical device that can be stolen or otherwise used by a third party to gain access to the system.

As with any security system, two-factor authentication is not 100% secure.  Even token systems have been hacked and there is no doubt that there will be breaches in SMS authentication tools as well.  However, two-factor authentication still provides the best way to stay safe in the cloud and it’s advisable to use it whenever possible.

This post is by Rackspace blogger Thomas Parent. Rackspace Hosting is a service leader in cloud computing, and a founder of OpenStack, an open source cloud operating system. The San Antonio-based company provides Fanatical Support to its customers and partners, across a portfolio of IT services, including Managed Hosting and Cloud Computing.