
Getting Out of the IT Business

Randy Weis, Director of Solutions Architecture

Strange title for a blog from an IT solutions architect? Not really.

Some of our clients—a lumber mill, a consulting firm, a hospital—are starting to ask us how to get out of “doing IT.” What do these organizations all have in common? They all have a history of challenges in effective technology implementations and application projects leading to the CIO/CTO/CFO asking, “Why are we in the IT business? What can we do to offload the work, eliminate the capital expenses, keep operating expenses down, and focus our IT efforts on making our business more responsive to shifting demands and reaching more customers with a higher satisfaction rate?”

True stories.

If you are in the business of reselling compute, network, or storage gear, this might not be the kind of question you want to hear.

If you are in the business of consulting on technology solutions to meet business requirements, this is exactly the kind of question you should be preparing to answer. If you don’t start working on those answers, your business will suffer for it.

Technology has evolved to the point where the failed marketing terms of grid or utility computing are starting to come back to life, and we are not talking about zombie technology. Cloud computing used to be about as real as grid or utility computing, but "cloud" is no longer just a marketing term. We now have new, proven, and emerging technologies that actually can support a utility model for information technology. Corporate IT executives are now starting to accept that the new cloud computing infrastructure-as-a-service is reliable (recent AWS outages notwithstanding), predictable, and useful to a corporate strategy. Corporate applications still need to be evaluated for requirements that restrict deployment and implementation strategies: latency, performance, concerns over satisfying legal/privacy/regulatory issues, and so on. However, the need for elastic, scalable, on-demand IT services that are accessible anywhere is starting to force even the most conservative executives to look at the cloud for offloading non-mission-critical workloads and their associated costs (staff, equipment, licensing, training, and so on). Mission-critical applications can still benefit from cloud technology, perhaps only as an internal or private cloud, but the same factors still apply: reduce time to deploy or provision, automate workflow, scale up or down as dictated by business cycles, and push provisioning back out into the business (while holding those same units accountable for the resources they "deploy").

Infrastructure as a service is really just the latest iteration of self-service IT. Software as a service has been with us for some time now, and in some cases is the default mode; CRM is the best example (e.g., Salesforce). Web-based businesses have been virtualizing workloads and automating deployment of capacity for some time now as well. Development and testing have also been the "low-hanging fruit" of both virtualization and cloud computing. However, when the technology of virtualization reached a certain critical mass, primarily driven by VMware and Microsoft (at least at the datacenter level), everyone started taking a second look at this new type of managed hosting. Make no mistake: IaaS is managed hosting, but New and Improved. Anyone who had to deal with provisioning and deployment at AT&T or other large colocation data centers (no offense meant) knew that there was no "self-service" involved at all. Deployments were major projects with timelines that rivaled the glacial pace of most internal IT projects, a pace that led to the historic frustration levels that drove business units to run around their own IT and start buying IT services with a credit card at Amazon and Rackspace.

If you or your executives are starting to ask yourselves whether you can get out of the day-to-day business of running an internal datacenter, you are in good company. Virtualization of compute, network, and storage has led to ever-greater efficiency, helping you get more out of every dollar spent on hardware and staff. But it has also led to ever-greater complexity and a need to retrain your internal staff more frequently. Information technology services are essential to a successful business, but they can no longer just be a cost center. They need to be a profit center: a cost of doing business, for sure, but also a way to drive revenues and shorten time-to-market.

Where do you go for answers? Which service providers have a good track record for uptime, customer satisfaction, support excellence, and innovation? What technologies will help you integrate your internal IT with your "external" IT? Where can you turn for management and monitoring tools? What managed services can help you gain visibility into all parts of your IT infrastructure, deal with a hybrid and distributed datacenter model, and address everything from firewalls to backups? Who can you ask?

There is an emerging cadre of thought leaders and technologists who have been preparing for this day: laying the foundation, developing the expertise, building partner relationships with service providers, and watching to see who is successful and growing…and who is not. GreenPages is in the very front line of this new cadre. We have been out in front with virtualization of servers. We have been out in front with storage and networking support for virtual datacenters. We have been out in front with private cloud implementations. And we are absolutely out in front of everyone in developing Cloud Management as a Service.

We have been waiting for you. Welcome. Now let's get to work. For more information on our Cloud Management as a Service offering, click here.

2013 Outlook: A CIO’s Perspective

Journey to the Cloud recently sat down with GreenPages Chief Information and Technology Officer Kevin Hall to talk about the outlook for 2013.

JTC: As CIO at GreenPages what are your major priorities heading into 2013?

KH: As CIO, my major priorities are to continue to rationalize and prioritize within the organization. By rationalize I mean looking at what it is we think the business needs vs. what it is we have, and by prioritize I mean looking at where there are differences between what we have and what we need and then building and operationalizing to get what we need into production.  We are working through that process right now. More specifically, we’re actively trying to do all of this in a way that will simultaneously help the business have more velocity and, as a percentage of revenue, cost less. We’re trying to do more with less, faster.

JTC: What do you think will be some of the biggest IT challenges CIOs will face in 2013?

KH:  I think number one is staying relevant with their business. A huge challenge is being able to understand what it is the business actually needs.  Another big challenge is accepting the fact of life that the business has to actively participate with IT in building out IT. In other words, we have to accept the fact that our business users are oftentimes going to know about technologies that we don’t or are going to be asking questions that we don’t have the answers for. All parties will have to work together to figure it out.

JTC: Any predictions for how the IT landscape will look in 2013 and beyond?

KH: Overall, I think there is a very positive outlook for IT as we move into the future. Whether or not the economy turns around (and I believe it is going to), all businesses are seeking to leverage technology. Based on our conversations with our customers, no one has said, "Hey, we've got it all figured out, there is nothing left to do." Everyone understands that more can be done and that we aren't at the end of driving business value through IT. More specifically, one thing I would have people keep an eye on is the software-defined data center. Important companies like VMware, EMC, and Cisco, among others, are rapidly moving to a place where the datacenter is reduced to an icon, so that just as easily as we can spin up virtual machines now, we will be able to spin up datacenters in the future. This will allow us to support high velocity and agility.

JTC: Anything that surprised you about the technology landscape in 2012?

KH: Given a great deal of confusion in our economy, I think I was surprised by how positive the end of the year turned out. The thought seems to be that it must be easy for anyone seeking to hire great people right now due to a high rate of unemployment, but in IT people who get it technically and from a business perspective are working, and they are highly valued by their organizations. Another thing I was surprised about is the determination businesses have to go around, or not use, IT if IT is not being responsive. Now we’re in an age where end users have more choices and a reasonably astute business person can acquire an “as a Service” technology quickly, even though it may be less than fully optimized and there may be issues (security comes to mind). Inside a company, employees may prefer to work with IT, but if IT moves too slowly or appears to just say “no,” people will figure out how to get it done without them.

JTC: What are some of the biggest misconceptions organizations have about the cloud heading into 2013?

KH: I think a major misconception about cloud concerns how much these technologies are actually being used in one's organization. It is rare to find a CIO (and this included me until recently) who has evaluated just how much cloud technology is truly being used in their business. Are they aware of every single app being used? How about every "as a Service" offering that is being procured in some way without IT involvement? When they think of their platform, are they including all of the traditional IT assets as well as all the "aaS" and cloud assets at their company? It goes back to the fact that we as IT professionals can't be meaningful when we are not even sure of exactly what is going on within the walls of our own company.

JTC: Any recommendations for IT Decision makers who are trying to decide where to allocate their 2013 budgets?

KH: I think IT Decision Makers need to be working with colleagues throughout the company to see what they need to get done and then build out budgets accordingly so they truly support the goals of the business. They need to be prepared to be agile so that unexpected, yet important, business decisions that pop up throughout the year can be supported. Furthermore, they need to be prepared from a velocity standpoint so that when a decision is made, the IT department can go from thought to action very quickly.

 

 

Evolving to a Broker of Technology Services: Planning the Solution

By Trevor Williamson, Director, Solutions Architecture

A 3-Part Series:

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again

Part 2: Planning the Solution

As I wrote before, and continuing with Part 2 of this 3-part series, let's talk about how to plan the solution for automating IT services and service management within your organization so that you can develop, deliver, and support services in a more disciplined way, which means that your customers will trust you. Of course this doesn't mean that they won't pursue outsourced, cloud, or other third-party services, but they will rely on you to get the most out of those services. Once you do go through this process, some of the major benefits of implementing an automated service management infrastructure are:

  • Improved staff productivity that allows your business to become more competitive. Your time is too valuable to be spent fighting fires and performing repetitive tasks. If you prevent the fires and automate the repetitive tasks, you can focus on new projects and innovation instead. When you apply automation tools to good processes, productivity skyrockets to a level unachievable by manual methods.
  • Heightened quality of service that improves business uptime and customer experience. Consistent execution according to a well-defined change management process, for example, can dramatically reduce errors, which in turn improves uptime and customer experience; in today's age of continuous operations and unrelenting customer demand, downtime can quickly erode your competitive edge. Sloppy change management can cause business downtime that prevents customers from buying online or reduces the productivity of your workforce.
  • Reduced operational costs to reinvest in new and innovative initiatives. It's been said that keeping the lights on (the costs to maintain ongoing operations, systems, and equipment) eats up roughly 80% of the overall IT budget, leaving little for new or innovative projects. With more standardized and automated processes, you can improve productivity and reduce operational costs, giving you the freedom to focus on more strategic initiatives.
  • Improved reputation with the business. Most self-aware IT organizations acknowledge that their reputation with business stakeholders isn’t always sterling. This is a critical problem, but you can’t fix it overnight—changing an organization’s culture, institutionalized behaviors, and stereotypes takes time and energy. If you can continue to drive higher productivity and quality through automated service management, your business stakeholders will take notice.

A very important aspect of planning this new infrastructure is to recognize, in fact assume, that the range of control will necessarily span both internal and external resources: you will be stretching into public cloud spaces (not that you will always know you are there until after the fact), and you will be managing them, or at least monitoring them, with the same level of granularity that you apply to your traditional resources.

This includes integrating the native functionality of those off-premises services (reserving virtual machines and groups of machines, extending reservations, cloning aggregate applications, provisioning storage, and so on) and connecting them to an end-to-end value chain of IT services that can be assessed, monitored, and followed from where the data resides to where it is used by the end user.
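
As a concrete illustration of hooking that native functionality into a service chain, here is a minimal sketch that provisions a virtual machine and tags it with the IT service it belongs to, so downstream monitoring and chargeback can follow it. It assumes AWS and the boto3 library; the AMI ID, region, and tag names are hypothetical.

    import boto3  # assumes the AWS SDK for Python is installed and credentials are configured

    # Provision one VM and tag it with the IT service it supports, so the same
    # resource can be assessed, monitored, and charged back as part of a service
    # chain rather than floating around as an anonymous component.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical image ID
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "it-service", "Value": "order-processing"},  # the service, not the server
                {"Key": "provisioned-by", "Value": "service-catalog"},
            ],
        }],
    )

    print("provisioned", response["Instances"][0]["InstanceId"])

The point of the tags is that every off-premises resource stays attached to a named service, which is what makes end-to-end assessment and monitoring possible later.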

It is through this holistic process (rationalized, deconstructed, optimized, reconstituted, and ultimately automated) that the system as a whole can be seen as a fully automated IT service management infrastructure, but believe me when I say that this is not, nor will it ever be, an easy task. When you are looking to plan how you automate your service management infrastructure, you need a comprehensive approach that follows a logical and tightly controlled progression. By whatever name you call the methodology (and there are many out there), it needs to be concise, comprehensive, capable, and, above all else, controlled:

1. Identify the trends, justify the business case, and assess your maturity. Before investing in an automated service management infrastructure, you have to assess the opportunity, build the business case, and understand the current state. This phase will answer the following questions:

  • Why is automated service management important to my business?
  • What are the business and IT benefits?
  • How prepared is my organization to tackle this initiative?

2. Develop your strategic plan, staffing plan, and technology roadmaps. You translate what you learn from the prior phase into specific automated service management strategies. The goal of this phase is to help you answer these key questions:

  • Do I have the right long-term strategic vision for automated service management?
  • What are my stakeholders' expectations, and how can I deliver on them?
  • What technologies should I invest in, and how should I prioritize them?

3. Invest in your skills and staff, policies and procedures, and technologies and services. This phase is designed to execute on your automated service management strategies. This phase will answer the following people, process, and technology questions:

  • What specific skills and staff will I need, and when?
  • What policies and procedures do I need to develop and enforce?
  • Should I build and manage my own technology capabilities or use external service providers?
  • What specific vendors and service providers should I consider?

4. Manage your performance, develop metrics, and communicate and train. Finally, to help you refine and improve your automated service management infrastructure, the goal in this phase is to help you answer these key questions:

  • How should I adjust my automated service management plans and budgets?
  • What metrics should I use to track my success?
  • How should I communicate and train stakeholders on new automated service management policies and technologies?

These phases and their associated questions are just a taste of what is required when you are thinking of moving toward an automated service management infrastructure, and of course GreenPages is here to help, especially when you are in the planning stages. The process is not painless, and it is certainly not easy, but the end result (the journey, in fact) is well worth the time, effort, and investment.

Next…Part 3: Executing the Solution, again and again…

If you’re looking for more information, we will be holding free events in Boston, NYC, and Atlanta to discuss cloud computing, virtualization, VDI, clustered datacenters, and more. We’ll have a bunch of breakout sessions, and it will also be a great opportunity to network with peers.

 

Cloud Corner Series: Is Automation & Orchestration Like Taking a Shower?

http://www.youtube.com/watch?v=s_U_S8qyhGM

I sat down yesterday to talk about automating and orchestrating business processes and why doing so is critical in a cloud environment. I hope you enjoy it; even if the info stinks, at least you get five minutes of eye candy watching yours truly!

If you're looking for more information on cloud management, GreenPages has two free events coming up (one in Boston & one in NYC). Click for more information and to register; space is limited and filling up quickly, so check it out!

The Evolution from a Provider of Technology Components to a Broker of Technology Services

A 3 Part Series from Trevor Williamson

  • Part 1: Understanding the Dilemma
  • Part 2: Planning the Solution
  • Part 3: Executing the Solution, again and again…

Part 1: Understanding the Dilemma

IT teams are increasingly being challenged as bring-your-own-device (BYOD) policies and "as-a-service" software and infrastructure multiply in mainstream organizations. In this new reality, developers still need compute, network, and storage to keep up with growth…and workers still need some sort of PC or mobile device to get their jobs done…but they don't necessarily need corporate IT to give it to them. They can turn to a shadow IT organization using Amazon, Rackspace, and Savvis, or to SaaS applications or an unmanaged desktop, because when all is said and done, if you can't deliver on what your users and developers care about, they will use whatever and whoever gets their jobs done better, faster, and cheaper.

Much of this shift toward outside services comes down to customer experience, or how your customers—your users—perceive their every interaction with IT, from your staff in the helpdesk to corporate applications they access every day.  If what you are delivering (or not delivering as the case may be) is more burdensome, more complicated or doesn’t react as fast as other service providers (like Amazon, Office 365, or Salesforce, etc.), then they will turn (in droves) toward those providers.

Now the question hanging heavy in the air is: what do those providers have, except of course scale, that your IT organization doesn't? What is the special sauce that lets them deliver those high-value services more quickly and at a lower cost than you can?

In a few words: IT Service Management (ITSM)…but wait! I know the first reaction you might have is that ITSM has become a sour subject and that if you hear ITIL chanted one more time you're going to flip out. The type of ITSM I'm talking about is really the next generation and has only passing similarities to the service management initiatives of the past. While it is agreed that ITSM has the potential to deliver the experiences and outcomes your developers and users need and want, today's ITSM falls far short of that ideal. "Process for process' sake," you've probably heard…but whatever you call it, we are still measuring success based on internal IT efficiencies, not customer or financial value, or even customer satisfaction. We still associate ITSM exclusively with ITIL best practices, and we continue to label ourselves as providers of technology components.

As it turns out, the adage “You cannot fix today’s problems with yesterday’s solutions” is as right as it ever was.  We need to turn ITSM on its head and create a new way forward based on customer centricity, services focus, and automated operations.  We have to rethink the role we play and how we engage with the business.  Among the most significant transformations of IT we need to complete is from a provider of technology components to a broker of technology services. We have relied on ITSM to drive this transformation, but ITSM needs to change in order to be truly effective in the future. Here’s why:

  • The roots of service management were focused on the customer: "Service management" originated within product marketing and management departments, and from the beginning it placed the customer at the center of all decision making within the service provider organization. It is the foundation for transforming product-oriented organizations into service providers, where the customer experience and interaction are designed and managed to cost-effectively deliver customer results and satisfaction.

  • But when we applied service management to IT, we lost customer focus: Applying service management to information technology produced the well-known discipline of ITSM, but unfortunately IT professionals associated it exclusively with the IT Infrastructure Library (ITIL) best practices, which focus on processes for managing IT infrastructure to enable and support services. What's missing is the customer perspective.

  • In the age of the customer, we need to proactively manage services via automation: In the age of the customer, technology-led disruption (virtualization, automation, orchestration, operating at scale, etc.) erodes traditional competitive barriers, making it easier than ever for empowered employees and app developers to take advantage of new devices and cloud-based software. To truly function as a service provider, IT needs to first and foremost consider the customer and the customer's desired outcome in order to serve them faster, cheaper, and at a higher quality. In today's world, this can only be accomplished via automation.

When customers don't trust a provider to deliver quality products or services, they seek alternatives. That's a pretty simple concept that everyone can understand. But what if the customer is a user of IT services that you provide? Where do they go if they don't like the quality of your products or services? Yep: Amazon, Rackspace, Terremark, and any other service provider who offers a solution that you can't…or that you can't offer in the required time or at the required price.

The reason these service providers can do such seemingly amazing things and offer such diverse and, at times, sophisticated services is that they have (mostly) eliminated the issues associated with humans doing "stuff" by automating commodity IT activities and then orchestrating those automated activities into aggregate IT services. They have evolved from being providers of technology components to brokers of technology services.
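
To make that distinction concrete, here is a minimal sketch of automation versus orchestration: each function automates one commodity activity, and the orchestrator sequences them into a single aggregate IT service. All names and steps are illustrative, not any specific vendor's API.

    # Minimal sketch of automation vs. orchestration: each function automates one
    # commodity activity; the orchestrator sequences them into a single aggregate
    # IT service. Names and steps are illustrative only.
    def provision_vm(name: str) -> str:
        print(f"provisioning VM {name}")
        return name

    def attach_storage(vm: str, gb: int) -> None:
        print(f"attaching {gb} GB of storage to {vm}")

    def configure_network(vm: str, vlan: int) -> None:
        print(f"placing {vm} on VLAN {vlan}")

    def register_monitoring(vm: str) -> None:
        print(f"registering {vm} with monitoring")

    def deliver_app_service(app: str) -> None:
        """Orchestration: run the automated steps, in order, as one service request."""
        vm = provision_vm(f"{app}-01")
        attach_storage(vm, gb=100)
        configure_network(vm, vlan=42)
        register_monitoring(vm)
        print(f"service '{app}' is ready")

    deliver_app_service("order-entry")

In a real environment each step would call a provisioning, storage, network, or monitoring tool; the value is that the whole chain runs the same way every time, with no one doing "stuff" by hand.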

If you're looking for more information on BYOD, register for our upcoming webinar: "BYOD: Don't Fight It, Mitigate the Risk with Mobile Management."

Next…Part 2: Planning the Solution

 

Mind the Gap – Quality of Experience: Beyond the Green Light/Red Light Datacenter

By Geoff Smith, Senior Solutions Architect

If you have read my last three blogs on the changing landscape of IT management, you can probably guess by now where I’m leaning in terms of what should be a key metric in determining success:  the experience of the user.

As any industry progresses from its infancy to mainstream acceptance, the focus for success invariably transitions from being the "wizard behind the curtain" toward transparency and accountability. Think of the automobile industry. Do you really buy a car anymore, or do you buy a driving experience? Auto manufacturers have had to add a slew of gizmos (some of which have absolutely nothing to do with driving) and services (no-cost maintenance plans, loaners, roadside assistance) that were previously the consumer's responsibility.

It is the same with IT today.  We can no longer just deliver a service to our consumers; we must endeavor to ensure the quality of the consumer’s experience using that service.  This pushes the boundaries for what we need to see, measure, and respond to beyond the obvious green light/red light blinking in the datacenter.  As IT professionals, we need to validate that the services we deliver are being consumed in a manner that enables the user to be productive for the business.

In other words, knowing you have five 9s of availability for your ERP system is great, but does it really tell the whole story? If a system is up and available, but the user experience is poor enough to affect productivity and results in lower-than-expected output from that population, what is the net result?
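
For context, here is a quick sketch of what those "nines" actually allow in downtime per year; the availability targets are the standard ones, and the arithmetic is the only point being made.

    # Allowed downtime per year for a given number of "nines" of availability.
    def downtime_minutes_per_year(nines: int) -> float:
        unavailability = 10 ** (-nines)            # e.g. five nines -> 0.00001
        return unavailability * 365 * 24 * 60      # minutes in a non-leap year

    for n in (3, 4, 5):
        print(f"{n} nines -> {downtime_minutes_per_year(n):.2f} minutes of downtime per year")
    # Five nines allows roughly 5.3 minutes of downtime per year; the user can
    # still have a miserable experience during the other 525,000+ minutes.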

Moving our visibility out to this level is not easy. We have always relied upon the user to initiate the process and have responded reactively. With the right framework, we can expand our proactive capabilities, alerting us to potential efficiency issues before the user experience degrades to the point of visibility. In this way, we move our "cheese" from systems availability to service usability. The business can then see a direct correlation between what we provided and the actual business value it delivered.

Some of the management concepts here are not entirely new, but the way they are leveraged may be. Synthetic transactions, round-trip analytics, and bandwidth analysis are a few of the vectors to consider. Just as important is how we react to events in these streams, and how quickly we can return usability to its "normal state." Auto-discovery and redirection play key roles, and parallel-process troubleshooting tools can minimize the impact on the user experience.
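
As an illustration of the synthetic-transaction idea, here is a minimal sketch that exercises a service the way a user would and records whether the experience, not just the availability, is within budget. The URL and latency threshold are hypothetical.

    import time
    import requests  # assumes the requests library is available

    SERVICE_URL = "https://erp.example.com/health"  # hypothetical user-facing endpoint
    LATENCY_BUDGET_SECONDS = 2.0                    # assumed "normal state" threshold

    def synthetic_transaction(url: str) -> dict:
        """Exercise the service the way a user would and record the experience."""
        start = time.monotonic()
        try:
            response = requests.get(url, timeout=10)
            elapsed = time.monotonic() - start
            return {
                "reachable": True,
                "status": response.status_code,
                "seconds": round(elapsed, 3),
                # "Up" is not enough: the response must also arrive within budget.
                "healthy": response.ok and elapsed <= LATENCY_BUDGET_SECONDS,
            }
        except requests.RequestException as exc:
            return {"reachable": False, "error": str(exc), "healthy": False}

    print(synthetic_transaction(SERVICE_URL))

Run on a schedule from the same networks your users sit on, a probe like this flags degradation before the phone rings, which is the proactive posture described above.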

As we move forward, we need to jettison the old concepts of inside-out monitoring and management and a datacenter focus, and move toward service-oriented metrics and measurement across infrastructure layers from delivery engine to consumption point.

Mind the Gap – Service-Oriented Management

IT management used to be about specialization.  We built skills in a swim-lane approach – deep and narrow channels of talent where you could go from point A to B and back in a pretty straight line, all the time being able to see the bottom of the pool.  In essence, we operated like a well-oiled Olympic swim team.  Each team member had a specialty in their specific discipline, and once in a while we’d all get together for a good ole’ medley event.

And because this was our talent base, we developed tools that would focus their skills in those specific areas.  It looked something like this:

"Mind the Gap"

But is this the way IT is actually consumed by the business?  Consumption is by the service, not by the individual layer.  Consumption looks more like this:

"Mind the Gap"

From a user perspective, the individual layers are irrelevant. It's about the results of all the layers combined, or to put a common term around it, it's about a service. Email is a service; so is Salesforce.com, but the two have very different implications from a management perspective.

A failure in any one of these underlying layers can dramatically affect user productivity. For example, if a user is consuming your email service and there is a storage-layer issue, they may see reduced performance. The same "result" could be seen if there is a host, network-layer, bandwidth, or local client issue. So when a user requests assistance, where do you start?

Most organizations will work from one side of the “pool” to the other using escalations between the lanes as specific layers are eliminated, starting with Help Desk services and ending up in the infrastructure team.  But is this the most efficient way to provide good service to our customers?  And what if the service was Salesforce.com and not something we fully manage internally? Is the same methodology still applicable?

Here is where we need to start looking at a service-level management approach.  Extract the individual layers and combine them into an operating unit that delivers the service in question.  The viewpoint should be from how the service is consumed, not what individually makes up that service.  Measurement, metrics, visibility and response should span the lanes in the same direction as consumption.  This will require us to alter the tools and processes we use to respond to events.
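
A minimal sketch of that service-level view follows: map each consumed service to the layers that deliver it, then report health by service rather than by swim lane. The services, layers, and statuses are illustrative.

    # Map each user-facing service to the layers that deliver it, then report
    # health by service rather than by swim lane. Services, layers, and statuses
    # here are illustrative.
    SERVICE_MAP = {
        "email":      ["storage", "host", "network", "client"],
        "salesforce": ["internet-egress", "identity", "client"],  # not fully managed internally
    }

    LAYER_STATUS = {  # as reported by the existing element managers
        "storage": "degraded", "host": "ok", "network": "ok",
        "client": "ok", "internet-egress": "ok", "identity": "ok",
    }

    def service_health(service: str) -> str:
        worst = "ok"
        for layer in SERVICE_MAP[service]:
            status = LAYER_STATUS.get(layer, "unknown")
            if status == "down":
                return "down"
            if status in ("degraded", "unknown"):
                worst = "degraded"
        return worst

    for svc in SERVICE_MAP:
        print(f"{svc}: {service_health(svc)}")

The output is framed in the consumer's terms ("email is degraded") instead of the operator's ("array controller B is degraded"), which is the direction of consumption described above.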

Some scary thoughts here, if you consider the number of "services" our customers consume and the implications of a hybrid cloud world. But the alternative is even more frightening. As platforms that we do not fully manage (IaaS, PaaS, SaaS) become more integral to our environments, the blind spots in our vision will expand. So the question is more a matter of "when" we move in this direction than "if." We can continue to swim our lanes, and maybe we can shave off a tenth of a second here or there. But true achievement will come when we can look across all the lanes and see the world through the eyes of our consumers.

 

Cloud Isn’t Social, It’s Business

Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS.

Much like DevOps is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is about technology. Just as service providers (including cloud providers) need to look toward a business model based on revenue per application (as opposed to revenue per user), enterprise organizations need to look hard at their business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a “service oriented” approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the “tenant” of most enterprise (private) cloud implementations is an application, not a business entity. As a “provider of services”, IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization.

By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

  • Makes it easier to project costs across entire infrastructure
    Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate the costs based on usage for not just application infrastructure, but network infrastructure, as well, providing management and executives with a clearer view of what actual operating costs are for given projects, and enabling them to essentially line item veto services based on projected value added to the business by the project.
  • Easier to justify cost of infrastructure
    Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by applications means being able to tie services to projects in earlier phases and clearly show value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that business can understand what’s working and what’s not.
  • Enables business to manage costs over time
    Instituting a "fee per hour" model gives business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time (see the sketch after this list). IT organizations that adopt such a business model will not only encourage business stakeholders to take advantage of such functionality, but will also foster more awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what's really needed versus what's not.
  • Easier to start up a project/application and ramp up over time as associated revenue increases
    Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green-field start-up projects manage growth. If IT operations is service-based, then projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.
  • Enables consistent comparison with off-premise cloud computing
    A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice-versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on services required.
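
Here is the sketch referenced above: a minimal pay-per-use chargeback model in which each infrastructure service carries an hourly rate and an application is billed only for the hours it actually consumes. The rate card and usage figures are invented purely to show the shape of the model.

    # Hourly rates per infrastructure service and the usage figures below are
    # invented purely to show the shape of a pay-per-use / fee-per-hour model.
    HOURLY_RATES = {
        "compute":        0.12,
        "load-balancing": 0.025,
        "caching":        0.02,
    }

    def monthly_charge(usage_hours: dict) -> float:
        """Bill an application only for the service-hours it actually consumed."""
        return sum(HOURLY_RATES[svc] * hours for svc, hours in usage_hours.items())

    # An application that only needs its services during business hours
    # (~22 days x 10 hours) versus one that runs around the clock.
    business_hours_app = {"compute": 220, "load-balancing": 220}
    always_on_app      = {"compute": 720, "load-balancing": 720, "caching": 720}

    print(f"business-hours app: ${monthly_charge(business_hours_app):.2f} per month")
    print(f"always-on app:      ${monthly_charge(always_on_app):.2f} per month")

Whether or not money actually changes hands, the same arithmetic gives business stakeholders a line-item view of what each application's services cost and an incentive to switch them off when they are not needed.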

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand.

That's cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised benefits in terms of efficiency, it hasn't necessarily taken the next step and realized that the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to prove its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it's the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.



VMworld Recap: Day One

Day 1 at VMworld 2012 has been pretty action-packed. The first order of business was the official handing over of the reins from Paul Maritz to Pat Gelsinger as CEO of VMware. Paul will remain involved, as he is taking the Chief Strategist role at EMC, which owns 80% of VMware, so I would not expect his influence to go away anytime soon. From conversations I've had with others both inside and outside of VMware, the primary reason for this move seems to be purely operational. Paul is an absolute visionary and has taken VMware to some fantastic heights over his four-year tenure; however, there have been some challenges on the operational side in executing on those great visions. This is where Pat comes into the picture, as he has historically been a pure operations guy, so I expect the team of Paul and Pat to do some great things for VMware going forward.

Some other key highlights from the Keynote are as follows:

  1. It is estimated that 60% of all x86 server workloads in the world are now virtualized and 80% of that 60% are virtualized on ESX/vSphere.
  2. There are now 125,000 VCP certified engineers worldwide, almost a 5-fold increase from 4 years ago
  3. The dreaded vRAM allocation licensing model for vSphere 5 is now officially dead with the release of vSphere 5.1. VMware is going back to per-socket licensing, and neither RAM nor cores matter. Personally, I am not sure this was a great move, as I think most people were over the headache of vRAM, and in reality I never saw a single customer who was adversely affected by it. When Pat announced this, I think he thought the entire auditorium would roar in appreciation, but that was not the case. Yes, there was some cheering, but even Pat made mention of the fact that it wasn't the full-on reaction he expected.
  4. There are a lot of new certifications and certification tracks that were announced to better align with VMware’s definition of the new “stack.”  These tracks include the pre-existing datacenter infrastructure certs plus new ones around Cloud (think vCloud Director here), Desktop (View and Wanova/Mirage), and Apps (SpringSource).  I’ll be taking the new VCP-IaaS exam tomorrow so wish me luck!
  5. There was a light touch on both the DynamicOps and Nicira acquisitions. Both of these have huge implications for VMware, but really not much was announced at the show. Both are very recent acquisitions, so it will take some time for VMware to get them integrated, but I am very excited about the possibilities of each.
  6. There was an announcement of the vCloud Suite, which essentially is a bundling of existing VMware products under a singular license model.  There are the typical Standard, Enterprise, and Enterprise Plus editions of the suite which include different pieces and parts, but the Enterprise Plus edition throws in about everything and the kitchen sink including….
    1. vSphere 5.1 Enterprise Plus
    2. vCenter Operations Enterprise
    3. vCloud Director
    4. vCloud networking/security (I assume this will eventually include Nicira networking virtualization and the vShield product family)
    5. Site Recovery Manager
    6. vFabric Application Director
  7. Lots of focus on virtualization of business-critical applications and not just the usual suspects of SQL, Oracle, Exchange, etc. There was a cool demo of Hadoop via Project Serengeti, which automates the spinning up/down of various Hadoop VMs and is delivered as a single virtual appliance. GreenPages has done a lot in the business-critical app virtualization space over the past couple of years, and we remain excited about the possibilities that virtualization brings to these beefy apps.
  8. One of the big geeky announcements is around the concept of shared-nothing vMotion. This means that you can now move a live, running VM between two host servers without any requirement for shared storage: basically, vMotion without a SAN. This has massive implications in the SMB and branch office spaces, where the cost of shared storage was very prohibitive. Now you can get some of the cool benefits of virtualization using only very cheap direct-attached storage!
  9. The final piece of the keynote showed VMware's vision for virtualization of "everything," including compute, storage, and networking. Look for some very cool stuff coming over the next 6 months or so in relation to new ways of thinking about networking and storage within a virtual environment. These are two elements that really have not fundamentally changed how they work since the advent of x86 virtualization, and we are now running into limitations because of this. VMware is leading the charge in changing the way we think about these two critical elements, looking at very interesting ways to attack design and, in the end, making it much simpler to work with networking and storage technologies within virtualized environments.

Have to jump back over for Day 2 activities now, but be on the lookout for some upcoming GreenPages events where we’ll dive deeper into the announcements from the show!

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then, I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring. I was a huge fan of "Car Toons," which was sort of the Mad Magazine of cars, and I basically lusted after hot rods, dragsters, and sports cars. I was endlessly scribbling car doodles in my notebooks and in the margins of textbooks. I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer. In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company. I have no idea what ever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, the De Tomaso Pantera that was designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships). The car sat on half-filled radial tires (relatively new to the US) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought. It was sleek, low to the ground, and looked as if it were going 100 miles an hour just sitting there. It was a supercar before the word was coined, and I was deeply, madly, and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desires for building a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot: they are looking at shiny new ways of doing things (being faster, highly flexible, elastic, personal, serviceable, more innovative) and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera. Even if I could have afforded to buy it, I had no knowledge or experience that would enable me to effectively (or legally) drive it. That is the difference between being Functional and Capable.

The Pantera was certainly capable but, *in relation to me*, was not anywhere near being functional. The essence and nature of the car never changed, but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned. In the same way, a cloud infrastructure (fully built out with well-architected components, tested and running) would be non-functional to customers who did not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way that a cloud infrastructure should be operated is based on the idea of delivering IT services, not the traditional idea of servers, storage, and networks being individually built, configured, and connected by people doing physical stuff. Cloud infrastructures are automated and orchestrated to deliver specific functionality aggregated into specific services, quickly and efficiently, without the need for people doing "stuff." In fact, people doing stuff is too slow and just gets in the way, and if you don't change the operations of the systems to reflect that, you end up with a very capable yet non-functional system.

Literally, you have to transform how you operate the system (from a traditional to a cloud infrastructure) in lock-step with how that system is materially changed, or it will be very much the same sort of difference as between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera. It's just dreaming until the required knowledge and experience are obtained…none of which is easy or quick…but tell that to a 12-year-old lost in his imagination, staring at sparkling red freedom and adventure…