Category Archives: Cloud computing

Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday’s General Session. VMware’s COO Carl Eschenbach took the stage and informed the audience that there are 22,500 people in attendance, which is a new record for VMware. This makes it the single largest IT infrastructure event of the year. 33 of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday’s announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full presentation here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started by showing the service catalog and the options to deploy an app to a private or public cloud, along with the cost of each option
    • I’m assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
  • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
  • Showed the auto-scale option
  • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
  • The demo showed how the product provided self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a conversation about why networking has become the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX  during the app provisioning
  • What was needed to deploy this demo you may ask? L2 switch, L3 router, firewall, & load balancer. All of this was automated and deployed with no human intervention
  • Carl then went through the difference in physical provisioning vs. logical provisioning with NSX & abstracting the network off the physical devices.
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo to show you how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules, which tend to be rigid and complex –> the answer is policy-based automation via vCAC
  • Showed an example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically due to an automated, proactive response to detected issues (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between a private on-premises vSphere environment and vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)

 

Let me know if there are questions on any of these demos.

Rapid Fire Summary of Opening Keynote at VMworld 2013

By Chris Ward, CTO, LogicsOne

For those of you who aren’t out in San Francisco at the 10th annual VMworld event, here is a quick overview of what was covered in the opening keynote delivered by CEO Pat Gelsinger:

  • Social, Mobile, Cloud & Big Data are the 4 largest forces shaping IT today
  • Transitioned from Mainframe –> Client Server –> Mobile Cloud
  • Pat set the stage that the theme of this year’s event is networking – a lead-in to a ton of Nicira/NSX information. I think VMware sees networking as the core of the software-defined datacenter, and they are in a very fast race to beat out the competition in that space
  • Pat also mentioned that his passion is to get every x86 application/workload 100% virtualized. He drew parallels to Bill Gates saying his dream was a PC on every desk in every home that runs Microsoft software.
  • Next came announcements around vSphere 5.5 & vCloud Suite 5.5…here are some of the highlights:
    • 2x CPU and Memory limits and 32x storage capacity per volume to support mission critical and big applications
    • Application Aware high availability
    • Big Data Extensions – multi-tenant Hadoop capability via Serengeti
    • vSAN officially announced as public beta and will be GA by 1st half of 2014
    • vVOL is now in tech preview
    • vSphere Flash Read Cache included in vSphere 5.5

Next, we heard from Martin Casado. Martin, VMware’s CTO of Networking, came over with the Nicira acquisition and spoke about VMware NSX. NSX is a combination of vCloud Network and Security (vCNS) and Nicira. Essentially, NSX is a network hypervisor that abstracts the underlying networking hardware just like ESX abstracts underlying server hardware.

Other topics to note:

  • IDC names VMware #1 in Cloud Management
  • VMware hypervisor fully supported as part of OpenStack
  • Growing focus on hybrid cloud. VMware will have 4 datacenters soon (Las Vegas, Santa Clara, Sterling, & Dallas). Also announcing partnerships with Savvis in NYC & Chicago to provide vCHS services out of Savvis datacenters.
  • End User Computing
    • Desktop as a Service on vCHS is being announced (I have an EUC Summit Dinner later tonight, so I will be able to go into more detail after that).

So, all-in-all a good start to the event. Network virtualization/NSX is clearly the focus of this conference and vCHS is a not too distant 2nd. Something that was omitted from the keynote was the rewritten SSO engine for vCenter 5.5. The piece was weak for 5.1 and has been vastly improved with 5.5…this could be addressed tomorrow as most of the tech staff is in Tuesday’s general session.

If you’re at the event…I’ll actually be speaking on a panel tomorrow at 2:30 about balancing agility with service standardization. I’ll be joining Khalid Hakim and Kurt Milne of VMware, along with Dave Bartoletti of Forrester Research and Ian Clayton of Service Management 101. I will also be co-presenting on Wednesday with my colleague John Dixon at 2:30-3:30 in the Moscone West Room 2011 about deploying a private cloud service catalogue. Hopefully you can swing by.

More to come soon!

 

A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll give a summary of my cloud adoption advice, and cover some of the feedback that I heard from customers during my discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and that there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS).

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […] from $10 to $2 per hour” – ZDNet, July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but they are now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest cost provider? Well, the answer to that question is quite the debate in cloud computing (we’ll put that aside for now). Suffice to say that we think that internal corporate IT departments are now competing with outside providers to provide IaaS and other services to the business and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, storage becomes increasingly less costly, etc.).

Now here’s the punch line and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by becoming closer and closer to the requirements of the business – and may even be able to anticipate requirements of the business or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services to outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time – manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile – it just isn’t worth it for GM or Ford to produce those things. They source windshields, brake calipers, and other components from companies who specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling, and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it; eventually, cloud may make sense for your shop even if there is no fit for it today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

 

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint the picture of how well your department can make informed sourcing decisions. Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time
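As a rough illustration of the assessment described above, here is a minimal sketch of scoring each ITIL area on the four dimensions mentioned (repeatability, documentation, measurement, continuous improvement) and surfacing the weakest areas to remediate first. The rating scale and all sample numbers are my own assumptions, not part of the GreenPages method:

```python
# Hypothetical sketch: rate each ITIL process area 0-5 on the four
# dimensions mentioned above, then rank areas to find remediation targets.
# Scale, weights, and sample ratings are illustrative assumptions.

AREAS = [
    "Configuration Management",
    "Financial Management",
    "Incident and Problem Management",
    "Change Management",
    "Service Level, Availability, and Service Catalog Management",
    "Lifecycle Management",
    "Capacity Management",
    "Business Level Management",
]
DIMENSIONS = ["repeatability", "documentation", "measurement", "improvement"]

def area_score(ratings):
    """Average the 0-5 ratings across the four dimensions."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

def assessment_summary(scores):
    """Return each area's score plus the weakest areas to remediate first."""
    results = {area: area_score(r) for area, r in scores.items()}
    weakest = sorted(results, key=results.get)[:3]
    return results, weakest

# Example: most areas rated a middling 3, with one clear weak spot
sample = {area: {d: 3 for d in DIMENSIONS} for area in AREAS}
sample["Change Management"] = {"repeatability": 1, "documentation": 2,
                               "measurement": 1, "improvement": 1}
results, weakest = assessment_summary(sample)
print(weakest[0])  # -> Change Management
```

The output of a pass like this is exactly the "picture" described above: a ranked list showing where housekeeping improvements are needed before sourcing decisions can be made with confidence.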

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.

Case Study: Service Catalog and Private Cloud

By implementing a Service Catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!

 

 

Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.

 

Cloud Corner Series – Unified Communications in the New IT Paradigm

http://www.youtube.com/watch?v=XHp6Q5RMMR8

 

In this segment of Cloud Corner, Lou Rossi, former CEO of Qoncert and new GreenPages-LogicsOne employee, answers questions about how unified communications fits into the new IT paradigm moving forward.

We’ll be hosting a free webinar on 8/22: How to Securely Enable BYOD with VMware’s Next Gen EUC Platform. Register Now!

Predict Cloud Revenue With This One Simple Trick

We’ve all seen the ads touting “one simple/crazy trick” to lower your insurance (or weight, or electric bill, or whatever). Now GigaOm has its own variant for cloud vendors to predict annual revenue:

Just take your July revenue and multiply it by 12.  Or if you want to get even trickier, take your daily revenue on July 15 and multiply it by 365.

They’re both embarrassingly simple, but surprisingly accurate. For a subscription business with a consistent trajectory, it’ll get you extremely close to the ultimate answer – usually within a couple percentage points.
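To see why the trick works, here is a quick sketch with made-up numbers: for a subscription business growing at a steady rate, July sits at the midpoint of the year, so annualizing the midpoint month lands close to the true total. All figures below are illustrative, not from GigaOm:

```python
# Sketch of the two GigaOm rules of thumb for a subscription business.
# All revenue figures are made up for illustration.

def annualize_from_july(july_revenue):
    """Rule 1: July (the midpoint month) times 12 approximates the year."""
    return july_revenue * 12

def annualize_from_midyear_day(july_15_revenue):
    """Rule 2: revenue on July 15 (the midpoint day) times 365."""
    return july_15_revenue * 365

# A business growing linearly from $80k/month in January to $124k in December
monthly = [80_000 + 4_000 * m for m in range(12)]
actual = sum(monthly)                       # 1,224,000
estimate = annualize_from_july(monthly[6])  # July (index 6): 104,000 * 12
print(actual, estimate)  # 1224000 1248000 -- within ~2%
```

The estimate overshoots slightly because July is just past the true midpoint of a linear ramp, which matches GigaOm's "within a couple percentage points" claim; a business with a lumpy or seasonal trajectory would break the rule.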

There is more to it of course.

Seeking Better IT Mileage? Take a Hybrid Out for a Spin

Guest Post by Adam Weissmuller, Director of Cloud Solutions at Internap

As IT pros aim to make the most efficient use of their budgets, there is a rapidly increasing range of infrastructure options at their disposal. Gartner’s prediction that public cloud spending in North America will increase from $2 billion in 2011 to $14 billion in 2016, and 451 Research’s expectation that colocation demand will outpace supply in most of the top 10 North American markets through 2014 are just two examples of the growing need for all types of outsourced IT infrastructure.

While public cloud services in particular have exploded in popularity, especially for organizations without the resources to operate their own data centers, a “one size fits all” myth has also emerged, suggesting that this is the most efficient and cost-effective option for all scenarios.

In reality, the public cloud may be the sexy new sports car – coveted for its horsepower and handling – but sometimes a hybrid model can be the more sensible approach, burning less gas and still getting you where you need to go.  It all depends on what kind of trip you’re taking. Or, put in data center terminology, the most effective approach depends on the type of application or workload and is often a combination of infrastructure services – ranging from public, private and “bare metal” cloud to colocation and managed hosting, as well as in-house IT resources.

The myth of cloud fuel economy
Looking deeper into the myth of “cloud costs,” as part of a recent “Data Center Services Landscape” report, Internap recently surveyed 100 IT decision makers to gain a cross-sectional view of their current and future use of IT infrastructure. Almost 65 percent of respondents said they are considering public cloud services, and 41 percent reported they are doing so in order to reduce costs.

But when you compare the “all-in” costs of operating thousands of servers over several years in a well-run corporate data center or colocating in a multi-tenant data center against the cost of attaining that same capacity on a pay-as-you-go basis via public cloud, the cloud service will lose out nearly every time.

The fact that colocation can be more cost-efficient than cloud often comes as a surprise to organizations and is something of a dirty little secret within the IaaS industry. But for predictable workloads and core infrastructure that is “always on,” the public cloud is a more expensive option because the customer ultimately pays a premium for pay-as-you-go pricing and scalable capacity that they don’t need – similar to driving a gas-guzzling truck even when there’s nothing you need to tow.
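A back-of-the-envelope comparison shows how the premium adds up for an always-on fleet. Every price in this sketch (the on-demand hourly rate, server capex, and monthly colocation fees) is a hypothetical assumption for illustration, not a quote from any provider:

```python
# Hypothetical cost comparison for a predictable, always-on workload.
# All prices are illustrative assumptions, not real provider quotes.

HOURS_PER_YEAR = 24 * 365  # 8,760

def cloud_cost(hourly_rate, servers, years):
    """Pay-as-you-go: the on-demand rate is paid for every hour, always on."""
    return hourly_rate * HOURS_PER_YEAR * servers * years

def colo_cost(server_capex, monthly_rack_and_power, servers, years):
    """Colocation: buy the hardware once, pay space/power/bandwidth monthly."""
    return server_capex * servers + monthly_rack_and_power * 12 * years

# 100 always-on servers over a 3-year horizon
cloud = cloud_cost(hourly_rate=0.50, servers=100, years=3)          # $1,314,000
colo = colo_cost(server_capex=5_000, monthly_rack_and_power=8_000,
                 servers=100, years=3)                              # $788,000
print(cloud, colo)
```

Under these assumptions the pay-as-you-go bill runs well past the colocation total, because the cloud customer is paying every hour for elasticity the steady workload never uses; flip the workload to a short, spiky burst and the comparison reverses.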

Balancing the racecar handling of cloud with the safety of a family hybrid
This is not to suggest that cloud is without its benefits. Public cloud makes a lot of sense for unpredictable workloads. Enterprises can leverage it to expand capacity on-demand without incurring capital expenditures on new servers. Workloads with variable demand and significant traffic peaks and valleys, such as holiday shopping spikes for online retailers or a software publisher rolling out a new product, are generally well-suited for public clouds because the customer doesn’t pay for compute capacity that they don’t need or use on a consistent basis.

One of the biggest benefits of cloud services is agility. This is where the cloud truly shines, providing accessibility and immediacy to the end-user.  However, the need for a hybrid approach also arises here, when agility comes at the expense of security and control. For example, the agility vs. control challenge is often played out in some version of the following use case:  A CIO becomes upset when she finds out that employees within most of the company’s business units are leveraging public cloud services – without her knowledge. This is especially unsettling, given that she has just spent millions of dollars building two new corporate data centers that were only half full. Something has gone wrong here, and it’s related to agility.

A major contributing factor to the surprise popularity of public cloud services is the perceived lack of agility of internal IT organizations. For example, it’s not uncommon for it to take IT executives quite some time to turn up new servers in corporate data centers. And this isn’t necessarily the fault of IT since there are a number of factors that can, and often do, present roadblocks, such as the need to seek budgetary approval, place orders, get various sign-offs, install the servers, and finally release the infrastructure to the appropriate business units – a process that can easily take several months. As a result, employees and business units often begin to side-step IT altogether and go straight to public cloud providers, corporate credit card in hand, in an effort to quickly address IT issues. The emergence of popular cloud-based applications made this scenario a common occurrence, and it illustrates perfectly how the promise of agility can end up pulling the business units toward the public cloud – at the risk of corporate security.

The CIO is then left scrambling to regain control, with users having bypassed many important processes that the enterprise spent years implementing. Unlike internal data centers or colocation environments, with a public cloud, enterprises have little to no insight into the servers, switches, and storage environment.

So while agility is clearly a big win for the cloud, security and control issues can complicate matters. Again, a hybrid, workload-centric approach can make sense. Use the cloud for workloads that aren’t high security, and consider the economics of the workload in the decision, too. Some hybrid cloud solutions even allow enterprises to reap the agility benefits of the cloud in their own data center – essentially an on-premise private cloud.

As businesses continue to evolve, it will be critical to go beyond the industry’s cloud hype and instead build flexible, centrally-managed architectures that take a workload-centric approach and apply the best infrastructure environment to the job at hand. Enterprises will find such a hybrid solution is usually of greater value than the sum of its individual parts.

Carpooling with “cloudy colo”
One area that has historically been left out of the hybridization picture is colocation. While organizations can already access hybridized public and private and even “bare metal” cloud services today, colocation has always existed in a siloed environment, without the same levels of visibility, automation and integration with other infrastructure that are often found in cloud and hosting services.

But these characteristics are likely to impact the way colocation services are managed and delivered in the future. Internap’s survey found strong interest in “cloudy colo” – colocation with cloud-like monitoring and management capabilities that provides remote visibility into the colocation environment and seamless hybridization with cloud and other infrastructure, such as dedicated and managed hosting.

Specifically, a majority of respondents (57 percent) cited interest in hybrid IT environments; and, combined with 72 percent of respondents expressing interest in hybridizing their colocation environment with other IT infrastructure services via an online portal, the results show strong emerging interest in data center environments that can support hybrid use cases as well as unified monitoring and management via a “single pane of glass.”

Driving toward a flexible future
A truly hybrid architecture – one that incorporates a full range of infrastructure types, from public and private cloud to dedicated and managed hosting, and even colocation – will provide organizations with valuable, holistic insight and streamlined monitoring and management of all of their resources within the data center, as well as consolidated billing.

For example, through a single dashboard, organizations could perform tasks, such as: remotely manage bandwidth, inventory, and power utilization for their colocation environment; rapidly move a maturing application from dedicated hosting to colocation; turn cloud services up and down as needed or move a cloud-based workload to custom hosting. Think of it as your hybrid’s in-car navigation system with touchscreen controls for everything from radio to air conditioning to your rear view camera.

The growing awareness of the potential benefits of hybridizing IT infrastructure services reflects the onset of a shift in how cloud, hosting and even colocation will be delivered in the future. The cloud model, with its self-service features, is one of the key drivers for this change, spurring interest among organizations in maximizing visibility and efficiency of their entire data center infrastructure ecosystem.


Adam Weissmuller is the Director of Cloud Solutions at Internap, where he led the recent launch of the Internap cloud solution suite. A 10-year veteran of the hosting industry, he recently presented on “Overcoming Latency: The Achilles Heel of Cloud Computing” at Cloud Expo West.

Deutsche Börse Launching Cloud Capacity Trading Exchange

Deutsche Börse says it will launch a trading venue for outsourced cloud storage and cloud computing capacity in the beginning of 2014. Deutsche Börse Cloud Exchange AG is a new joint venture formed together with Berlin-based Zimory GmbH to create the first “neutral, secure and transparent trading venue” for cloud computing resources.

The primary users for the new trading venue will be companies, public sector agencies and also organisations such as research institutes that need additional storage and computing resources, or have excess capacity that they want to offer on the market.

“With its great expertise in operating markets, Deutsche Börse is making it possible for the first time to standardise and trade fully electronically IT capacity in the same way as securities, energy and commodities,” said Michael Osterloh, Member of the Board of Deutsche Börse Cloud Exchange.

What’s Missing from Today’s Hybrid Cloud Management – Leveraging Brokerage and Governance

By John Dixon, Consulting Architect, LogicsOne

Recently GreenPages and our partner Gravitant hosted a webinar on Cloud Service Broker technology. Senior Analyst Dave Bartoletti gave a preface to the webinar with Forrester’s view on cloud computing and emerging technology. In this post we’ll give some perspective on highlights from the webinar. In case you missed it, you can also watch a replay of the webinar here: http://bit.ly/12yKJrI

Ben Tao, Director of Marketing for Gravitant, kicks off the discussion by describing the traditional data center sourcing model. Two key points here:

  1. Sourcing decisions, largely based on hardware selection, are separated by years
  2. In a cloud world, sourcing decisions can be separated by months or even weeks

 

The end result is that cloud computing can drive the benefit of a multi-sourcing model for IT, where sourcing decisions are made in close proximity to the use of services. This has the potential of enabling organizations to adjust their sourcing decisions more often to best suit the needs of their applications.

Next, Dave Bartoletti describes the state of cloud computing and the requirements for hybrid cloud management. The core of Dave’s message is that the use of cloud computing is on the rise, and that cloud is being leveraged for more and more complex applications – including those with sensitive data.

Dave’s presentation is based on the statement, “what IT must do to deliver on the hybrid cloud promise…”

Some key points here:

  • Cloud is about IT services first, infrastructure second
  • You won’t own the infrastructure, but you’ll own the service definitions; take control of your own service catalog
  • The cloud broker is at the center of the SaaS provider, cloud VAR, and cloud integrator
  • Cloud brokers can accelerate the cloud application lifecycle

 

Dave does an excellent job of explaining the things that IT must do in order to deliver on the hybrid cloud promise. Often, conversations on cloud computing are purely about technology, but I think there’s much more at stake. For example, Dave’s first two points above really resonate with me. You can also read “cloud computing” as ITIL-style sourcing. Cloud computing puts service management back in focus. “Cloud is about IT services first, infrastructure second,” and “You won’t own the infrastructure […]” also suggest that cloud computing may influence a shift in the makeup of corporate IT departments – fewer core technologists and more “T-shaped” individuals. So-called T-shaped individuals have knowledge of and experience with a broad set of technologies (the top of the “T”), but have depth in one or more areas like programming, Linux, or storage area networking. My prediction is that there will still be a need for core technologists, but that some of them may move into roles defining customer-facing IT services. For this reason, our CMaaS product also includes optional services to deal with this type of workforce transformation. This is an example of a non-technical consideration that must be addressed when adopting cloud computing. Do you agree? Do you have other non-technical considerations for cloud computing?

Chris Ward, CTO of LogicsOne, then dives in to the functionality of the Cloud Management as a Service, or CMaaS offering. The GreenPages CMaaS product implements some key features that can be used to help customers advance to the lofty points that Dave suggests in his presentation. CMaaS includes a cloud brokerage component and a multi-cloud monitoring and management component. Chris details some main features from the brokerage tool, which are designed to address the key points that Dave brought up:

  • Collaborative Design
  • Customizable Service Catalog
  • Consistent Access for Monitoring and Management
  • Consolidated Billing Amongst Providers
  • Reporting and Decision Support

Chris then gives an example from the State of Texas and the benefits that they realized from using cloud through a broker. Essentially, with the growing popularity of e-voting and the use of the internet as an information resource on candidates and issues, the state knew the demand for IT resources would skyrocket on election day. Instead of throwing away money to buy extra infrastructure to satisfy a temporary surge in demand, Texas utilized cloud brokerage to seamlessly provision IT resources in real time from multiple public cloud sources to meet the variability in demand.

All in all, the 60-minute webinar is time well spent and gives clients some guidance to think about cloud computing in the context of a service broker.

To view this webinar in its entirety, click here, or download this free whitepaper to learn more about hybrid cloud management

 

How RIM Can Improve Efficiency and Add Value To Your IT Ops

This is a guest post from Chris Joseph, VP, Product Management & Marketing, NetEnrich

 

Cloud, virtualization and hybrid IT technologies are being used in small and large IT enterprises everywhere both to modernize and to achieve business goals and objectives. As such, a top concern for today’s IT leaders is whether the investments being made in these technologies are delivering on the promise of IT modernization. Another concern is finding ways to free up IT funds currently spent on routine maintenance of IT infrastructure so that they can be invested in new and strategic IT modernization projects.

Don’t Waste Time, Money and Talent on Blinking Lights

Everyone knows that IT organizations simply can’t afford to have a team of people dedicated to watching for blinking lights and waiting for something to fix.  It’s a waste of talent and will quickly burn through even the most generous of IT budgets. Yet, according to a Gartner study, 80% of an enterprise IT budget is generally spent on routine IT, while only 20% is spent on new and strategic projects.

If this scenario sounds familiar, then you may want to consider taking a long and hard look at third-party Remote Infrastructure Management (RIM) services for your IT infrastructure management. In fact, RIM services have been shown to reduce spending on routine IT operations by 30-40%, but how is this possible?

(1)     First of all, RIM services rationalize, consolidate and integrate the tools used to monitor and manage IT infrastructure within an enterprise. According to Enterprise Management Associates, a leading IT and data management research and consulting firm, a typical enterprise has nearly 11 such tools running in its environment, typically including IT Operations Management (ITOM) tools and IT Service Management (ITSM) tools. As any IT professional can attest, while there is significant overlap among these tools, some tend to be deficient in their capabilities, and they can be a significant source of noise and distraction, especially when it comes to false alerts and tickets. Yet, through RIM, IT organizations can eliminate many of these tools and consolidate their IT operations into a single pane of glass view, which can result in significant cost savings.

(2)     Secondly, by leveraging RIM, IT teams can be restructured and organized into shared services delivery groups, which can result in better utilization of skilled resources, while supporting the transformation of IT into a new model that acts as a service provider to business units.  Combine these elements of RIM with remote service delivery, and not only will you improve economies of scale and scope, but you will also promote cost savings.

(3)     Thirdly, RIM services consistently look to automation, analytics, and best practices to promote cost savings in the enterprise. Manual processes and runbooks are not only costly, but also time consuming and error prone. Yet, to automate processes effectively, IT organizations must rely on methodologies, scripts, and tools. This is where RIM comes into play. In fact, within any enterprise, 60-80% of manual processes and runbooks can easily be automated with RIM.
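To make the automation point concrete, here is a minimal, hypothetical sketch of turning a manual runbook into code: an ordered list of steps executed with simple retry logic, so transient failures are absorbed and only persistent ones are escalated to a human. The step names and failure behavior are illustrative and not tied to any specific RIM toolset.

```python
# Hypothetical sketch: a manual runbook expressed as ordered, retryable steps.
# Step names and checks are illustrative, not from any specific RIM product.

def run_runbook(steps, max_retries=2):
    """Execute each (name, action) step; retry transient failures."""
    results = []
    for name, action in steps:
        for attempt in range(max_retries + 1):
            try:
                action()
                results.append((name, "ok"))
                break
            except RuntimeError:
                if attempt == max_retries:
                    results.append((name, "failed"))  # escalate to a human
    return results

# Example: a disk-cleanup runbook where one step fails once, transiently.
attempts = {"count": 0}

def flaky_step():
    attempts["count"] += 1
    if attempts["count"] < 2:
        raise RuntimeError("transient error")

steps = [
    ("check disk usage", lambda: None),
    ("rotate logs", flaky_step),
    ("verify free space", lambda: None),
]
print(run_runbook(steps))
# → [('check disk usage', 'ok'), ('rotate logs', 'ok'), ('verify free space', 'ok')]
```

Encoding the runbook this way is what makes the 60-80% automation figure plausible: each step becomes repeatable, auditable, and free of copy-paste errors.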

Download this free whitepaper to learn how to avoid focusing on “keeping the lights on,” so that your team can focus on strategic initiatives.

Beyond Cost Savings and Greater Efficiency: Building a Case for RIM

In addition to reducing routine spending and improving the efficiency of your IT operations, there are several other benefits to leveraging third-party RIM services:

  • 24×7 IT operations support.  Third-party RIM services often provide 24×7 IT ops support.  IT organizations benefit from around the clock monitoring and management of their IT infrastructures without additional headcount or strain on internal resources, which saves operating costs.
  • Be the first to know. 24×7 IT operations support means that you are always the first to know when customer-facing IT systems such as the company’s website, online shopping portal, mobile apps and cloud-based solutions go down. And, the issue is resolved in many cases by RIM services teams before the end-user has time to notice.
  • Skills and expertise. Third-party RIM services can provide your IT organization with certified engineers in various IT infrastructure domains. These engineers are responsible for monitoring, alerting, triaging, ticketing, incident management, and the escalation of critical outages or errors to you and your IT staff, if they cannot be immediately resolved. In addition, they may also be available on an on-demand basis if you are looking for skills and expertise in a specific domain.

The bottom line: by leveraging RIM services, IT organizations like yours can not only enhance their service capabilities and bolster service levels, but also say goodbye to the fire drills and late-night calls that plague IT staff.  Proactive management of your IT infrastructure through RIM ensures that it is always running at peak performance.

To hear more from Chris, visit the NetEnrich blog

To learn more about how GreenPages can help you monitor and manage your IT Operations fill out this form

Part 2: Want to Go Cloud? What’s the Use Case?

By Lawrence Kohan, Senior Consultant, LogicsOne

 

Recap:

In Part 1 of this blog post, I started by reiterating the importance of having a strategy for leveraging the Cloud before attempting to migrate services to it in order to achieve the best results.  Using an example use case, I weighed the basic pros and cons of moving a company’s e-mail services to the Cloud.  Then, delving further into the additional factors to consider, based on the size and breadth of the company, I showed that in that particular scenario an e-mail migration to the Cloud would provide more benefit to small businesses and startups than to medium and large enterprises, for whom such a migration may actually be more detrimental than helpful.

Use the Cloud to level the playing field!

Historically, a small business is typically at a disadvantage relative to its larger counterparts, as it generally has less capital to work with.  However, the Cloud Era may prove to be the great equalizer.  The nimbleness and portability of a small business may prove to be quite an advantage when it comes to reducing operating costs and gaining a competitive edge.  A small business with a small systems footprint may be able to consider strategies for moving most, if not all, of its systems to the Cloud.  A successful migration would greatly reduce company overhead and administrative burden, and could even free up office space and real estate by repurposing decommissioned server rooms.  Thus, a small business is able to leverage the Cloud to gain a competitive advantage in a way that is (most likely) not an option for a medium or large enterprise.

So, what is a good Cloud use case for a medium to large business?

The Cloud can’t be all things to all people.  However, the Cloud can be many things to many people.  While the enterprise may not have the same options as the small business, they still have many options available to them to reduce their costs or expand their resources to accommodate their needs in a cost-effective way.

Enterprise Use Case 1: Using IaaS for public website hosting

A good low-risk Cloud option that an enterprise can readily consider: moving non-critical, non-confidential informational data to the Cloud.  A good candidate for initial Cloud migration would be a corporate website with marketing materials or information about product or service offerings.  It is important that a company’s website containing product photos, advertising information, hours of operation and location and contact information is available 24/7 for customer and potential customer access.  In this case, the enterprise can leverage a Cloud Service Provider’s Infrastructure as a Service (IaaS) in order to host their website.  For a monthly service fee, the Cloud Service Provider will host the enterprise’s website on redundant, highly available infrastructure and proactively monitor the site to ensure maximum uptime.  (The enterprise should consider the Cloud Service Provider’s SLA when determining their uptime needs).

With this strategy, the enterprise is able to ensure maximum uptime for its important revenue-generating web materials, while offloading the costs associated with hosting and maintenance of the website.  At the same time, the data being presented online is not confidential in nature, so there is little risk in having it hosted externally.  This is an ideal use case for a Public Cloud.

In addition to the above, a Hybrid Cloud approach can also be adopted: the public-facing website could conduct e-commerce transactions by redirecting purchase requests to privately hosted e-commerce applications and customer databases that are secure and PCI compliant.  Thus, we have an effective, hybrid use of Cloud resources to leverage high availability, while still keeping confidential customer and credit card data secure and internally hosted. We’ll actually be hosting a webinar tomorrow with guest speakers from Forrester Research and Gravitant that will talk about hybrid cloud management. If you’re interested in learning more about how to properly manage your IT environment, I’d highly recommend sitting in.

Enterprise Use Case 2: Using Cloud Bursting to accommodate increased resource demands as needed

Another good Public Cloud use case: let’s say a company, operating at maximum capacity, has periodic or seasonal needs to accommodate spikes in workload.  This could either be increased demands on applications and infrastructure, or needing extra staff to perform basic clerical or administrative functions on a limited basis.  It would be a substantial investment to procure additional office space and computer hardware for limited use, not to mention the additional expenses of maintaining the hardware and office space.  In such a case, an enterprise using a Cloud Service Provider’s IaaS would be able to rapidly provision virtual servers and desktops that can be accessed via space-saving thin clients, or even remotely.  Once the project is completed, those virtual machines can be deleted.  Upon future need, new virtual machines could easily be provisioned in the same way.  And most importantly, the company only pays for what it needs, when it needs it.  This is another great way for an enterprise to leverage the Cloud’s elasticity to accommodate its dynamic needs!
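The bursting decision itself is simple arithmetic: pay-as-you-go capacity is only provisioned for the demand that overflows the fixed on-premise footprint. The sketch below models that logic; the capacity figures are illustrative assumptions, not from any provider.

```python
# Hypothetical cloud-bursting sizing logic. Capacities are illustrative:
# on-prem handles a fixed baseline, and each burst VM absorbs a slice of
# whatever demand overflows it. VMs are released when demand falls back.
import math

ON_PREM_CAPACITY = 100   # requests/sec the fixed infrastructure handles
VM_CAPACITY = 10         # requests/sec each pay-as-you-go burst VM handles

def burst_vms_needed(demand):
    """Number of burst VMs to provision for demand beyond on-prem capacity."""
    overflow = max(0, demand - ON_PREM_CAPACITY)
    return math.ceil(overflow / VM_CAPACITY)

print(burst_vms_needed(80))    # normal load, no bursting → 0
print(burst_vms_needed(145))   # seasonal spike → 5
```

Because the count returns to zero as soon as demand subsides, the enterprise pays only for the spike itself, which is exactly the elasticity argument made above.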

Enterprise Use Case 3: Fenced testing environments for application development

Application teams often need to simulate production conditions for testing, without affecting actual production.  When dealing with traditional hardware infrastructure, setting up a dedicated development infrastructure can be an expensive and time-consuming proposition.  In addition, the Apps team may require many identical setups for multiple teams’ testing, or to simulate many scenarios using the same parameters, such as IP and MAC addresses.  With traditional hardware setups, this is an extremely difficult task to achieve in a productive, isolated manner.  However, with Cloud services, such as VMware’s vCloud Suite, isolated fenced applications can be provisioned and mass-produced quickly for an Apps team’s use without affecting production, and then rapidly decommissioned as well.  In this particular example use case of the vCloud Suite, VMware’s Chargeback Manager can also be used to get a handle on the costs associated with development environment setup, which can then provide showback and chargeback reports to a department, organization, or other business entity.  This is yet another good example of an efficient and cost-effective use of the Cloud to solve a complex business need.
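The core idea of fencing, identical clones that reuse the same internal addresses without conflict because each sits behind its own isolation boundary, can be sketched in a few lines. This is a simplified conceptual model, not the vCloud API; the template values and fence IDs are made up for illustration.

```python
# Hypothetical model of "fenced" environment clones: every team receives an
# identical copy of a template, with the SAME internal IPs and MACs, kept
# from colliding by a per-clone fence ID (conceptually like vCloud fencing;
# this is a simplified sketch, not the vCloud Suite API).
import copy

TEMPLATE = {
    "web": {"ip": "192.168.1.10", "mac": "00:50:56:00:00:01"},
    "db":  {"ip": "192.168.1.20", "mac": "00:50:56:00:00:02"},
}

def provision_fenced_clone(fence_id):
    """Return an identical environment copy, tagged with its own fence."""
    return {"fence": fence_id, "vms": copy.deepcopy(TEMPLATE)}

clones = [provision_fenced_clone(f"team-{i}") for i in range(3)]

# Every clone sees identical internal addresses, yet none conflict,
# because each clone lives behind its own fence.
print(all(c["vms"]["web"]["ip"] == "192.168.1.10" for c in clones))
# → True
```

Decommissioning is equally cheap in this model: dropping a clone releases everything behind its fence, which is what makes mass-producing and tearing down test setups practical.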

 

Consider your strategy first!  Then, use the Cloud to your advantage!

So, as we have seen, the Cloud offers various time-saving, flexible, efficient solutions that can accommodate businesses of any size or nature.  However, the successful transition to the Cloud depends, more than anything else, on the initial planning and strategy that goes into its adoption.

Of course, there are many other options and variables to consider in a Cloud adoption strategy, such as choice of providers, consulting services, etc.  However, before even looking into the various Cloud vendors and options, start out by asking the important internal questions, first:

  • What are our business goals?
  • What are our intended use case(s) for the Cloud?
  • What are we looking to achieve from its use?
  • What is the problem that we are trying to solve?  (And is the Cloud the right choice for that particular problem?)
  • What type of Cloud service would address our need? (Public, Private, Hybrid?)
  • What is our timetable for transition to the Cloud?
  • What is our plan?  Is it feasible?
  • What is our contingency plan?  (How do we backup and/or back-out?)

When a company has solid answers to questions such as these, it is ready to begin its own journey to the cloud.

 

Last chance to register for tomorrow’s webinar on leveraging cloud brokerage. Speakers from GreenPages, Forrester Research, and Gravitant.