Category Archives: Public Cloud

Managing Resources in the Cloud: How to Control Shadow IT & Enable Business Agility

 

In this video, GreenPages CTO Chris Ward discusses the importance of gaining visibility into Shadow IT and why IT departments need to offer their users the same agility that public cloud offerings like Amazon's can provide.

 

http://www.youtube.com/watch?v=AELrS51sYFY

 

If you would like to hear more from Chris, download his on-demand webinar, “What’s Missing in Today’s Hybrid Cloud Management – Leveraging Cloud Brokerage”

You can also download this ebook to learn more about the evolution of the corporate IT department and the changes you need to make to avoid being left behind.

 

Have You Met My Friend, Cloud Sprawl?

By John Dixon, Consulting Architect

 

With the acceptance of cloud computing gaining steam, more specific adoption issues are emerging. Beyond the big-show topics of self-service, security, and automation, cloud sprawl is one of the specific problems that organizations face when implementing cloud computing. In this post, I’ll take a deep dive into cloud sprawl: what it means, how it’s caused, and some options for dealing with it now and in the future.

Cloud Sprawl and VM Sprawl

First, what is cloud sprawl? Simply put, cloud sprawl is the proliferation of IT resources – resources that provide little or no value – in the cloud. For the purposes of this discussion, we’ll consider the cloud to be IaaS and the resources to be individual server VMs. VM sprawl is a similar phenomenon that occurs when a virtual environment goes unchecked. In that case, it was common for an administrator, or someone with access to vCenter, to spin up a VM for testing, perform some test or development activity, and then forget about it. The VM stayed running, consuming resources, until someone or something identified it, determined that it was no longer being used, and shut it down. Fortunately, most midsize organizations limited vCenter or console access to perhaps 10 individuals. So, we solved VM sprawl by limiting access to vCenter and perhaps by installing some tools to identify little-used VMs.
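
Here’s a minimal sketch, in Python with the pyVmomi SDK, of what such a little-used-VM finder can look like. The connection details and CPU threshold are illustrative, and a real tool would examine historical performance stats rather than one instantaneous reading:

```python
# A minimal idle-VM finder sketch, assuming the pyVmomi SDK and read-only
# vCenter credentials. It flags powered-on VMs whose current CPU usage is very
# low; a real tool would look at historical stats, not one point-in-time value.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_idle_vms(host, user, password, cpu_mhz_threshold=100):
    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        idle = []
        for vm in view.view:
            stats = vm.summary.quickStats
            if (vm.summary.runtime.powerState == "poweredOn"
                    and stats.overallCpuUsage is not None
                    and stats.overallCpuUsage < cpu_mhz_threshold):
                idle.append((vm.summary.config.name, stats.overallCpuUsage))
        return idle
    finally:
        Disconnect(si)

if __name__ == "__main__":
    for name, mhz in find_idle_vms("vcenter.example.com", "readonly", "secret"):
        print(f"{name}: {mhz} MHz CPU -- review for decommission")
```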

So, what are the top causes of cloud sprawl? In IT operations terms, we have the following:

  • Self-service is a central advantage of cloud computing, and cloud essentially means opening up a request system to many more users
  • Traditional IT service management (a.k.a. ITIL) is somewhat limited in dealing with cloud, specifically in its configuration management and change management processes
  • There remains limited visibility into the costs of IT resources, though cloud improves this since resource consumption ends up as a dollar amount on a bill…somewhere

How is Cloud Sprawl Different?

One of the main ideas behind cloud computing – and a differentiator between cloud and plain old virtualization and centralization – is the notion of self-service. In the language of VMware, self-service IaaS might be interpreted as handing out vCenter admin access to everyone in the company. Well, in a sense, cloud computing is kind of like that – anyone who wants to provision IaaS can go out to AWS and do just that. What’s more, they can request all sorts of things aside from individual VMs: entire platform stacks can be provisioned with a few clicks of the mouse. In short, users can provision a lot more resources, spend a lot more money, and cause a lot more problems in the cloud.

We have seen one of our clients estimate their cloud usage at a certain amount, only to discover that actual usage was over 10 times their original estimate!

In addition, cloud sprawl can go in different directions than plain old VM sprawl. Since there are different cloud providers out there, the proliferation of processes and automation becomes something to watch out for. A process that works for your internal private cloud may need to be tweaked to deal with AWS, and tweaked again to deal with another cloud provider. You may end up with a different process for each provider (including your own datacenter). That means more processes to audit and bring under compliance. The same goes for tools – tools that were good for your internal private cloud may be completely worthless for AWS. I’ve already seen some of my clients filling their toolboxes with point solutions that are specific to one cloud provider. So, the bottom line is that cloud sprawl has the potential to drag on resources in the following ways:

  1. Orphaned VMs – a lot like traditional VM sprawl, resulting in increased spend that is completely avoidable
  2. Proliferation of processes – increased overhead for IT operations to stay compliant with various regulations
  3. Proliferation of tools – financial and maintenance overhead for IT operations

 

Download John’s ebook “The Evolution of Your Corporate IT Department” to learn more

 

How Can You Deal with Cloud Sprawl?

One way to deal with cloud sprawl is to apply the same treatment that worked for VM sprawl: limit access to the console, and install some tools to identify little-used VMs. At GreenPages, we don’t think that’s a very realistic option in this day and age. So, we’ve conceptualized two new approaches:

  1. Adopt request management and funnel all IaaS requests through a central portal. This means using the accepted request-approve-fulfill paradigm familiar from IT service management.
  2. Sync and discover. Give users the freedom to obtain resources from the supplier of their choosing, whenever and wherever they want. IT operations then discovers what has been done and runs its usual governance processes (e.g., chargeback, showback) on the transactions. A minimal sketch of the discovery step follows.
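
To make the discovery idea concrete, here’s a minimal Python sketch (using boto3) that enumerates running EC2 instances and flags any that IT operations doesn’t already have on record. It’s a sketch of the concept, not of how any particular platform implements it; the inventory set and the Owner tag are assumptions:

```python
# A minimal "sync and discover" sketch using boto3: list running EC2 instances
# and flag any missing from IT's own inventory. AWS credentials are assumed to
# be configured; the inventory set and the Owner tag are illustrative.
import boto3

def discover_untracked_instances(region, known_instance_ids):
    ec2 = boto3.client("ec2", region_name=region)
    untracked = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["InstanceId"] not in known_instance_ids:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    untracked.append((instance["InstanceId"],
                                      tags.get("Owner", "unknown")))
    return untracked

# Feed the results into the usual governance processes (showback, approvals, CMDB sync).
for instance_id, owner in discover_untracked_instances("us-east-1",
                                                       {"i-0123456789abcdef0"}):
    print(f"Untracked instance {instance_id} (owner: {owner})")
```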

Both options have been built into our Cloud Management as a Service (CMaaS) platform. I see the options less as an “either/or” decision and more as a progression of maturity within an organization: begin with Option 2 – Sync and Discover, and move toward Option 1 – Request Management.

As I’ve written before, and I’ll highlight here again, IT service management practices become even more important in cloud. Defining services and using proper configuration management, change management, and financial management are crucial to operating cloud computing in a modern IT environment. The important thing to do now is to automate configuration and change management so they don’t impede the speed and agility that come with cloud computing. Just how do you automate configuration and change management? I’ll explore that in an upcoming post.

See both options in action in our upcoming webinar on cloud brokerage and governance. Our CTO Chris Ward will cover:

  • Govern cloud without locking it down: see how AWS transactions can be automatically discovered by IT operations
  • Influence user behavior: see how showback reports can influence user behavior and conserve resources, regardless of cloud provider
  • Gain visibility into costs: see how IaaS costs can be estimated before provisioning an entire bill of materials

 

Register for our upcoming webinar, being held on May 22nd @ 11:00 am EST: “The Rise of Unauthorized AWS Use. How to Address Risks Created by Shadow IT.”

 

Google & Amazon Cut Prices & Microsoft is Next. Why Not Take Advantage of Them All?

By Ben Stephenson, Journey to the Cloud

 

There’s been a lot of talk this week about price cuts coming from cloud providers. First, Google announced several price reductions for most of its cloud services. In response, Amazon announced a round of price cuts as well – the 42nd time AWS has reduced prices since 2006. Microsoft Azure will most likely get in on the action too: last April, Microsoft pledged to match any price drops from AWS, and in early 2014 it did just that when it lowered prices to match a reduction made by Amazon. TechCrunch has nice write-ups on the specifics of the Google and Amazon price reductions.

Obviously price cuts are beneficial to organizations using these platforms, but wouldn’t it make sense to take advantage of price cuts from multiple providers at the same time to maximize cost savings and performance? What if you moved different applications to different clouds – or even different parts of an application to different clouds?

Let’s say some of your database applications require high-end performance, and you’re willing to pay more for that performance. But if you use a more expensive provider exclusively, you may be overspending in areas that don’t require such high performance. So, instead of running all your apps on the same provider, you could move, say, commodity web-based applications that don’t require as much performance to the cheapest provider. You also have to keep in mind that the best option could be to keep an application on premise. This is only one example. John Dixon wrote a great ebook about the evolution of the corporate IT department that takes a more in-depth look at the “which app, which cloud” philosophy, and I highly recommend downloading it.
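
To illustrate the “which app, which cloud” decision, here’s a toy Python sketch that places each workload with the cheapest venue offering the performance tier it needs. Every provider, tier, and rate in it is invented for illustration:

```python
# A toy "which app, which cloud" placement: pick the cheapest venue that offers
# the performance tier each workload needs. Providers, tiers, and hourly rates
# are all invented for illustration.
HOURLY_RATES = {
    "provider_a": {"commodity": 0.05, "high_perf": 0.40},
    "provider_b": {"commodity": 0.08, "high_perf": 0.30},
    "on_premise": {"commodity": 0.07, "high_perf": 0.25},
}

def place_workload(required_tier):
    candidates = {venue: rates[required_tier]
                  for venue, rates in HOURLY_RATES.items()
                  if required_tier in rates}
    return min(candidates, key=candidates.get)

apps = {"web_frontend": "commodity", "oltp_database": "high_perf"}
for app, tier in apps.items():
    print(f"{app} ({tier}) -> {place_workload(tier)}")
# With these made-up rates, the commodity web app lands on the cheapest public
# provider while the high-performance database stays on premise.
```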

So why don’t more companies split applications across multiple cloud providers? Because it’s complex and painful to manage. Furthermore, price cuts can happen on the spur of the moment, so you need to be able to take advantage in real time to maximize savings.

This is where you need a management platform like GreenPages’ Cloud Management as a Service (CMaaS) Brokerage and Governance offering. CMaaS gives you the ability to match the right applications to the right cloud providers and compare the true cost of running your resources at a CSP before even placing an order. The platform eliminates cloud sourcing complexity with a central portal where business and IT users can quickly and easily aggregate, procure, and pay for cloud solutions. It answers the “which app, which cloud?” question across both internal private and public cloud environments.

Has your organization looked into spreading different applications across different clouds? What are your thoughts?

 

Download whitepaper: Cloud Management, Now

 

Are We All Cloud Service Brokers Now?

By John Dixon, Consulting Architect

 

Robin Meehan of Smart421 recently wrote a couple of great posts on cloud service brokers (CSBs) and the role that they play for consumers of cloud services. (http://smart421.wordpress.com/2014/02/24/were-mostly-all-cloud-services-brokers-now/ and http://smart421.wordpress.com/2014/02/25/cloud-brokerage-and-dynamic-it-workload-migration/). I’m going to write two blogs about the topic. The first will be a background on my views and interpretations around cloud service brokers. In the second post, I will break down some of Robin’s points and explain why I agree or disagree.

Essentially, a cloud broker offers consumers three key things that a single cloud provider does not (these are from the NIST definition of a Cloud Service Broker):

  • Intermediation
  • Aggregation
  • Arbitrage (run-time, deployment-time, plan-time)

My interpretation of these is as follows. We’ll use Amazon Web Services as the example IaaS cloud provider and GreenPages as the example of the cloud broker:

Intermediation. As a cloud broker, GreenPages sits between you, the consumer, and AWS. GreenPages and other CSBs do this so they can add value to the core AWS offering. Why? Billing and chargeback are a great example. A bill from AWS includes line-item charges for EC2, S3, and whichever other services you used during the past month – so you would be able to see that EC2 charges for January were $12,502.90 in total. GreenPages takes this bill and processes it so that you can get more granular information about your January spend. We would be able to show you (a brief sketch of this kind of breakdown follows the list):

  • Spend per application
  • Spend per environment (development, test, production)
  • Spend per tier (web, application, database)
  • Spend per resource (CPU, memory, storage, managed services)
  • A comparison of January 2014 spend to December 2013, or even January 2013
  • An estimate of the spend for February 2014
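
To make that concrete, here’s a minimal Python sketch of the kind of tag-based aggregation that turns a raw bill into those views. It assumes cost-allocation tags such as user:Application and user:Environment are activated on the account; the file and column names vary by report format, so treat them as assumptions to verify against your own bill:

```python
# A minimal tag-based chargeback sketch: aggregate an AWS detailed billing
# report by cost-allocation tags. Tag columns (user:Application,
# user:Environment) and the Cost column are assumptions to check against
# your own report format.
import pandas as pd

bill = pd.read_csv("aws-detailed-billing-2014-01.csv")

# Spend per application
print(bill.groupby("user:Application")["Cost"].sum())

# Spend per environment (development, test, production) for one application
app_x = bill[bill["user:Application"] == "AppX"]
print(app_x.groupby("user:Environment")["Cost"].sum())
```

Month-over-month comparisons and next-month estimates then fall out of running the same aggregation over consecutive reports.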

So, going directly to AWS, you’d be able to answer a question like, “how much did I spend in total for compute in January?”

And, going through GreenPages as a cloud broker, you’d be able to answer a question like, “how much did the development environment for Application X cost in January, and how does that compare with the spend in December?”

I think you’d agree that it is easier to wrap governance around the spend information from a cloud service broker rather than directly from AWS. This is just one of the advantages of using a CSB in front of a cloud provider – even if you’re like many customers out there and choose to use only one provider.

Aggregation. As a CSB, GreenPages aggregates the offerings from many providers and provides a simple interface to provision resources to any of them. Whether you choose AWS, Terremark, Savvis, or even your internal vSphere environment, you’d use the same procedure to provision resources. On the provider side, CSBs also aggregate demand from consumers and are able to negotiate rates. Why is this important? A CSB can add value in three ways here:

1) By allowing you to compare the offerings of different providers – in terms of pricing, SLA guarantees, service credits, supported configurations, etc.

2) By placing a consistent approval framework in front of requests to any provider.

3) By using aggregated demand to negotiate special pricing and terms with providers – terms that may not be available to an individual consumer of cloud services.

The approval framework is, of course, optional – if you wish, you could allow any user to provision infrastructure to any provider. Either way, a CSB can establish a request management framework in front of “the cloud” and, in turn, provide things like an audit trail of requests and approvals. Perhaps you want to raise an ITIL-style change whenever a cloud request is fulfilled? A CSB can integrate with existing systems like Remedy or ServiceNow for that.
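
As a rough illustration, here’s a minimal Python sketch of raising a change through ServiceNow’s Table API when a cloud request is fulfilled. The instance name, credentials, and field values are illustrative; a production broker would use its own integration and auth:

```python
# A minimal sketch of raising an ITIL-style change via ServiceNow's Table API
# when a cloud request is fulfilled. Instance name, credentials, and field
# values are illustrative.
import requests

def raise_change(instance, user, password, short_description):
    url = f"https://{instance}.service-now.com/api/now/table/change_request"
    response = requests.post(
        url,
        auth=(user, password),
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        json={"short_description": short_description},
    )
    response.raise_for_status()
    return response.json()["result"]["sys_id"]

change_id = raise_change("dev12345", "api_user", "secret",
                         "Provisioned m3.large in AWS us-east-1 for AppX dev")
print(f"Change record created: {change_id}")
```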

Arbitrage. Robin Meehan has a follow-on post that alludes to cloud arbitrage and workload migration. Cloud arbitrage is somewhat science fiction at this time, but let’s look forward to the not-too-distant future.

First, what are arbitrage and cloud arbitrage? NIST describes an environment where the CSB has the flexibility to choose, on the customer’s behalf, where best to run the customer’s workload. In theory, the CSB would always be on the lookout for a beneficial arrangement, automatically migrate the workload, and likely capture the financial benefit of doing so. This is a little bit like currency arbitrage, where a financial institution looks for discrepancies in the market for various currencies and makes transactions to come up with a beneficial situation. If you’ve ever seen the late-night infomercials for forex.com, don’t believe the easy-money hype. You need vast sums of money and perfect market information (i.e., you’re pretty much a bank) to play in that game.

So, cloud arbitrage and “just plain currency arbitrage” are really only similar when it comes to identifying a good opportunity. This is where we break cloud arbitrage down into three areas:

  • Run-time arbitrage
  • Deployment-time arbitrage
  • Plan-time arbitrage

In my next post, I will break down cloud arbitrage as well as go over some specific points Robin makes in his posts and offer my opinions on them.

 

To learn more about transforming your IT department into a broker of IT services, download this ebook.

 

The Big Shift: From Cloud Skeptics & Magic Pills to ITaaS Nirvana

By Ron Dupler, CEO GreenPages Technology Solutions

Over the last 4-6 quarters, we have seen a significant market evolution, with our customers and the overall market moving from theorizing about cloud computing to defining strategies and plans to reap the benefits of cloud solutions and implement hybrid cloud models. In a short period of time we’ve seen IT thought leaders move from debating the reality and importance of cloud computing to trying to understand how to most effectively grasp its benefits to improve organizational efficiency, velocity, and line-of-business empowerment. Today, we see the leading edge of the market aggressively rationalizing their application architectures and driving to hybrid cloud computing models.

Internally, we call this phenomenon The Big Shift. Let’s discuss what we know about The Big Shift. First, for all of the cloud skeptics reading this: it is an undeniable fact that corporate application workloads are moving from customer-owned architectures to public cloud computing platforms. RW Baird released an interesting report in Q4 2013 that included the following observations:

  • Corporate workloads are moving to the public cloud.
  • Much of the IT industry has been asleep at the wheel as Big Shift momentum has accelerated, in part because public cloud spending still represents a small portion of overall IT spend.
  • Traditional IT spending is growing in the low single digits; 2-3% per year is a good approximation.
  • Cloud spending is growing at 40% plus per year.
  • What we call The Big Shift is accelerating and is going to have a tremendous impact on the traditional IT industry in the coming years. For every $1.00 increase in public cloud spending, there is a corresponding $3.00-$4.00 decrease in customer-owned IT spend.

There are some other things we know about The Big Shift:

The Big Shift is disrupting old industry paradigms and governance models. We see market evidence of this in traditional IT industry powerhouses like HP and Dell struggling to adapt and reinvent themselves to maintain relevance and dominance in the new ITaaS era. We even saw perennial powerhouse Cisco lower its five-year growth forecast during last calendar Q4 due to the forces at play in the market. In short, the Big Shift is driving disruption throughout the entire IT supply chain. Companies tied to the traditional, customer-owned IT world are finding themselves under financial pressure and struggling to adapt. Born-in-the-cloud companies like Amazon are seeing tremendous and accelerating growth as the market embraces ITaaS.

In corporate America, the Big Shift is causing inertia as corporate IT leaders and their staffs reassess their IT strategies and strive to determine how best to execute their IT initiatives in the context of the tremendous market change going on around them. We see many clients who understand the need to drive to an ITaaS model and embrace hybrid cloud architectures but do not know how best to attack that challenge and prepare to manage in a hybrid cloud world. This lack of clarity is causing delays in decision making and stalling important IT initiatives.

Let’s discuss cloud for a bit. Cloud computing is a big topic that elicits emotional reactions. Cloud-speak is pervasive in our industry. By this point, the vast majority of your IT partners and vendors are couching their solutions as cloud, or as-a-service, solutions. Some folks in the industry are bold enough to tell you that they have the magic cloud pill that will lead you to ITaaS nirvana. Because of this, many IT professionals I speak with are sick of talking about cloud and shy away from the topic. My belief is that this avoidance is counterproductive, driven by cloud pervasiveness, a lack of precision and clarity when discussing cloud, and the change pressure the cloud revolution is imposing on all professional technologists. The age-old mandate to embrace change or die has never been more relevant. Therefore, we feel it is imperative to tackle the cloud discussion head on.

Download our free whitepaper, “Cloud Management, Now”

Let me take a stab at clarifying the cloud discussion. Figure 1 below represents the Big Shift. As noted above, it is undeniable that workloads are shifting from private, customer-owned IT architectures to public, customer-rented platforms, i.e., the public cloud. We see three vectors of change in the industry that are defining the cloud revolution.

Figure 1: Cloud Change Vectors

The first vector is the modernization of legacy, customer-owned architectures. The dominant theme here over the past 5-7 years has been the virtualization of the compute layer, and the dominant player during this wave of transformation has been VMware. The first wave of virtualization has slowed in the past 4-6 quarters as the compute virtualization market has matured and the vast majority of x86 workloads have been virtualized. A second wave is just forming that will be every bit as powerful and important as the first: new, advanced forms of virtualization and the continued abstraction of more complex components of traditional IT infrastructure – networking, storage, and ultimately entire datacenters – as we move to a world of the software-defined datacenter (SDDC) in the coming years.

The second vector of change in the cloud era involves deploying automation, orchestration, and service catalogues to enable private cloud computing environments for internal users and lines of business. Private cloud environments are the industry and corporate IT’s reaction to the public cloud providers’ ability to provide faster, cheaper, better service levels to corporate end users and lines of business. In short, the private cloud change vector is driven by the fact that internal IT now has competition. Their end users and lines of business, development teams in particular, have new service level expectations based on their consumer experiences and their ability to get fast, cheap, commodity compute from the likes of Amazon. To compete, corporate IT staffs must enable self-service functionality for their lines of business and development teams by deploying advanced management tools that provide automation, orchestration, and service catalogue functionality.

The third vector of change in the cloud era involves tying the inevitable blend of private, customer-owned architectures together with the public cloud platforms in use today at most companies. The result is a true hybrid cloud architectural model that can be managed, preserving the still-valid command-and-control mandates of traditional corporate IT while balancing those mandates with the end-user empowerment and velocity expected in today’s cloud world.

In the context of these three change vectors we see several approaches within our customer base. We see some customers taking a “boil the ocean” approach, striving to rationalize their entire application portfolios to determine best execution venues and define a path to a true hybrid cloud architecture. We see other customers taking a much more cautious approach, leveraging cloud-based point solutions like desktop and disaster recovery as-a-service to solve old business problems in new ways. Both approaches are valid and depend on use cases, budgets, and philosophical approach (aggressive and leading-edge versus conservative, follow-the-market thinking).

GreenPages’ business strategy in the context of the ITaaS and cloud revolution is simple. We have built an organization that has the people, processes, and technologies to provide expert strategic guidance and proven cloud-era solutions for our clients through a historical inflection point in the way that information technology is delivered to corporate end users and lines of business. Our Cloud Management as a Service (CMaaS) offering provides a technology platform that helps customers integrate the disparate management tools deployed in their environments and federate alerts through an enterprise command center approach that gives a singular view into physical, virtual, and public cloud workloads. CMaaS also provides cloud service brokerage and governance capabilities, allowing our customers to view price-performance analytics across private and public cloud environments, design service models and view the related bills of materials, and view and consolidate billings across multiple public cloud providers. What are your thoughts on the Big Shift? How is your organization addressing the changes in the IT landscape?

Grading the Internet’s 2014 Tech Predictions

 

The time is here for bloggers across the internet to make their tech predictions for 2014 and beyond (we have made some ourselves around storage and cloud). In this post, a couple of our authors have weighed in to grade predictions made by others across the web.

Prioritizing Management Tool Consolidation vs. New Acquisitions

Enterprise customers will want to invest in new tools only when necessary. They should look for solutions that can address several of their needs so that they do not have to acquire multiple tools and integrate them. The ability to cover multiple areas of management (performance, configuration and availability) to support multiple technologies (e.g., application tiers) and to operate across multiple platforms (Unix, Windows, virtual) will be important criteria for enterprises to assess what management tools will work for them.  (eweek)

Agree – I have been saying this for a while. If you want a new tool, get rid of five, consolidate, and use what you have now – or get one that really works. (Randy Becker)

 

Bigger big data spending

IDC predicts spending of more than $14 billion on big data technologies and services or 30% growth year-over-year, “as demand for big data analytics skills continues to outstrip supply.” The cloud will play a bigger role with IDC predicting a race to develop cloud-based platforms capable of streaming data in real time. There will be increased use by enterprises of externally-sourced data and applications and “data brokers will proliferate.” IDC predicts explosive growth in big data analytics services, with the number of providers to triple in three years. 2014 spending on these services will exceed $4.5 billion, growing by 21%. (Forbes)

Absolutely agree with this.  Companies of all sizes are constantly looking to garner more intelligence from the data they have.  Even here at GreenPages we have our own big data issues and will continue to invest in these solutions to solve our own internal business needs. (Chris Ward)

 

Enterprises Will Shift From Silo to Collaborative Management

 In 2014, IT organizations will continue to feel increased pressure from their lines of business. Collaborative management will be a key theme, and organizations will be looking to provide a greater degree of performance visibility across their individual silo tiers to the help desk, so it is easier and faster to troubleshoot problems and identify the tier that is responsible for a problem. (eweek)

Agree – cross domain technology experts are key!  (Randy Becker)

 

New IT Will Create New Opportunities

Mobility, bring-your-own device (BYOD) and virtual desktops will all continue to gain a foothold in the enterprise. The success of these new technologies will be closely tied to the performance that users can experience when using these technologies. Performance management will grow in importance in these areas, providing scope for innovation and new solutions in the areas of mobility management, VDI management and so on. (eweek)

Disagree – This is backwards. The business is driving change and accountability.  It is not IT that creates new opportunities – it is the business demanding apps that work and perform for the people using them. (Randy Becker)

 

Here comes the Internet of Things

By 2020, the Internet of Things will generate 30 billion autonomously connected end points and $8.9 trillion in revenues. IDC predicts that in 2014 we will see new partnerships among IT vendors, service providers, and semiconductor vendors that will address this market. Again, China will be a key player:  The average Chinese home in 2030 will have 40–50 intelligent devices/sensors, generating 200TB of data annually. (Forbes)

Totally agree with this one.  Everything and everybody is eventually going to be connected.  I wish I were building a new home right now because there are so many cool things you can do by having numerous household items connected.  I also love it because I know that in 10 years when my daughter turns 16 that I’ll no doubt know in real-time where she is and what she is doing.  However, I doubt she’ll appreciate the ‘coolness’ of that.  Although very cool, this concept does introduce some very real challenges around management of all of these devices.  Think about 30 billion devices connected to the net….  We might actually have to start learning about IPv6 soon… (Chris Ward)

 

Cloud service providers will increasingly drive the IT market

As cloud-dedicated datacenters grow in number and importance, the market for server, storage, and networking components “will increasingly be driven by cloud service providers, who have traditionally favored highly componentized and commoditized designs.” The incumbent IT hardware vendors will be forced to adopt a “cloud-first” strategy, IDC predicts. 25–30% of server shipments will go to datacenters managed by service providers, growing to 43% by 2017. (Forbes)

Not sure I agree with this one for 2014, but I do agree with it in the longer term. As more and more applications/systems get migrated to public cloud providers, less and less hardware/software is purchased directly by end-user customers, and thus more consolidation happens at the cloud providers. This could be a catch-22 for a lot of the traditional IT vendors like HP and Dell. When’s the last time you walked into an Amazon or Google datacenter and saw racks and racks of HP or Dell gear? Probably not too recently, as these providers tend to ‘roll their own’ from a hardware perspective. One thing is for sure… this will get very interesting over the next 24 to 36 months… (Chris Ward)

 

End-User Experience Will Determine Success

Businesses will expect IT to find problems before their users do, pinpoint the root cause of the problem and solve the problem as early as possible. IT organizations will seek solutions that will allow them to provide great user experience and productivity. (eweek)

Agree – 100% on this one. You need a good POC and pilot that are well managed, with clear goals and objectives. (Randy Becker)

 

Amazon (and possibly Google) to take on traditional IT suppliers

Amazon Web Services’ “avalanche of platform-as-a-service offerings for developers and higher value services for businesses” will force traditional IT suppliers to “urgently reconfigure themselves.” Google, IDC predicts, will join in the fight, as it realizes “it is at risk of being boxed out of a market where it should be vying for leadership.” (Forbes)

I agree with this one to an extent. Amazon has certainly captured a good share of the market in two categories – developers and large scale-out applications – and I see them continuing to have dominance in these two spaces. However, anyone who thinks that customers are forklift-moving traditional production business applications from the datacenter to the public cloud/Amazon should really get out in the field and talk to CIOs and IT admins, as this simply isn’t happening. I’ve had numerous conversations with our own customers around this topic, and when you do the math it just doesn’t make sense in most cases – assuming the customer has an existing investment in hardware/software and some form of datacenter to house it. That said, where I have seen an uptake of Amazon and other public cloud providers is with startups or companies being spun out of a larger parent. Bottom line: Amazon and others will absolutely compete with traditional IT suppliers, just not in a ubiquitous manner. (Chris Ward)

 

The digitization of all industries

By 2018, 1/3 of share leaders in virtually all industries will be “Amazoned” by new and incumbent players. “A key to competing in these disrupted and reinvented industries,” IDC says, “will be to create industry-focused innovation platforms (like GE’s Predix) that attract and enable large communities of innovators – dozens to hundreds will emerge in the next several years.” Concomitant with this digitization of everything trend, “the IT buyer profile continues to shift to business executives. In 2014, and through 2017, IT spending by groups outside of IT departments will grow at more than 6% per year.” (Forbes)

I would have to agree with this one as well.  The underlying message here is that IT spending decisions continue to shift away from IT and into the hands of the business.  I have seen this happening more and more over the past couple of years and can’t help but believe it will continue in that direction at a rapid pace. (Chris Ward)

What do you think about these predictions? What about Chris and Randy’s take on them?

Download this free eBook about the evolution of the corporate IT department.

 

A Guide to Successful Cloud Adoption

Last week, I met with a number of our top clients near the GreenPages HQ in Portsmouth, NH at our annual Summit event to talk about successful adoption of cloud technologies. In this post, I’ll give a summary of my cloud adoption advice, and cover some of the feedback that I heard from customers during my discussions. Here we go…

The Market for IT Services

I see compute infrastructure looking more and more like a commodity, and there is intense competition in the market for IT services, particularly Infrastructure-as-a-Service (IaaS):

  1. “Every day, Amazon installs as much computing capacity in AWS as it used to run all of Amazon in 2002, when it was a $3.9 billion company.” – CIO Journal, May 2013
  2. “[Amazon] has dropped the price of renting dedicated virtual server instances on its EC2 compute cloud by up to 80 percent […]  from $10 to $2 per hour” – ZDNet,  July 2013
  3. “…Amazon cut charges for some of its services Friday, the 25th reduction since its launch in 2006.” – CRN, February 2013

I think that the first data point here is absolutely stunning, even considering that it covers a time span of 11 years. Of course, a simple Google search will return a number of other similar quotes. How can Amazon and others continue to drop their prices for IaaS, while improving quality at the same time? From a market behavior point of view, I think that the answer is clear – Amazon Web Services and others specialize in providing IaaS. That’s all they do. That’s their core business. Like any other for-profit business, IaaS providers prefer to make investments in projects that will improve their bottom line. And, like any other for-profit business, those investments enable companies like AWS to effectively compete with other providers (like Verizon/Terremark, for example) in the market.

Register for our upcoming webinar on 8/22 to learn how to deal with the challenges of securely managing corporate data across a broad array of computing platforms. 

With network and other technologies as they are, businesses now have a choice of where to host infrastructure that supports their applications. In other words, the captive corporate IT department may be the preferred provider of infrastructure (for now), but it is now effectively competing with outside IaaS providers. Why, then, would the business not choose the lowest-cost provider? Well, the answer to that question is quite the debate in cloud computing (we’ll put that aside for now). Suffice it to say that we think internal corporate IT departments are now competing with outside providers to deliver IaaS and other services to the business, and that this will become more apparent as technology advances (e.g., as workloads become more portable, network speeds increase, and storage becomes less costly).

Now here’s the punch line and the basis for our guidance on cloud computing: how should internal corporate IT position itself to stay competitive? At our annual Summit event last week, I discussed the progression of the corporate IT department from a provider of technology to a provider of services (see my whitepaper on cloud management for detail). The common thread is that corporate IT evolves by becoming closer and closer to the requirements of the business – and may even be able to anticipate requirements of the business or suggest emerging technology to benefit the business. To take advantage of cloud computing, one thing corporate IT can do is source commodity services to outside providers where it makes sense. Fundamentally, this has been commonplace in other industries for some time – manufacturing being one example. OEM automotive manufacturers like GM and Ford do not produce the windshields and brake calipers that are necessary for a complete automobile – it just isn’t worth it for GM or Ford to produce those things. They source windshields, brake calipers, and other components from companies that specialize. GM, Ford, and others are then left with more resources to invest in designing, assembling, and marketing a product that appeals to end users like you and me.

So, it comes down to this: how do internal corporate IT departments make intelligent sourcing decisions? We suggest that the answer is in thinking about packaging and delivering IT services to the business.

GreenPages Assessment and Design Method

So, how does GreenPages recommend that customers take advantage of cloud computing? Even if you are not considering external cloud at this time, I think it makes sense to prepare your shop for it; cloud may eventually make sense for your shop even if there is no fit today. The guidance here is to take a methodical look at how your department is staffed and operated. ITIL v2 and v3 provide a good guide to what should be examined:

  • Configuration Management
  • Financial Management
  • Incident and Problem Management
  • Change Management
  • Service Level and Availability, and Service Catalog Management
  • Lifecycle Management
  • Capacity Management
  • Business Level Management

 

Assigning a score to each of these areas in terms of repeatability, documentation, measurement, and continuous improvement will paint a picture of how well your department can make informed sourcing decisions. Conducting an assessment and making some housekeeping improvements where needed will serve two purposes:

  1. Plans for remediation could form one cornerstone of your cloud strategy
  2. Doing things according to good practice will add discipline to your IT department – which is valuable regardless of your position on cloud computing at this time

When and if cloud computing services look like a good option for your company, your department will be able to make an informed decision on which services to use at which times. And, if you’re building an internal private cloud, the processes listed above will form the cornerstone of the way you will operate as a service provider.
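
If it helps to make the scoring concrete, here’s a toy Python sketch of such a scorecard. The 0-5 scale, the criteria, and the remediation threshold are all assumptions – substitute whatever maturity model your shop prefers:

```python
# A toy maturity scorecard for the process areas above: rate each area 0-5 on
# four criteria and average them. The scale, criteria, and remediation
# threshold are assumptions.
CRITERIA = ("repeatability", "documentation", "measurement", "continuous_improvement")

scores = {
    "Configuration Management": {"repeatability": 3, "documentation": 2,
                                 "measurement": 1, "continuous_improvement": 1},
    "Change Management":        {"repeatability": 4, "documentation": 3,
                                 "measurement": 2, "continuous_improvement": 2},
    # ...score the remaining areas the same way
}

for area, ratings in sorted(scores.items()):
    avg = sum(ratings[c] for c in CRITERIA) / len(CRITERIA)
    status = "remediate" if avg < 2.5 else "ok"
    print(f"{area:30s} {avg:.2f}  {status}")
```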

Case Study: Service Catalog and Private Cloud

By implementing a Service Catalog, corporate IT departments can take a solid first step toward becoming a service provider and staying close to the requirements of the business. This year at VMworld in San Francisco, I’ll be leading a session to present a case study of a recent client that did exactly this with our help. If you’re going to be out at VMworld, swing by and listen in to my session!

 

Free webinar on 8/22: Horizon Suite – How to Securely Enable BYOD with VMware’s Next Gen EUC Platform.

With a growing number of consumer devices proliferating in the workplace, lines of business turning to cloud-based services, and people demanding more mobility in order to be productive, IT administrators are faced with a new generation of challenges for securely managing corporate data across a broad array of computing platforms.

 

The Private Cloud Strikes Back

Having read JP Rangaswami’s argument against private clouds (and his obvious promotion of his own version of cloud), I have only to say that he’s looking for oranges in an apple tree. His entire premise is based on the idea that enterprises are wholly concerned with cost and sharing risk, when that couldn’t be further from the truth. Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility – something the monolithic, leviathan-like enterprise IT systems of today definitely are not. He then jumps from cost to the social enterprise as if there were a causal relationship there when, in fact, they are two separate discussions. I don’t doubt that if you are a consumer-facing (not just customer-facing) organization, it’s best to get on the social enterprise bandwagon, but if your main concern is how to better equip your organization and provide the environment and tools necessary to innovate, the whole social thing is a red herring for selling you things that you don’t need.

The traditional status quo within IT is deeply encumbered by mostly manual processes – optimized for people carrying out commodity IT tasks such as provisioning servers and OSes – that cannot be optimized any further, so a different, much better way had to be found. That way is the private cloud, which takes those commodity IT tasks, elevates them to automated, orchestrated, well-defined workflows, and then utilizes a policy-driven system to carry them out. Whether these workflows are initiated by a human or by a specific set of monitored criteria, the system dynamically creates and recreates itself based on actual business and performance need – something that is almost impossible to translate into the public cloud scenario.

Not that public cloud cannot be leveraged where appropriate, but the enterprise’s requirements are much more granular and specific than any public cloud can or should allow – precisely because, to JP’s point, providers must share risk among many players, and that risk is generic by definition within the public cloud. Once you start creating one-off specific environments, the commonality is lost and the cost benefit disappears, because now you are simply utilizing a private cloud whose assets are owned by someone else… sound like co-lo?

Finally, I wouldn’t expect someone whose main revenue source is based on the idea that a public cloud is better than a private cloud to say anything different from what JP has said, but I did expect some semblance of clarity as to where his loyalties lie… and it looks like they’re not with the best interests of the enterprise customer.