Category Archives: Cloud computing

Breaking Down the Management Barriers to Adopting Hybrid Cloud Technologies

By Geoff Smith, Sr. Solutions Architect

It is inarguable that change is sweeping the IT industry. Over the last five years, a number of new technologies have emerged that provide huge technological advantages (and create management headaches). We have attempted to leverage these advances to the benefit of our organizations, while at the same time struggling with how to incorporate them into our established IT management methodologies. Do we need to throw out our mature management protocols in order to partake in the advantages these new technologies provide? Or can we modify our core management approaches, leveraging similar advances in management methodology, to build a more extensible platform that enables adoption of advanced computing architectures?

Cloud computing is one such advance.  One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  Technically, cloud services are beyond our direct control because we do not “own” the underlying infrastructure and have limited say in how those services are designed and deployed.  But are they beyond our ability to evaluate and influence?

There are obvious challenges in enabling these technologies within our organizations. Cloud services are provided and managed by the vendors from whom we consume them, not within our four-walled datacenter. Users may adopt cloud services outside of IT's control. And what happens when data and service consumption crosses the void beyond our current management capabilities?

{Download this free whitepaper to learn more about GreenPages Cloud Management as a Service offering; a revolutionary way organizations can manage hybrid cloud environments}

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  We still have to manage and maintain our own datacenters as they exist today.  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage externally consumed resources as part of your overall architecture, you step outside of the physical and virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT services. We need to expand our scope and visibility to include both the cloud services that form part of our delivery and connectivity mechanisms and the endpoints used to consume our data and services. This leads to a fundamental shift in daily operations and management. Going forward, we need to be able to measure our service effectiveness end to end, even when our services travel through systems that are not our own.

So the root question is, how do we accomplish this?  There are four distinct areas of change that we need to consider:

  • Tools – the toolsets we utilize to perform our management processes need to both understand these new technologies, and expand our end-to-end visibility and evaluation capabilities
  • Techniques – we need to modify the way we perform our daily IT functions and apply our organizational policies in order to consider the new computing platforms we will be consuming.  Our ability to validate, influence and directly control IT consumption will vary, however our underlying responsibilities to deliver effective and efficient services to our organizations should not
  • Talent – we are faced with adopting not only new technologies, but also new sets of responsibilities within our IT support organizations.  The entire lifecycle of IT is moving under the responsibility of the support organization.  We can develop the appropriate internal talent or we can extend our teams with external support organizations, but in either case the talent needed will expand in proportion to the capabilities of the platforms we are enabling
  • Transparency – the success of enabling new technologies will be gauged on how well those technologies meet business needs.  Through comprehensive analysis, reporting and auditing, IT will be able to demonstrate the value of both the technology decisions and the management structures

First and foremost, we must modify our concepts of what is critical to monitor and manage.  We need to be able to move our viewpoints from individual silos of technology to a higher level of awareness.  No longer can we isolate what is happening at the network layer from what is transpiring within our storage facilities.  The scope of what we are responsible for is expanding, and the key metrics are changing.  No longer is availability the key success factor.  Usability is how our teams will be judged.

In the past, a successful IT team may have strived for five 9s of availability.  In this new paradigm, availability is now a foundational expectation.  The ability of our delivered services to be used in a manner that enables the business to meet its objectives will become the new measuring stick.  Business units will define what the acceptable usability metrics are, basing them on how they leverage these services to complete their tasks.  IT will in fact be driven to meet these service level agreements.
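
To put "five 9s" in perspective, the downtime budget shrinks by a factor of ten with each added nine. A quick back-of-the-envelope calculation, illustrative only:

```python
# Allowed downtime per year at N nines of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(nines: int) -> float:
    """Downtime budget per year at, e.g., 99.999% ('five nines')."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines -> {downtime_minutes(n):.1f} minutes of downtime/year")
# Five nines leaves roughly 5.3 minutes of unplanned downtime per year;
# three nines already allows almost nine hours.
```

Once availability budgets like these are treated as table stakes, the interesting metrics move up the stack to usability.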

Secondly, we have to enable our support teams to work effectively with these new technologies.  This is a multifaceted issue, consisting of providing the right tools, processes and talent.   Tools will need to expand our ability to view, interface and influence systems and services beyond our traditional reach.  Where possible, the tools should provide an essential level of management across all platforms regardless of where those services are delivered from (internal, SaaS, PaaS, IaaS).  Likewise, our processes for responding to, managing, and remediating events will need to change.  Tighter enforcement of service level commitments and the ability to validate them will be key.  Our staff will need to be authorized to take appropriate actions to resolve issues directly, limiting escalations and handoffs.  And we will need to provide the talent (internally or via partners) necessary to deliver on the entire IT lifecycle, including provisioning, de-provisioning and procurement.
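
As a toy illustration of what "an essential level of management across all platforms" might look like, the sketch below normalizes health data from internal, SaaS and IaaS services into a single usability view. All service names, platforms, and the 250 ms latency SLO are invented for illustration:

```python
# A minimal sketch of judging services by usability, not just availability,
# across heterogeneous delivery platforms. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceStatus:
    name: str
    platform: str      # "internal", "SaaS", "PaaS", or "IaaS"
    available: bool
    latency_ms: float

def usability_report(statuses, latency_slo_ms=250.0):
    """A service is usable only if it is up AND responsive enough."""
    return {
        s.name: s.available and s.latency_ms <= latency_slo_ms
        for s in statuses
    }

snapshot = [
    ServiceStatus("erp", "internal", True, 40.0),
    ServiceStatus("crm", "SaaS", True, 310.0),   # up, but too slow
    ServiceStatus("web", "IaaS", False, 0.0),
]
print(usability_report(snapshot))
# {'erp': True, 'crm': False, 'web': False}
```

The point of the toy model is that a service can be "up" yet still fail its usability target, regardless of whose infrastructure it runs on.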

Last, IT will be required to prove the effectiveness not only of its support teams, but also of its selection of cloud-based service providers. Consuming external services does not release us from our service delivery obligations to our organizations. Our focus will need to shift toward demonstrating that service usability requirements have been met. This will require transparency between our internally delivered systems and our externally consumed services.

This is a transition, not a light-switch event.  And as such, our approach to management change must mirror that pace.  Our priorities and focus will need to shift in concert with our shift from delivered services toward consumed services.

Would you like to learn more about our Cloud Management as a Service offering? Fill out this form and we will get in touch with you shortly!

Colocation: 55+ living for your IT equipment

I recently sat on a planning call with an extremely smart and agreeable client. We had discussed a modest “data center” worth of equipment to host the environment he’s considering putting into production. I asked the simple enough question of “where are you going to deploy this gear?” I have to admit not being very surprised when he responded: “Well, I’ve cleaned out a corner of my office.” Having spent some early days of my IT career working in a server closet, I knew that if the hum of the equipment fans didn’t get to him quickly, the heat output would for sure. This is not an uncommon conversation. Clearly the capital expense of building out a “data center” onsite was not an appealing topic. So, if building isn’t an option, why not rent?

In a similar vein, not too far back I watched several “senior” members of my family move into 55+ communities after years of resisting. Basically, they did a “capacity planner” and realized the big house was no longer needed. They figured somebody else could worry about the landscaping, snow plowing and leaky roofs. The same driving forces should have many IT pros considering a move into a colocation facility.

The opportunities to move into a hosted data center (colo facility) are plentiful today. You simply don’t have as much gear any longer (assuming you’re mostly virtualized). Your desire to “do it all” yourself has waned (let someone else worry about keeping the lights on and network connected). The added bonus of providing redundant network paths, onsite security and almost infinite expansion are driving many “rental” conversations today. Colos are purpose-built facilities which are ideal for core data center gear such as servers, storage (SANs), routers and core switches, to name a few.  Almost all of them have dual power feeds, backup battery systems and generators. HVAC (heating, ventilation, and air-conditioning) units keep appropriate environmental conditions for the operation of this critical equipment.

Many businesses don’t fully realize just how much power is required to operate a data center. The energy bills for the cooling component alone can leave many IT managers, well, frosted. Even so, the need to see the healthy green blinking status lights is like a digital comfort blanket. Speaking with many IT execs, we hear over and over: “This was the best move we could have made.” From our own experience, we’ve seen our internal IT team shift focus to strategic initiatives and end user support.

While it is certainly not a one-size-fits-all endeavor, there is something for most organizations when it comes to colo. Smaller organizations with one rack of equipment have seen tremendous advantages, as have clients approaching “enterprise” size with dozens of cabinets of gear. Redundancy, security, cost control, predictable budgets and 24x7x365 support are all equally attractive reasons to move into a “colo.” Call it a “colominium” if you will. Colo could be the right step toward a more efficient and effective IT existence.

 

Guest Post: Why Midmarket Business Needs Cloud Services in 2013

Guest Post: Grant Davis

This is a guest post and does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

The global market is becoming more competitive by the second, requiring businesses to operate very efficiently with regard to organizational structure. Midmarket businesses in particular face tall tasks in 2013. As an enterprise grows, its information grows along with its operations. A growing company requires higher-level data management, and that leads to more intricate demands for IT organization and communication.

If a midmarket company is firing on all cylinders, acquiring new clients and customers by the day, there is only so much that an unorganized or incapable operations model can withstand. The IT department can cope with only so many networks and so much data. One commonality among growing businesses in America today is the adoption of cloud services. Cloud offers a way to outsource data and network management, freeing resources and time to focus on more fundamental aspects of the business.

Below I list the main ways that midmarket businesses can benefit from the utilization of cloud services in 2013, and the critical reasons for the argument.

1. Cost.

Cloud services can be financially viable in the right situation. Using an outsourced data storage center can decrease the cost of real estate, software and employee payroll. For one, a midmarket company that works with a cloud vendor does not have to physically house as much data. This is a substantial benefit, mainly because of the physical space savings, but also because of the operational costs that come with high energy consumption.

Secondly, a cloud provider would be responsible for software agreements and network operations. This lifts a huge burden off a midmarket company, as serious growth tends to pull focus away from standard processing issues. Shifting this responsibility to the cloud provider reduces cost in the sense that a business can reduce or redeploy IT staff for better efficiency. It can also benefit an enterprise not to have to juggle multiple SLAs with various software providers; it can save money to have the agreements consolidated and maintained by the cloud vendor.

Why this is crucial: A midmarket business can only reach maximum efficiency if all of the parts are in place. Part of this is allocating resources in a way that gets the most out of each aspect of the company. If a Data Modeler or System Admin can be utilized more effectively in this crucial phase of business development, maybe it’s better to outsource their daily role to a cloud vendor. Their creativity and focus may be better applied in an area of the business other than process management. Innovation is key right now, and this is part of the process.

2. Flexibility.

Cloud services may be a good idea for IT decision makers within a midmarket company because they allow employees to be more flexible. For instance, a cloud vendor allows immediate access to business information from various portals, including mobile devices. In 2013, a typical cloud vendor seamlessly supplies business leadership and operations teams with the ability to access information at every point of their daily routines. This is a huge benefit in a world where nearly everything happens immediately and in real time.

Also, because a midmarket business is often trying to compete and extend its reach in a crowded market, traveling off location will be much less detrimental to work efficiency. If the company CIO is at a tradeshow in Phoenix, they should still be able to access any processes maintained by the cloud vendor.

Why this is crucial: Midmarket business in 2013 requires collaboration to be successful. With information being stored in a cloud storage center, information can be accessed from diverse locations. This increases both internal and external business collaboration. Modern society is far too demanding to have anything that is inefficient, and flexibility is directly related to efficiency when it comes to a growing business and data management.

3. Scalability.

IT is the backbone of most business operations. Modern information is too complex to handle manually, and we rely on computers and networks to transport and maintain data. An additional advantage of acquiring cloud services is that a midmarket business can scale IT services up or down based on specific need. For instance, if the business sees stagnant growth over the holiday season, it can scale back its service agreement with the vendor to save money during that time. Similarly, if business continues to grow, the cloud service can easily expand to accommodate the new volume of data management it performs for the business. This is not as viable with internal data management, where new software and hardware must be purchased with each major change in IT requirements, which can lead to wasted money and lost resources.
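
The upscale/downscale logic described above can be sketched as a simple capacity-planning rule. The target utilization and unit counts below are invented for illustration:

```python
# A toy sketch of scaling a cloud service agreement up or down with demand.
def plan_capacity(current_units: int, utilization: float,
                  target: float = 0.70) -> int:
    """Return the capacity (in billing units) to request from the vendor
    so that expected utilization moves back toward the target level."""
    needed = current_units * utilization / target
    return max(1, round(needed))  # never drop below one unit

# Holiday-season lull: utilization drops, so the agreement scales back.
print(plan_capacity(current_units=10, utilization=0.35))  # -> 5
# Growth spurt: utilization spikes, so capacity expands.
print(plan_capacity(current_units=10, utilization=0.95))  # -> 14
```

With internal infrastructure, the equivalent of this one-line adjustment is a hardware purchase order, which is exactly the waste the scalability argument is about.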

Why this is crucial: Business in general is too unpredictable in the current economy to assume anything, even when it comes to IT requirements. Cloud vendors allow leeway with regard to data storage, and this matters for midmarket businesses because they often don’t have the margin for error to absorb inefficiency. They need the exact amount of storage they need, when they need it.

Conclusions

Midmarket business is important to the United States economy and affects the lives of many people. Usually these operations are on the cusp of doing something significant, and proper organization within IT can help allocate resources in the right areas for maximum production and business model maintenance. It’s time for IT and business leadership to make note of this and move to action in early 2013.

 

Grant Davis is a Data Modeler by day and a writer by night. His passion for computers started when he discovered instant messaging in junior high school. When Grant isn’t trying to climb through the computer screen he writes for BMC, a leading mainframe management provider.

Are you a midmarket organization looking to bring your IT environment to the next level? Click to learn more about how GreenPages can help!

 

Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon’s AWS, and the amazingly complex hardware and software that underpins all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he’s one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.

 

WSO2 Opens Beta for Stratos 2.0 PaaS Offering

Today WSO2 unveiled the beta release of WSO2 Stratos 2.0. Newly re-architected, WSO2 Stratos 2.0 is a foundation for implementing a platform-as-a-service (PaaS) that combines support for heterogeneous applications and service-oriented architecture (SOA) platform runtimes with native, secure multi-tenancy. WSO2 Stratos 2.0 also adds the capability to run on any cloud infrastructure, including VMware, Eucalyptus, Amazon and OpenStack.

WSO2 previewed the latest release of WSO2 Stratos at WSO2Con 2013, which runs February 13-14 in London. The company also announced that it has begun accepting customers for the WSO2 Stratos 2.0 beta program.

“Increasingly enterprises view the cloud as a platform for innovation. However, until now they’ve had to make trade-offs between using their favorite development tools and middleware or capitalizing on the multi-tenancy of a native cloud environment. With WSO2 Stratos 2.0, these organizations no longer have to compromise,” said Dr. Sanjiva Weerawarana, WSO2 founder and CEO.

The Stratos 2.0 approach to multi-tenancy goes beyond other PaaS environments to support multiple levels of virtualization, from standard virtual machines, through Linux Containers, to intra-JVM isolation. The ability to share resources while providing the correct isolation level for multiple tenants is a significant factor in enabling lower costs, greater flexibility, and easier on-ramping into a private or public cloud environment.

The new tenant-aware elastic load balancer in Stratos 2.0 is a first-of-a-kind capability that allows the environment to provide highly tunable performance to different tenants, ranging from “economy class” for low priority workloads up to “private jet” mode for workloads that require dedicated resources.
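
As a conceptual sketch (not WSO2's actual implementation), tenant-aware load balancing can be pictured as choosing a backend pool by service tier. The tier names, tenants, and node pools below are hypothetical:

```python
# A toy model of tenant-aware routing with tiered quality of service,
# loosely modeled on the "economy class" vs. "private jet" idea above.
import itertools

# Each tier round-robins over its own pool of backend nodes.
POOLS = {
    "economy":     itertools.cycle(["shared-1", "shared-2"]),      # shared nodes
    "business":    itertools.cycle(["reserved-1", "reserved-2"]),  # reserved nodes
    "private-jet": itertools.cycle(["dedicated-acme-1"]),          # dedicated node
}

TENANT_TIER = {"acme": "private-jet", "initech": "business", "hooli": "economy"}

def route(tenant: str) -> str:
    """Pick the next backend from the pool matching the tenant's tier."""
    tier = TENANT_TIER.get(tenant, "economy")  # unknown tenants get the lowest tier
    return next(POOLS[tier])

print(route("acme"))   # -> dedicated-acme-1
print(route("hooli"))  # -> shared-1
print(route("hooli"))  # -> shared-2
```

The tunable part is simply which pool a tenant's workload is allowed to land on: low-priority tenants share capacity, while dedicated workloads never compete for it.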

At the heart of Version 2.0 is a new cartridge architecture for plugging software into WSO2 Stratos to take advantage of cloud-native capabilities, such as multi-tenancy, elastic scaling, self-service provisioning, metering, billing, and resource pooling, among others. As a result, WSO2 Stratos 2.0 is able to run, not only 13 WSO2 Carbon enterprise middleware products, but also a choice of frameworks, databases, and other application services.

WSO2 Stratos 2.0 also significantly enhances PaaS deployment through an integration layer that uses the popular jclouds technology to allow it to run on any infrastructure-as-a-service (IaaS) including OpenStack, VMware, Eucalyptus and CloudStack. Additionally, use of the Puppet open source tool for infrastructure deployment in this release makes it easier than ever to install and configure Stratos in a private or public cloud environment. Like all WSO2 software, WSO2 Stratos 2.0 is 100% open source and will be made available under the Apache License 2.0.

Ideal candidates for the WSO2 Stratos 2.0 beta program are enterprise IT professionals who are planning or evaluating ways to deliver new applications and/or migrate existing ones to the cloud. Participants also must be committed to participating and giving feedback. For more information and to contact WSO2 about joining the beta program, please visit the product Web page: http://wso2.com/cloud/stratos.

Guest Post: Who Controls the Cloud Market – Providers or Consumers?

Guest Post: Ilyas Iyoob, Director, Advanced Analytics and Sr. Research Scientist, PhD at Gravitant

We first went from reserving cloud capacity to securing capacity on-demand, and then we even started to bid for unused capacity in the spot market – all in an effort to decrease cost in the cloud.  Can we take this one step further?  Instead of us bidding for capacity, wouldn’t it be interesting if we can get providers to bid for our demand?

Retail Supply Chain Market Analogy

In fact, this is a common phenomenon in the retail supply chain industry. For example, Walmart has a large amount of freight that needs to be shipped between different cities over the course of the year. So, every year an auction is conducted in which Walmart lists all of its shipments, and carriers such as JB Hunt, Schneider, Yellow, etc. bid for the opportunity to carry those shipments with their fleets of trucks. Carriers bid for retailer demand because, in general, capacity exceeds demand in the retail industry.

Cloud Computing Market

Keeping this in mind, let us now take a look at the Cloud Computing Market.  Does capacity exceed demand or is it the other way around?  A quick way to find out is by observing spot prices in the cloud market.  In today’s market, Amazon’s Spot Instances are 86% cheaper than their on-demand instances, and Enomaly’s SpotCloud also shows lower spot prices across the board.  This leads us to believe that capacity exceeds demand in the cloud market as well.  A related indicator is the predominance of data center consolidation initiatives in both the commercial and government marketplaces.

Since capacity exceeds demand, consumers have an upper hand and are in control of the cloud market at the moment.  Moreover, they should be able to replicate what is being done in the retail supply chain industry.  In other words, cloud consumers should be able to auction off their demand to the best fit lowest price cloud provider.
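
Such a reverse auction is easy to picture in code: the consumer posts demand, providers bid, and the lowest-priced bid that meets the required service level wins. All providers, prices (in cents per unit) and SLA figures below are fictitious:

```python
# A toy reverse auction: providers bid on a consumer's posted demand,
# and the lowest qualifying bid wins the business.
def run_reverse_auction(demand_units: int, bids: dict,
                        required_sla: float = 0.999) -> tuple:
    """bids maps provider -> (price_cents_per_unit, offered_sla).
    Returns (winner, total_cost_cents) among providers meeting the SLA."""
    qualifying = {p: price for p, (price, sla) in bids.items()
                  if sla >= required_sla}
    winner = min(qualifying, key=qualifying.get)
    return winner, qualifying[winner] * demand_units

bids = {
    "provider-a": (12, 0.9995),
    "provider-b": (9, 0.9990),
    "provider-c": (7, 0.9900),  # cheapest, but misses the SLA
}
print(run_reverse_auction(100, bids))  # -> ('provider-b', 900)
```

Note the "best fit" part matters as much as price: the cheapest bid is disqualified because it cannot meet the consumer's service requirement.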

So, …

Consumers should seize the opportunity and control the market while the odds are in their favor i.e. Demand < Capacity.  At the same time, Service Integrators and Value Added Resellers can help Enterprise IT consumers in this process by conducting Primary-Market auctions using Cloud Service Brokerage technology.

This post was originally published on Gravitant’s blog.

Be Nimble, Be Quick: A CRN Interview with GreenPages’ CEO

CRN Senior Editor and industry veteran Steve Burke sat down with GreenPages’ CEO Ron Dupler to discuss shifts in ideology in the industry as well as GreenPages new Cloud Management as a Service (CMaaS) offering. The interview, which was originally posted on CRN.com, is below. What are your thoughts on Ron’s views of the changing dynamics of IT?

 

CRN: Talk about your new cloud offering.

Dupler: It is available today. We can support physical, virtual and cloud-based infrastructure through a single pane of glass today. We are actually using the technology internally as well.

There is another part of CMaaS that covers cloud governance (governance models in a cloud world) and cloud services brokerage. That is what we are integrating and bringing to market very soon.

CRN: How big a game-changer is CMaaS?

Dupler: I think we are going to be well out in front of the market with this. I personally believe we can go have discussions right now, and bring technologies to bear to support those discussions, that no one else in the industry can right now.

That said, we know that the pace of innovation is rapid and we expect other organizations are trying to work on these types of initiatives as well. But we believe we’ll be out front certainly for this year.

CRN: How does the solution provider business model change from 2013 to 2018?

Dupler: The way we are looking at our job, and the job of the solution provider channel over the next several years through 2018, is to provide IT plan, build, run and governance services for the cloud world.

The big change is that the solution provider channel for many years has made their money off the fact that infrastructure fundamentally doesn’t work very well. And it has been all about architecting and integrating physical technologies and software platforms to support the apps and data that really add value for the business.

When we move to the cloud world, this is now about integrating service platforms as opposed to physical technologies. So it is about architecting and integrating on-premise and cloud service platforms really to create IT-as-a-Service to support the apps and data for the platform. That is the transition that is under way.

CRN: Does the GreenPages brand become bigger than the vendor brand, and how does that affect vendor relations in the CMaaS era?

Dupler: We continue to closely evaluate all our key partner relationships. That is managed very closely. What we try to do is make sure we are partnered with the right companies that are really leading this transformation. And our number one partner, because they are driving this transformation, is VMware. With the whole software-defined data center concept and initiative, VMware has laid out a great vision for where this market is going.

NEXT: Does Size Matter?

CRN: There is a prevailing view that solution providers need to go big or go home, with many solution providers selling their businesses. Do you see scale becoming more important — that you need to scale?

Dupler: No. People have been saying that for years. It is all about customer value and the talent of your team, and whether you are adding value for clients. You need to be able to service the client community. And they care about quality of service and the ability of your team, not necessarily that you are huge. I have been down the M&A road and, as you know, we do M&A here on a smaller scale. And I will tell you there are pros and cons to it. You aggregate talent, but you also have the inertia of pulling companies together: integrating companies and people and executive teams and getting through that.

I absolutely do not subscribe, and never have subscribed, to the notion that size in itself gives competitive advantage. There are some advantages, but there are also costs to doing that.

CRN: What is the ultimate measure for success in this new world?

Dupler: It is a combination of three things. The first is technology, and I will firmly say it doesn’t have to be homegrown; it could be homegrown or commercial off-the-shelf. What matters is how the technology is leveraged, and having technologies able to drive the services you are trying to provide. What we are trying to do with CMaaS is single-pane-of-glass management for physical, virtual and cloud infrastructure, which I have mentioned, as well as cloud service brokerage and cloud governance services. You can develop those on your own, integrate partner technologies, or both, but you need the supporting technology base. And then you need people and you need process.

CRN: How big a transition is this and what percentage of VARs do you think will make it to 2018?

Dupler: The companies that I think are going to have a huge challenge are the big product-centric organizations right now. The DMR [direct marketer] community. They have some big challenges ahead of them over time. All these guys are trying to come up with cloud strategies as well.

Right now there is a premium on being nimble. That is the word of the day for me in 2013. Nimble. You need nimble people and you need a nimble business organization because things are moving faster than they ever have. You just have to have a culture and people that can change quickly.

Going back to is it good just to be big? Sometimes it is hard to maintain [that agility] as you get really big. The magnitude of the change that is required to succeed over the next five years is extremely significant. And people that aren’t already under way with that change have a big challenge ahead of them.

CRN: What is the pace of change like managing in this business as a CEO vs. five years ago?

Dupler: It is exponential.

CRN: Is it tougher to manage in an environment like this?

Dupler: You say it is tougher, but there is more opportunity than ever because of the pace of change to really differentiate yourself. So it can be challenging, but it is also very stimulating and exciting.

CRN: Give me five tips you need to thrive in 2018.

Dupler: First of all, you need hybrid cloud management capabilities.

Number two, you need cloud services brokerage capabilities: ultimately, the ability to provide a platform for clients to acquire as-a-service technologies from GreenPages, and to sell the various forms of infrastructure, platform and software as a service.

Number three is cloud architecture and integration capabilities.

Fourth, product revenue and profit streams can no longer be central to supporting the business. The service model needs to become a profitable, thriving stand-alone entity without the product revenue streams.

The fifth thing, and it is the biggest challenge: first you migrate your technology organization, and then you need to create a services-based sales culture.

CRN: Talk about how big a change that is.

Dupler: It is a huge change. Again, if people are not already under way with this change, they have a huge challenge ahead of them. Everybody I speak with in the industry — whether at [UBM Tech Channel’s] BoB conference or at partner advisory councils — is challenged with this right now. The sales force in the solution provider industry has been rooted in the old physical-technology paradigm and needs to move into a world where it leads with professional and managed services. And that game is very different. So I think there are two ways to address that: one is hiring new types of talent, the other is helping the talent we all have transform. It is going to be a combination of both that gets us ultimately where we need to be.

CRN: What do you think is the biggest mistake being made right now by competitors or vendors?

Dupler: What I see is people who are afraid to embrace the change that is under way and are hanging on to the past. The biggest mistake I see right now is people continuing to evangelize solutions that aren’t necessarily right for the customer, but that conform to what they know and drive the most profit for their own organizations.

Short-term gain isn’t going to drive long-term customer value. And we need to lead the customers forward through this transformation as opposed to perpetuating the past. The market needs leadership right now. The biggest challenge for people is not moving fast enough to transform their businesses.

This interview was originally posted on CRN.com

To learn more about GreenPages’ CMaaS offering click here!

Guest Post: A Wrinkle in the IT Universe

By Kai Gray, VP of Operations at Carbonite

I feel like tectonic plates are shifting beneath the IT world. I’ve been struggling to put my finger on what it is that is making me feel this way, but slowly things have started to come into focus. These are my thoughts on how cloud computing has forever changed the economics of IT by shifting the balance of power.

The cloud has fundamentally changed business models; it has shifted time-to-market, entry points and who can do what. These byproducts of massive elasticity are wrapped up in an even greater evolutionary change that is occurring right now: The cloud is having a pronounced impact on the supply chain, which will amount to a tidal wave of changes in the near-term that will cause huge pain for some and spawn incredible innovation and wealth for others. As I see it, the cloud has started a chain of events that will change our industry forever:

1) Big IT used to rule the datacenter. Not long ago, large infrastructure companies were at the heart of IT. The EMCs, Dells, Ciscos, HPs and IBMs were responsible for designing, sourcing, supplying and configuring the hardware that was behind nearly all of the computing and storage power in the world. Every server closet was packed full of name-brand equipment and the datacenter was no different. A quick tour of any datacenter would – and still will – showcase the wares of these behemoths of the IT world. These companies developed sophisticated supply and sales channels that produced great-margin businesses built on some very good products, from the OEMs and ODMs that produced the bent metal to the VARs and distributors who then sold the finished products. Think of DeBeers, the diamond mine owner and distributor. What are the differences between a company like HP and DeBeers? Not very much, but the cloud began to change all that.

2) Cloud Computing. Slowly we got introduced to the notion of cloud computing. We started using products that put the resource away from us, and (slowly) we became comfortable with not needing to touch the hardware. Our email “lived” somewhere else, our backups “lived” somewhere else and our computing cycles “lived” somewhere else. With each incremental step, our comfort levels rose until it stopped being a question and turned into an expectation. This process set off a dramatic shift in supply chain economics.

3) Supply Chain Economics. The confluence of massive demand coupled with near-free products (driven by a need to expand customer acquisition) changed how people had to think about infrastructure. All of a sudden, cloud providers had to think about infrastructure in terms of true scalability. This meant acquiring and managing massive amounts of infrastructure at the lowest possible cost. This was/is fundamentally different from the way the HPs and Dells and Ciscos thought about the world. All of a sudden, those providers were unable to address the needs of this new market in an effective way. This isn’t to say that the big IT companies can’t, just that it’s hard for them. It’s hard to accept shrinking margins and “openness.”  The people brave enough to promote such wild ideas are branded as heretics and accused of rocking the boat (even as the boat is sinking). Eventually the economic and scale requirements forced cloud providers to tackle the supply chain and go direct.

4) Going Direct. As cloud providers began to develop strong supply chain relationships and build up their competencies around hardware engineering and logistics, they became more ingrained with the ODMs (http://en.wikipedia.org/wiki/Original_design_manufacturer) and other primary suppliers. Huge initiatives came into existence from the likes of Amazon, Google and Facebook that were focused on driving down the cost of everything. For example, Google began working directly with Intel and AMD to develop custom chipsets that allowed them to run at efficiency levels never before seen, and Facebook started the Open Compute Project, which seeks to open-source design schematics that were once locked in vaults.

In short, the supply chain envelope gets pushed by anyone focused on cost and large scale.

…and here it gets interesting.

Cloud providers now account for more supplier revenue than the Big IT companies. Or, maybe better stated — cloud providers account for more hope of revenue (HoR) than Big IT. So, what does that mean? That means that the Big IT companies no longer receive the biggest discounts available from the suppliers. The biggest discounts are going to the end users and the low-margin companies built solely on servicing the infrastructure needs of cloud providers. This means that Big IT is at even more of a competitive disadvantage than they already were. The cycle is now in full swing. If you think this isn’t what is happening, just look at HP and Dell right now. They don’t know how to interact with a huge set of end users without caving in their margins and cannibalizing their existing businesses. Some will choose to amputate while others will go down kicking, but margin declines and openness of information will take their toll with excruciating pain.

What comes of all this? I don’t know. But here are my observations:

1) Access to the commodity providers (ODMs and suppliers) is relatively closed. To be at all interesting to ODMs and suppliers you have to be doing things at enough volume that it is worthwhile for them to engage with you. That will change. The commodity suppliers will learn how to work in different markets, but there will be huge opportunity for companies that help them get there. When access to ODMs and direct suppliers opens up to traditional enterprise companies, so that they can truly and easily take advantage of commodity hardware, then, as they say, goodnight.

2) Companies that provide some basic interfaces between the suppliers and the small(er) consumers will do extremely well. For me, this means configuration management of some sort, but it could be anything that helps accelerate the linkage between supplier and end-user. The day will come when small IT shops have direct access to suppliers and are able to custom-build hardware in the same way that huge cloud providers do today. Some might argue that there is no need for small shops to do this — that they can use other cloud providers, that it’s too time consuming to do it on their own, and that their needs are not unique enough to support such a relationship. Yes, yes, and yes… for right now. Make it easy for companies to realize the cost and management efficiencies of direct supplier access and I don’t know of anyone that wouldn’t take you up on that. Maybe this is the evolution of the “private cloud” concept, but all I know is that, right now, the “private cloud” talk is being dominated by the Big IT folks, so the conflict of interest is too great.

3) It’s all about the network. I don’t think the network is being addressed in the same way as other infrastructure components. I almost never hear about commodity “networks,” yet I constantly hear about commodity “hardware.” I’m not sure why. Maybe Cisco and Juniper and the other network providers are good at deflecting, or maybe it’s too hard a problem to be solved, or maybe the cost isn’t a focal point (yet). Whatever the reason, I think this is a huge problem/opportunity. Without the network, everything else can just go away. Period. The entire conversation driving commodity-whatever is predicated on delivering lots of data to people at very low cost. The same rules that drive commoditization need to be applied to the network, and right now I only know of one or two huge companies that are even thinking in these terms.

There are always multiple themes in play at any given time that, when looking back, we summarize as change. People say that the Internet changed everything. And, before that, the PC changed everything. What we’re actually describing is a series of changes that happened over a period of time that have the cumulative effect of making us say, “How did we ever do X without Y?” I believe that the commoditization of infrastructure is just one theme within the change that will be described as Cloud Computing. I contend, however, that the day is almost upon us when everybody, from giant companies to the SMB, will say, “Why did we ever buy anything but custom hardware directly from the manufacturer?”

This post originally appeared on kaigray.com.  It does not necessarily reflect the views or opinions of GreenPages Technology Solutions.

To Learn more about GreenPages Cloud Computing Practice click here.

Is Cloud Computing Ready for Prime Time?

By John Dixon, Senior Solutions Architect


A few weeks ago, I took part in another engaging tweetchat on Cloud Computing. The topic: is cloud computing ready for enterprise adoption? You can find the transcript here.


As usual with tweetchats hosted by CloudCommons, five questions are presented a few days in advance of the event. This time around, the questions were:

  1. Is Public Cloud mature enough for enterprise adoption?
  2. Should Public Cloud be a part of every business’s IT strategy?
  3. How big of a barrier are legacy applications and hardware to public cloud adoption?
  4. What’s the best way to deal with cloud security?
  5. What’s the best way to get started with public cloud?


As for Question #1, the position of most people in the chat session this time was that Public Cloud is mature enough for certain applications in enterprises today. The technology certainly exists to run applications “in the cloud,” but regulations and policies may not be ready to handle an application’s cloud deployment. Another interesting observation from the tweetchat was that most enterprises are indeed running applications “in the cloud” right now. GreenPages considers applications such as Concur and Salesforce.com to be running “in the cloud,” and many organizations large and small run these applications successfully. I’d also consider ADP a cloud application, and many organizations make use of ADP for payroll processing.

Are enterprises mature enough for cloud computing?

Much of the discussion during question #1 turned the question around – the technology is there, but enterprises are not ready to deploy applications there. GreenPages’ position is that, even if we assume cloud computing is not yet ready for prime time, it certainly will be soon. Organizations should prepare for this eventuality by gaining a deep understanding of the IT services they provide and how much each IT service costs. When one or more of your IT services can be replaced with one that runs (reliably and inexpensively) in the cloud, will your company be able to make the right decision and take advantage of that condition? Also, another interesting observation: some public cloud offerings may be enterprise-ready, but not all public cloud vendors are enterprise-grade. We agree.

Should every business have a public cloud strategy?

Most of the discussion here pointed to a “yes” answer, or at least that an organization’s strategy will eventually, by default, include consideration for public cloud. We think of cloud computing as a sourcing strategy in and of itself – especially when thinking of IaaS and PaaS. Even now, IaaS vendors are essentially providers of commodity IT services. Most commonly, IaaS vendors can provide you with an operating system instance: Windows or Linux. For IaaS, the degree of abstraction is very high, as an operating system instance can be deployed on a wide range of systems – physical, virtual, paravirtual, etc. The consumer of these services doesn’t mind where the OS instance is running, as long as it is performing to the agreed SLA. Think of Amazon Web Services here. Depending on the application that I’m deploying, there is little difference whether I’m using infrastructure that is running physically in Northern Virginia or in Southern California. At GreenPages, we think that this degree of abstraction will move into the enterprise as corporate IT departments evolve to behave more like service providers… and probably evolve into brokers of IT services – supported by a public cloud strategy.
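The “broker of IT services” idea can be sketched in a few lines of Python. Everything here is illustrative – the provider names, and the SLA check reduced to a simple “can you fulfill this request” test – not a real GreenPages or vendor API:

```python
from dataclasses import dataclass


@dataclass
class Instance:
    """A generic OS instance, independent of where it actually runs."""
    name: str
    os: str        # e.g. "linux" or "windows"
    provider: str  # which backend fulfilled the request


class InstanceBroker:
    """Routes a provisioning request to whichever registered provider can fulfill it."""

    def __init__(self):
        self.providers = {}  # provider name -> deploy function

    def register(self, name, deploy_fn):
        self.providers[name] = deploy_fn

    def provision(self, name, os):
        # The consumer never names a provider; the broker picks one that can deliver.
        for provider, deploy in self.providers.items():
            instance = deploy(name, os)
            if instance is not None:
                return instance
        raise RuntimeError("no provider could fulfill the request")


# Two interchangeable backends: an internal datacenter and a public cloud.
def internal_vmware(name, os):
    # Pretend the internal platform only hosts Windows workloads.
    return Instance(name, os, "internal") if os == "windows" else None


def public_iaas(name, os):
    # Pretend the public IaaS provider takes anything.
    return Instance(name, os, "public-cloud")


broker = InstanceBroker()
broker.register("internal", internal_vmware)
broker.register("public", public_iaas)

print(broker.provision("app01", "windows").provider)  # internal
print(broker.provision("web01", "linux").provider)    # public-cloud
```

The point of the sketch is the degree of abstraction: the caller asks for “a Windows instance,” and whether it lands on internal infrastructure or a public cloud is the broker’s decision, not the consumer’s.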

Security and legacy applications

Two questions revolved around legacy applications and security as barriers to adoption. Every organization has a particular application that will not be considered for cloud computing. The arguments are similar to the reasons why we never (or are only just beginning to) virtualize legacy applications. Sometimes, virtualizing specialized hardware is, well, really hard and just not worth the effort.

What’s the best way to get started with public cloud?

“Just go out and use Amazon,” was a common response to this question, both in this particular tweetchat and in other discussions. Indeed, trying Amazon for some development activities is not a bad way to evaluate the features of public cloud. In our view, the best way to get started with cloud is to begin managing your datacenter as if it were a cloud environment, with some tool that can manage traditional and cloud environments the same way. Even legacy applications. Even applications with specialized hardware. Virtual, physical, paravirtual, etc. Begin to monitor and measure your applications in a consistent manner. This way, when an application is deployed to a cloud provider, your organization can continue to monitor, measure, and manage that application using the same method. For those of us who are risk-averse, this is the easiest way to get started with cloud! How is this done? We think you’ll see that Cloud Management as a Service (CMaaS) is the best way.
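A minimal sketch of what “managing traditional and cloud environments the same way” might look like, using hypothetical environment classes and made-up metric values: every backend exposes the same metrics interface, so an application can move between them without changing how it is monitored and measured.

```python
class Environment:
    """Common interface: every environment reports the same metrics."""

    def metrics(self, app):
        raise NotImplementedError


class OnPremEnvironment(Environment):
    def metrics(self, app):
        # In practice these values would come from internal monitoring tools.
        return {"app": app, "location": "on-prem", "cpu_pct": 42, "uptime_pct": 99.9}


class PublicCloudEnvironment(Environment):
    def metrics(self, app):
        # In practice these values would come from the provider's monitoring API.
        return {"app": app, "location": "cloud", "cpu_pct": 37, "uptime_pct": 99.95}


def report(environments, app):
    """One consistent report for an application, regardless of where it runs."""
    return [env.metrics(app) for env in environments]


for row in report([OnPremEnvironment(), PublicCloudEnvironment()], "payroll"):
    print(row["location"], row["uptime_pct"])
```

Because the reporting code only depends on the common interface, redeploying “payroll” from the on-premises environment to a cloud provider changes which backend answers, not how the organization monitors it.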

Would you like to learn more about our new CMaaS offering? Click here to receive some more information.

Getting Out of the IT Business

By Randy Weis, Director of Solutions Architecture

Strange title for a blog from an IT solutions architect? Not really.

Some of our clients—a lumber mill, a consulting firm, a hospital—are starting to ask us how to get out of “doing IT.” What do these organizations all have in common? They all have a history of challenges in effective technology implementations and application projects leading to the CIO/CTO/CFO asking, “Why are we in the IT business? What can we do to offload the work, eliminate the capital expenses, keep operating expenses down, and focus our IT efforts on making our business more responsive to shifting demands and reaching more customers with a higher satisfaction rate?”

True stories.

If you are in the business of reselling compute, network, or storage gear, this might not be the kind of question you want to hear.

If you are in the business of consulting on technology solutions to meet business requirements, this is exactly the kind of question you should be preparing to answer. If you don’t start working on those answers, your business will suffer for it.

Technology has evolved to the point where the failed marketing terms of grid or utility computing are starting to come back to life—and we are not talking about zombie technology. Cloud computing used to be about as real as grid or utility computing, but “cloud” is no longer just a marketing term. We now have new, proven, and emerging technologies that actually can support a utility model for information technology. Corporate IT executives now are starting to accept that the new cloud computing infrastructure-as-a-service is reliable (recent AWS outages notwithstanding), predictable, and useful to a corporate strategy. Corporate applications still need to be evaluated for requirements that restrict deployment and implementation strategies–latency, performance, concerns over satisfying legal/privacy/regulatory issues, and so on. However, the need to have elastic, scalable, on-demand IT services that are accessible anywhere is starting to force even the most conservative executives to look at the cloud for offloading non-mission-critical workloads and associated costs (staff, equipment, licensing, training and so on). Mission-critical applications can still benefit from cloud technology, perhaps only as internal or private cloud, but the same factors still apply—reduce time to deploy or provision, automate workflow, scale up or down as dictated by business cycles, and push provisioning back out into the business (while holding those same units accountable for the resources they “deploy”).
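The “scale up or down as dictated by business cycles” point can be illustrated with a toy capacity rule. The target utilization, node bounds, and formula are all assumptions made for the example, not a description of any product:

```python
def desired_capacity(current_nodes, cpu_pct, target_pct=60, min_nodes=2, max_nodes=20):
    """Scale the node count so average CPU utilization moves toward the target.

    current_nodes: nodes running now
    cpu_pct:       current average CPU utilization across those nodes
    target_pct:    utilization we want each node to sit near
    """
    if cpu_pct <= 0:
        return min_nodes  # idle: shrink to the floor
    # If nodes are twice as busy as the target, we want roughly twice as many nodes.
    ideal = round(current_nodes * cpu_pct / target_pct)
    return max(min_nodes, min(max_nodes, ideal))


print(desired_capacity(4, 90))   # busy cycle: scale out
print(desired_capacity(10, 20))  # quiet cycle: scale in
```

Run periodically against real utilization data, a rule like this is what lets capacity follow the business cycle instead of being sized for the peak all year round.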

Infrastructure as a service is really just the latest iteration of self-service IT. Software as a service has been with us for some time now, and in some cases is the default mode—CRM is the best example (e.g. Salesforce). Web-based businesses have been virtualizing workloads and automating deployment of capacity for some time now as well. Development and testing have also been the “low-hanging fruit” of both virtualization and cloud computing. However, when the technology of virtualization reached a certain critical mass, primarily driven by VMware and Microsoft (at least at the datacenter level), then everyone started taking a second look at this new type of managed hosting. Make no mistake—IaaS is managed hosting, but New and Improved. Anyone who had to deal with provisioning and deployment at AT&T or other large colocation data centers (and no offense meant) knew that there was no “self-service” involved at all. Deployments were major projects with timelines that rivaled the internal glacial pace of most IT projects—a pace that led to the historic frustration levels that drove business units to run around their own IT and start buying IT services with a credit card at Amazon and Rackspace.

If you or your executives are starting to ask yourselves if you can get out of the day-to-day business of running an internal datacenter, you are in good company. Virtualization of compute, network and storage has led to ever-greater efficiency, helping you get more out of every dollar spent on hardware and staff. But it has also led to ever-greater complexity and a need to retrain your internal staff more frequently. Information Technology services are essential to a successful business, but they can no longer just be a cost center. They need to be a profit center; a cost of doing business for sure, but also a way to drive revenues and shorten time-to-market.

Where do you go for answers? What service providers have a good track record for uptime, customer satisfaction, support excellence and innovation? What technologies will help you integrate your internal IT with your “external” IT? Where can you turn for management and monitoring tools? What managed services can help you gain visibility into all parts of your IT infrastructure, deal with a hybrid and distributed datacenter model, and address everything from firewalls to backups? Who can you ask?

There is an emerging cadre of thought leaders and technologists that has been preparing for this day: laying the foundation, developing the expertise, building partner relationships with service providers, and watching to see who is successful and growing…and who is not. GreenPages is in the very front line of this new cadre. We have been out in front with virtualization of servers. We have been out in front with storage and networking support for virtual datacenters. We have been out in front with private cloud implementations. We are absolutely out in front of everyone in developing Cloud Management as a Service.

We have been waiting for you. Welcome. Now let’s get to work.

For more information on our Cloud Management as a Service offering click here