All posts by John Dixon

The Cloud is Dead! Long Live the Cloud! Twitter Chat Recap

Last week, Cloud Commons hosted a Twitter Chat on the end of Cloud Computing. If you’re not familiar with tweetchats, they are discussions hosted on Twitter where people join at a specific time by following a particular hashtag. The Cloud Commons tweetchats usually have around ten panelists and are kicked off with a few thought-provoking questions. The participants then respond and share ideas in real time. The discussion is focused enough to be useful – a one-hour session with responses limited to 140 characters – but large enough to capture different perspectives.

The tweetchat began with several questions:

  1. Adoption rates are rising for private cloud. Is this a stepping stone to hybrid/public cloud?
  2. What needs to happen before enterprises start to fully embrace cloud computing?
  3. What does the future model for enterprise cloud adoption look like?
  4. What should CSPs be doing more of to meet the needs of the enterprise?
  5. What needs to happen so that cloud becomes so ubiquitous that it’ll no longer be referred to as cloud? When will it happen?

The first question, “Is private cloud a stepping stone to hybrid/public cloud?” drew approximately 32 tweets. From the transcript, it appears that participants in the marketplace are improving their understanding of cloud computing in terms of service and delivery models (private, public, hybrid, IaaS, PaaS, SaaS). The popular viewpoint was that private cloud is not exactly a stepping stone to hybrid/public cloud. A few tweets took the position that private cloud is an alternate path to hybrid/public cloud, and many indicated that IT departments want to retain tight control of their environments. One interesting tweet noted that “private cloud does not necessarily mean on-premises.” More on this later.

The second question, “What needs to happen before enterprises start to fully embrace cloud computing?” drew 47 tweets. Overwhelmingly, the responses in this part of the chat were filled with terms like “services led,” “business value,” “SLA,” and “reduce FUD.” The responses to question 1 covered some territory here as well – enterprises will fully embrace cloud computing if and when they agree to give up some control of their infrastructure. One interesting tweet mentioned transparency – “…it’s not always about control, as it is transparency.” I would argue that transparency is not what’s needed here. Full transparency would require that the business be able to access minute detail about infrastructure, such as the amount of RAM installed on the application server that runs their slice of CRM at Salesforce.com. This kind of detail should be hidden from the business; abstraction plays heavily here. So, we don’t need transparency as much as we need abstraction. What is an important concept that provides abstraction? You guessed it, Service Level Management. The GreenPages view is that processes need to improve before enterprises start to fully embrace cloud computing. See my earlier post, “What Should I Do about Cloud?” which goes into much more detail on this topic.

I count about the same number of tweets in response to question 3 as to question 2. Question 3 was a little more open-ended, so a critical mass of ideas never really took shape. The GreenPages view is that cloud computing will evolve to look like the modern supply chains seen in other industries, such as manufacturing. Enterprises may purchase IT services from a SaaS provider – Salesforce.com, for example. Salesforce.com may purchase its platform from a PaaS provider. That PaaS provider may purchase its basic infrastructure from an IaaS provider. Value is added at each level: the IaaS provider becomes more experienced in providing only infrastructure, the PaaS provider builds an extremely robust platform by providing only a platform, and the SaaS provider may ultimately become an expert at assembling and marketing these components into a service that provides value for the enterprise that ultimately consumes it. Compare this to the supply chain that auto manufacturers leverage to assemble a vehicle. In the early days of manufacturing, some companies produced every part of a vehicle and assembled it into a finished product – one prominent example being the assembly of finished automobiles in a single factory complex on the River Rouge near Detroit. Fast forward to the present day, and you’ll be hard pressed to find an auto manufacturer who produces their own windshield glass. Or brake pads. Or smelts their own aluminum. The supply chain has specialized. Auto manufacturers design, assemble, and market finished vehicles – that’s about it. Cloud computing could bring the same specialization to IT.

Most tweets in response to question 4 were clearly around Service Level Management and SLAs, mitigating unknowns in security, and avoiding vendor lock-in. We agree, and think that a standard will emerge to define IT services in a single, consistent format – something like what OVF, the Open Virtualization Format, does for virtual machines. I can see an extension to OVF that defines a service’s uptime requirements, maximum ping time to a database server, and so on. Such a standard would promote portability of IT services.
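To make the idea concrete, here is a minimal sketch in Python of the kind of service-level metadata such an extension might carry. This is purely illustrative – the field names (uptime_target, max_db_latency_ms, and so on) are my own assumptions and are not part of OVF or any published standard.

```python
# Hypothetical sketch only: these fields are NOT part of the OVF specification.
# It models the kind of service-level metadata an extended descriptor might carry
# so that an IT service could be defined once and moved between providers.
from dataclasses import dataclass, asdict
import json


@dataclass
class ServiceLevelSpec:
    """Service-level requirements that could travel with a service definition."""
    service_name: str
    uptime_target: float     # e.g. 0.999 for "three nines" availability
    max_db_latency_ms: int   # maximum acceptable ping time to the database tier
    rto_minutes: int         # recovery time objective
    rpo_minutes: int         # recovery point objective


def to_portable_format(spec: ServiceLevelSpec) -> str:
    """Serialize the spec so any provider could read the same definition."""
    return json.dumps(asdict(spec), indent=2)


if __name__ == "__main__":
    crm_spec = ServiceLevelSpec(
        service_name="crm-frontend",
        uptime_target=0.999,
        max_db_latency_ms=20,
        rto_minutes=60,
        rpo_minutes=15,
    )
    print(to_portable_format(crm_spec))
```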

Question 5 really went back to the topics discussed in question 3. When will enterprises embrace cloud? When will cloud computing become ubiquitous?

Right now, Corporate IT and The Business are two residents of a virtual “company town.” What I mean is that customers (the business) are forced to purchase their services from the company store (corporate IT). GreenPages’ view is that there is a market for IT services and that the emergence of cloud computing will serve to broaden this market. We recommend that organizations understand the value and costs of providing their own IT services so they can participate in that market – just like the business does. Overall, another insightful chat with some intelligent people!

Translating a Vision for IT Amid a “Severe Storm Watch”

IT departments adopt technology in two ways: from a directive handed down by the CIO, or from a “rogue IT” suggestion or project that bubbles up from an individual user. The former is top-down adoption; the latter is bottom-up. Oftentimes, there seems to be confusion somewhere in the middle, resulting in a smorgasbord of tools at one end and a grand, ambitious strategy at the other. This article suggests a framework for translating a vision into strategy, policy, process, and ultimately tools.

Vision for IT -> Strategies -> Policies -> Processes -> Procedures -> Tools and Automation

Revenue Generating Activities -> Business Process -> IT Services

As a solutions architect and consultant, I’ve met with many clients in the past few years – from director-level staff to engineers to support staff in the trenches – and IT has taken on a language of its own. Every organization has its own acronyms, sure. Buzzwords and marketing hype strangle the English language inside the datacenter. Consider the range of experience present in many shops, and it is easy to imagine the confusion. The seasoned, senior executive talks about driving standards and reducing spend on datacenter floor space, and the excited young intern responds with telecommuting, tweets, and cloud computing, all in a proof of concept that is already in progress. What the…? Who’s right?


It occurred to me a while ago that there is a “severe storm watch” for IT. According to the National Weather Service, a “watch” is issued when conditions are favorable for [some type of weather chaos]. Well, in IT, more than in other departments, one can make these observations:

  • Generationally-diverse workforce
  • Diverse backgrounds of workers
  • Highly variable experience of workers
  • Rapidly changing products and offerings
  • High complexity of subject matter and decisions

My colleague, Geoff Smith, recently posted a five-part series (The Taxonomy of IT) describing the operations of IT departments. In the series, Geoff points out that IT departments take on different shapes and behaviors based on a number of factors. The series presents a thoughtful classification of IT departments and how they develop, with a framework borrowed from biology. This post presents a somewhat more tactical suggestion on how IT departments can deal with strategy and technology adoption.

Yet Another Framework

A quick search on Google shows a load of articles on Business and IT Alignment. There’s even a Wikipedia article on the topic. I hear the term all the time, and I hate it. It suggests that “IT” simply does the bidding of “The Business,” whatever that may be. I prefer to think of a Business and IT Partnership. But anyway, let’s begin with a partnership within IT departments. Starting with tools: do you know the value proposition of all of the tools in your environment? Do you even know about all of the tools in your environment?


A single Vision for IT should first translate into one or more Strategies. I’m thinking of a Vision statement for IT that looks something like the following:

“Acme IT exists as a competitive, prime provider of information technology services to enable Acme Company to generate revenue by developing, marketing, and delivering its products and services to its customers. Acme IT stays competitive by providing Acme Company with relevant services that are delivered with the speed, quality and reliability that the company expects. Acme IT also acts as a technology thought leader for the company, proactively providing services that help Acme Company increase revenue, reduce costs, attract new customers, and improve brand image.”

Wow, that’s quite a vision for an IT department. How would a CIO begin to deliver on a vision like that? Just start using VMware, and you’re all set! Not quite! Installing VMware might come all the way at the end of the chain shown above… under Tools and Automation.

First, we need one or more Strategies. One valid Strategy may indeed be to leverage virtualization to improve time to market for IT services, and reduce infrastructure costs by reducing the number of devices in the datacenter. Great ideas, but a couple of Policies might be needed to implement this strategy.

One Policy – call it Policy A – might be that all application development should happen on virtual servers. Policy B might mandate that all new servers be assessed as virtualization candidates before physical equipment is purchased.

Processes then flow from Policies. Since I have a policy that mandates that new development should happen on a virtual infrastructure, eventually I should be able to make a good estimate of the infrastructure needed for my development efforts. My Capacity Management process could then requisition and deploy some amount of infrastructure in the datacenter before it is requested by a developer. You’ll notice that this process, Capacity Management, enables a virtualization policy for developers, and neatly links up with my strategy to improve time to market for IT services (through reduced application development time). Eventually, we could trace this process back to our single Vision for IT.
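As a rough illustration of what that Capacity Management process might do, here is a hedged Python sketch that forecasts next month’s virtual-server demand from a simple history of developer requests. The request history, sizing figures, and 25% headroom are invented for the example and are not a recommendation.

```python
# Illustrative sketch only: estimate how much virtual infrastructure to
# pre-provision for developers, based on a simple history of past requests.
from statistics import mean

# Hypothetical history: virtual servers requested by developers per month.
monthly_vm_requests = [14, 18, 16, 22, 19, 21]

# Assumed sizing and headroom figures, purely for illustration.
AVG_VCPUS_PER_VM = 2
AVG_RAM_GB_PER_VM = 8
HEADROOM = 1.25  # pre-provision 25% above the trailing average


def forecast_next_month(history: list[int]) -> int:
    """Forecast next month's VM demand from a trailing average plus headroom."""
    return round(mean(history) * HEADROOM)


vms_needed = forecast_next_month(monthly_vm_requests)
print(f"Pre-provision roughly {vms_needed} VMs "
      f"({vms_needed * AVG_VCPUS_PER_VM} vCPUs, "
      f"{vms_needed * AVG_RAM_GB_PER_VM} GB RAM) ahead of developer requests.")
```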

But we’re not done! Processes need to be implemented by Procedures. In order to implement a capacity management process properly, I need to estimate demand from my customers. My customers will be application developers if we’re talking about the policy that developers must use virtualized equipment. Most enterprises have some sort of way to handle this, so we’d want to look at the procedure that developer customers use to request resources. To enable all of this – the request and the measurement of demand – I may want to implement some sort of Tool, like a service catalog or a request portal. That’s the end of the chain – the Tool.

Working back up from the Tool, we can see how its selection is justified by tracing the chain through procedure, process, policy, and strategy, and ultimately back to the Vision.
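To show how that traceability could be made explicit, here is a small, illustrative Python sketch that records each link in the chain and walks a tool back up to the Vision. The specific entries (request portal, Capacity Management, and so on) simply reuse the examples above; a real catalog of links would come from your own documentation.

```python
# Illustrative sketch: record each link in the chain so a tool can be
# traced back through procedure, process, policy, and strategy to the vision.
chain = {
    "Tool: Request portal": "Procedure: Developer requests a virtual server",
    "Procedure: Developer requests a virtual server": "Process: Capacity Management",
    "Process: Capacity Management": "Policy: New development uses virtual servers",
    "Policy: New development uses virtual servers": "Strategy: Leverage virtualization to improve time to market",
    "Strategy: Leverage virtualization to improve time to market": "Vision: Acme IT as a competitive, prime provider of IT services",
}


def trace_to_vision(item: str) -> list[str]:
    """Follow the chain upward until we reach the Vision."""
    path = [item]
    while path[-1] in chain:
        path.append(chain[path[-1]])
    return path


for step in trace_to_vision("Tool: Request portal"):
    print(step)
```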

This framework provides a simple alignment that gives IT departments a number of advantages. One significant advantage is that it provides a common language for everyone in the IT department to understand the reasoning behind the design of a particular process, the need for a particular procedure, or the selection of one tool over another.

In a future blog post, I’ll cover the various other advantages of using this framework.

Food for Thought

  1. Do you see a proliferation of tools and a corresponding disconnect with strategy in your department?
  2. Who sets the vision and strategy for IT in your department?
  3. Is your IT department using a similar framework to rationalize tools?
  4. Do your IT policies link to processes and procedures?
  5. Can you measure compliance to your IT policies?

Going Rogue: Do the Advantages Outweigh the Risks?

Are all rogue IT projects bad things? Could this type of activity be beneficial? If rogue IT projects could be beneficial, should they be supported or even encouraged?

Recently, I took part in a live Twitter chat hosted by the Cloud Commons blog (thanks again for the invite!) that was focused on Rogue IT. After hearing from, and engaging with, some major thought leaders in the space, I decided to write a blog summarizing my thoughts on the topic.

What does “Rogue IT” mean anyway?

I think that there are rogue IT users and there are rogue IT projects. There’s the individual user scheduling meetings with an “unauthorized” iPad. There’s also the sales department that, without the knowledge of corporate IT, develops an iPhone app to process orders for your yet-to-be-developed product. Let’s focus on the latter – rogue IT projects. Without a doubt, rogue IT projects have been, and will continue to be, an issue for corporate IT departments. A quick web search will return articles on “rogue IT” dating back around 10 years. However, as technology decreases in cost and increases in functionality, the issue of rogue IT projects seems to be moving up the list of concerns.

What does rogue IT have to do with cloud computing?

Cloud Computing opens up a market for IT services. With Cloud Computing, organizations have the ability to source IT services from the provider that can deliver them most efficiently. Sounds a lot like specialization and division of labor, doesn’t it? (We’ll stay away from The Wealth of Nations, for now.) Suffice it to say that rogue IT may be an indication that corporate IT departments need to compete with outside providers of IT services. Stated plainly, the rise of Cloud Computing is encouraging firms to enter the market for IT services. Customers, even inside a large organization, have choices (other than corporate IT) for acquiring the IT services that they need. Maybe corporate IT is not able to deliver a new IT service in time for that new sales campaign. Or corporate IT simply refuses to develop a new system requested by a customer. That customer, in control of their own budget, may turn to an alternative service offering “from the cloud.”

What are the advantages of rogue IT? Do they outweigh the risks?

Rogue IT is a trend that will continue as the very nature of work changes (e.g., the long shift toward a service-based economy means more and more knowledge workers). Rogue IT can lead to some benefits – BYOD, or “bring your own device,” for example. BYOD can drive down end-user support costs and improve efficiency. Someday BYOD will also mean “bring your own DESK,” letting you choose to work when and where it is most convenient for you to do so (as long as you’re still contributing to the bottom line, of course). Another major benefit is an increased pace of innovation. As usual, major benefits are difficult to measure. Take the example of the Lockheed Martin “Skunk Works” that produced breakthroughs in stealth military technology – would the organization have produced such things if it had been encumbered by corporate policies and standards?

Should CIOs embrace rogue IT or should it be resisted?

CIOs should embrace this as the new reality of IT becoming a partner with the business, not simply aligning to it. Further, CIOs can gain some visibility into what is going on with “rogue IT” devices and systems. With that visibility, corporate IT departments can develop meaningful offerings and meet the demands of their customers.

Corporate IT departments should also provide some education on what is acceptable and what is not: an iPad at work? OK, but protect it with a password. Using Google Docs to store your company’s financial records? There might be a better place for that.

Two approaches for corporate IT:

– “Embrace and extend”: Allow rogue IT, learn from the experiences of users, adopt the best systems, devices, and technologies, and bring them into formal development

  • The IT department gets to work with its customers and develop new technologies

– “Judge and jury”: Have IT develop and enforce technology standards

  • IT becomes more or less an administrative group – always the bad guy – justified by keeping the company and its information safe (rightly so)

CIOs should also consider when rogue IT is being used. Outside services, quick development, and sidestepping of corporate IT policies may be beneficial for projects in conceptual or development phases. You can find the transcript of the Cloud Commons Twitter chat here: http://bit.ly/JNovHT

What Should I Do about Cloud?

The word of the day is “Cloud.” Nearly every software and hardware vendor out there has a product and shiny marketing to help their customers go “to the cloud.” Every IT trade rag has seemingly unique, seemingly agnostic advice on how their audience can take advantage of cloud computing. Standards bodies have published authoritative descriptions of cloud computing models. If you’re an IT decision maker or influencer, you’re in luck! Many reputable players in the industry have published reams of information to help you on your journey to take advantage of cloud computing. Pick your poison… Public, Private, Hybrid, Community, SaaS, IaaS, PaaS… even XaaS (anything as a service!). On-premises, off-premises… or even “on-premise” if you want!

Starting with an on-premises private cloud of your own seems like a sensible choice – a cloud environment of your own that you can keep cool and dry inside your own datacenter. Architects can design and build it with the components of their choice, management can have the control they’re used to, and administrators can manage it alongside every other system. Security issues can be handled deftly by your consultant or cloud champion – after all, your cloud is internal and private!

Another perspective is to skip out on a cloud strategy, forgo some early benefits, and wait for all of the chips to fall before making any investments. This is the respectable “do nothing” alternative, and it’s a valid one.

Yet another perspective is to take a close look at cloud concepts and prepare your company to act, when appropriate. Prepare, act, appropriate time. Sounds like a strategy brewing.