
The Cloud is Dead! Long Live the Cloud! Twitter Chat Recap

Last week, Cloud Commons hosted a Twitter chat on the end of cloud computing. If you’re not familiar with a tweetchat, it’s a discussion hosted on Twitter that people join at a specific time by following a certain hashtag. The Cloud Commons tweetchats usually have around ten panelists and kick off with a few thought-provoking questions. The participants then respond and share ideas in real time. The discussion is focused enough to be useful (a one-hour session with responses limited to 140 characters) yet large enough to capture different perspectives.

Last week’s tweetchat began with five questions:

  1. Adoption rates are rising for private cloud. Is this a stepping stone to hybrid/public cloud?
  2. What needs to happen before enterprises start to fully embrace cloud computing?
  3. What does the future model for enterprise cloud adoption look like?
  4. What should CSPs be doing more of to meet the needs of the enterprise?
  5. What needs to happen so that cloud becomes so ubiquitous that it’ll no longer be referred to as cloud? When will it happen?

The first question, “Is private cloud a stepping stone to hybrid/public cloud?” drew approximately 32 tweets. From the transcript, it appears that participants in the marketplace are improving their understanding of cloud computing in terms of service and delivery models (private, public, hybrid, IaaS, PaaS, SaaS). The popular viewpoint was that private cloud is not exactly a stepping stone to hybrid/public cloud; a few tweets took the position that private cloud is an alternate path to hybrid/public cloud. Many tweets indicated that IT departments want to retain tight control of their environment. One interesting tweet noted that “private cloud does not necessarily mean on-premises.” More on this later.

The second question, “What needs to happen before enterprises start to fully embrace cloud computing?” drew 47 tweets. Overwhelmingly, the responses in this part of the chat were filled with terms like “services led,” “business value,” “SLA,” and “reduce FUD.” The responses to question 1 covered some of this territory as well: enterprises will fully embrace cloud computing if and when they agree to give up some control of their infrastructure. One interesting tweet mentioned transparency: “…it’s not always about control, as it is transparency.” We would argue that transparency is not what is needed here. Full transparency would require that the business be able to access minute detail about infrastructure, such as the amount of RAM installed on the application server that runs their slice of CRM at Salesforce.com. That kind of detail should be hidden from the business. Abstraction plays heavily here, so we don’t need transparency as much as we need abstraction. What is an important concept that provides abstraction? You guessed it: Service Level Management. The GreenPages view is that processes need to improve before enterprises start to fully embrace cloud computing. See my earlier post, “What Should I Do about Cloud?” which goes into much more detail on this topic.
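
To illustrate the abstraction point, here is a rough sketch (hypothetical names and numbers, in Python) of the kind of service-level view the business would see instead of infrastructure detail:

```python
# Hypothetical sketch: a service-level view exposes commitments, not infrastructure.
from dataclasses import dataclass

@dataclass
class ServiceLevel:
    service_name: str
    availability_target: float   # e.g. 0.999 = "three nines"
    max_response_ms: int         # worst-case response time the business agreed to
    support_hours: str           # e.g. "24x7"

# The business sees this...
crm_sla = ServiceLevel("Hosted CRM", availability_target=0.999,
                       max_response_ms=500, support_hours="24x7")

# ...not the RAM installed on the provider's application servers.
print(f"{crm_sla.service_name}: {crm_sla.availability_target:.1%} availability, "
      f"<{crm_sla.max_response_ms} ms response")
```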

I count about the same number of tweets in response to question 3 as question 2. Question 3 was a little more open-ended, so a critical mass of ideas never really took shape. The GreenPages view is that cloud computing will evolve to look like the modern supply chains seen in other industries, such as manufacturing. Enterprises may purchase IT services from a SaaS provider, Salesforce.com for example. Salesforce.com may purchase its platform from a PaaS provider, and that PaaS provider may purchase its basic infrastructure from an IaaS provider. Value is added at each level: the IaaS provider becomes expert at providing only infrastructure, the PaaS provider builds an extremely robust platform by providing only a platform, and the SaaS provider ultimately becomes expert at assembling and marketing these components into a service that provides value for the enterprise that consumes it. Compare this to the supply chain that auto manufacturers leverage to assemble a vehicle. In the early days of manufacturing, some companies produced every part of a vehicle and assembled it into a finished product; Ford’s River Rouge complex near Detroit is one prominent example where the work to assemble a finished automobile took place in a single factory. Fast forward to the present day, and you’ll be hard pressed to find an auto manufacturer who produces their own windshield glass, or brake pads, or smelts their own aluminum. The supply chain has specialized. Auto manufacturers design, assemble, and market finished vehicles, and that’s about it. Cloud computing could bring the same specialization to IT.

Most tweets in response to question 4 were clearly around Service Level Management and SLAs, mitigating unknowns in security, and avoiding vendor lock-in. We agree, and think that a standard will emerge to define IT services in a single, consistent format, much as OVF, the Open Virtualization Format, does for virtual machines. I can see an extension to OVF that defines a service’s uptime requirements, maximum ping time to a database server, and so on. Such a standard would promote portability of IT services.
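
To make the idea concrete, here is a purely hypothetical sketch of the kind of service-level metadata such an extension might carry; none of these fields exist in the actual OVF specification, they simply illustrate the concept:

```python
# Hypothetical service-level metadata that could accompany an OVF-style package.
# None of these fields are part of the real OVF spec; they illustrate the idea only.
service_descriptor = {
    "service_name": "OrderProcessing",
    "uptime_requirement": "99.95%",        # availability the service must meet
    "max_db_latency_ms": 5,                # maximum ping time to the database server
    "recovery_time_objective_min": 60,     # how quickly it must be restored after failure
    "portability": {
        "hypervisors": ["vSphere", "Hyper-V"],  # where the package may be deployed
    },
}

# A provider (or a broker) could validate its platform against these requirements
# before accepting the workload, which is what would make the service portable.
```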

Question 5 really went back to the topics discussed in question 3. When will enterprises embrace cloud? When will cloud computing become ubiquitous?

Right now, Corporate IT and The Business are like two parties living in a virtual “company town”: the customers (the business) are forced to purchase their services from the company store (corporate IT). GreenPages’ view is that there is a market for IT services and that the emergence of cloud computing will broaden this market. We recommend that organizations understand the value and costs of providing their own IT services so they can participate in that market, just as the business does. Overall, another insightful chat with some intelligent people!

Top Takeaways From EMC World 2012

A little over a week has gone by since the end of EMC World, and all the product announcements are out of the bag. So why another article about EMC World if there are no “big reveals” left? Because I want to make sense of all of the hype, product announcements, and strategic discussions. What do the more than 40 new products mean to GreenPages’ customers, both present and future? How many of those products were just cosmetic makeovers, and how many are actual game changers? Why should you, our friends and extended business family, care, and what should you care about?

I will start by saying that this EMC World really did reveal some technology-leading thoughts and products, and proved that EMC has taken the lead in major storage technology strategy. EMC has always been the 800-pound gorilla of the storage industry, but for many years was far from the front of the pack. This has changed, and in a big way. Innovation still takes place mostly in the small companies on the bleeding edge of storage (SSD, virtualization across platforms, innovative file systems), but EMC has become the leading investor in storage R&D, and it shows. While they may not be inventing the coolest and most striking new storage and hardware, their pace of development and integration of that cool stuff has exponentially increased. Time to market and product refresh cycles are picking up pace. Relationships with the people who get the products in front of you (resellers, integrators and distributors) are vastly improved and much friendlier to the commercial world we all live in (as opposed to the rarified heights of the largest enterprises). The relevance of EMC products to the virtualized datacenter is clear, and the storage engineers who ran the technical sessions and laid out all the new storage, DR, and virtualization roadmaps proved that EMC is the leading storage technology firm in the world.

What are the highlights for GreenPages’ world?

Product Announcements:

Probably the biggest technology in terms of impact, IMHO, is Isilon. This is the fastest, most scalable, easiest-to-manage NAS system ever. It can grow into the petabyte range with no downtime and no forklift upgrades. It is “scale-out” storage, meaning you add nodes that contain processing (CPU), RAM for cache, and additional bandwidth, along with capacity in three flavors (SSD, 15K and 7.2K). This is the system of choice for any healthcare PACS application or Life Sciences data storage, and it is a fantastic general-purpose NAS system as well. Isilon is the system of choice for anyone managing Big Data (large amounts of unstructured data). The entry point for this system is around 10 TB, so you don’t have to be a large company to find value here. Isilon also has the advantage of being a true scale-out system. Some technical nuggets around the Isilon OneFS upgrade: 90% greater throughput, or 740 GB/sec; role-based administration and SEC 17a-4 compliance; better caching (a 50% reduction in latency for I/O-intensive apps); and VMware integration via VAAI (vStorage APIs for Array Integration) and VASA (vStorage APIs for Storage Awareness).

If you are going to jump up into the big-time storage array arena, the new VMAX line is arguably the one to get for power, performance, and integration with the virtualized datacenter. The line has expanded to the VMAX 10K, 20K (the current platform), and 40K. The top of the line sports 8 controllers, scales up to 4 PB, has up to 32 six-core 2.8 GHz Xeon processors and 1 TB of usable RAM, supports 2.5” drives, and uses MLC SSD drives (bringing the cost of flash drives down into the lower atmosphere). The latest development of the FAST auto-tiering software allows IBM and HDS storage to act as a tier of storage behind the VMAX; other arrays will be added soon.

The VNXe 3150 storage system offers up to 50% more performance and capacity in an entry-level system. It includes 10 GbE connectivity and solid state storage, and it is the first production storage system (that I have heard of) to use the latest Intel CPU, Sandy Bridge. Who says EMC product lifecycles are slow and behind the times?

The VPLEX Metro and VPLEX Geo solutions have some significant upgrades, including integration with RecoverPoint and SRM, more performance and scalability, and support for Oracle RAC across sites up to 100 km apart. If you want to federate your datacenters, introduce “stretch clusters,” and have both an HA and a DR strategy, this is the industry leader now.

The VNX series has more than a few improvements: lower-priced SSDs; RAID types that can be mixed within FAST; 256 snaps per LUN; a connector for vCenter Operations (vC Ops); an EMC Storage Analytics Suite based on vC Ops; and AppSync to replace and improve on Replication Manager.

The new VSPEX Proven Infrastructure includes EMC’s VNX and VNXe hybrid storage arrays, along with Avamar software and Data Domain backup appliances. The cloud platform also includes processors from Intel, switches from Brocade, servers from Cisco, and software from Citrix, Microsoft (Hyper-V), and VMware. Avamar and Data Domain products will offer data deduplication, while EMC’s Fully Automated Storage Tiering (FAST) will migrate data between disk tiers based on usage patterns. There are initially 14 VSPEX configurations, which EMC said represent the most popular use cases for companies moving to cloud computing.

Data Domain & Avamar upgrades include the DD990 with an Intel Sandy Bridge CPU, doubling the performance of the DD890 – 28 PB, 16 TB/hr throughput; tight integration of Avamar with VMware, including Hyper-V, SAP, Sybase, SQL2012 – recovery is 30 times faster than NBU/V-Ray.

The VFCache PCIe NAND flash card is a server-side I/O enhancement that pushes flash cache into the server while integrating cache management with the VNX array’s FAST Cache. This will prove to be a huge deal for mission-critical applications running on VMware, since I/O will no longer be a bottleneck even for the most demanding applications. Combine this with Sandy Bridge CPUs and a UCS system with the latest M3 servers, and you will have the world’s most powerful server virtualization platform!

DataBridge is a “mash-up” of nearly any storage or system management tool into a single pane of glass. It is not intended to be a discovery or management tool but, rather, a place where all of the discovery tools can deliver their data, combining EMC and non-EMC infrastructure data sources with business logic from customer organizations. Stay tuned for more on this.

Lots of other deep technical topics were covered in the sessions that ran for three solid days, not counting the unbelievable lab sessions, which are now available for demo purposes. You can see any EMC technology, from implementation to configuration, just by contacting GreenPages and asking for your Friendly Neighborhood Storage Guy!

One final thought I would like to stress: efficiency. EMC is sending a smart business message about efficiency, using VNX as the example. Data growth is far outstripping storage advances and IT budgets. All is not hopeless, however. You can improve efficiency with deduplication, compression, and auto-tiering; flash allows storage performance to keep pace with Moore’s Law; and you can consolidate file servers onto virtual file servers (we have done this with many GreenPages customers when consolidating servers in VMware). Files are the main culprit. How will you manage them, with quotas or content management? What will you choose? How will you manage your data without the money or workforce you think you might need?
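
As a back-of-the-envelope illustration of the efficiency message (the reduction ratios below are assumptions for the example, not EMC benchmarks), even modest deduplication and compression change the capacity math considerably:

```python
# Back-of-the-envelope capacity math; the ratios are illustrative assumptions only.
raw_tb = 100                 # raw usable capacity purchased
dedupe_ratio = 3.0           # assumed 3:1 deduplication on backup/file data
compression_ratio = 1.5      # assumed 1.5:1 compression on primary data

effective_tb = raw_tb * dedupe_ratio * compression_ratio
print(f"{raw_tb} TB raw behaves like ~{effective_tb:.0f} TB of logical capacity")
# -> 100 TB raw behaves like ~450 TB of logical capacity
```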

Contact GreenPages if you need help answering these questions! Meanwhile, watch for more storage technology breakthroughs to come from EMC in the coming months.

Translating a Vision for IT Amid a “Severe Storm Watch”

IT departments adopt technology from two directions: top-down, via a directive from the CIO, and bottom-up, via a “rogue IT” suggestion or project from an individual user. Oftentimes there is confusion somewhere in the middle, resulting in a smorgasbord of tools at one end and a grand, ambitious strategy at the other. This article suggests a framework for translating a vision into strategy, policy, process, and ultimately tools.

Vision for IT -> Strategies -> Policies -> Processes -> Procedures -> Tools and Automation

Revenue Generating Activities -> Business Process -> IT Services

As a solutions architect and consultant, I’ve met with many clients in the past few years. From director-level staff to engineers to support staff in the trenches, IT has taken on a language of its own. Every organization has its own acronyms, sure. Buzzwords and marketing hype strangle the English language inside the datacenter. Consider the range of experience present in many shops, and it is easy to imagine the confusion. The seasoned, senior executive talks about driving standards and reducing spend for datacenter floor space, and the excited young intern responds with telecommuting, tweets, and cloud computing, all in a proof-of-concept that is already in progress. What the…? Who’s right?

 

It occurred to me a while ago that there is a “severe storm watch” for IT. According to the National Weather Service, a “watch” is issued when conditions are favorable for [some type of weather chaos]. Well, in IT, more than in other departments, one can make these observations:

  • Generationally-diverse workforce
  • Diverse backgrounds of workers
  • Highly variable experience of workers
  • Rapidly changing products and offerings
  • High complexity of subject matter and decisions

My colleague, Geoff Smith, recently posted a five-part series (The Taxonomy of IT) describing the operations of IT departments. In the series, Geoff points out that IT departments take on different shapes and behaviors based on a number of factors. The series presents a thoughtful classification of IT departments and how they develop, with a framework borrowed from biology. This post presents a somewhat more tactical suggestion on how IT departments can deal with strategy and technology adoption.

Yet Another Framework

A quick search on Google shows a load of articles on Business and IT Alignment. There’s even a Wikipedia article on the topic. I hear it all the time, and I hate the term. This term suggests that “IT” simply does the bidding of “The Business,” whatever that may be. I prefer to see Business and IT Partnership. But anyway, let’s begin with a partnership within IT departments. Starting with tools, do you know the value proposition of all of the tools in your environment? Do you know about all of the tools in your environment?

 

A single Vision for IT should first translate into one or more Strategies. I’m thinking of a Vision statement for IT that looks something like the following:

“Acme IT exists as a competitive, prime provider of information technology services to enable Acme Company to generate revenue by developing, marketing, and delivering its products and services to its customers. Acme IT stays competitive by providing Acme Company with relevant services that are delivered with the speed, quality and reliability that the company expects. Acme IT also acts as a technology thought leader for the company, proactively providing services that help Acme Company increase revenue, reduce costs, attract new customers, and improve brand image.”

Wow, that’s quite a vision for an IT department. How would a CIO begin to deliver on a vision like that? Just start using VMware, and you’re all set! Not quite! Installing VMware might come all the way at the end of the chain… at “Tool A” in the diagram above.

First, we need one or more Strategies. One valid Strategy may indeed be to leverage virtualization to improve time to market for IT services, and reduce infrastructure costs by reducing the number of devices in the datacenter. Great ideas, but a couple of Policies might be needed to implement this strategy.

One Policy, Policy A in the above diagram, might be that all application development should use a virtual server. Policy B might mandate that all new servers will be assessed as virtualization candidates before physical equipment is purchased.

Processes then flow from Policies. Since I have a policy that mandates that new development should happen on a virtual infrastructure, eventually I should be able to make a good estimate of the infrastructure needed for my development efforts. My Capacity Management process could then requisition and deploy some amount of infrastructure in the datacenter before it is requested by a developer. You’ll notice that this process, Capacity Management, enables a virtualization policy for developers, and neatly links up with my strategy to improve time to market for IT services (through reduced application development time). Eventually, we could trace this process back to our single Vision for IT.
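
As a simplified sketch of what that capacity estimate might look like (all figures are illustrative assumptions, not a sizing recommendation):

```python
# Simplified capacity estimate for pre-provisioning a developer virtual infrastructure.
# All figures are illustrative assumptions, not a sizing recommendation.
expected_new_projects_per_quarter = 12
vms_per_project = 4                      # e.g. web, app, database, test
vcpu_per_vm, ram_gb_per_vm = 2, 8
headroom = 1.25                          # 25% buffer so requests never wait on hardware

vms_needed = expected_new_projects_per_quarter * vms_per_project
vcpus = int(vms_needed * vcpu_per_vm * headroom)
ram_gb = int(vms_needed * ram_gb_per_vm * headroom)
print(f"Pre-provision roughly {vms_needed} VMs: ~{vcpus} vCPUs, ~{ram_gb} GB RAM")
```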

But we’re not done! Processes need to be implemented by Procedures. In order to implement a capacity management process properly, I need to estimate demand from my customers. My customers will be application developers if we’re talking about the policy that developers must use virtualized equipment. Most enterprises have some sort of way to handle this, so we’d want to look at the procedure that developer customers use to request resources. To enable all of this, the request and the measurement of demand, I may want to implement some sort of Tool, like a service catalog or a request portal. That’s the end of the chain – the Tool.

Following the discussion back up to Vision, we can see how the selection of a tool is justified by following the chain back to procedure, process, policy, strategy, and ultimately vision.
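
One way to picture that traceability is as a simple chain that can be walked from the bottom up; this minimal sketch uses made-up entries based on the example above:

```python
# Minimal sketch of the Vision -> Strategy -> Policy -> Process -> Procedure -> Tool chain.
# Entries are made up for illustration.
chain = {
    "Tool": "Service catalog / request portal",
    "Procedure": "Developer requests a virtual server through the portal",
    "Process": "Capacity Management pre-provisions infrastructure",
    "Policy": "All new development happens on virtual servers",
    "Strategy": "Use virtualization to improve time to market and cut infrastructure cost",
    "Vision": "Acme IT is the competitive, prime provider of IT services to Acme Company",
}

# Walking the chain from the bottom up answers "why do we own this tool?"
for level in ["Tool", "Procedure", "Process", "Policy", "Strategy", "Vision"]:
    print(f"{level:>9}: {chain[level]}")
```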

This framework provides a simple alignment that can be used in IT departments for a number of advantages. One significant advantage is that it provides a common language for everyone in the IT department to understand the reasoning behind the design of a particular process, the need for a particular procedure, or the selection of a particular tool over another.

In a future blog post, I’ll cover the various other advantages of using this framework.

Food for Thought

  1. Do you see a proliferation of tools and a corresponding disconnect with strategy in your department?
  2. Who sets the vision and strategy for IT in your department?
  3. Is your IT department using a similar framework to rationalize tools?
  4. Do your IT policies link to processes and procedures?
  5. Can you measure compliance to your IT policies?

Cloud Corner Series – Dissecting Virtualization



www.youtube.com/watch?v=pL29FHWXa3U

 

In this segment of Cloud Corner, we bring on Solutions Architect Chris Chesley to discuss various aspects of virtualization. Chris also gets quizzed on how well he knows his fellow Journey to the Cloud Bloggers. Let us know if you agree or disagree with the points Chris makes. We asked Chris the following questions:

  1. If I’m virtualized, am I in the cloud?
  2. How virtualized would you recommend organizations become?
  3. What is the biggest aspect organizations misunderstand about virtualization?
  4. What is the single biggest benefit of virtualization?
  5. What does it mean to be 100% virtualized, and what are the benefits?
  6. Where should companies who have not virtualized anything start?

Check out Episode 1 and Episode 2 of Cloud Corner!

Going Rogue: Do the Advantages Outweigh the Risks?

Are all rogue IT projects bad things? Could this type of activity be beneficial? If rogue IT projects could be beneficial, should they be supported or even encouraged?

Recently, I took part in a live Twitter chat hosted by the Cloud Commons blog (thanks again for the invite!) that was focused on Rogue IT. After hearing from, and engaging with, some major thought leaders in the space, I decided to write a blog summarizing my thoughts on the topic.

What does “Rogue IT” mean anyway?

I think that there are rogue IT users and there are rogue IT projects. There’s the individual user scheduling meetings with an “unauthorized” iPad, and there’s the sales department that, without the knowledge of corporate IT, develops an iPhone app to process orders for your yet-to-be-developed product. Let’s focus on the latter: rogue IT projects. Without a doubt, rogue IT projects have been, and will continue to be, an issue for corporate IT departments. A quick web search will return articles on “rogue IT” dating back around 10 years. However, as technology decreases in cost and increases in functionality, the issue of rogue IT projects seems to be moving up the list of concerns.

What does rogue IT have to do with cloud computing?

Cloud computing opens up a market for IT services. With cloud computing, organizations have the ability to source IT services from the provider that can deliver each service most efficiently. Sounds a lot like specialization and division of labor, doesn’t it? (We’ll stay away from The Wealth of Nations, for now.) Suffice it to say that rogue IT may be an indication that corporate IT departments need to compete with outside providers of IT services. Stated plainly, the rise of cloud computing is encouraging firms to enter the market for IT services. Customers, even inside a large organization, have choices (other than corporate IT) for how to acquire the IT services they need. Maybe corporate IT is not able to deliver a new IT service in time for that new sales campaign, or corporate IT simply refuses to develop a new system requested by a customer. That customer, in control of their own budget, may turn to an alternative service offering “from the cloud.”

What are the advantages of rogue IT? Do they outweigh the risks?

Rogue IT is a trend that will continue as the very nature of work changes (the long-running shift to a service-based economy means more and more knowledge workers). Rogue IT can lead to some benefits, BYOD or “bring your own device” for example. BYOD can drive down end-user support costs and improve efficiency. BYOD will someday also mean “bring your own DESK,” allowing you to work when and where it is most convenient for you to do so (as long as you’re helping the bottom line, of course). Another major benefit is an increased pace of innovation. As usual, major benefits are difficult to measure. Take the example of the Lockheed Martin “Skunk Works” that produced breakthroughs in stealth military technology: would the organization have produced such things if it had been encumbered by corporate policies and standards?

Should CIOs embrace rogue IT or should it be resisted?

CIOs should embrace this as the new reality of IT becoming a partner with the business, not simply aligning to it. Further, CIOs can gain some visibility into what is going on with regard to “rogue IT” devices and systems. With some visibility, the corporate IT departments can develop meaningful offerings and meet the demands of their customers.

Corporate IT departments should also provide some education about what is acceptable and what is not: an iPad at work is OK, but protect it with a password; using Google Docs to store your company’s financial records…there might be a better place for that.

Two approaches for corporate IT:

– “Embrace and extend:” Allow rogue IT, learn from the experiences of users, adopt the best systems/devices/technologies, and bring them into formal development

  • The IT department gets to work with its customers and develop new technologies

– “Judge and jury:” Have IT develop and enforce technology standards

  • IT acts more or less as an administrative group, is always the bad guy, and justifies the role by keeping the company and its information safe (rightly so)

CIOs should also consider when rogue IT is being used. Outside services, quick development, and sidestepping of corporate IT policies may be beneficial for projects in conceptual or development phases. You can find the transcript from the Cloud Commons twitter chat here: http://bit.ly/JNovHT

Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most, if not all, of the commodity IT activities in your data center need to be automated (i.e. translated into a workflow), and then those singular workflows strung together (i.e. orchestrated) into a value chain of events that delivers a business benefit. An example of orchestrating a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets, virtual machines representing web, application and database servers, along with the OSes, software stacks and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit, in that a developer can now use those assets to create an application for producing revenue, decreasing costs, or managing existing infrastructure better (the holy trinity of business benefits).
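
As a highly simplified sketch of the idea (the step names and functions are invented for illustration; a real orchestration engine would do far more), commissioning a composite application is just a set of individual automated activities strung together into one repeatable value chain:

```python
# Toy illustration: individual automated activities (workflows) strung together
# (orchestrated) to commission a composite application. Step names are invented.
def provision_vm(role):
    print(f"provisioning {role} VM")
    return f"{role}-vm"

def install_stack(vm, stack):
    print(f"installing {stack} on {vm}")

def register_monitoring(vm):
    print(f"registering {vm} with monitoring")

def commission_composite_app(name):
    """Orchestration: string the individual workflows into one repeatable value chain."""
    tiers = {"web": "nginx", "app": "tomcat", "db": "postgres"}
    for role, stack in tiers.items():
        vm = provision_vm(role)        # workflow 1: create the asset
        install_stack(vm, stack)       # workflow 2: lay down the software stack
        register_monitoring(vm)        # workflow 3: hand it to operations
    print(f"composite application '{name}' ready for developers")

commission_composite_app("order-entry")
```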

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory, so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and, in turn, an orchestration; then I think you’ll better see what I mean. We’ll use the example above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is to pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and based on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (or any letter in between). A reasonably encapsulated process sits toward the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ), you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I could dive into the physical process steps (hundreds, by the way) that you’re well aware of, but since you already know those, it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, run books and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information that resides in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the required information for building out the assets you use, in our example, the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked can seldom answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. But if you give someone a listing or scenario to which they can relate (when do you commission this type of composite application, based on this list of system activities and tools?), they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model. But what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? It then comes down to observation, either self-observation or external observation, where all process steps are documented and compiled. Obviously this is labor-intensive and time-inefficient, but unfortunately it is the reality, because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed and operated; the rest rely on institutional knowledge and processes passed from person to person.
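
One way to make that observation phase a little less painful (a sketch of a capture template with illustrative field names, not a prescribed format) is to record every observed step in a consistent structure, so the institutional knowledge becomes something an automation effort can eventually consume:

```python
# Illustrative template for capturing observed process steps during discovery.
# Field names are assumptions; adapt them to whatever your runbook format requires.
process_step = {
    "step": "Allocate a LUN for the new database VM",
    "performed_by": "Storage admin",
    "inputs": ["capacity request ticket", "array pool name"],
    "outputs": ["LUN ID", "zoning change"],
    "tools_used": ["array manager UI", "zoning CLI"],
    "documented_in": None,     # None = lives only in someone's head today
    "automatable": True,
}
```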

The process steps in your people’s heads, the ones you don’t know about and can’t get from a system search of your repositories, are the ones that will take most of the time to document. That is my point (“what you think you’re doing is less than half of what you’re really doing”), and it is where a lot of your automation and orchestration effort will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.