Category Archives: Cloud computing

News Round-Up 5/19/12: Google’s Cloud, Future of Data Centers, Cloud IPOs, Cloud Security Myths Busted and More


There have been some exciting announcements and fascinating news articles recently regarding cloud services and service providers. Every week we will round up the most interesting topics from around the globe and consolidate them into a weekly summary.


Hitch a Ride Through Google’s Cloud

Your Gmail box lives somewhere in the jumble of servers, cables, and hard drives known as the “cloud,” but it often migrates in search of the ideal location. Find out what happens when you hit send.


The Future of Data Centers: Is 100% Cloud Possible?

Guest blogger Robert Offley explains how the market is shifting today, what barriers remain for total cloud adoption, and if an evolution to 100% cloud is likely to occur.


Big Data is Worth Nothing Without Big Science

As with gold or oil, data has no intrinsic value, writes Webtrends CEO Alex Yoder. Big science, which bridges the gap between knowledge and insight, is where the real value is.


The Hottest IPO You’ve Never Heard Of

With an expected valuation of close to $100 billion, it’s understandable that no one can stop talking about Facebook’s initial public offering this week.  But while Facebook basks in the social media spotlight, companies tackling tough business problems are exciting investors, if not consumers. Workday, for example, is expected to be among the largest IPOs this year in the business software market.


Five Busted Myths of Cloud Security

“Cloud” is one of the most overused and least understood words in technology these days, so it’s little surprise that there’s so much confusion about its security. This article busts five myths about cloud security.


Also in the news:



Going Rogue: Do the Advantages Outweigh the Risks?

Are all rogue IT projects bad things? Could this type of activity be beneficial? If rogue IT projects could be beneficial, should they be supported or even encouraged?

Recently, I took part in a live Twitter chat hosted by the Cloud Commons blog (thanks again for the invite!) that was focused on Rogue IT. After hearing from, and engaging with, some major thought leaders in the space, I decided to write a blog summarizing my thoughts on the topic.

What does “Rogue IT” mean anyway?

I think that there are rogue IT users and there are rogue IT projects. There’s the individual user scheduling meetings with an “unauthorized” iPad. There’s also the sales department that, without the knowledge of corporate IT, develops an iPhone app to process orders for your yet-to-be-developed product. Let’s focus on the latter: rogue IT projects. Without a doubt, rogue IT projects have been, and will continue to be, an issue for corporate IT departments. A quick web search will return articles on “rogue IT” dating back around 10 years. However, as technology decreases in cost and increases in functionality, the issue of rogue IT projects seems to be moving up the list of concerns.

What does rogue IT have to do with cloud computing?

Cloud Computing opens up a market for IT services. With Cloud Computing, organizations have the ability to source IT services to the provider that can deliver the service most efficiently. Sounds a lot like specialization and division of labor, doesn’t it? (We’ll stay away from The Wealth of Nations, for now.) Suffice it to say that rogue IT may be an indication that corporate IT departments need to compete with outside providers of IT services. Stated plainly, the rise of Cloud Computing is encouraging firms to enter the market for IT services. Customers, even inside a large organization, have choices (other than corporate IT) for how to acquire the IT services they need. Maybe corporate IT is not able to deliver a new IT service in time for that new sales campaign. Or corporate IT simply refuses to develop a new system requested by a customer. That customer, in control of their own budget, may turn to an alternative service offering “from the cloud.”

What are the advantages of rogue IT? Do they outweigh the risks?

Rogue IT is a trend that will continue as the very nature of work changes (e.g., the long-running shift to a service-based economy means more and more knowledge workers). Rogue IT can lead to some benefits… BYOD, or “bring your own device,” for example. BYOD can drive down end-user support costs and improve efficiency. BYOD will someday also mean “bring your own DESK” and allow you to choose to work when and where it is most convenient for you to do so (as long as you’re positively impacting the bottom line, of course). Another major benefit is an increased pace of innovation. As usual, major benefits are difficult to measure. Take the example of the Lockheed Martin “Skunk Works” that produced breakthroughs in stealth military technology: would the organization have produced such things if it had been encumbered by corporate policies and standards?

Should CIOs embrace rogue IT or should it be resisted?

CIOs should embrace this as the new reality of IT becoming a partner with the business, not simply aligning to it. Further, CIOs can gain some visibility into what is going on with regard to “rogue IT” devices and systems. With some visibility, the corporate IT departments can develop meaningful offerings and meet the demands of their customers.

Corporate IT departments should also provide some education as to what is acceptable and what is not: an iPad at work is OK, but protect it with a password. Using Google Docs to store your company’s financial records? There might be a better place for that.

Two approaches for corporate IT:

– “Embrace and extend:” Allow rogue IT, learn from the experiences of users, adopt the best systems/devices/technologies, and bring them into formal development

  • IT department gets to work with their customers and develop new technologies

– “Judge and Jury:” Have IT develop and enforce technology standards

  • IT becomes more or less an administrative group, always the bad guy, justifying its role by keeping the company and its information safe (rightly so)

CIOs should also consider when rogue IT is being used. Outside services, quick development, and sidestepping of corporate IT policies may be beneficial for projects in conceptual or development phases. You can find the transcript from the Cloud Commons Twitter chat here:

Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most—if not all—of the commodity IT activities in your data center need to be automated (i.e., translated into a workflow) and then those singular workflows strung together (i.e., orchestrated) into a value chain of events that delivers a business benefit. An example of the orchestration of a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets—virtual machines—that represent web, application and database servers as well as the OSes, software stacks and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit whereby a developer can now use those assets to create an application for producing revenue, decreasing costs or managing existing infrastructure better (the holy trinity of business benefits).
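To make the workflow/orchestration distinction concrete, here is a minimal sketch in Python. Every function and role name is hypothetical (not from any real orchestration product): each function stands in for one automated commodity IT activity, and the orchestration simply strings those workflows together to commission the composite application described above.

```python
def provision_vm(role):
    """Automated activity: create a virtual machine for one tier (web/app/db)."""
    return {"role": role, "state": "provisioned"}

def install_stack(vm):
    """Automated activity: lay down the OS and software stack on the VM."""
    vm["state"] = "stack-installed"
    return vm

def configure_infrastructure(vm):
    """Automated activity: wire up networking, storage, monitoring, etc."""
    vm["state"] = "ready"
    return vm

def commission_composite_app(roles=("web", "app", "db")):
    """Orchestration: run the per-VM workflow for every tier, yielding the
    affinitive collection of assets a developer can then build on."""
    workflow = (install_stack, configure_infrastructure)
    assets = []
    for role in roles:
        vm = provision_vm(role)
        for step in workflow:
            vm = step(vm)
        assets.append(vm)
    return assets

assets = commission_composite_app()
```

The point of the sketch is the shape, not the steps: each singular workflow is useful on its own, but only the chained sequence delivers the business benefit of a commissioned application.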

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and by turn, an orchestration and then I think you’ll better see what I mean. We’ll use the example from above as the basis for the illustration.

The first thing you need to do before you create any workflow (and orchestration) is pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and depending on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (and all letters in between). A reasonably encapsulated process sits somewhere toward the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ) you then need to analyze all of the processes steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I can dive into the physical process steps (hundreds, by the way) that you’re well aware of, but you already know those so it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, run books and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information that resides in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the required information for building out the assets you use, in our example, the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked can seldom answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. Now if you give someone a listing or scenario to which they can relate (when do you commission this type of composite application, based on this list of system activities and tools?), they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model. But what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? It is then called observation, either self-observation or external observation, in which all process steps are documented and compiled. Obviously this is labor intensive and time inefficient, but unfortunately it is the reality, because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed and operated…instead relying on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about—the ones that you can’t get from a system search of your repositories—are the ones that will take most of the time documenting, which is my point, (“what you think you’re doing is less than half of what you’re really doing”) and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.

News Round-Up 5/5/2012: What Makes the Cloud Cool, Feds in the Cloud, 10 Things Your Cloud Contract Needs


There have been some exciting announcements and fascinating news articles recently regarding cloud services and service providers. Every week we will round up the most interesting topics from around the globe and consolidate them into a weekly summary.


Cloud Computing Gains in Federal Government

The Federal Government is warming to the speed, agility and functionality of cloud computing.


State companies helping Army with cloud computing

The U.S. Army has turned to cloud computing, and to Wisconsin companies, to improve its intelligence gathering in Afghanistan.


SaaS Offering Provides Detailed Analysis of Your Software Portfolio

Are you faced with the need to do a software portfolio analysis but find the prospect daunting given the scattered nature of your operation? A new SaaS-based offering might fit the bill.


SaaS Business Apps Drive SMB Cloud Computing Adoption

Lots of small and medium businesses have discovered the benefits of software-as-a-service. These SaaS applications are driving cloud adoption among SMBs. 


Here’s What Makes The Cloud So Cool

Mike Pearl from PricewaterhouseCoopers provides a useful plan of attack for business adoption of cloud computing.


10 Things You Just Gotta Have in Your Cloud Contract

CFO’s guide to the wild and wooly world of cloud services in which contracts are mutable, companies come and go, and politics a continent away could materially impact your business.



Also in the news:




Guest Post: Cloud Management


By Rick Blaisdell, CTO, ConnectEDU

Cloud computing has definitely revolutionized the IT industry and transformed the way in which IT services are delivered. But finding the best way for an organization to perform common management tasks using remote services on the Internet is not that easy.

Cloud management incorporates the tasks of provisioning, managing, and monitoring applications on cloud infrastructures without requiring end-user knowledge of the physical location of the systems that deliver the services. Monitoring cloud applications and activity requires cloud management tools to ensure that resources are meeting SLAs, working optimally, and not adversely affecting the systems and users that leverage these services.

With appropriate cloud management solutions, private users are now able to manage multiple operating systems on the same dedicated server or move virtual servers to a shared server, all from within the same cloud management solution. Some cloud companies offer tools to manage this entire process; others provide this solution using a combination of tools and managed services.

The three core components of the cloud environment, Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), now offer great solutions to manage cloud computing, but the management tools need to be flexible and scalable, just as the cloud computing strategy of an organization should be. With this new paradigm of computing, cloud management has to:

  • continue to make the cloud easier to use;
  • provide security policies for the cloud environment;
  • allow safe cloud operations and ease migrations;
  • provide for financial controls and tracking;
  • provide auditing and reporting for compliance.

Numerous tasks and tools are necessary for cloud management. A successful cloud management strategy includes performance monitoring (response times, latency, uptime, and so on); security and compliance auditing and management; and the initiation, supervision, and management of disaster recovery.
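As a hedged sketch of one of those tasks, here is what performance monitoring against SLA targets might look like at its simplest. The metric names and thresholds are illustrative assumptions, not taken from any particular monitoring product:

```python
# Illustrative SLA targets for one service (hypothetical values).
SLA = {"max_response_time_ms": 200, "min_uptime_pct": 99.9}

def check_sla(measurements, sla=SLA):
    """Return the list of SLA breaches for one monitoring interval."""
    breaches = []
    if measurements["response_time_ms"] > sla["max_response_time_ms"]:
        breaches.append("response_time")
    if measurements["uptime_pct"] < sla["min_uptime_pct"]:
        breaches.append("uptime")
    return breaches

# One slow interval: response time breaches the target, uptime does not.
breaches = check_sla({"response_time_ms": 250, "uptime_pct": 99.95})
```

A real cloud management tool would layer alerting, trending, and reporting on top of checks like this, but the core loop of comparing measured values against agreed targets is the same.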

So, why is it so important to implement a cloud management strategy in an organization? A cloud management strategy that fits the cloud computing resources a company uses delivers IT services to the business faster, reduces capital and operating costs, automates chargebacks for resource usage and reporting, and allows IT departments to monitor their service-level requirements.



This post originally appeared on

Avoid the Security Umpire Problem

Have you ever been part of a team or committee working on an initiative and found that the security or compliance person seemed to be holding up your project? They seemed to find fault with anything and everything and just didn’t add much value to the initiative? If you are stuck with security staff who are like this all the time, that’s a bigger issue that’s beyond the scope of this article to solve. But most of the time, it’s because this person was brought in very late in the project and a bunch of things have been thrown at them, forcing them to make quick calls or decisions.

A common scenario is that people feel there is no need to involve the security folks until after the team has come up with a solution. Then the team pulls in the security or compliance folks to validate that the solution doesn’t run afoul of the organization’s security or compliance standards. Instead of a team member who can help with the security and compliance aspects of your project, you have ended up with an umpire.

Now think back to when you were a kid picking teams to play baseball.  If you had an odd number of kids then more than likely there would be one person left who would end up being the umpire. When you bring in the security or compliance team member late in the game, you may end up with someone that takes on the role of calling balls and strikes instead of being a contributing member of the team.

Avoid this situation by involving your Security and Compliance staff early on, when the team is being assembled.  Your security SMEs should be part of these conversations.  They should know the business and what the business requirements are.  They should be involved in the development of solutions.  They should know how to work within a team through the whole project lifecycle. Working this way ensures that the security SME has full context and is a respected member of the team, not a security umpire.

This is even more important when the initiative is related to virtualization or cloud. There are so many new things happening in this specific area that everyone on the team needs as much context, background, and lead time as possible so that they can work as a team to come up with solutions that make sense for the business.

What Should I Do about Cloud?

The word of the day is “Cloud.” Nearly every software and hardware vendor out there has a product and shiny marketing to help their customers go “to the cloud.” Every IT trade rag has seemingly unique, seemingly agnostic advice on how their audience can take advantage of cloud computing. Standards bodies have published authoritative descriptions of cloud computing models. If you’re an IT decision maker or influencer, you’re in luck! Many reputable players in the industry have published reams of information to help you on your journey to take advantage of cloud computing. Pick your poison… Public, Private, Hybrid, Community, SaaS, IaaS, PaaS… even XaaS (anything as a service!). On-premises, off-premises… or even “on-premise” if you want!

Starting with an on-premises private cloud of your own seems like a sensible choice. A cloud environment of your own, that you can keep cool and dry inside of your own datacenter. Architects can design and build it with the components of their choice, management can have the control that they’re used to, and administrators can manage it alongside every other system. Security issues can be handled deftly by your consultant or cloud-champion – after all, your cloud is internal and private!

Another perspective is to skip out on a cloud strategy, forgo some early benefits, and wait for all of the chips to fall before making any investments. This is the respectable “do nothing” alternative, and it’s a valid one.

Yet another perspective is to take a close look at cloud concepts and prepare your company to act, when appropriate. Prepare, act, appropriate time. Sounds like a strategy brewing.