Category Archives: Cloud computing

Translating a Vision for IT Amid a “Severe Storm Watch”

IT departments adopt technology in one of two ways: through a directive from the CIO, or through a “rogue IT” suggestion or project from an individual user. The former represents top-down adoption, while the latter is technology adoption from the bottom up. Oftentimes there is confusion somewhere in the middle, resulting in a smorgasbord of tools at one end and a grand, ambitious strategy at the other. This article suggests a framework for implementing a vision through strategy, policy, process, and ultimately tools.

Vision for IT -> Strategies -> Policies -> Processes -> Procedures -> Tools and Automation

Revenue Generating Activities -> Business Process -> IT Services

As a solutions architect and consultant, I’ve met with many clients in the past few years. From director-level staff to engineers and support staff in the trenches, IT has taken on a language of its own. Every organization has its own acronyms, sure, but buzzwords and marketing hype strangle the English language inside the datacenter. Consider the range of experience present in many shops, and it is easy to imagine the confusion. The seasoned senior executive talks about driving standards and reducing spend on datacenter floor space, and the excited young intern responds with telecommuting, tweets, and cloud computing, all in a proof of concept that is already in progress. What the…? Who’s right?

 

It occurred to me a while ago that there is a “severe storm watch” for IT. According to the National Weather Service, a “watch” is issued when conditions are favorable for [some type of weather chaos]. Well, in IT, more than in other departments, one can make these observations:

  • Generationally-diverse workforce
  • Diverse backgrounds of workers
  • Highly variable experience of workers
  • Rapidly changing products and offerings
  • High complexity of subject matter and decisions

My colleague, Geoff Smith, recently posted a five-part series (The Taxonomy of IT) describing the operations of IT departments. In the series, Geoff points out that IT departments take on different shapes and behaviors based on a number of factors. The series presents a thoughtful classification of IT departments and how they develop, with a framework borrowed from biology. This post presents a somewhat more tactical suggestion on how IT departments can deal with strategy and technology adoption.

Yet Another Framework

A quick search on Google shows a load of articles on Business and IT Alignment. There’s even a Wikipedia article on the topic. I hear it all the time, and I hate the term. This term suggests that “IT” simply does the bidding of “The Business,” whatever that may be. I prefer to see Business and IT Partnership. But anyway, let’s begin with a partnership within IT departments. Starting with tools, do you know the value proposition of all of the tools in your environment? Do you know about all of the tools in your environment?

 

A single Vision for IT should first translate into one or more Strategies. I’m thinking of a Vision statement for IT that looks something like the following:

“Acme IT exists as a competitive, prime provider of information technology services to enable Acme Company to generate revenue by developing, marketing, and delivering its products and services to its customers. Acme IT stays competitive by providing Acme Company with relevant services that are delivered with the speed, quality and reliability that the company expects. Acme IT also acts as a technology thought leader for the company, proactively providing services that help Acme Company increase revenue, reduce costs, attract new customers, and improve brand image.”

Wow, that’s quite a vision for an IT department. How would a CIO begin to deliver on a vision like that? Just start using VMware, and you’re all set! Not quite! Installing VMware might come all the way at the end of the chain… at “Tool A” in the diagram above.

First, we need one or more Strategies. One valid Strategy may indeed be to leverage virtualization to improve time to market for IT services, and reduce infrastructure costs by reducing the number of devices in the datacenter. Great ideas, but a couple of Policies might be needed to implement this strategy.

One Policy, Policy A in the above diagram, might be that all application development should use a virtual server. Policy B might mandate that all new servers will be assessed as virtualization candidates before physical equipment is purchased.

Processes then flow from Policies. Since I have a policy that mandates that new development should happen on a virtual infrastructure, eventually I should be able to make a good estimate of the infrastructure needed for my development efforts. My Capacity Management process could then requisition and deploy some amount of infrastructure in the datacenter before it is requested by a developer. You’ll notice that this process, Capacity Management, enables a virtualization policy for developers, and neatly links up with my strategy to improve time to market for IT services (through reduced application development time). Eventually, we could trace this process back to our single Vision for IT.
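
As a rough illustration (with entirely hypothetical numbers and names), the demand side of that Capacity Management process could boil down to a sketch like the following, which estimates how much virtual infrastructure to pre-provision for developers based on recent request history:

    import statistics

    # Hypothetical history: virtual machines requested by developers per month.
    monthly_vm_requests = [38, 42, 45, 51, 47, 55]

    # Hypothetical sizing assumptions for an "average" development VM.
    VCPUS_PER_VM = 2
    RAM_GB_PER_VM = 8
    HEADROOM = 1.25  # provision 25% above the recent trend to absorb spikes

    def estimate_next_month_capacity(history):
        """Estimate VMs, vCPUs, and RAM to pre-provision for next month."""
        expected_vms = statistics.mean(history[-3:]) * HEADROOM
        vms = round(expected_vms)
        return {
            "vms": vms,
            "vcpus": vms * VCPUS_PER_VM,
            "ram_gb": vms * RAM_GB_PER_VM,
        }

    print(estimate_next_month_capacity(monthly_vm_requests))
    # e.g. {'vms': 64, 'vcpus': 128, 'ram_gb': 512}

The arithmetic is trivial; the point is that the process exists because the policy exists, and the policy exists because of the strategy.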

But we’re not done! Processes need to be implemented by Procedures. In order to implement a capacity management process properly, I need to estimate demand from my customers. My customers will be application developers if we’re talking about the policy that developers must use virtualized equipment. Most enterprises have some sort of way to handle this, so we’d want to look at the procedure that developer customers use to request resources. To enable all of this, the request and the measurement of demand, I may want to implement some sort of Tool, like a service catalog or a request portal. That’s the end of the chain – the Tool.

Following the discussion back up to Vision, we can see how the selection of a tool is justified by following the chain back to procedure, process, policy, strategy, and ultimately vision.
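
To make that traceability concrete, here is a minimal sketch (reusing the hypothetical Acme examples above) of how the chain from tool back to vision might be recorded, so that anyone in the department can ask “why do we have this tool?” and walk the links back up:

    # Hypothetical traceability record for one tool, following the chain
    # Tool -> Procedure -> Process -> Policy -> Strategy -> Vision.
    tool_rationale = {
        "tool": "Self-service request portal / service catalog",
        "procedure": "Developer requests a virtual server via the portal",
        "process": "Capacity Management",
        "policy": "Policy A: all application development uses virtual servers",
        "strategy": "Leverage virtualization to improve time to market",
        "vision": "Acme IT as a competitive, prime provider of IT services",
    }

    def explain(record):
        """Walk the chain from tool back up to vision."""
        order = ["tool", "procedure", "process", "policy", "strategy", "vision"]
        return " -> ".join(record[level] for level in order)

    print(explain(tool_rationale))

Nothing about the record is sophisticated; the value is that the justification is written down and can be challenged at any link in the chain.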

This framework provides a simple alignment that can be used in IT departments for a number of advantages. One significant advantage is that it provides a common language for everyone in the IT department to understand the reasoning behind the design of a particular process, the need for a particular procedure, or the selection of a particular tool over another.

In a future blog post, I’ll cover the various other advantages of using this framework.

Food for Thought

  1. Do you see a proliferation of tools and a corresponding disconnect with strategy in your department?
  2. Who sets the vision and strategy for IT in your department?
  3. Is your IT department using a similar framework to rationalize tools?
  4. Do your IT policies link to processes and procedures?
  5. Can you measure compliance to your IT policies?

Where Is the Cloud Going? Try Thinking “Minority Report”

I recently read a news release (here) in which NVIDIA proposes to partition processing between on-device and cloud-located graphics hardware. Here’s an excerpt:

“Kepler cloud GPU technologies shifts cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realizing their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”

That, along with the split processing handled by the Silk browser on the Kindle Fire (see here), got me thinking about the “processing partitioning” strategy in relation to other aspects of computing, and cloud computing in particular. My thinking is that over the next five to seven years (by 2020 at the latest), there will be several very important seismic shifts in computing driven by at least four separate events: 1) user data becomes a centralized commodity that’s brokered by a few major players, 2) a new cloud-specific programming language is developed, 3) processing becomes “completely” decoupled from hardware and location, and 4) end-user computing becomes based almost completely on SoC technologies (see here). The end result will be a degree of data and processing independence never seen before, one that will allow us to live in that Minority Report world. I’ll describe the events and then describe how all of them will come together to create what I call “pervasive personal processing,” or P3.

User Data

Data about you, your reading preferences, what you buy, what you watch on TV, where you shop, etc. exist in literally thousands of different locations and that’s a problem…not for you…but for merchants and the companies that support them.  It’s information that must be stored and maintained and regularly refreshed for it to remain valuable, basically, what is being called “big data.” The extent of this data almost cannot be measured because it is so pervasive and relevant to everyday life. It is contained within so many services we access day in and day out and businesses are struggling to manage it. Now the argument goes that they do this, at great cost, because it is a competitive advantage to hoard that information (information is power, right?) and eventually, profits will arise from it.  Um, maybe yes and maybe no but it’s extremely difficult to actually measure that “eventual” profit…so I’ll go along with “no.” Now even though big data-focused hardware and software manufacturers are attempting to alleviate these problems of scale, the businesses who house these growing petabytes…and yes, even exabytes…of data are not seeing the expected benefits—relevant to their profits—as it costs money, lots of it.  This is money that is taken off the top line and definitely affects the bottom line.

Because of these imaginary profits (and the real loss), more and more companies will start outsourcing the “hoarding” of this data until the eventual state is that there are two or three big players who will act as brokers. I personally think it will be either the credit card companies or the credit rating agencies…both groups have the basic frameworks for delivering consumer profiles as a service (CPaaS) and charging for access rights. A big step toward this will be when Microsoft unleashes IDaaS (Identity as a Service) as part of integrating Active Directory into its Azure cloud. It’ll be a hurdle for them to convince the public to trust them, but I think they will eventually prevail.

These profile brokers will start using IDaaS because then they don’t have to have separate internal identity management systems (for separate data repositories of user data) for other businesses to access their CPaaS offerings. Once this starts to gain traction, you can bet that the real data mining begins on your online, and offline, habits because your loyalty card at the grocery store will be part of your profile…as will your credit history and your public driving record and the books you get from your local library and…well, you get the picture. Once your consumer profile is centralized, all kinds of data feeds will appear because the profile brokers will pay for them. Your local government, always strapped for cash, will sell you out in an instant for some recurring monthly revenue.

Cloud-specific Programming

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely but, to date, they have been entirely encapsulated within the local machine (or in some cases the nodes of a supercomputer or HPC cluster which, for our purposes, is really just a large single machine). What this means is that the programs written for those systems need to know precisely where the functions will be run, what subsystems will run them, the exact syntax and context, etc. One slight error or a small lag in the response time and the whole thing could crash or, at best, run slowly or produce additional errors.

But, what if you had a computer language that understood the cloud and took into account latency, data errors and even missing data?  A language that was able to partition processing amongst all kinds of different processing locations, and know that the next time, the locations may have moved?  A language that could guess at the best place to process (i.e. lowest latency, highest cache hit rate, etc.) but then change its mind as conditions change?

That language would allow you to specify a type of processing and then actively seek the best place for that processing to happen based on many different details…processing intensity, floating point, entire algorithm or proportional, subset or superset…and fully understand that, in some cases, it will have to make educated guesses about what the returned data will be (in case of unexpected latency). It will also have to know that the data to be processed may exist in a thousand different locations such as the CPaaS providers, government feeds, or other providers for specific data types. It will also be able to adapt its processing to the available processing locations so that functionality degrades gracefully…maybe based on a probability factor included in the language that records variables over time and uses them to guess where it will be next and line up the processing needed beforehand. The possibilities are endless, but not impossible…which leads to…
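
None of this exists yet, of course, but a crude sketch of the kind of decision such a language’s runtime might make, with entirely made-up locations and metrics, could look something like this:

    import random

    # Hypothetical processing locations with made-up, constantly changing metrics.
    locations = [
        {"name": "local-soc", "latency_ms": 1, "cache_hit_rate": 0.20, "available": True},
        {"name": "edge-node", "latency_ms": 18, "cache_hit_rate": 0.65, "available": True},
        {"name": "region-cloud", "latency_ms": 90, "cache_hit_rate": 0.95, "available": True},
    ]

    def score(loc):
        """Lower is better: weigh latency against the chance the data is already cached."""
        return loc["latency_ms"] * (1.0 - loc["cache_hit_rate"])

    def choose_location(locs):
        candidates = [loc for loc in locs if loc["available"]]
        return min(candidates, key=score)

    def run_partitioned(task, locs):
        """Pick the best location now; fall back (an 'educated guess') if it disappears."""
        best = choose_location(locs)
        if random.random() < 0.1:            # simulate the chosen location moving or timing out
            best["available"] = False
            best = choose_location(locs)     # change its mind as conditions change
        return f"{task} dispatched to {best['name']}"

    print(run_partitioned("render-frame", locations))

The interesting part is not the scoring function, which is deliberately naive here, but that the choice is re-evaluated every time conditions change.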

Decoupled Processing and SoC

As can be seen by the efforts NVIDIA is making in this area, the processing of data will soon become completely decoupled from where that data lives or is used. What this is and how it will be done will rely on other events (see the previous section), but the bottom line is that once processing is decoupled, a whole new class of device will appear, in both static and mobile versions, based on System on a Chip (SoC) designs that allow deep processing density with very, very low power consumption. These devices will support multiple code sets across hundreds of cores and be able to intelligently communicate their capabilities in real time to distributed processing services that request their local processing services…whether over Wi-Fi, Bluetooth, IrDA, GSM, CDMA, or whatever comes next, the devices themselves will make the choice based on best use of bandwidth, processing request, location, etc. These devices will take full advantage of the cloud-specific computing languages to distribute processing across dozens and possibly hundreds of processing locations and will hold almost no data, because they don’t have to: everything exists someplace else in the cloud. In some cases these devices will be very small, the size of a thin watch for example, but they will be able to process the equivalent of what a supercomputer can do, because they don’t do all of the processing themselves, only what makes sense for their location and capabilities.
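
Purely speculative again, but the device-side choice might amount to something like this sketch, in which a hypothetical P3 device weighs its available links against the size and urgency of a processing request before offloading it:

    # Hypothetical radio links a P3 device might see at a given moment.
    links = [
        {"name": "wifi", "bandwidth_mbps": 300, "energy_cost": 1.0, "up": True},
        {"name": "bluetooth", "bandwidth_mbps": 2, "energy_cost": 0.2, "up": True},
        {"name": "cellular", "bandwidth_mbps": 40, "energy_cost": 3.0, "up": True},
    ]

    def pick_link(request_mb, urgent, available):
        """Prefer the cheapest link that can move the payload quickly enough."""
        usable = [link for link in available if link["up"]]
        if urgent:
            # Urgent work: minimize transfer time regardless of energy cost.
            return max(usable, key=lambda link: link["bandwidth_mbps"])
        # Otherwise: cheapest link that can move the payload in under ~10 seconds.
        fast_enough = [link for link in usable
                       if request_mb * 8 / link["bandwidth_mbps"] < 10]
        return min(fast_enough or usable, key=lambda link: link["energy_cost"])

    # For a 50 MB, non-urgent request, only Wi-Fi is fast enough, so it wins.
    print(pick_link(request_mb=50, urgent=False, available=links)["name"])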

These decoupled processing units, Pervasive Personal Processing or P3 units, will allow you to walk up to any workstation or monitor or TV set…anywhere in the world…and basically conduct your business as if you were sitting in front of your home computer. All of your data, your photos, your documents, and your personal files will be instantly available in whatever way you prefer. All of your history for whatever services you use, online and offline, will be directly accessible. The memo you left off writing that morning in the Houston office will be right where you left it, on that screen you just walked up to in the hotel lobby in Tokyo the next day, with the cursor blinking in the middle of the word you stopped on.

Welcome to Minority Report.

News Round-up 5/25/2012: Privacy in the Era of Big Data, Cloud Threatens Telcos, Boom in the Cloud and More

Each week we compile the hottest stories in cloud computing and gather them for you on our blog. Take a look at the top news and stay current with developments.

The world moves to cloud. Is it time for cloud-based mobile management?

The shift to the cloud in consumer services indicates a greater shift of business services to come. How do you manage your employees’ mobile devices? Forbes provides some important questions to answer before deciding on a cloud solution.

 

Innovation isn’t dead, it just moved to the cloud

A recent interview in The Atlantic claimed that innovation was dead in Silicon Valley. Todd Hoff from High Scalability responds by saying innovation is alive and well in the cloud.

 

Social + Mobile + Cloud = The New Paradigm for Midsize Business

Social media and cloud computing are relatively new in terms of business adoption, but their impact is already being felt. Is this the new paradigm for business customer relationships?

 

That Boom You Hear Is the Cloud

Did the Facebook IPO bust signal there was a bubble in tech that couldn’t be sustained? Cloud stocks are performing well and signal greater growth in the near future.

 

Private Cloud: ‘Everyone’s Got One. Where’s Yours?’

A Forrester blog post warns against ‘cloudwashing’ your business and points out three common mistakes when choosing to migrate to the cloud.

 

How Cloud Computing Is Threatening Traditional Telco

Traditional telecommunications solutions are rapidly giving way to cloud-based solutions handled by IT departments. New software and hardware make it easier than ever to shift away from traditional telcos.

 

Also in the news:

 

 


Cloud Theory to Cloud Reality: The Importance of Partner Management

 

Throughout my career at GreenPages I’ve been lucky enough to work with some top-shelf IT leaders. These folks possess many qualities that make them successful – technical smarts, excellent communication skills, inspired leadership, and killer dance moves. Well, at least those first three.

But there’s one skill that’s increasingly critical as more IT shops move from cloud theory to cloud reality: partner management.

IT leaders who effectively and proactively leverage partners will give their organization a competitive advantage during the journey to the cloud. Why? Because smart solution providers accelerate the time needed to research, execute, and support a technology project.

Let’s use the example of building a house. You could learn how to do some drafting on your own, but most folks are more comfortable using the experienced services of an architect who can work with the homeowner on what options are feasible within a given budget and timeframe.

Once you settle on a design, do you interview and manage the foundation contractor, framers, electricians, plumbers, carpenters, roofers, drywallers, painters, and so on? Probably not. Most prefer to hire a general contractor who has relationships with the right people with the right skills, and can coordinate all of the logistics within the design.

Ditto for a technology initiative. Sure, you could attempt that Exchange migration in-house but you’ll probably sleep better having an engineer who has done dozens of similar migrations, can avoid common pitfalls, and can call in reinforcements when needed.

The stakes are even higher for a cloud initiative, for a few reasons. First, we all know that “cloud” is among the most over-marketed tech terms in history. It’s so bad that I’ve asked my marketing team to replace every instance of cloud with “Fluffernutter” just to be unique (no word on that yet). Despite the hype, bona fide “cloud architect” skillsets are few and far between. IT leaders need to make sure their partner’s staff has the skills and track record to qualify, justify, scope, build, and support a Fluff, er, cloud infrastructure.

Second, cloud is such a broad concept that it can be overwhelming trying to figure out where to start. A smart partner will work with you to identify use cases that have been successful for other firms. This can range from a narrowly-focused project such as cloud backup to a full-blown private cloud infrastructure that completely modernizes the role of IT within an organization. The key here is talking to folks who have actually done the work and can speak to the opportunities and challenges.

Now, I’m certainly not suggesting that you don’t do your own independent vetting outside of your partner community. But once your due diligence is done, a great partner can act as an extended part of your team and put much-needed cycles back into your day. IT leaders who are proactive with these relationships will find the payback sweet indeed.

News Round-Up 5/19/12: Google’s Cloud, Future of Data Centers, Cloud IPOs, Cloud Security Myths Busted and More

 

There have been some exciting announcements and fascinating news articles recently regarding cloud services and service providers. Every week we will round up the most interesting topics from around the globe and consolidate them into a weekly summary.

 

Hitch a Ride Through Google’s Cloud

Your Gmail box lives somewhere in the jumble of servers, cables, and hard drives known as the “cloud” but it often migrates in search of the ideal location. Find out what happens when you hit send. 

 

The Future of Data Centers: Is 100% Cloud Possible?

Guest blogger Robert Offley explains how the market is shifting today, what barriers remain for total cloud adoption, and if an evolution to 100% cloud is likely to occur.

 

Big Data is Worth Nothing Without Big Science

As with gold or oil, data has no intrinsic value, writes Webtrends CEO Alex Yoder. Big science, which bridges the gap between knowledge and insight, is where the real value is.

 

The Hottest IPO You’ve Never Heard Of

With an expected valuation of close to $100 billion, it’s understandable that no one can stop talking about Facebook’s initial public offering this week.  But while Facebook basks in the social media spotlight, companies tackling tough business problems are exciting investors, if not consumers. Workday, for example, is expected to be among the largest IPOs this year in the business software market.

 

Five Busted Myths of Cloud Security

“Cloud” is one of the most overused and least understood words in technology these days, so it’s little surprise that there’s so much confusion about its security. This article busts five myths about cloud security.

 

Also in the news:

 

 

Going Rogue: Do the Advantages Outweigh the Risks?

Are all rogue IT projects bad things? Could this type of activity be beneficial? If rogue IT projects could be beneficial, should they be supported or even encouraged?

Recently, I took part in a live Twitter chat hosted by the Cloud Commons blog (thanks again for the invite!) that was focused on Rogue IT. After hearing from, and engaging with, some major thought leaders in the space, I decided to write a blog summarizing my thoughts on the topic.

What does “Rogue IT” mean anyway?

I think that there are rogue IT users and there are rogue IT projects. There’s the individual user scheduling meetings with an “unauthorized” iPad. There’s also a sales department, without the knowledge of corporate IT, developing an iPhone app to process orders for your yet-to-be-developed product. Let us focus on the latter – rogue IT projects. Without a doubt, rogue IT projects have been, and will continue to be, an issue for corporate IT departments. A quick web search will return articles on “rogue IT” dating back around 10 years. However, as technology decreases in cost and increases in functionality, the issue of rogue IT projects seems to be moving up on the list of concerns.

What does rogue IT have to do with cloud computing?

Cloud Computing opens up a market for IT Services. With Cloud Computing, organizations have the ability to source IT services to the provider that can deliver the service most efficiently. Sounds a lot like specialization and division of labor, doesn’t it? (We’ll stay away from The Wealth of Nations, for now.) Suffice it to say that Rogue IT may be an indication that corporate IT departments need to compete with outside providers of IT services. Stated plainly, the rise of Cloud Computing is encouraging firms to enter the market for IT services. Customers, even inside a large organization, have choices (other than corporate IT) on how to acquire the IT services that they need. Maybe corporate IT is not able to deliver a new IT service in time for that new sales campaign. Or corporate IT simply refuses to develop a new system requested by a customer. That customer, in control of their own budget, may turn to an alternative service offering “from the cloud.”

What are the advantages of rogue IT? Do they outweigh the risks?

Rogue IT is a trend that will continue as the very nature of work changes (e.g. the long-running shift toward a service-based economy means more and more knowledge workers). Rogue IT can lead to some benefits… BYOD, or “bring your own device,” for example. BYOD can drive down end-user support costs and improve efficiency. BYOD will someday also mean “bring your own DESK” and allow you to choose to work when and where it is most convenient for you to do so (as long as you’re impacting the bottom line, of course). Another major benefit is an increased pace of innovation. As usual, major benefits are difficult to measure. Take the example of the Lockheed Martin “Skunk Works” that produced breakthroughs in stealth military technology – would the organization have produced such things if it had been encumbered by corporate policies and standards?

Should CIOs embrace rogue IT or should it be resisted?

CIOs should embrace this as the new reality of IT becoming a partner with the business, not simply aligning to it. Further, CIOs can gain some visibility into what is going on with regard to “rogue IT” devices and systems. With some visibility, the corporate IT departments can develop meaningful offerings and meet the demands of their customers.

Corporate IT departments should also bring some education as to what is acceptable and what is not: an iPad at work? OK, but protect it with a password. Using Google Docs to store your company’s financial records…there might be a better place for that.

Two approaches for corporate IT:

– “Embrace and extend:” Allow rogue IT, learn from the experiences of users, adopt the best systems/devices/technologies, and put them under development

  • IT department gets to work with their customers and develop new technologies

– “Judge and Jury:” Have IT develop and enforce technology standards

  • IT becomes more or less an administrative group, always the bad guy, justifying its role by keeping the company and its information safe (rightly so)

CIOs should also consider when rogue IT is being used. Outside services, quick development, and sidestepping of corporate IT policies may be beneficial for projects in conceptual or development phases.

You can find the transcript from the Cloud Commons Twitter chat here: http://bit.ly/JNovHT

Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

One of the main requirements of the cloud is that most—if not all—of the commodity IT activities in your data center need to be automated (i.e. translated into a workflow) and then those singular workflows strung together (i.e. orchestrated) into a value chain of events that delivers a business benefit. An example of the orchestration of a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets—virtual machines—that represent web, application and database servers as well as the OSes, software stacks and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit whereby a developer can now use those assets to create an application for producing revenue, decreasing costs or managing existing infrastructure better (the holy trinity of business benefits).
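
As a bare-bones illustration (hypothetical step names, not any particular orchestration product), stringing those singular workflows into an orchestration for that composite application might look something like this:

    # Hypothetical workflow steps for commissioning a composite application.
    def provision_vm(role):
        print(f"provisioning {role} virtual machine")
        return {"role": role, "state": "running"}

    def install_stack(vm, stack):
        print(f"installing {stack} on {vm['role']} VM")
        vm["stack"] = stack
        return vm

    def register_monitoring(vm):
        print(f"registering {vm['role']} VM with monitoring")
        return vm

    def commission_composite_app():
        """Orchestration: string individual workflows into one value chain."""
        tiers = {"web": "nginx", "application": "tomcat", "database": "postgresql"}
        assets = []
        for role, stack in tiers.items():
            vm = provision_vm(role)           # workflow 1: build the asset
            vm = install_stack(vm, stack)     # workflow 2: lay down the software stack
            vm = register_monitoring(vm)      # workflow 3: hook into operations
            assets.append(vm)
        return assets  # handed to the developer as a ready-to-use environment

    commission_composite_app()

Each function above stands in for a workflow that, in reality, hides dozens of process steps, which is exactly where the hidden complexity discussed below comes from.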

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and by turn, an orchestration and then I think you’ll better see what I mean. We’ll use the example from above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and based on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (and all letters in between). A reasonably encapsulated process is somewhere on the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ), you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above I could dive into the physical process steps (hundreds, by the way) that you’re well aware of, but you already know those so it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.

Aside from the SOPs, run books and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information that resides in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the required information for building out the assets you use, in our example, the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked seldom can answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people rarely can give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. Now if you give someone a listing or scenario they can relate to (when do you commission this type of composite application, based on this list of system activities and tools?), they can absolutely tell you what they do and don’t do from the list.

So context is key to efficiently gaining the right amount of information that is related to the subject chain of activities that you are endeavoring to model- but what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? Well, it is then called observation, either self or external, where all process steps are documented and compiled. Obviously this is labor intensive and time inefficient, but unfortunately it is the reality because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed and operated…instead relying on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about—the ones that you can’t get from a system search of your repositories—are the ones that will take most of the time documenting, which is my point, (“what you think you’re doing is less than half of what you’re really doing”) and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.

News Round-Up 5/5/2012: What Makes the Cloud Cool, Feds in the Cloud, 10 Things Your Cloud Contract Needs

 

There have been some exciting announcements and fascinating news articles recently regarding cloud services and service providers. Every week we will round up the most interesting topics from around the globe and consolidate them into a weekly summary.

 

Cloud Computing Gains in Federal Government

The Federal Government is warming to the speed, agility and functionality of cloud computing.

 

State companies helping Army with cloud computing

The U.S. Army has turned to cloud computing, and to Wisconsin companies, to improve its intelligence gathering in Afghanistan.

 

SaaS Offering Provides Detailed Analysis of Your Software Portfolio

Are you faced with the need to do a software portfolio analysis but find the prospect daunting given the scattered nature of your operation? A new SaaS-based offering might fit the bill.

 

SaaS Business Apps Drive SMB Cloud Computing Adoption

Lots of small and medium businesses have discovered the benefits of software-as-a-service. These SaaS applications are driving cloud adoption among SMBs. 

 

Here’s What Makes The Cloud So Cool

Mike Pearl from PricewaterhouseCoopers provides a useful plan of attack for business adoption of cloud computing.

 

10 Things You Just Gotta Have in Your Cloud Contract

A CFO’s guide to the wild and woolly world of cloud services, in which contracts are mutable, companies come and go, and politics a continent away can materially impact your business.

 

 

Also in the news:

 

 

 

Guest Post: Cloud Management

 

By Rick Blaisdell, CTO, ConnectEDU

Cloud computing has definitely revolutionized the IT industry and transformed the way in which IT services are delivered. But finding the best way for an organization to perform common management tasks using remote services on the Internet is not that easy.

Cloud management encompasses the tasks of provisioning, managing, and monitoring applications in cloud infrastructures, without requiring end-user knowledge of the physical location of the systems that deliver the services. Monitoring cloud computing applications and activity requires cloud management tools to ensure that resources are meeting SLAs, working optimally, and not affecting the systems and users that are leveraging these services.

With appropriate cloud management solutions, private users are now able to manage multiple operating systems on the same dedicated server, or move virtual servers to a shared server, all from within the same cloud management solution. Some cloud companies offer tools to manage this entire process; others provide the solution using a combination of tools and managed services.

The three core components of the cloud environment, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), now offer great options for managing cloud computing, but the management tools need to be as flexible and scalable as an organization’s cloud computing strategy itself. With this new paradigm of computing, cloud management has to:

  • continue to make cloud easier to use;
  • provide security policies for the cloud environment;
  • allow safe cloud operations and ease migrations;
  • provide for financial controls and tracking;
  • provide auditing and reporting for compliance.

Numerous tasks and tools are necessary for cloud management. A successful cloud management strategy includes performance monitoring (response times, latency, uptime and so on), security and compliance auditing and management, and the initiation, supervision and management of disaster recovery.
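
As a simple sketch of the performance-monitoring piece (hypothetical thresholds and measurements, not tied to any particular tool), an SLA check might reduce to something like this:

    # Hypothetical SLA thresholds and current measurements for a cloud service.
    sla = {"response_time_ms": 500, "uptime_pct": 99.9}
    measured = {"response_time_ms": 620, "uptime_pct": 99.95}

    def check_sla(thresholds, metrics):
        """Return a list of SLA violations to alert on."""
        violations = []
        if metrics["response_time_ms"] > thresholds["response_time_ms"]:
            violations.append("response time above SLA threshold")
        if metrics["uptime_pct"] < thresholds["uptime_pct"]:
            violations.append("uptime below SLA threshold")
        return violations

    for violation in check_sla(sla, measured):
        print("ALERT:", violation)

In practice the thresholds come from the service level requirements agreed with the business, which is exactly why the monitoring belongs inside the cloud management strategy rather than alongside it.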

So, why is it so important for an organization to implement a cloud management strategy? A cloud management strategy that fits the cloud computing resources a company uses delivers IT services to the business faster, reduces capital and operating costs, automates chargeback and reporting for resource usage, and allows IT departments to monitor their service level requirements.

 

 

This post originally appeared on http://www.rickscloud.com/cloud-management/