Category Archives: Cloud computing

Optimize Your Infrastructure; From Hand-built to Mass-production

If you've been reading this blog, you'll know that I write a lot about cloud and cloud technologies, specifically around optimizing IT infrastructures and transitioning them from traditional management methodologies and ideals toward dynamic, cloud-based methodologies.  Recently, in conversations with customers as well as my colleagues and peers within the industry, it has become increasingly clear that the public, at least the subset I deal with, is simply fed up with the massive amount of hype surrounding cloud.  Everyone uses it as a selling point and has attached so many different meanings to it that it has become meaningless…white noise that just hums in the background and adds no value to the conversation.  To cut through that background noise, I'm going to cast the conversation in a way that is a lot less buzzy and a little more specific to what people know and are familiar with.  Let's talk about cars (ha ha, again)…and how Henry Ford revolutionized the automobile industry.

First, let's be clear: Henry Ford did not invent the automobile; he invented a way to make automobiles affordable to the common man or, as he put it, the "great multitude."  After the Model A, he realized he'd need a more efficient way to mass-produce cars in order to lower the price while keeping the level of quality they were known for. He looked at other industries and found four principles that would further his goal: interchangeable parts, continuous flow, division of labor, and reducing wasted effort. Ford put these principles into play gradually over five years, fine-tuning and testing as he went along. In 1913, they came together in the first moving assembly line ever used for large-scale manufacturing. Ford produced cars at a record-breaking rate…and each one that rolled off the production line was virtually identical to the one before and after it.

Now let's see how the same principles of mass production can revolutionize the IT infrastructure as they did the automobile industry. And let's be clear: I am not calling this cloud, or dynamic datacenter, or whatever the buzz-du-jour is. I am simply calling it an Optimized Infrastructure, because that is what it is…an IT infrastructure that produces the highest-quality IT products and services in the most efficient manner and at the lowest cost.

Interchangeable Parts

Henry Ford found significant efficiency in interchangeable parts, which meant making the individual pieces of the car the same every time, so that any valve would fit any engine and any steering wheel would fit any chassis. The efficiencies to be gained had already been proven in the assembly of standardized photography equipment pioneered by George Eastman in 1892. Achieving this meant improving the machinery and cutting tools used to make the parts. But once the machines were adjusted, a low-skilled laborer could operate them, replacing the skilled craftsperson who formerly made the parts by hand.

In a traditional "hand-built" IT infrastructure, skilled engineers are essentially building servers (physical and virtual) and other IT assets from scratch, reusing very little with each build.  They may have a "golden image" for the OS, but they then build multiple images based on the purpose of the server, its language, or the geographic location of the division or department it is meant to serve.  They might layer on different software stacks with particularly configured applications, or install each application one after another.  These assets are then configured by hand using run books, build lists, etc., and then tested by hand. All of this takes time and skilled effort, and there are still unacceptable amounts of errors, failures and expensive rework.

By significantly updating and improving the tools used (e.g. virtualization, configuration and change management, software distribution, etc.), the final state of IT assets can be standardized, the way they are built can be standardized, and the processes used to build them can be standardized…such that building any asset becomes a clear, repeatable process of connecting different parts together. These interchangeable parts can then be used over and over again to produce virtually identical copies of the assets at much lower cost.
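To make the idea concrete, here is a minimal sketch in Python of what an "interchangeable parts" build definition might look like. Every name in it (Part, AssetBlueprint, the package names and versions) is hypothetical rather than any specific product's API; the point is simply that the same base image, software stack and configuration fragments get composed in the same order every time an asset is built.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical "interchangeable parts": each part is defined once and reused
# across every build instead of being hand-crafted per server.
@dataclass
class Part:
    name: str
    version: str

@dataclass
class AssetBlueprint:
    base_image: Part                      # the single "golden image"
    software_stack: List[Part] = field(default_factory=list)
    config_fragments: List[Part] = field(default_factory=list)

    def build(self, asset_name: str) -> dict:
        """Assemble an asset from standard parts in a fixed, repeatable order."""
        return {
            "asset": asset_name,
            "base": f"{self.base_image.name}:{self.base_image.version}",
            "stack": [f"{p.name}:{p.version}" for p in self.software_stack],
            "config": [f"{p.name}:{p.version}" for p in self.config_fragments],
        }

# Every web server built from this blueprint is virtually identical.
web_blueprint = AssetBlueprint(
    base_image=Part("corp-linux", "2.4"),
    software_stack=[Part("nginx", "1.22"), Part("corp-agent", "5.1")],
    config_fragments=[Part("hardening-baseline", "3.0"), Part("web-tier", "1.7")],
)
print(web_blueprint.build("web-001"))
```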

Division of Labor

Once Ford had standardized his parts and tools, he needed to divide up the work to become more efficient. He had to figure out which process should be done first, so he divided the labor by breaking the assembly of the Model T into 84 distinct steps. Each worker was trained to do just one of these steps, always in the exact same order.

The Optimized Infrastructure relies on the same principle of dividing up the effort (of defining, creating, managing and ultimately retiring each IT asset) so that only the most relevant technology, tool or, sometimes, yes, human does the work. As the later sections show, these "tools" (people, process or technology components) are then aligned in the most efficient manner, which dramatically lowers the cost of running the system and guarantees that each specific work effort can be optimized individually, irrespective of the system as a whole.

Continuous Flow

To improve efficiency even more, and lower the cost even further, Ford needed the assembly line arranged so that as one task was finished, another began, with minimum time spent in set-up (set-up always being a negative production value). Ford was inspired by the meat-packing houses of Chicago and a grain mill conveyor belt he had seen. If he brought the work to the workers, they spent less time moving about. He adapted the Chicago meat-packers' overhead trolley to auto production by installing the first automatic conveyor belt.

In an Optimized Infrastructure, this conveyor belt (assembly line) consists of individual process steps (automation) that are "brought to the worker" (each specific technological component responsible for that process step; see division of labor above) in a well-defined pattern (workflow), with each workflow arranged in a well-controlled manner (orchestration). It is no longer human workers doing those commodity IT activities (well, in 99.99% of cases) but the system itself, leveraging virtualization, fungible resource pools and high levels of standardization, among other things. This is the infrastructure assembly line, and it is how IT assets are mass produced…each identical, of the same high quality and at the same low cost.
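As a rough illustration of "bringing the work to the worker," the sketch below strings individual automated steps into a workflow and has a tiny orchestrator run them in a fixed order. The step functions and the context dictionary are invented for illustration; a real orchestration engine would add error handling, retries and policy checks.

```python
from typing import Callable, Dict, List

# Each "worker" is the technology component responsible for one process step.
def provision_vm(ctx: Dict) -> Dict:
    ctx["vm"] = f"vm-{ctx['request_id']}"
    return ctx

def apply_os_config(ctx: Dict) -> Dict:
    ctx["configured"] = True
    return ctx

def deploy_app_stack(ctx: Dict) -> Dict:
    ctx["stack"] = "web-tier"
    return ctx

def run_validation(ctx: Dict) -> Dict:
    ctx["validated"] = ctx.get("configured", False)
    return ctx

# A workflow is a fixed ordering of steps; orchestration runs workflows
# end to end so no human touches the commodity activities in between.
Workflow = List[Callable[[Dict], Dict]]

def orchestrate(workflow: Workflow, request_id: str) -> Dict:
    ctx: Dict = {"request_id": request_id}
    for step in workflow:
        ctx = step(ctx)          # hand the work item to the next "station"
    return ctx

standard_server_line: Workflow = [provision_vm, apply_os_config,
                                  deploy_app_stack, run_validation]
print(orchestrate(standard_server_line, "00042"))
```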

Reducing Wasted Effort

As a final principle, Ford called in Frederick Winslow Taylor, the creator of "scientific management," to do time and motion studies to determine the exact speed at which the work should proceed and the exact motions workers should use to accomplish their tasks, thereby reducing wasted effort. In an Optimized Infrastructure, this is done through continuous process improvement (CPI), but CPI cannot be done correctly unless you are monitoring the performance details of all the processes and the performance of the system as a whole, and documenting the results on a constant basis. This requires an infrastructure-wide management and monitoring strategy, which, as you've probably guessed, is what Frederick Taylor was doing in the Ford plant in the early 1900s.

Whatever You Call It…

From the start, the Model T was less expensive than most other hand-built cars because of expert engineering practices, but it was still not attainable for the “great multitude” as Ford had promised the world. He realized he’d need a more efficient way to produce the car in order to lower the price, and by using the four principles of interchangeable parts, continuous flow, division of labor, and reducing wasted effort, in 1915 he was able to drop the price of the Model T from $850 to $290 and, in that year, he sold 1 million cars.

Whether you prefer to call it cloud, or dynamic datacenter, or the Great Spedini's Presto-Chango Cave of Magic Data doesn't really matter. The fact is that the four principles listed above, combined with the tools, technologies and operational methodologies that exist today (none of which are rocket science or bleeding edge), can be used to revolutionize your IT infrastructure: stop hand-building your IT assets (and employing your smartest and best workers to do so) and start mass producing those assets to lower your cost, increase your quality and, ultimately, significantly increase the value of your infrastructure.

With an Optimized Infrastructure of automated tools and processes where standardized/interchangeable parts are constantly reused based on a well-designed and efficiently orchestrated workflow that is monitored end-to-end, you too can make IT affordable for the “great multitude” in your organization.

Arrow ECS EMEA Launches ArrowSphere Cloud Services Platform for IT Channel


Arrow Enterprise Computing Solutions, a business segment of Arrow Electronics Inc., today unveiled ArrowSphere, a cloud services aggregation and brokerage platform for the European solution provider community, system integrators, independent software vendors and service providers.

Through ArrowSphere, Arrow ECS is adding new growth opportunities for enterprise and midmarket business solutions for the channel. ArrowSphere will enable the Arrow ECS European channel network to resell aggregated cloud services, such as infrastructure-, platform-, storage- and software-as-a-service solutions, from industry leaders around the world. ArrowSphere brings new dimensions to cloud delivery by facilitating access to more than 60 leading-edge cloud services, in addition to adding flexibility with white-label webstores; increasing simplicity by centralizing billing and provisioning; and improving reliability through trusted single-sign-on solutions.

“By offering turnkey webstores that address the needs of today’s and future businesses, we bridge the gap between cloud service provider innovation and solution provider market reach,” said Laurent Sadoun, president of the Europe, Middle East and Africa region for Arrow ECS. “This approach represents the much-needed catalyst that can drive significant cloud adoption through the channel over the next five years.”

ArrowSphere is available to the IT community in the United Kingdom (beginning July 11) and will be available in September in Denmark, France, Germany and Spain, with other countries to follow.

“Migrating legacy IT systems to the cloud, connecting cloud solutions to existing on-premise infrastructure and supporting these hybrid solutions are complex undertakings for small and midsize enterprises. Solution providers are the trusted advisors that routinely help businesses integrate IT services securely and efficiently,” said Sadoun. “Arrow ECS is proud to offer the IT community a unique opportunity to enter into the cloud. This strategy of investments toward added value and the channel will guide innovation forward for our partners as well as the entire IT industry.”

“The ArrowSphere platform allows us to address new markets and new business in a fast and simple way, and it therefore represents a massive revenue opportunity for us,” said Shamus Kelly, managing director of Portal, an ISV working with Arrow ECS in the U.K. “Being able to leverage a turnkey webstore with our own solutions and the services portfolio developed by Arrow ECS puts us in a solid position to embrace the cloud. Also, it gives us the flexibility to adapt to our customers’ needs.”

More information about the ArrowSphere marketplace for cloud services, including details about the portfolio, is available online at http://sphere.arrow.com.


The Cloud is Dead! Long Live the Cloud! Twitter Chat Recap

Last week, Cloud Commons hosted a Twitter Chat on the end of Cloud Computing. If you're not familiar with tweetchats, they are discussions hosted on Twitter that people can join at a specific time by following a certain hashtag. The Cloud Commons tweetchats usually have around ten panelists and are kicked off with a few thought-provoking questions. The participants then respond and share ideas in real time. The discussion is focused enough to be useful (a one-hour session, with responses limited to 140 characters) but large enough to capture different perspectives.

This week’s tweetchat began with several questions:

  1. Adoption rates are rising for private cloud. Is this a stepping stone to hybrid/public cloud?
  2. What needs to happen before enterprises start to fully embrace cloud computing?
  3. What does the future model for enterprise cloud adoption look like?
  4. What should CSPs be doing more of to meet the needs of the enterprise?
  5. What needs to happen so that cloud becomes so ubiquitous that it’ll no longer be referred to as cloud? When will it happen?

The first question, “Is private cloud a stepping stone to hybrid/public cloud?” drew approximately 32 tweets. From the transcript, it appears as though participants in the marketplace are improving their understanding of cloud computing in terms of service and delivery models (private, public, hybrid, IaaS, PaaS, SaaS). The popular viewpoint was that private cloud is not exactly a stepping stone to hybrid/public cloud. A few tweets took the position that private cloud is seen as an alternate path to hybrid/public cloud. Many tweets indicated that IT departments want to retain tight control of their environment. Interesting tweet… “private cloud does not necessarily mean on-premises.” More on this later.

The second question, "What needs to happen before enterprises start to fully embrace cloud computing?" drew 47 tweets. Overwhelmingly, the responses in this part of the chat were filled with terms like "services led," "business value," "SLA," and "reduce FUD." The responses to question 1 covered some of this territory as well: enterprises will fully embrace cloud computing if and when they agree to give up some control of their infrastructure. There was an interesting tweet that mentioned transparency: "…it's not always about control, as it is transparency." I would argue that transparency is not what is needed here. Full transparency would require that the business be able to access minute detail about infrastructure, such as the amount of RAM installed on the application server that runs its slice of CRM at Salesforce.com. That kind of detail should be hidden from the business; abstraction plays heavily here. So we don't need transparency as much as we need abstraction. What is an important concept that provides abstraction? You guessed it: Service Level Management. The GreenPages view is that processes need to improve before enterprises will fully embrace cloud computing. See my earlier post, "What Should I Do about Cloud?" which goes into much more detail on this topic.

I count about the same number of tweets in response to question 3 as I do question 2. Question 3 was a little more open-ended, so a critical mass of ideas never really took shape. The GreenPages’ view is that cloud computing will evolve to look like modern supply chains that can be seen in other industries, such as manufacturing. Enterprises may purchase IT Services from a SaaS provider, Salesforce.com for example. Salesforce.com may purchase its platform from another PaaS provider. That PaaS provider may purchase its basic infrastructure from an IaaS provider. Some value is added at each level, as the IaaS provider becomes more experienced in providing only infrastructure. The PaaS provider has an extremely robust platform for providing only a platform. The SaaS provider may ultimately become an expert at assembling and marketing these components into a service that provides value for the enterprise that ultimately consumes it. Compare this to the supply chain that auto manufacturers leverage to assemble a vehicle. In the early days of manufacturing, some companies produced every part of a vehicle, and assembled it into a finished product. I can think of one prominent example where the work to assemble a finished automobile took place in a single factory around the River Rouge in Detroit. Fast forward to present day, and you’ll be hard pressed to find an auto manufacturer who produces their own windshield glass. Or brake pads. Or smelts their own aluminum. The supply chain has specialized. Auto manufacturers design, assemble, and market finished vehicles. That’s about it. Cloud computing could bring the same specialization to IT.

Most tweets in response to question 4 were clearly around Service Level Management and SLAs, mitigating unknowns in security, and avoiding vendor lock-in. We agree, and think that a standard will emerge to define IT services in a single, consistent format. Kind of like OVF, the Open Virtualization Format, does for virtual machines. I can see an extension to OVF that defines a service's uptime requirements, maximum ping time to a database server, etc. Such a standard would promote portability of IT services.
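To be clear, no such standard exists today; purely as an illustrative sketch, a descriptor that extended an OVF-style package with non-functional requirements might carry fields like the ones below (all names and values are hypothetical).

```python
from dataclasses import dataclass

# Hypothetical service-level descriptor: the kind of metadata an OVF-style
# package could carry so an IT service stays portable across providers.
@dataclass
class ServiceLevelDescriptor:
    service_name: str
    uptime_target: float        # e.g. 0.999 means "three nines"
    max_db_latency_ms: int      # maximum ping time to the database tier
    backup_interval_hours: int
    data_residency: str         # where the data is allowed to live

crm_descriptor = ServiceLevelDescriptor(
    service_name="crm-frontend",
    uptime_target=0.999,
    max_db_latency_ms=20,
    backup_interval_hours=24,
    data_residency="EU",
)

def meets_requirements(offered_uptime: float, offered_latency_ms: int,
                       d: ServiceLevelDescriptor) -> bool:
    """Check whether a candidate provider satisfies the descriptor."""
    return offered_uptime >= d.uptime_target and offered_latency_ms <= d.max_db_latency_ms

print(meets_requirements(0.9995, 15, crm_descriptor))  # True
```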

Question 5 really went back to the topics discussed in question 3. When will enterprises embrace cloud? When will cloud computing become ubiquitous?

Right now, corporate IT and the business are like residents of a virtual "company town": customers (the business) are forced to purchase their services from the company store (corporate IT). GreenPages' view is that there is a market for IT services and that the emergence of cloud computing will broaden this market. We recommend that organizations understand the value and costs of providing their own IT services so they can participate in that market, just like the business does. Overall, another insightful chat with some intelligent people!

The Private Cloud Strikes Back

Having read JP Rangaswami's argument against private clouds (and his obvious promotion of his own version of cloud), I have only to say that he's looking for oranges in an apple tree.  His entire premise is based on the idea that enterprises are wholly concerned with cost and sharing risk, when that couldn't be further from the truth.  Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility…something the monolithic, leviathan-like enterprise IT systems of today definitely are not. He then jumps from cost to social enterprise as if there were a causal relationship there when, in fact, they are two separate discussions.  I don't doubt that if you are a consumer-facing (not just customer-facing) organization, it's best to get on that social enterprise bandwagon, but if your main concern is how to better equip your organization and provide the environment and tools necessary to innovate, the whole social thing is a red herring for selling you things that you don't need.

The status quo within traditional IT is deeply encumbered by mostly manual processes (optimized for people carrying out commodity IT tasks such as provisioning servers and operating systems) that cannot be optimized any further, so a different, much better way had to be found.  That way is the private cloud, which takes those commodity IT tasks, elevates them to automated, orchestrated, well-defined workflows, and then uses a policy-driven system to carry them out.  Whether these workflows are initiated by a human or by a specific set of monitored criteria, the system dynamically creates and recreates itself based on actual business and performance need…something that is almost impossible to translate into the public cloud scenario.
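A minimal sketch of that policy-driven loop might look like the following; the thresholds, metrics and workflow actions are invented for illustration, not taken from any vendor's engine. Monitored metrics are evaluated against policies, and any matching workflow is kicked off automatically rather than through a ticket queue.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    name: str
    condition: Callable[[Dict], bool]   # evaluated against monitored metrics
    workflow: Callable[[], str]         # orchestrated workflow to run

def scale_out_web_tier() -> str:
    return "provisioned 2 additional web servers from standard blueprint"

def archive_idle_dev_vms() -> str:
    return "reclaimed idle development capacity"

policies: List[Policy] = [
    Policy("web-tier-load", lambda m: m["web_cpu_pct"] > 80, scale_out_web_tier),
    Policy("dev-idle",      lambda m: m["dev_idle_days"] > 14, archive_idle_dev_vms),
]

def evaluate(metrics: Dict) -> List[str]:
    """Run every workflow whose policy condition matches the current metrics."""
    return [p.workflow() for p in policies if p.condition(metrics)]

# The "system recreating itself" on performance need, with no ticket queue.
print(evaluate({"web_cpu_pct": 92, "dev_idle_days": 3}))
```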

Not that the public cloud can't be leveraged where appropriate, but the enterprise's requirements are much more granular and specific than any public cloud can or should allow…which goes directly to JP's point that providers must share the risk among many players, and that risk is generic by definition within the public cloud.  Once you start creating one-off, specific environments, the commonality is lost, and with it the cost benefits, because now you are simply using a private cloud whose assets are owned by someone else…sound like co-lo?

Finally, I wouldn’t expect someone whose main revenue source is based on the idea that a public cloud is better than a private cloud to say anything different than what JP has said, but I did expect some semblance of clarity as to where his loyalties lie…and it looks like it’s not with the best interests of the enterprise customer.

Translating a Vision for IT Amid a “Severe Storm Watch”

IT departments adopt technology from two directions: a directive from the CIO, or a "rogue IT" suggestion or project from an individual user. The former is top-down adoption; the latter is bottom-up. Oftentimes there is confusion somewhere in the middle, resulting in a smorgasbord of tools at one end and a grand, ambitious strategy at the other. This article suggests a framework for implementing a vision through strategy, policy, process, and ultimately tools.

Vision for IT -> Strategies -> Policies -> Processes -> Procedures -> Tools and Automation

Revenue Generating Activities -> Business Process -> IT Services

As a solutions architect and consultant, I’ve met with many clients in the past few years. From director-level staff to engineers to support staff in the trenches, IT has taken on a language of its own. Every organization has its own acronyms, sure. Buzzwords and marketing hype strangle the English language inside the datacenter. Consider the range of experience present in many shops, and it is easy to imagine the confusion. The seasoned, senior executive talks about driving standards and reducing spend for datacenter floor space, and the excited young intern responds with telecommuting, tweets, and cloud computing, all in a proof-of-concept that is already in progress. What the…? Who’s right?

 

It occurred to me a while ago that there is a “severe storm watch” for IT. According to the National Weather Service, a “watch” is issued when conditions are favorable for [some type of weather chaos]. Well, in IT, more than in other departments, one can make these observations:

  • Generationally-diverse workforce
  • Diverse backgrounds of workers
  • Highly variable experience of workers
  • Rapidly changing products and offerings
  • High complexity of subject matter and decisions

My colleague, Geoff Smith, recently posted a five-part series (The Taxonomy of IT) describing the operations of IT departments. In the series, Geoff points out that IT departments take on different shapes and behaviors based on a number of factors. The series presents a thoughtful classification of IT departments and how they develop, with a framework borrowed from biology. This post presents a somewhat more tactical suggestion on how IT departments can deal with strategy and technology adoption.

Yet Another Framework

A quick search on Google shows a load of articles on Business and IT Alignment. There’s even a Wikipedia article on the topic. I hear it all the time, and I hate the term. This term suggests that “IT” simply does the bidding of “The Business,” whatever that may be. I prefer to see Business and IT Partnership. But anyway, let’s begin with a partnership within IT departments. Starting with tools, do you know the value proposition of all of the tools in your environment? Do you know about all of the tools in your environment?

 

A single Vision for IT should first translate into one or more Strategies. I’m thinking of a Vision statement for IT that looks something like the following:

“Acme IT exists as a competitive, prime provider of information technology services to enable Acme Company to generate revenue by developing, marketing, and delivering its products and services to its customers. Acme IT stays competitive by providing Acme Company with relevant services that are delivered with the speed, quality and reliability that the company expects. Acme IT also acts as a technology thought leader for the company, proactively providing services that help Acme Company increase revenue, reduce costs, attract new customers, and improve brand image.”

Wow, that's quite a vision for an IT department. How would a CIO begin to deliver on a vision like that? Just start using VMware, and you're all set! Not quite! Installing VMware might come all the way at the end of the chain… at the "Tools and Automation" step shown above.

First, we need one or more Strategies. One valid Strategy may indeed be to leverage virtualization to improve time to market for IT services, and reduce infrastructure costs by reducing the number of devices in the datacenter. Great ideas, but a couple of Policies might be needed to implement this strategy.

One policy, call it Policy A, might be that all application development should happen on virtual servers. A second, Policy B, might mandate that all new servers be assessed as virtualization candidates before physical equipment is purchased.

Processes then flow from Policies. Since I have a policy that mandates that new development should happen on a virtual infrastructure, eventually I should be able to make a good estimate of the infrastructure needed for my development efforts. My Capacity Management process could then requisition and deploy some amount of infrastructure in the datacenter before it is requested by a developer. You’ll notice that this process, Capacity Management, enables a virtualization policy for developers, and neatly links up with my strategy to improve time to market for IT services (through reduced application development time). Eventually, we could trace this process back to our single Vision for IT.
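As a toy example of that estimate (all numbers made up), capacity management might simply average recent developer requests and pre-provision a buffer of capacity ahead of demand:

```python
# Toy capacity-management estimate: pre-provision development capacity
# before it is requested, based on recent demand. Numbers are illustrative.
recent_monthly_vm_requests = [38, 45, 41, 52, 47, 49]   # last six months
buffer_factor = 1.2                                      # 20% headroom

average_demand = sum(recent_monthly_vm_requests) / len(recent_monthly_vm_requests)
vms_to_preprovision = round(average_demand * buffer_factor)

print(f"Average monthly demand: {average_demand:.1f} VMs")
print(f"Pre-provision next month: {vms_to_preprovision} VMs")
```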

But we’re not done! Processes need to be implemented by Procedures. In order to implement a capacity management process properly, I need to estimate demand from my customers. My customers will be application developers if we’re talking about the policy that developers must use virtualized equipment. Most enterprises have some sort of way to handle this, so we’d want to look at the procedure that developer customers use to request resources. To enable all of this, the request and the measurement of demand, I may want to implement some sort of Tool, like a service catalog or a request portal. That’s the end of the chain – the Tool.

Following the discussion back up to Vision, we can see how the selection of a tool is justified by following the chain back to procedure, process, policy, strategy, and ultimately vision.
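One way to make that traceability tangible, purely as a sketch, is to record the chain explicitly so anyone can walk a given tool back to the vision it serves. The entries below are hypothetical and simply mirror the Acme example above.

```python
# Hypothetical traceability chain: each element records what justifies it.
chain = {
    "tool":      ("Service catalog / request portal", "procedure"),
    "procedure": ("Developer requests a VM through the portal", "process"),
    "process":   ("Capacity Management", "policy"),
    "policy":    ("All new development happens on virtual servers", "strategy"),
    "strategy":  ("Use virtualization to cut time-to-market and device count", "vision"),
    "vision":    ("Acme IT is the competitive, prime provider of IT services", None),
}

def trace(level: str) -> None:
    """Walk from any level back up to the Vision, printing the justification."""
    while level is not None:
        description, parent = chain[level]
        print(f"{level:>9}: {description}")
        level = parent

trace("tool")
```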

This framework provides a simple alignment that can be used in IT departments for a number of advantages. One significant advantage is that it provides a common language for everyone in the IT department to understand the reasoning behind the design of a particular process, the need for a particular procedure, or the selection of a particular tool over another.

In a future blog post, I’ll cover the various other advantages of using this framework.

Food for Thought

  1. Do you see a proliferation of tools and a corresponding disconnect with strategy in your department?
  2. Who sets the vision and strategy for IT in your department?
  3. Is your IT department using a similar framework to rationalize tools?
  4. Do your IT policies link to processes and procedures?
  5. Can you measure compliance to your IT policies?

Where Is the Cloud Going? Try Thinking “Minority Report”

I read a news release (here) recently where NVidia is proposing to partition processing between on-device and cloud-located graphics hardware…here’s an excerpt:

“Kepler cloud GPU technologies shifts cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realizing their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”

As well as the split processing handled by the Silk browser on the Kindle Fire (see here), I started thinking about that "processing partitioning" strategy in relation to other aspects of computing, and cloud computing in particular.  My thinking is that, over the next five to seven years (by 2020 at the latest), there will be several very important seismic shifts in computing, dealing with at least four separate events: 1) user data becomes a centralized commodity that's brokered by a few major players, 2) a new cloud-specific programming language is developed, 3) processing becomes "completely" decoupled from hardware and location, and 4) end-user computing becomes based almost completely on SoC technologies (see here).  The end result will be a level of data and processing independence never seen before, one that will allow us to live in that Minority Report world.  I'll describe the events and then describe how all of them will come together to create what I call "pervasive personal processing," or P3.

User Data

Data about you, your reading preferences, what you buy, what you watch on TV, where you shop, etc. exists in literally thousands of different locations, and that's a problem…not for you, but for merchants and the companies that support them.  It's information that must be stored, maintained and regularly refreshed for it to remain valuable; basically, it is what is being called "big data." The extent of this data almost cannot be measured because it is so pervasive and relevant to everyday life. It is contained within so many services we access day in and day out, and businesses are struggling to manage it. The argument goes that they do this, at great cost, because it is a competitive advantage to hoard that information (information is power, right?) and eventually profits will arise from it.  Um, maybe yes and maybe no, but it's extremely difficult to actually measure that "eventual" profit…so I'll go along with "no." Even though big-data-focused hardware and software manufacturers are attempting to alleviate these problems of scale, the businesses that house these growing petabytes…and yes, even exabytes…of data are not seeing the expected benefits in their profits, because it costs money, lots of it.  This is money that comes off the top line and definitely affects the bottom line.

Because of these imaginary profits (and the real loss), more and more companies will start outsourcing the “hoarding” of this data until the eventual state is that there are 2 or 3 big players who will act as brokers. I personally think it will be either the credit card companies or the credit rating agencies…both groups have the basic frameworks for delivering consumer profiles as a service (CPaaS) and charge for access rights.  A big step toward this will be when Microsoft unleashes IDaaS (Identity as a Service) as part of their integrating Active Directory into their Azure cloud. It’ll be a hurdle for them to convince the public to trust them, but I think they will eventually prevail.

These profile brokers will start using IDaaS because then they don't have to have separate internal identity management systems (for separate data repositories of user data) for other businesses to access their CPaaS offerings.  Once this starts to gain traction, you can bet that the real data mining begins on your online, and offline, habits, because your loyalty card at the grocery store will be part of your profile…as will your credit history and your public driving record and the books you get from your local library and…well, you get the picture.  Once your consumer profile is centralized, all kinds of data feeds will appear because the profile brokers will pay for them.  Your local government, always strapped for cash, will sell you out in an instant for some recurring monthly revenue.

Cloud-specific Programming

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely but, to date, they have been entirely encapsulated within the local machine (or, in some cases, the nodes of a supercomputer or HPC cluster, which for our purposes is really just one large machine).  This means that programs written for those systems need to know precisely where their functions will run, what subsystems will run them, the exact syntax and context, and so on.  One slight error or a small lag in response time and the whole thing could crash or, at best, run slowly or produce additional errors.

But, what if you had a computer language that understood the cloud and took into account latency, data errors and even missing data?  A language that was able to partition processing amongst all kinds of different processing locations, and know that the next time, the locations may have moved?  A language that could guess at the best place to process (i.e. lowest latency, highest cache hit rate, etc.) but then change its mind as conditions change?

That language would allow you to specify a type of processing and then actively seek the best place for that processing to happen based on many different details…processing intensity, floating point or not, entire algorithm or a portion of it, subset or superset…and it would fully understand that, in some cases, it will have to make educated guesses about what the returned data will be (in case of unexpected latency).  It would also have to know that the data to be processed may exist in a thousand different locations, such as CPaaS providers, government feeds, or other providers of specific data types.  And it would be able to adapt its processing to the available processing locations so that it gracefully degrades functionality…maybe based on a probability factor built into the language that records variables over time and uses them to guess where processing will be next and line up what is needed beforehand.  The possibilities are endless, but not impossible…which leads to…
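No such language exists yet, so the following is speculation expressed as code: a simple scoring function, with invented weights and metrics, for the kind of placement decision such a language would have to make on every call.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProcessingLocation:
    name: str
    latency_ms: float       # observed round-trip latency
    cache_hit_rate: float   # 0.0 - 1.0
    availability: float     # recent probability the location answered at all

def score(loc: ProcessingLocation) -> float:
    """Higher is better: prefer low latency, warm caches, reliable locations."""
    return (loc.cache_hit_rate * 100 + loc.availability * 50) - loc.latency_ms

def choose_location(candidates: List[ProcessingLocation]) -> ProcessingLocation:
    # Re-run this on every call: the "best place to process" keeps changing.
    return max(candidates, key=score)

candidates = [
    ProcessingLocation("on-device-soc", latency_ms=1.0, cache_hit_rate=0.30, availability=0.999),
    ProcessingLocation("edge-gpu-pool", latency_ms=18.0, cache_hit_rate=0.85, availability=0.990),
    ProcessingLocation("regional-cloud", latency_ms=65.0, cache_hit_rate=0.95, availability=0.995),
]
print(choose_location(candidates).name)
```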

Decoupled Processing and SoC

As can be seen from the efforts NVidia is making in this area, the processing of data will soon become completely decoupled from where that data lives or is used. How this will be done will rely on the other events described above, but the bottom line is that once processing is decoupled, a whole new class of device will appear, in both static and mobile versions, based on System on a Chip (SoC) designs that allow deep processing density with very, very low power consumption. These devices will support multiple code sets across hundreds of cores and will be able to intelligently communicate their capabilities in real time to the distributed processing services that request their local processing…whether over Wi-Fi, Bluetooth, IrDA, GSM, CDMA, or whatever comes next, the devices themselves will make the choice based on best use of bandwidth, the processing request, location, etc.  These devices will take full advantage of the cloud-specific computing languages to distribute processing across dozens and possibly hundreds of processing locations, and they will hold almost no data because they don't have to; everything exists someplace else in the cloud.  In some cases these devices will be very small, the size of a thin watch for example, but they will be able to process the equivalent of what a supercomputer can do, because they don't do all of the processing themselves, only what makes sense for their location and capabilities.

These decoupled processing units, Pervasive Personal Processing or P3 units, will allow you to walk up to any workstation or monitor or TV set…anywhere in the world…and basically conduct your business as if you were sitting in front of your home computer.  All of your data, your photos, your documents, and your personal files will be instantly available in whatever way you prefer.  All of your history for whatever services you use, online and offline, will be directly accessible.  The memo you left off writing that morning in the Houston office will be right where you left it, on the screen you just walked up to in a hotel lobby in Tokyo the next day, with the cursor blinking in the middle of the word you stopped on.

Welcome to Minority Report.

News Round-up 5/25/2012: Privacy in the Era of Big Data, Cloud Threatens Telcos, Boom in the Cloud and more

Each week we compile the hottest stories in cloud computing and gather them for you on our blog. Take a look at the top news and stay current with developments.

The world moves to cloud. Is it time for cloud-based mobile management?

The shift to the cloud in consumer services indicates a greater shift of business services to come. How do you manage your employees’ mobile devices? Forbes provides some important questions to answer before deciding on a cloud solution.

 

Innovation isn’t dead, it just moved to the cloud

A recent interview in The Atlantic claimed that innovation was dead in Silicon Valley. Todd Hoff of High Scalability responds that innovation is alive and well in the cloud.

 

Social + Mobile + Cloud = The New Paradigm for Midsize Business

Social media and cloud computing are relatively new in terms of business adoption, but their impact is already being felt. Is this the new paradigm for business-customer relationships?

 

That Boom You Hear Is the Cloud

Did the Facebook IPO bust signal there was a bubble in tech that couldn’t be sustained? Cloud stocks are performing well and signal greater growth in the near future.

 

Private Cloud: ‘Everyone’s Got One. Where’s Yours?’

A Forrester blog post warns against 'cloudwashing' your business and points out three common mistakes companies make when migrating to the cloud.

 

How Cloud Computing Is Threatening Traditional Telco

Traditional telecommunications solutions are rapidly giving way to cloud-based solutions handled by IT departments. New software and hardware make it easier than ever to shift away from traditional telcos.

 

Cloud Theory to Cloud Reality: The Importance of Partner Management

 

Throughout my career at GreenPages I’ve been lucky enough to work with some top-shelf IT leaders. These folks possess many qualities that make them successful – technical smarts, excellent communication skills, inspired leadership, and killer dance moves. Well, at least those first three.

But there’s one skill that’s increasingly critical as more IT shops move from cloud theory to cloud reality: partner management.

IT leaders who effectively and proactively leverage partners will give their organization a competitive advantage during the journey to the cloud. Why? Because smart solution providers accelerate the time needed to research, execute, and support a technology project.

Let’s use the example of building a house. You could learn how to do some drafting on your own, but most folks are more comfortable using the experienced services of an architect who can work with the homeowner on what options are feasible within a given budget and timeframe.

Once you settle on a design, do you interview and manage the foundation contractor, framers, electricians, plumbers, carpenters, roofers, drywallers, painters, and so on? Probably not. Most prefer to hire a general contractor who has relationships with the right people with the right skills, and can coordinate all of the logistics within the design.

Ditto for a technology initiative. Sure, you could attempt that Exchange migration in-house but you’ll probably sleep better having an engineer who has done dozens of similar migrations, can avoid common pitfalls, and can call in reinforcements when needed.

The stakes are even higher for a cloud initiative, for a few reasons. First, we all know that “cloud” is among the most over-marketed tech terms in history. It’s so bad that I’ve asked my marketing team to replace every instance of cloud with “Fluffernutter” just to be unique (no word on that yet). Despite the hype, bona fide “cloud architect” skillsets are few and far between. IT leaders need to make sure their partner’s staff has the skills and track record to qualify, justify, scope, build, and support a Fluff, er, cloud infrastructure.

Second, cloud is such a broad concept that it can be overwhelming trying to figure out where to start. A smart partner will work with you to identify use cases that have been successful for other firms. This can range from a narrowly-focused project such as cloud backup to a full-blown private cloud infrastructure that completely modernizes the role of IT within an organization. The key here is talking to folks who have actually done the work and can speak to the opportunities and challenges.

Now, I’m certainly not suggesting that you don’t do your own independent vetting outside of your partner community. But once your due diligence is done, a great partner can act as an extended part of your team and put much-needed cycles back into your day. IT leaders who are proactive with these relationships will find the payback sweet indeed.