Optimize Your Infrastructure: From Hand-Built to Mass Production

If you’ve been reading this blog, you’ll know that I write a lot about cloud technologies, specifically about optimizing IT infrastructures and transitioning them from traditional management methodologies toward dynamic, cloud-based ones. Recently, in conversations with customers as well as colleagues and peers within the industry, it has become increasingly clear that the public, at least the subset I deal with, is simply fed up with the massive amount of hype surrounding cloud. Everyone uses it as a selling point and has attached so many different meanings to it that it has become meaningless…white noise that hums in the background and adds no value to the conversation. To cut through that background noise, I’m going to cast the conversation in a way that is a lot less buzzy and a little more familiar. Let’s talk about cars (ha ha, again)…and how Henry Ford revolutionized the automobile industry.

First, let’s be clear that Henry Ford did not invent the automobile; he invented a way to make automobiles affordable to the common man or, as he put it, the “great multitude.” After the Model A, he realized he’d need a more efficient way to mass produce cars in order to lower the price while keeping quality at the level his cars were known for. He looked at other industries and found four principles that would further his goal: interchangeable parts, continuous flow, division of labor, and reducing wasted effort. Ford put these principles into play gradually over five years, fine-tuning and testing as he went along. In 1913, they came together in the first moving assembly line ever used for large-scale manufacturing. Ford produced cars at a record-breaking rate…and each one that rolled off the production line was virtually identical to the ones before and after it.

Now let’s see how the same principles of mass production can revolutionize the IT infrastructure as they did the automobile industry…and let’s also be clear that I am not calling this cloud, or dynamic datacenter, or whatever the buzz-du-jour is. I am simply calling it an Optimized Infrastructure because that is what it is…an IT infrastructure that produces the highest quality IT products and services in the most efficient manner and at the lowest cost.

Interchangeable Parts

Henry Ford discovered significant efficiency gains by using interchangeable parts, which meant making the individual pieces of the car the same every time. That way, any valve would fit any engine and any steering wheel would fit any chassis. The efficiencies to be gained had already been proven in the assembly of standardized photography equipment pioneered by George Eastman in 1892. This meant improving the machinery and cutting tools used to make the parts. But once the machines were adjusted, a low-skilled laborer could operate them, replacing the skilled craftsperson who formerly made the parts by hand.

In a traditional “Hand-Built” IT infrastructure, skilled engineers are essentially building servers—physical and virtual—and other IT assets from scratch, typically reusing very little with each build. They may have a “golden image” for the OS, but they then build multiple images based on the purpose of the server, its language, or the geographic location of the division or department it is meant to serve. They might layer on different software stacks with particularly configured applications, or install each application one after another. These assets are then configured by hand using run books, build lists and the like, and then tested by hand, which means the work takes time and skilled effort and still produces unacceptable numbers of errors, failures and expensive rework.

By significantly updating and improving the tools used (e.g. virtualization, configuration and change management, and software distribution), the final state of IT assets can be standardized, the way they are built can be standardized, and the processes used to build them can be standardized…such that building any asset becomes a clear and repeatable process of connecting different parts together. These interchangeable parts can be used over and over again to produce virtually identical copies of the assets at much lower cost.
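To make the “interchangeable parts” idea a bit more concrete, here is a minimal Python sketch. It is purely illustrative and assumes nothing about any particular toolset; the catalog entries (golden_image, web_stack, monitoring) and the build_server helper are hypothetical names.

```python
# Minimal, hypothetical sketch of "interchangeable parts" for IT assets:
# every server is assembled from the same small catalog of versioned parts,
# so two servers built for the same role are virtually identical copies.
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    """A standardized, reusable building block (image, package set, config)."""
    name: str
    version: str
    settings: tuple = ()   # immutable, so every copy carries identical settings

# The catalog of parts that gets reused for every build (names are made up).
CATALOG = {
    "golden_image": Part("golden_image", "2012.07"),
    "web_stack":    Part("web_stack", "1.4", (("port", 80),)),
    "monitoring":   Part("monitoring", "3.2"),
}

def build_server(role, part_names):
    """Assemble a server definition by connecting cataloged parts together."""
    return {"role": role, "parts": [CATALOG[n] for n in part_names]}

# Two web servers built from the same parts are indistinguishable.
web1 = build_server("web", ["golden_image", "web_stack", "monitoring"])
web2 = build_server("web", ["golden_image", "web_stack", "monitoring"])
assert web1 == web2
```

The value is not in the code itself but in the property it demonstrates: once the parts are standardized and versioned, assembly stops being skilled craft work and becomes a repeatable, low-cost operation.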

Division of Labor

Once Ford standardized his parts and tools, he needed to divide up how things were done in order to be more efficient. He needed to figure out which process should be done first so he divided the labor by breaking the assembly of the Model T into 84 distinct steps. Each worker was trained to do just one of these steps but always in the exact same order.

The Optimized Infrastructure relies on the same principle of dividing up the effort (of defining, creating, managing and ultimately retiring each IT asset) so that only the most relevant technology, tool or, sometimes, yes, human does the work. As can be seen in later sections, these “tools” (people, process or technology components) are then aligned in the most efficient manner, which dramatically lowers the cost of running the system and guarantees that each specific work effort can be optimized individually, independently of the system as a whole.
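As a rough sketch of what that division of labor might look like in code (the step and owner names are hypothetical, not from any particular product), each lifecycle step gets exactly one owner and always runs in the same order:

```python
# Hypothetical sketch: the asset lifecycle broken into discrete steps,
# each owned by exactly one tool (or, occasionally, a human team).
LIFECYCLE_STEPS = [
    ("define",    "service_catalog"),      # what the asset is supposed to be
    ("provision", "virtualization_api"),   # carve out the compute and storage
    ("configure", "config_management"),    # apply the standardized parts
    ("test",      "validation_suite"),     # verify the result against the standard
    ("monitor",   "monitoring_agent"),     # watch it once it is in production
    ("retire",    "decommission_job"),     # reclaim the resources at end of life
]

def owner_of(step):
    """Every step has a single, well-known owner; nothing is done ad hoc."""
    for name, owner in LIFECYCLE_STEPS:
        if name == step:
            return owner
    raise KeyError("unknown lifecycle step: " + step)

print(owner_of("configure"))   # -> config_management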

Continuous Flow

To improve efficiency even more, and lower costs even further, Ford needed the assembly line arranged so that as one task was finished another began, with minimum time spent on set-up (set-up time adds no production value). Ford was inspired by the meat-packing houses of Chicago and a grain mill conveyor belt he had seen. If he brought the work to the workers, they spent less time moving about. He adapted the Chicago meat-packers’ overhead trolley to auto production by installing the first automatic conveyor belt.

In an Optimized Infrastructure, this conveyor belt (assembly line) consists of individual process steps (automation) that are “brought to the worker”…that is, to the specific technological component responsible for each process step (see Division of Labor above)…in a well-defined pattern (workflow), with each workflow arranged in a well-controlled manner (orchestration). It is no longer human workers doing those commodity IT activities (well, in 99.99% of cases) but the system itself, leveraging virtualization, fungible resource pools and high levels of standardization, among other things. This is the infrastructure assembly line, and it is how IT assets are mass produced…each identical and of the same high quality at the same low cost.
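Here is a toy orchestration sketch (again hypothetical, not any specific workflow engine) showing the assembly-line idea: every asset moves through the same sequence of automated steps, and the next step starts as soon as the previous one finishes, with no manual hand-off in between.

```python
# Hypothetical sketch of the infrastructure assembly line: a tiny orchestrator
# that pushes every asset through the same workflow steps, in order, with no
# manual hand-offs between steps.
from collections import deque

def orchestrate(assets, workflow):
    """Run every asset through every workflow step automatically."""
    queue = deque((asset, 0) for asset in assets)   # (asset, next step index)
    while queue:
        asset, i = queue.popleft()
        workflow[i](asset)                          # the "worker" the task is brought to
        if i + 1 < len(workflow):
            queue.append((asset, i + 1))            # continuous flow: straight to the next step

# Stand-in steps; in practice these would call provisioning, config and test tools.
workflow = [
    lambda a: print("provision", a),
    lambda a: print("configure", a),
    lambda a: print("test", a),
    lambda a: print("release", a),
]
orchestrate(["web-01", "web-02"], workflow)
```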

Reducing Wasted Effort

As a final principle, Ford called in Frederick Winslow Taylor, the creator of “scientific management,” to do time and motion studies to determine the exact speed at which the work should proceed and the exact motions workers should use to accomplish their tasks, thereby reducing wasted effort.

In an Optimized Infrastructure, this is done through continuous process improvement (CPI), but CPI cannot be done correctly unless you are monitoring the performance details of all the processes, and of the system as a whole, and documenting the results on a constant basis. This requires an infrastructure-wide management and monitoring strategy, which, as you’ve probably guessed, is exactly what Frederick Taylor was providing in the Ford plant in the early 1900s.
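A rough sketch of what a “time and motion study” looks like on an infrastructure assembly line: wrap every process step in a timer and record the result, so continuous process improvement is driven by measured data rather than guesswork. The names here are illustrative only.

```python
# Hypothetical sketch: time-and-motion studies for automated process steps.
# Every step is timed and recorded so CPI decisions are based on real data.
import time
from contextlib import contextmanager

METRICS = []   # in practice this would feed the monitoring/reporting system

@contextmanager
def measured(step_name, asset):
    """Record how long a process step takes for a given asset."""
    start = time.monotonic()
    try:
        yield
    finally:
        METRICS.append({"step": step_name,
                        "asset": asset,
                        "seconds": time.monotonic() - start})

# Usage: wrap each workflow step so its duration is always captured.
with measured("configure", "web-01"):
    time.sleep(0.1)    # stand-in for the real configuration work

slowest = max(METRICS, key=lambda m: m["seconds"])
print("slowest step so far:", slowest["step"], round(slowest["seconds"], 2), "s")
```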

Whatever You Call It…

From the start, the Model T was less expensive than most other hand-built cars because of expert engineering practices, but it was still not attainable for the “great multitude” as Ford had promised the world. He realized he’d need a more efficient way to produce the car in order to lower the price. By applying the four principles of interchangeable parts, continuous flow, division of labor, and reducing wasted effort, he was able to drop the price of the Model T from $850 to $290 by 1915, and in that year he sold one million cars.

Whether you prefer to call it cloud, or dynamic datacenter, or the Great Spedini’s Presto-Chango Cave of Magic Data doesn’t really matter…the fact is that the four principles listed above, along with tools, technologies and operational methodologies that exist today (none of which are rocket science or bleeding edge), can be used to revolutionize your IT infrastructure. You can stop hand-building your IT assets (and employing your smartest and best workers to do so) and start mass producing those assets to lower your cost, increase your quality and, ultimately, significantly increase the value of your infrastructure.

With an Optimized Infrastructure of automated tools and processes where standardized/interchangeable parts are constantly reused based on a well-designed and efficiently orchestrated workflow that is monitored end-to-end, you too can make IT affordable for the “great multitude” in your organization.

Morphlabs Brings Hope to Private Cloud

Morphlabs just announced its latest private-cloud infrastructure product, the mCloud Helix, at the OSCON conference in Portland. Company CEO Winston Damarillo has long focused the company on what he calls “dynamic infrastructure services,” meaning that he aims to bring the touted benefits of public (off-site) cloud, such as flexibility and simplicity, to on-site, privately controlled infrastructures. He’s also a big open-source guy, having sold GlueCode to IBM in the “early” days.

But to me, the key aspect of Morphlabs is the company’s Philippine roots. It’s headquartered in Los Angeles, but maintains developer teams in Metro Manila as well as Cebu City, the country’s “second city” and an emerging, vibrant technology hub. A sister company, Exist Software, produces custom software and maintains most of its executive and worker-bee teams in the Philippines.

There’s a beautiful word called “pagasa” in use throughout the Philippines. It means “hope,” and companies like Morphlabs bring several dimensions of pagasa to the technology world. For one thing, this developing nation of 90 million souls and counting thirsts for high-value jobs in the technology sectors.

For another, the existence of successful software-development teams in places other than Silicon Valley brings a variety of points of view to the task of improving the world. The ideas generated in meet-ups in Manila are often better grounded in daily reality than the latest Valley frou-frou addressing the latest first-world problems.

(The present state of Google searching brings despair to us all, but you can find some of my earlier writing on the topic by googling “Strukhoff Philippines technology.”)

Morphlabs’s latest is grounded in the reality of OpenStack software and Dell servers. Here’s some of the geek stuff: it features high-performance, SSD-powered nodes and pre-integrated ZFS to eliminate the need for expensive enterprise SANs; it allegedly sets a new energy standard in watts per virtual CPU (vCPU); and its Dell PowerEdge C servers utilize the latest hyperscale technology.

A Dell executive has touted it for providing “a simple deployment for a compact private cloud.” For his part, Winston says it “empowers customers to take home and immediately deploy private clouds using best-of-breed open source software and hardware without requiring a massive CapEx investment.” It’s also being sold to service providers, so there’s no lack of confidence about its scalability.

Where there’s pagasa, there’s opportunity, and opportunity breeds innovation. Winston and his global team continue their efforts to bring pagasa to the Philippines and the world – surely, innovation will continue to follow.


PaaS on Hadoop YARN

This post describes a prototype implementation of a simple PaaS built on the Hadoop YARN framework and the key findings from the experiment. While there are some advantages to using Hadoop YARN, there is at least one unsolved issue that would be difficult to overcome at this point.
Hadoop is very popular these days because Big Data is one of the major topics people are interested in. Similarly, PaaS (Platform as a Service) is popular because cloud computing is another hot topic. So a natural question came to us: can we combine Hadoop and PaaS to address both Big Data and cloud? At the same time, the Hadoop YARN (i.e., MapReduce 2, or MR2) architecture has become much more flexible than the previous version, which made the idea seem feasible.
I went ahead and implemented a proof of concept on Hadoop YARN. Here I’d like to share its architecture and some interesting findings from it.


Why Windows Phone 8 breaks the backwards compatibility tradition

Microsoft has always maintained backwards compatibility with most of its products. Compatibility has been one of the main reasons Windows has seen such great success, specifically backwards compatibility with systems such as MS-DOS. Even today you can get to a DOS prompt (the command prompt) from Windows 7; this hasn’t changed for years, and from a functionality point of view it is a bonus.

The announcement of the new Windows Phone 8 (and previously Windows RT, on which Windows Phone 8 is based) flew in the face of Microsoft tradition: it “broke” compatibility, leaving existing applications unable to run on the new platform. This is an atypical way for Microsoft to act, but not something the industry is totally unfamiliar with.

Apple has, on more than one occasion, launched a platform that was incompatible with anything that had come before. I refer to the release …

Virtustream Aligns with SafeNet

Virtustream and SafeNet on Tuesday announced that they have entered into an agreement that will incorporate SafeNet’s market-leading authentication solutions into Virtustream’s enterprise cloud platform, xStream.
“Enterprises looking to deploy the cloud want both enterprise-grade performance and security, while still benefiting from the scalability and economics of multi-tenant virtualization technology,” said Dr. Shaw Chuang, Executive Vice President of Engineering, Virtustream. “Incorporating SafeNet’s authentication platform brings another best-of-breed offering to our platform and lays the foundation for further extensive security capabilities.”


Examining the G-Cloud Initiative – How the UK Public Sector is moving to the Cloud

Guest Post by Ben Jones

Ben Jones is a tech writer, interested in how technology helps businesses. He’s been assisting businesses in setting up cloud based IT services around the south of England.

There’s a cloud on the horizon of Whitehall. But this isn’t a prediction of stormy times ahead. No, this is the G-Cloud, and it’s being heralded by some as the government’s biggest-ever IT breakthrough.

In years gone by, the government has been accused of paying too much for IT contracts, many of which were won by a small number of suppliers. But now, the G-Cloud initiative aims to change this. The online system, called CloudStore, is part of the government’s plan to slash IT costs by £200 million per year. So how is this going to be achieved? Well, the target is to move half of the government’s IT spending to cloud computing services, and CloudStore, also dubbed the government’s app store, is the key.

It was first announced as a government strategy almost 18 months ago, in March 2011, with the specific aim of making IT services for the public sector easier and cheaper. This means ditching bespoke IT services with lengthy, expensive contracts. Instead, the initiative aims to replace these with more choice, both in suppliers and, as a result, in prices. It’s a radical change from the historic approach of both the government and the public sector. Furthermore, cloud computing has the potential to be a global governmental strategy, with the American government already having its own version in place. A look at the figures gives a clear indication why, with some government departments reporting a drop in the cost of IT services of as much as 90 per cent. Following the first CloudStore catalogue launch in mid-February, some 5,000 pages were viewed in the first two hours, and in the first ten weeks contracts worth £500,000 were signed. In this first procurement, around 257 suppliers offering approximately 1,700 services were signed to the first G-Cloud CloudStore.

It’s the government’s attempt to bring competitiveness to its suppliers, encouraging a wider selection and promoting flexibility in procurement, thus allowing more choice for the public sector. What’s interesting is the mix of small and medium-sized businesses, with over half of the suppliers signed to the first CloudStore being SMEs. This includes the likes of web hosting company Memset, whose managing director Kate Craig-Wood has backed the G-Cloud services, saying they offer value for money for the taxpayer.

This new initiative heralds a new era for the British government and the wider public sector, and it’s hoped the new IT system will put paid to the government’s history of ill-advised and mismanaged IT projects. That’s not to say there haven’t been concerns over the G-Cloud initiative. Some key concerns relate to how it will be rolled out to public sector workers across the UK, with some employees expressing fears over security as well as a lack of understanding. However, these concerns didn’t stop the second round of G-Cloud procurement in May 2012, which saw the total procurement value available soar to £100 million. This time the framework will run for 12 months rather than the six of the first iteration. The year-long contract will then become the standard, although it has been reported that it could be extended to 24 months in certain cases.


Efforts Underway to Provide Trusted Supplier Standard

This thought leadership interview examines the latest efforts to make global supply chains for technology providers more secure, verified, and therefore trusted.
The Open Group has a vision of boundaryless information flow, and that necessarily involves interoperability. But interoperability doesn’t have the effect that you want, unless you can also trust the information that you’re getting, as it flows through the system.
Therefore, it’s necessary that you be able to trust all of the links in the chain that you use to deliver your information. One thing that everybody who watches the news would acknowledge is that the threat landscape has changed. As systems become more and more interoperable, we get more and more attacks on the system.
As the value that flows through the system increases, there’s a lot more interest in cyber crime. Unfortunately, in our world, there’s now the issue of state-sponsored incursions in cyberspace, whether officially state-sponsored or not, but politically motivated ones certainly.


Cloud Computing: Coraid Unveils ZX-Series NAS

Coraid on Tuesday unveiled the new Coraid ZX-Series family of NAS servers. Designed for cloud, video and Big Data customers, this high-performance unified storage solution is powered by the Oracle Solaris ZFS file system combined with Coraid’s EtherDrive technology to enable unmatched scalability, performance and operational simplicity.
Carl Wright, executive vice president at Coraid, noted that “organizations are increasingly challenged to provide predictable, cost-effective file performance in the face of uncontrolled data growth. By extending our product family to include a best-in-class NAS offering, Coraid can meet that challenge with a unified storage solution that takes full advantage of the scalability and performance of Ethernet SAN.”


Google Exec to Be Yahoo CEO

Yahoo’s going with another plainspoken blonde.
After the market closed Monday, Yahoo’s board named Marissa Mayer, one of Google’s first employees and long-time protector of the stripped-down look of Google’s homepage, as CEO, replacing interim CEO Ross Levinsohn.
The Yahoo board is taking another flyer on Mayer, 37, considering she’s coming out of Google’s product side, responsible for the look and feel of such things as Google Mail, Google News and, since 2010, Google Maps and other Google location and local services. She has no CEO or turnaround experience, although she did sit on Google’s operating committee and is on the board of Wal-Mart.
It’s speculated that the Third Point contingent on Yahoo’s board, which got there recently by way of a threatened proxy fight, was instrumental in recruiting her. Presumably its members think Yahoo’s products and technology need an overhaul to become more competitive with Google and Facebook.


Crunching the Numbers in Search of a Greener Cloud

Although sometimes portrayed as a big computer in the sky, the reality of cloud computing is far more mundane. Clouds run on physical hardware, located in data centres, connected to one another and to their customers via high-speed networks. All of that hardware must be powered and cooled, and all of those offices must be lit. Whilst many data centre operators continue to make welcome strides toward increasing the efficiency of their buildings, machines and processes, these advances remain a drop in the ocean next to the environmental implications of choices made about power source.

With access to good information, might it be possible for users of the cloud to make choices that save themselves money, whilst at the same time saving (a bit of) the planet? Greenpeace has consistently drawn attention to the importance of energy choices in evaluating the environmental credentials of data centres, with 2011’s How Dirty Is Your Data? report continuing to polarise arguments after more than a year.

The most efficient modern data centres deploy an impressive arsenal of tricks to save energy (and therefore money), and to burnish their green credentials. They use the most efficient modern processors, heat offices with waste server […]

