Tag Archives: cloud

Big Daddy Don Garlits & the Cloud: Capable Vs. Functional

I know what you’re thinking, yet another car analogy, but bear with me, I think you’ll like it…eventually ;)

When I was a kid, like around 11 or 12, during the summers I would ride my bike into town to go to the municipal pool to hang out with my friends and basically have fun.  On my way to the pool I used to ride past a garage and body shop in my neighborhood and sometimes I would stop to look around.  One day I found it had a back lot where there were a bunch of cars parked amongst the weeds, broken concrete and gravel.  I don’t remember thinking about why the cars were there except that maybe they were in various states of repair (or disrepair as the case may be…lots of rust, not a lot of intact glass) or that they were just forgotten about and left to slowly disintegrate and return to nature.

Back then I do remember that I was seriously on the path toward full-on car craziness, as I was just starting to dream of driving, feeling the wind in my hair (yeah, it was that long ago) and enjoying the freedom I imagined it would bring.  I was a huge fan of "CARtoons," which was sort of the Mad Magazine of cars, and basically lusted after hot rods, dragsters and sports cars.  I was endlessly scribbling car doodles on my notebooks and in the margins of textbooks.  I thought of myself as a cross between Big Daddy Don Garlits and a sports car designer.  In fact, I used to spend hours drawing what I thought was the perfect car and would give the design to my dad who, back then, was a car designer for the Ford Motor Company.  I have no idea whatever happened to those designs, but I imagine they were conspicuously put in his briefcase at home and dumped in the trash at work.

Anyway, among the various shells of once bright and gleaming cars in that back lot, almost hidden amongst the weeds, was a candy-apple red Ford Pantera or, more accurately, a De Tomaso Pantera, designed and built in Italy and powered by a Ford engine (and eventually imported to the US to be sold in Lincoln/Mercury dealerships).  The car sat on half-filled radial tires (relatively new to the US at the time) and still sparkled as if it had just come off the showroom floor…ha ha, or so my feverish, car-obsessed, pre-teen brain thought it sparkled.  It was sleek, low to the ground and looked as if it were going 100 miles an hour just sitting there.  It was a supercar before the word was coined, and I was deeply, madly and completely in love with it.

Of course, at 12 years old the only thing I could really do was dream of driving the car—I was, after all, 4 years away from even having a driver’s license—but I distinctly remember how vivid those daydreams were, how utterly real and “possible” they seemed.

Fast forward to now and to the customers I consult with about their desire to build a cloud infrastructure within their environments. They are doing exactly what I did almost 40 years ago in that back lot; they are looking at shiny new ways of doing things: being faster, highly flexible, elastic, personal, serviceable—more innovative—and fully imagining how it would feel to run those amazingly effective infrastructures…but…like I was back then, they are just as unable to operate those new things as I was unable to drive that Pantera.  Even if I could have afforded to buy it, I had no knowledge or experience that would enable me to effectively (or legally) drive it.  That is the difference between being Functional and Capable.

The Pantera was certainly capable but *in relation to me* was not anywhere near being functional.  The essence and nature of the car never changed, but my ability to effectively harness its power and direct it toward some beneficial outcome was zero; therefore the car was non-functional as far as I was concerned.  In the same way, a cloud infrastructure—fully built out with well-architected components, tested and running—would be non-functional to customers who did not know how to operate that type of infrastructure.

In short; cloud capable versus cloud functional.

The way a cloud infrastructure should be operated is based on the idea of delivering IT services, not the traditional idea of servers, storage and networks being individually built, configured and connected by people doing physical stuff.  Cloud infrastructures are automated and orchestrated to deliver specific functionality, aggregated into specific services, quickly and efficiently, without the need for people doing "stuff."  In fact, people doing stuff is too slow and just gets in the way, and if you don't change the operations of the systems to reflect that, you end up with a very capable yet non-functional system.
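To make the contrast concrete, here is a minimal sketch, in Python against an entirely hypothetical orchestration API (every name below is invented for illustration), of what "people not doing stuff" looks like: the request names a service, and automation assembles the pieces underneath it from a pre-tested blueprint.

    # Hypothetical sketch: requesting an IT service from an orchestration
    # layer instead of asking people to rack, cable and configure pieces
    # by hand. All names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class ServiceRequest:
        name: str        # the service being asked for, not a server
        tier: str        # maps to a pre-built, pre-tested blueprint
        instances: int   # elasticity is a number, not a purchase order

    # A blueprint is just an ordered list of automated steps.
    BLUEPRINTS = {
        "web-standard": [
            lambda r: print(f"allocating compute for {r.name}"),
            lambda r: print(f"attaching storage for {r.name}"),
            lambda r: print(f"wiring network/load balancing for {r.name}"),
        ]
    }

    def provision(request: ServiceRequest) -> str:
        """Orchestration runs each automated step; no tickets, no 'stuff.'"""
        for step in BLUEPRINTS[request.tier]:
            step(request)
        return f"{request.name}: {request.instances} instance(s) online"

    print(provision(ServiceRequest("intranet-portal", "web-standard", 2)))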

Literally, you have to transform how you operate the system—from a traditional to a cloud infrastructure—in lock-step with how that system is materially changed, or it will be very much the same sort of difference as between me riding my bicycle into town at 12 years old and me driving a candy-apple red Pantera.  It's just dreaming until the required knowledge and experience are obtained…none of which is easy or quick…but tell that to a 12-year-old lost in his imagination, staring at sparkling red freedom and adventure…

Mind the Gap – Consumerization of Innovation

The landscape of IT innovation is changing. "Back in the day" (said in my gravelly old-man voice from my Barcalounger, wearing my NetWare red t-shirt), companies that were developing new technology solutions brought them to the enterprise and marketed them to the IT management stack. CIOs, CTOs and IT directors were the injection point for technology acceptance into the business. Now, that injection point has been turned into a fire hose.

Think about many of the technologies we have to consider as we develop our enterprise architectures:  tablets, smartphones, cloud computing, application stores, and file synchronization. Because our users and clients are consuming these technologies today outside of IT, we need to be aware of what they are using, how they are using it, and what bunker-buster is likely to be dropped into our lap next.

Sure, you can argue that "tablets" had been around for a number of years prior to the release of the iPad in 2010.  Apple's own Newton MessagePad, from 1993, is often cited as the first computing tablet. HP, IBM and others developed "tablets" going back to 2000 based on the Microsoft Tablet PC specification. These did gain some traction in certain industries (construction/architecture, medical).  However, they were primarily converted laptops with minimally innovative capabilities that failed to gain mass adoption. With the iPad, Apple demonstrated the concept of consumerization of innovation by developing the platform for the needs of the consumer market first, addressing the reasons why people would use a computing tablet instead of just pounding current corporate technology into a new shape.

Now, IT has to deal with mass iPad usage by their users and customers.

Similarly, cloud services have been used in the consumer market for over a decade. Many of the services users consume outside of the enterprise are cloud services (iTunes, Dropbox, Skype, Pandora, social networking, etc.). As consumers of these services, users gain functionality that is not always available from the enterprises they work for. They can select, download and install applications that address their specific needs (self-service, anyone?). They can share files with others around the globe. They can select the type of content they consume and how they communicate with others via streaming audio, video and news feeds. And don't get me started on Twitter.

And this is the Gap IT needs to close.

We have tried to show our user population and our business owners the deficiencies in these technologies in terms of security, availability, service levels, management and other great IT industry "talk to the hand" terminology.  We've turned blue in the face and stamped our feet like a 2-year-old in the candy aisle.  But has that stopped the pressure to adopt and enable these technologies within the enterprise? Remember, our business owners are consumers too.

IT needs to give a little here to maintain a modicum of control over the consumption of these technologies. The tech companies will continue to market to the masses (wouldn’t you?) as long as that mass market continues to consume.  And we, as IT people, will continue to face that mounting pressure and have to answer the question: “Why can’t we do that?” The net is that the pendulum of innovation is now swinging to the consumer side of the fulcrum. IT is reacting to technology instead of introducing it.

To close this Gap, we need to develop ways of saying “yes” without compromising our policies and standards, and do it efficiently. Is there a magic bullet here? No. But we have to recognize the inevitable and start moving toward the light. 

My best advice today is to be open-minded about what users are asking for. Expand your acceptance of user-initiated technology requests (many of them may be great ways to solve long-term issues). Become an enabler instead of a CI-"No." Adjust your perspectives to allow for flexibility in your control processes, tools and metrics.  And, most important of all, become a consumer of the consumer innovations. Knowledge is power, and experience is the best teacher we have.

 

Cloud Corner Series – The Networking & Storage Challenges Around Clustered Datacenters



www.youtube.com/watch?v=fRl-KDveZQg

In this new episode of Cloud Corner, Director of Solutions Architecture Randy Weis and Solutions Architect Nick Phelps sit down to talk about clustered datacenters from both a networking and storage perspective. They discuss the challenges, provide some expert advice, and talk about what they think will be in store for the future. Check it out and enjoy!


Mind the Gap – Transitioning Your IT Management Methodology

At the recent GreenPages Summit, I presented on a topic that I believe will be key to our success (for those of us in IT management) as we re-define IT in the "cloud" era.  In the past, I have tried to define the term "cloud," describing it as anything from "an ecosystem of compute capabilities that can be delivered upon demand from anywhere to anywhere" to "IT in 3D."  In truth, its definition is not really that important, but how we enable the appropriate use of it in our architectures is.

One barrier to adopting cloud as a part of an IT strategy is how we will manage the resources it provides us.  In theory, cloud services are beyond our direct control.  But are they beyond our ability to evaluate and influence?

IT is about enablement.  Enabling our customers or end users to complete the tasks that drive our businesses forward is our true calling.  Enabling the business to gain intelligence from its data is our craft.  So, we must strive to enable, where appropriate and effective, the use of cloud services as part of our mission.  What, then, is the impact to IT management?

There are the obvious challenges.  Cloud services are provided by, and managed by, those from whom we consume them.  Users utilizing cloud services may do so outside of IT control.  And what happens when data and services step into that void where we cannot see?

In order to manage effectively in this brave new world of enablement, we must start to transition our methodologies and change our long-standing assumptions of what is critical.  Sure, we still have to manage and maintain our own datacenters (unless you go 100% service provider).  However, our concept of a datacenter has to change.  For one thing, datacenters are not really “centers” anymore. Once you leverage external resources as part of your overall architecture, you step outside of the hardened physical/virtual platforms that exist within your own facilities.  A datacenter is now “a flexible, secure and measurable compute utility comprised of delivery mechanisms, consumption points, and all connectivity in between.”

And so, we need to change how we manage our IT resources.  We need to expand our scope and visibility to include both the cloud services that are part of our delivery and connectivity mechanisms, and the endpoints used to consume our data and services.  This leads to a fundamental shift in daily operations and management.  Going forward, we need to be able to measure our service effectiveness end to end, even if, in between, our services travel through systems that are not our own to devices we did not provision.

This is a transition, not a light-switch event.  Over the next few blogs I hope to focus some attention on several of the gaps that will exist as we move forward.  As a sneak peek, consider these statements:

  • Consumerization of technical innovation
  • Service-oriented management focus
  • Quality of Experience
  • "Greatest Generation" of users

Come on by and bring your imagination.  There is not one right or wrong answer here, but a framework for us to discuss what changes are coming like a speeding train, and how we need to mind the gap if we don’t want to be run over.

Cloudscape 2012: WhatsUp at GreenPages? Journey to success!

Guest Post from Caitlin Buxton, Director of North American Channel Sales, WhatsUp Gold Network Management Division of Ipswitch, Inc.

The WhatsUp Gold team attended the GreenPages Annual Technology Summit this week on the scenic New Hampshire/Maine Seacoast. This event was one of the most valuable technology summits we have participated in this year. The three-day event showcased the exemplary talent, skill, and professionalism that GreenPages brings to the IT community for both clients and vendor partners.

During the Partner Pavilion, we exhibited WhatsUp Gold's suite of network management and log management solutions and showed attendees how these solutions install, discover, and map network-connected assets in minutes. We also showcased the powerful SNMP, WMI and SSH monitoring, alerting and notification capabilities, and the web-based management that gives organizations a complete picture of an entire network infrastructure in real time.

The entire GreenPages staff worked very closely with our team, both in pre-event planning and during the event, to make sure our investment and time were well spent: engaging with their clients, learning their challenges, and understanding how our solutions can make life easier. The GreenPages Account Managers were fantastic in providing insight into their clients' needs and facilitating productive conversations.

I was also impressed by how many clients raved about the incredible value they receive from GreenPages. Repeatedly, I was told how hard the GreenPages team works to understand their individual business needs and helps to deliver solutions and information specific to their needs. They are always looking out for their customers’ best interests.

This is not surprising given that 100 IT Executives with limited time and budgets would not have travelled from all over the country for this event if they did not get significant value from it. However, it was refreshing to hear directly from the customers. It validates the pride I have in our GreenPages partnership knowing such a quality organization is on our team representing the WhatsUp Gold family of solutions.

Well done GreenPages! Thank you!

The Operational Consistency Proxy

Cloud makes more urgent the need to consistently manage infrastructure and its policies regardless of where that infrastructure might reside.


While the potential for operational policy (performance, security, reliability, access, etc.) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.

F5 has long offered a single-pane-of-glass management solution for F5 systems with Enterprise Manager (EM), and recently introduced significant updates that increase its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.

AUTOMATE COMMON TASKS

The latest release of F5 EM includes enhancements to its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling/disabling resources, which assists in automating provisioning and de-provisioning processes as well as what many might consider mundane – and yet critical – maintenance-window operations.

Updating policies, too, assists in maintaining operational consistency across all F5 solutions – whether in the data center or in the cloud. This is particularly important in the realm of security, where control over access to applications is often far less under the control of IT than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with the ability for F5 EM to manage such distributed instances in conjunction with data center deployed instances provides for consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across enterprise concerns.

The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.

INTEGRATED PERFORMANCE METRICS

F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5's visibility into application performance based on its strategic location in the architecture – residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a plethora of application-performance statistics, which are then aggregated and reported on, by application, in EM's Centralized Analytics Module.

These metrics enable capacity planning and troubleshooting, and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and their related impact, whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand.

Monitoring includes:

  • Device Level Visibility & Monitoring
  • Capacity Planning
  • Virtual Level & Pool Member Statistics
  • Object Level Visibility
  • Near Real-Time Graphics
  • Reporting

In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.

Alerts include:

  • Device status change
  • SSL certificate expiration
  • Software install complete
  • Software copy failure
  • Statistics data threshold
  • Configuration synchronization
  • Attack signature update
  • Clock skew

When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
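To give a feel for what driving BIG-IP through iControl looks like, here is a minimal sketch using bigsuds, an open-source Python wrapper for the iControl API; the hostname and credentials are placeholders, and polling pool status is simply one illustrative use, not a prescribed EM workflow.

    # Minimal sketch: querying a BIG-IP via the iControl API using the
    # open-source bigsuds Python library (pip install bigsuds).
    # Hostname and credentials below are placeholders.
    import bigsuds

    b = bigsuds.BIG_IP(hostname='192.0.2.10',
                       username='admin',
                       password='admin')

    # Inventory the configured pools...
    pools = b.LocalLB.Pool.get_list()

    # ...and pull their availability status, the kind of data that feeds
    # thresholds and alerts when aggregated centrally.
    for name, status in zip(pools, b.LocalLB.Pool.get_object_status(pools)):
        print(name, status['availability_status'], status['status_description'])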

OPERATIONAL CONSISTENCY PROXY

By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents whether performance or security related.

F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.

Happy Managing!



News on Windows 2012, Office 365 and Canadian Police

I had the pleasure of attending the Microsoft Worldwide Partner Conference in Toronto, Canada earlier this month, and worldwide it was: 16,000 attendees squeezed into the Air Canada Centre for Microsoft's morning keynote speeches.  That's the most that arena has seen inside its snug confines since Vince Carter was dunking on opposing players, or I guess since Vince Carter could dunk, period.  It was a week in which Microsoft made some big announcements, covered some important changes and showcased some new products, "Eh."

The first major announcement was that Microsoft's Office 365 cloud solution will be available for purchase under the Open licensing program later this year.  Office 365 was released last summer and has been solely available for customers to purchase online; although partners like GreenPages would assist with quoting the subscription, ultimately customers would purchase the monthly subscription directly from Microsoft, which can be a little painstaking and nevertheless confusing (like this sentence is).  With the move to volume licensing, we'll be able to invoice the customer directly like we would with an on-premise product, making the process much simpler for you and giving you another avenue to purchase the subscription.  Most likely it will be available through the Open Value program; details are still being ironed out, so be on the lookout, as we'll provide the latest information on when this will be available through volume licensing.

The other news is the announcement that Windows 8 is set to be released to manufacturing in August, with general availability in October.  Microsoft is very excited about this new release, saying it is the most anticipated release they've had since XP.  They showcased some pretty nifty touchscreen laptops with Windows 8 Professional loaded on, which I would have loved to bring back to the States, and I would have, assuming the Royal Canadian Mounted Police didn't finally catch up with me at the border.

The biggest news is the upcoming release of Windows 2012, which is scheduled for general availability in early September and will offer new enhancements centered on Hyper-V.  Along with the new features there are some major licensing changes, the loss of an edition (nice knowing you, Enterprise) and upgrade paths if you have current Software Assurance.

The first change with Windows 2012 is that it will move to a more consistent licensing model in which each edition has the same exact common features; however, the number of editions has been reduced.  With Windows 2012 there will be only two editions: Standard and Datacenter.  Windows Enterprise, on the other hand, has been cut from the team and will not be at training camp when Windows 2012 debuts.  So you're probably wondering: if Standard and Datacenter have the exact same features and can perform the same tasks, then what is the difference between the two?  It's all in the licensing, but before we get into the licensing, let's check out the new features in Windows 2012 Standard edition, which previously were only available in the premium editions.

Both Windows Standard and Datacenter will include these features, among others:

-Windows Server Failover Clustering

-BranchCache Hosted Cache Server

-Active Directory Federated Services

-Additional Active Directory Certificate Services capabilities

-Distributed File Services

-DFS-R Cross-File Replication

Along with the new features there is a new licensing model for Windows 2012.  Both Windows 2012 Standard and Datacenter will now be licensed by the processor; the days of per-server licensing are gone, and the biggest reason for that is virtualization.  What differentiates the two editions is the number of Virtual Machines (VMs) that are entitled to be run with each edition.  A Standard edition license will entitle you to run up to two VMs on up to two processors.  A Datacenter edition license will entitle you to run an unlimited number of VMs on up to two processors.  Each license of Standard and Datacenter covers two processors, so, for example, if you have a quad-processor host, you would purchase 2 x Two-Processor licenses.  The Two-Processor license cannot be split up, meaning you can't put one processor license on one server and the other on another, nor can you combine a Standard and a Datacenter license on the same host.  The processor license does not include CALs; Windows CALs would still have to be purchased separately.
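If you want to sanity-check your own hosts against this model, here is a small sketch that applies the rules above; the arithmetic is my own reading of the licensing described in this post (in particular, that stacking Standard licenses for extra VMs means covering all physical processors again), not official Microsoft guidance.

    # Rough license calculator based on the rules described above: every
    # Windows 2012 license covers up to two processors; Standard entitles
    # two VMs per assigned set of licenses; Datacenter entitles unlimited
    # VMs. My own reading for illustration; verify against Microsoft's
    # current licensing terms.
    import math

    def datacenter_licenses(processors: int) -> int:
        # Unlimited VMs; just cover the processors, two per license.
        return math.ceil(processors / 2)

    def standard_licenses(processors: int, vms: int) -> int:
        proc_cover = math.ceil(processors / 2)  # licenses to cover the host
        vm_sets = max(1, math.ceil(vms / 2))    # each covering set adds 2 VMs
        return proc_cover * vm_sets

    # Example from the post: a quad-processor host needs 2 x Two-Processor
    # licenses of either edition just to cover the hardware.
    print(datacenter_licenses(4))   # -> 2
    print(standard_licenses(4, 2))  # -> 2
    print(standard_licenses(2, 4))  # -> 2 (two Standard licenses, four VMs)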

OK, now that I have dropped this knowledge on you, what should you expect moving forward?  Let's talk about pricing and what this new model is going to cost you.  A Two-Processor license of Datacenter will retail for $4,809, which breaks down to about $2,405 a CPU.  The current retail price for a Windows 2008 R2 Datacenter per-processor license is $2,405, so nothing has changed there.  For Windows 2012 Standard, a Two-Processor license retails for $882.  Those of you accustomed to purchasing Windows 2008 R2 Enterprise for $2,358 MSRP so you could use the four VMs that came with it will notice that the price to get four VMs of Windows 2012 (2 x Two-Processor Windows 2012 Standard = $1,764) is actually less than what Windows 2008 R2 Enterprise costs.  The issue will be for those who need Windows Standard for a physical server.  Since there is no Windows 2012 license for physical servers only, you'll have to purchase the Two-Processor license.  Currently, Windows 2008 R2 Standard edition runs for $726 retail, so you will be paying more to use Windows on physical servers.

Once Windows 2012 is released, you’ll still be able to use prior versions, which is known as downgrade rights.  Windows 2012 Datacenter edition can downgrade to any prior version or lower edition.  Windows 2012 Standard edition gives you rights to downgrade to any prior version of Standard or Enterprise edition.

In addition, if you have current Software Assurance (SA) on your Windows 2008 R2 licenses, you are entitled to Windows 2012.  If you have Software Assurance on Datacenter edition, you will be entitled to Windows 2012 Datacenter edition.  Today, a Datacenter license covers one processor, and a Datacenter 2012 license will cover two processors, so for every two current Datacenter licenses with Software Assurance, you will receive one Windows 2012 Datacenter edition license.  If you have Software Assurance on Enterprise edition, you will be entitled to receive 2 x Two-Processor Standard 2012 edition licenses, so that you still have coverage for four VMs.  Lastly, if you have Software Assurance on Standard edition, you'll receive one Windows 2012 Standard edition license for each Standard edition license you own.

As you're taking this news in, there are a few things I'd recommend considering.  First, if you're looking to purchase Windows over the next couple of months, prior to Windows 2012's release, you should look at purchasing it with Software Assurance, because that will give you new-version rights to Windows 2012 once it ships.  Keep in mind you don't have to load Windows 2012 right away, but having Software Assurance will give you access when you decide to.  Also, there may be instances where you need to add VMs to your host, specifically hosts running Windows Standard, and the only way to add more VMs is to purchase additional Windows Standard licenses.  Second, if you think you'll be adding a substantial number of VMs in the future but don't want to invest in Datacenter today, you can purchase Windows Standard with Software Assurance through these participating license programs: Open Value, Select and Enterprise Agreement.  By doing so you will be eligible to "Step-Up" your Standard license to Datacenter.  Step-Up is Microsoft's term for an upgrade.  The Step-Up license will allow you to upgrade from your Standard edition license to Datacenter edition, thus providing you unlimited VMs on that host.  Again, the Standard license must have current Software Assurance and be purchased through the aforementioned licensing programs.

Obviously this is big news that will create many more questions, and we're here to assist and guide you through the purchase process, so feel free to reach out to your GreenPages Account Executive for more details.

Automation & Orchestration Part 1: What’s In A Name? That Which We Call a “Service”…

The phrases "service," "abstraction," and "automation & orchestration" are used a lot these days. Over the course of the next few blogs, I am going to describe what I think each phrase means, and in the final blog I will describe how they all tie together.

Let's look at "service." To me, when you trim off all the fat, that word means "something (from whom) that provides a benefit to something (to whom)." The first thing that comes to mind when I think of who provides me a service is a bartender. I like wine. They have wine behind the bar. I will pay them the price of a glass + 20% for them to fill that glass & move it from behind the bar to in front of me. It's all about services these days. Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service. Professional services. Service level agreements. No shirt, no shoes, no service.

Within a company, there are many people working together to deliver a service. Some deliver to external people & some to internal people. I want to examine an internal service because those tend to be much more loosely defined & documented. If a company sells an external service to a customer, chances are that service is very well defined because that company needs to describe in very clear terms exactly what the customer is getting when they shell out money. If that service changes, careful consideration needs to be paid to the ways that service can add more benefit (i.e., make the company more money) and the ways parts of that service will change or be removed. Think about how many "Terms of Service & Conditions" pamphlets you get from a credit card company and how many pages each one is.

It can take many, many hours as a consultant to understand a service as it exists in a company today. Typically, the "something" that provides a benefit is the many people who work together to deliver that service. In order to define the service and its scope, you need to break it down into manageable pieces…let's call them "tasks." And those tasks can be complex, so you can break those down into "steps." You will find that each task, with its one or more steps, is usually performed by the same person over and over again. Or, if the task is performed a lot (many times per day), then it can usually be executed by any member of a team and not just a single person. Having the capability internally for more than one person to perform a task also protects the company from when Bob in accounting takes a sick day, or when Bob in accounting takes home a pink slip. I'll throw in a teaser for when I cover automation and orchestration…it would be ideal if not only Bob could do a task, but a computer as well (automation). That also may play into Bob getting a pink slip…but, again, more on that later. For now, Bob doesn't need to update his resume.
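To make that decomposition concrete, here is a minimal sketch (my own illustration, not a formal methodology) of a service broken into tasks and steps, with each step recording who, or what, can perform it, so the "only Bob knows how" risk is easy to spot.

    # Minimal sketch of the decomposition described above: a service is
    # made of tasks, a task is made of steps, and each step records who
    # (or what) can perform it. All names are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Step:
        description: str
        performers: list  # people and/or "automation" able to do this step

    @dataclass
    class Task:
        name: str
        steps: list = field(default_factory=list)

    @dataclass
    class Service:
        name: str
        tasks: list = field(default_factory=list)

        def single_points_of_failure(self):
            """Steps only one performer can do: the 'Bob took a sick
            day' risk described above."""
            return [s for t in self.tasks for s in t.steps
                    if len(s.performers) == 1]

    onboarding = Service("new-hire onboarding", tasks=[
        Task("create accounts", steps=[
            Step("create email account", ["Bob", "automation"]),
            Step("grant payroll access", ["Bob"]),  # only Bob knows how
        ]),
    ])

    for step in onboarding.single_points_of_failure():
        print("at risk:", step.description)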

A lot of companies have not documented many, if any, of the internal services they deliver. I'm sure there is someone who knows the service from soup to nuts, but it's likely they don't know how to (can't) do every task—or—may not have the authority/permission (shouldn't) to do the task. Determining who in a company performs what task(s) can be a big undertaking in and of itself. And then, once you find Bob (sorry to pick on you, Bob), it takes a lot of time for him to describe all the steps he does to complete a task. And once you put it on paper & show Bob, he remembers that he missed a step. And once you've pieced it all together and Bob says, "Yup, that about covers it," you ask Bob what happens when something goes wrong and he looks at you and says, "Oh man, where do I begin?"

That last part is key. When things go well, I call it the "Happy Day Scenario." But things don't always go well (ask the Yankees about the 2004 season), and just as important, if not more so, in understanding a service is knowing what to do when the Bob hits the fan. This part is almost never documented. Documentation is boring to lots of people, and it's hard enough for people to capture what the service *should* do, let alone what it *could* do if something goes awry. So it's a challenge to get people to recall and also predict what could go wrong. Documenting and regurgitating the steps of a business service "back" to the company is a big undertaking and very valuable to that company. Without knowing what Bob does today, it's extremely hard to tell him how he can do it better.

Fun with Neologism in the Cloud Era

Having spent the last several blog posts on more serious considerations about cloud computing and the new IT era, I decided to lighten things up a bit.  The term "cloud" has bothered me from the first time I heard it uttered, as the concept and definition are as nebulous as, well, a cloud.  In the intervening years, while thoroughly boring my wife and friends with shop talk about the "cloud," I came to realize that in order for cloud computing to become mainstream, "it" needs to have some way to translate to the masses.

Neologism is the process of creating new words using existing words, or combinations of existing words, to form a more descriptive term.  In our industry neologisms have been used extensively, although many of us do not realize how these terms got coined.  For example, the word "blog" is a combination of "web" and "log."  "Blog" was formed over time as the lexicon was adopted.  It began with a new form of communicating across the Internet, known as a web log.  "Web log" became "we blog" simply by moving the space between the words one spot to the left.  Now, regardless of who you talk to, the term "blog" is pretty much a fully formed concept.  Similarly, the term "Internet" is a combination of "inter" (between) and "network," hence meaning between networks.

Today, the term "cloud" has become so overused that confusion reigns (get it?) over everyone.  So, in that spirit, here are a few cloud neologisms of my own:

Cloudable – something that is conducive to leveraging cloud.  As in: "My CRM application is cloudable" or "We want to leverage data protection that includes cloudable capabilities."

Cloudiac – someone who is a huge proponent of cloud services.  A combination of "cloud" and "maniac," as in: "There were cloudiacs everywhere at Interop."  In the not-too-distant future, we very well may see parallels to the "Trekkie" phenomenon.  Imagine a bunch of middle-aged IT professionals running around in costumes made of giant cotton balls and cardboard lightning bolts.

Cloudologist – an expert in cloud solutions.  Different from a cloudiac, the cloudologist actually has experience developing and utilizing cloud-based services.  This will lead to master's degree programs in Cloudology.

Cloutonomous – maintaining your autonomy over your systems and data in the cloud.  "I may be in the cloud, but I make sure I'm cloutonomous."  Could refer to the consumer of cloud services not being tied into long-term service commitments that may inhibit their ability to move services in the event of a vendor failing to hit SLAs.

Cloud crawl – actions related to monitoring or reviewing your various cloud services.  "I went cloud crawling today and everything was sweet."  A take-off on the common "pub crawl," just not as fun and with no lingering after-effects.

Counter-cloud – a reference to the concept of "counter culture," which dates back to the hippie days of the '60s and '70s.  In this application, it would describe a person or business that is against utilizing cloud services, mainly because it is the new trend, or because they feel it's the latest government conspiracy to control the world.

Global Clouding – IT's version of Global Warming, except in this case the world isn't becoming uninhabitable; IT is just becoming a bit fuzzy around the edges.  What will IT be like with the advent of Global Clouding?

Clackers – cloud and hacker.  Clackers are those nefarious, shadowy figures who focus on disrupting cloud services.  This "new" form of hacker will concentrate on capturing data in transit, traffic disruption/re-direction (DNS Changer, anyone?), and platform incursion.

Because IT is so lexicon-heavy, building up a stable of cloud-based terminology is inevitable, and potentially beneficial in focusing the terminology further.  Besides, as cloudiacs will be fond of saying… "resistance is futile."

Do you have any Neologisms of your own? I’d love to hear some!