Tag Archives: cloud

The Operational Consistency Proxy

#devops #management #webperf Cloud makes it even more urgent to manage infrastructure and its policies consistently, regardless of where that infrastructure might reside


While the potential for operational policy (performance, security, reliability, access, etc..) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.

F5 has long offered a single-pane of glass management solution for F5 systems with Enterprise Manager (EM) and recently introduced significant updates that increase its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.

AUTOMATE COMMON TASKS

The latest release of F5 EM enhances its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling or disabling resources. This assists in automating provisioning and de-provisioning processes as well as what many might consider mundane – and yet critical – maintenance window operations.

Updating policies, too, assists in maintaining operational consistency across all F5 solutions – whether in the data center or in the cloud. This is particularly important in the realm of security, where access to applications is often far less under IT’s control than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with F5 EM’s ability to manage such distributed instances alongside data center deployed instances provides consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across the enterprise.

The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.

INTEGRATED PERFORMANCE METRICS

F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5’s visibility into application performance, a result of its strategic location in the architecture – residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a wealth of application performance statistics, which are then aggregated and reported on by application in EM’s Centralized Analytics Module.

These metrics enable capacity planning and troubleshooting, and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and their related impact, whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand.

Monitoring includes:

  • Device Level Visibility & Monitoring
  • Capacity Planning
  • Virtual Level & Pool Member Statistics
  • Object Level Visibility
  • Near Real-Time Graphics
  • Reporting

In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.

Alerts include:

  • Device status change
  • SSL certificate expiration
  • Software install complete
  • Software copy failure
  • Statistics data threshold
  • Configuration synchronization
  • Attack signature update
  • Clock skew

When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
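
To make the automation hook concrete, here is a minimal Python sketch (not an F5-provided tool, and the alert text it matches is hypothetical) of how syslog-delivered alerts might be folded into a broader automated workflow, such as flagging SSL certificate expirations for follow-up:

```python
# Minimal sketch: listen for alerts delivered as syslog events and react to
# SSL certificate expiration notices. The message format matched here is
# hypothetical; a real deployment would match the actual alert text and
# forward to a ticketing or orchestration system instead of printing.
import socket

HOST, PORT = "0.0.0.0", 5514   # syslog is normally UDP 514; unprivileged port used for the sketch

def handle(message: str) -> None:
    if "certificate" in message.lower() and "expir" in message.lower():
        # Hook point: open a ticket, kick off renewal automation, page on-call, etc.
        print(f"[ACTION] certificate expiration alert: {message}")
    else:
        print(f"[INFO] {message}")

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((HOST, PORT))
        while True:
            data, _addr = sock.recvfrom(4096)
            handle(data.decode("utf-8", errors="replace"))

if __name__ == "__main__":
    main()
```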

OPERATIONAL CONSISTENCY PROXY

By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents whether performance or security related.

F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.


Happy Managing!



News on Windows 2012, Office 365 and Canadian Police

I had the pleasure of attending the Microsoft Worldwide Partner Conference in Toronto, Canada earlier this month, and worldwide it was, as 16,000 attendees squeezed into the Air Canada Center for Microsoft’s morning keynote speeches. That’s the most that arena has seen inside its snug confines since Vince Carter was dunking on opposing players, or I guess since Vince Carter could dunk, period. It was a week Microsoft spent making some big announcements, covering some important changes, and showcasing some new products, “Eh.”

The first major announcement was that Microsoft’s Office 365 cloud solution will be available for purchase under the Open Licensing Program later this year. Office 365 was released last summer and has been available for customers to purchase solely online; although partners like GreenPages would assist with quoting the subscription, ultimately customers would purchase the monthly subscription directly from Microsoft, which can be a little painstaking and nevertheless confusing (like this sentence is). Now with the announcement that Office 365 will be available through volume licensing, we’ll be able to invoice the customer directly as we would with an on-premise product, making the process much simpler for you and giving you another avenue to purchase the subscription. Most likely it will be available through the Open Value program; details are still being ironed out, so be on the lookout, as we’ll provide the latest information on when this will be available through volume licensing.

The other news is the announcement that Windows 8 is set to be released to manufacturing in August, with general availability in October. Microsoft is very excited about this new release, calling it the most anticipated release they’ve had since XP. They showcased some pretty nifty touchscreen laptops with Windows 8 Professional loaded on, which I would have loved to bring back to the States. And I would have, assuming the Royal Canadian Mounted Police didn’t finally catch up with me at the border.

The biggest news is the upcoming release of Windows 2012, which is scheduled for General Availability in early September and will offer new enhancements centered around Hyper-V. Along with the new features there are some major licensing changes, the loss of an edition (nice knowing you, Enterprise), and upgrade paths if you have current Software Assurance.

The first change with Windows 2012 is a move to a more consistent licensing model in which every edition has the exact same features; however, the number of editions has been reduced. With Windows 2012 there will only be two editions: Standard and Datacenter. Windows Enterprise, on the other hand, has been cut from the team and will not be at training camp when Windows 2012 debuts. So you’re probably wondering: if Standard and Datacenter have the exact same features and can perform the same tasks, then what is the difference between the two? It’s all in the licensing, but before we get into the licensing, let’s check out the new features in Windows 2012 Standard edition which previously were only available in the premium editions.

Both Windows Standard and Datacenter will include these features, among others:

-Windows Server Failover Clustering

-BranchCache Hosted Cache Server

-Active Directory Federated Services

-Additional Active Directory Certificate Services capabilities

-Distributed File Services

-DFS-R Cross-File Replication

Along with the new features there is a new licensing model for Windows 2012. Both Windows 2012 Standard and Datacenter will now be licensed by the processor; the days of per-server licensing are gone, and the biggest reason for that is virtualization. What differentiates the two editions is the number of Virtual Machines (VMs) that are entitled to be run with each edition. A Standard edition license will entitle you to run up to two VMs on up to two processors. A Datacenter edition license will entitle you to run an unlimited number of VMs on up to two processors. Each license of Standard and Datacenter covers two processors, so, for example, if you have a quad-processor host, you would purchase 2 x Two-Processor licenses. The Two-Processor license cannot be split up, meaning you can’t put one processor license on one server and the other processor license on another, nor can you combine a Standard and Datacenter license on the same host. The processor license does not include CALs; Windows CALs would still have to be purchased separately.

OK, now that I have dropped this knowledge on you, what should you expect moving forward? Let’s talk about pricing and what this new model is going to cost you. A Two-Processor license of Datacenter will retail for $4,809, which breaks down to $2,405 a CPU. The current retail price for a Windows 2008 R2 Datacenter per-processor license is $2,405, so nothing has changed there. For Windows 2012 Standard, a Two-Processor license retails for $882. Those of you who were accustomed to purchasing Windows 2008 R2 Enterprise at $2,358 MSRP to get the 4 VMs that came with it will notice that the price to get 4 VMs of Windows 2012 (2 x Two-Processor Windows 2012 Standard = $1,764) is actually less than what Windows 2008 R2 Enterprise costs. The issue will be for those who need Windows Standard for a physical server. Since there is no separate Windows 2012 license for physical servers, you’ll have to purchase the Two-Processor license. Currently, Windows 2008 R2 Standard edition runs for $726 retail, so you will be paying more to use Windows on physical servers.
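
For readers who like to see the arithmetic, here is a small Python sketch that puts those retail figures side by side. The prices and entitlement rules are the ones quoted in this post, not necessarily current Microsoft pricing.

```python
# Cost comparison using the retail prices quoted above. Standard: each
# two-processor license entitles 2 VMs; Datacenter: unlimited VMs per
# two-processor license.
PRICE_2012_STANDARD_2PROC = 882
PRICE_2012_DATACENTER_2PROC = 4809
PRICE_2008R2_ENTERPRISE = 2358   # old per-server license that included 4 VMs
PRICE_2008R2_STANDARD = 726      # old per-server Standard license

# Four VMs on a two-processor host under the new model vs. the old Enterprise SKU:
cost_4_vms_2012 = 2 * PRICE_2012_STANDARD_2PROC   # 2 Standard licenses = 4 VM entitlements
print(f"4 VMs, 2 x Windows 2012 Standard:  ${cost_4_vms_2012}")          # $1,764
print(f"4 VMs, Windows 2008 R2 Enterprise: ${PRICE_2008R2_ENTERPRISE}")  # $2,358

# A purely physical server now also requires the two-processor license:
print(f"Physical box, 2012 Standard:       ${PRICE_2012_STANDARD_2PROC}")  # $882
print(f"Physical box, 2008 R2 Standard:    ${PRICE_2008R2_STANDARD}")      # $726
```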

Once Windows 2012 is released, you’ll still be able to use prior versions, which is known as downgrade rights.  Windows 2012 Datacenter edition can downgrade to any prior version or lower edition.  Windows 2012 Standard edition gives you rights to downgrade to any prior version of Standard or Enterprise edition.

In addition, if you have current Software Assurance (SA) on your Windows 2008 R2 license you are entitled to Windows 2012. If you have Software Assurance on Datacenter edition you will be entitled to Windows 2012 Datacenter edition. Today a Datacenter edition license covers 1 processor and a Datacenter 2012 license will cover 2 processors, so for every two current Datacenter licenses with Software Assurance, you will receive one Windows 2012 Datacenter edition license. If you have Software Assurance on Enterprise edition, you will be entitled to receive 2 x Two-Processor Standard 2012 edition licenses, so that you still have coverage for 4 VMs. Lastly, if you have Software Assurance on Standard edition you’ll receive one Windows 2012 Standard edition license for each Standard edition license you own.

As you’re taking this news in, there are a few things I’d recommend considering. The first is that if you’re looking to purchase Windows over the next couple of months, prior to Windows 2012’s release, you should look at purchasing it with Software Assurance, because that will give you new version rights to Windows 2012 once it ships. Keep in mind you don’t have to load Windows 2012 right away, but having Software Assurance will give you access when you decide to. Also, there may be instances where you need to add VMs to your host, specifically those running Windows Standard, and the only way to add more VMs is to purchase additional Windows Standard licenses. Secondly, if you think you’ll be adding a substantial number of VMs in the future, but don’t want to invest in Datacenter today, you can purchase Windows Standard with Software Assurance through these participating license programs: Open Value, Select, and Enterprise Agreement. By doing so you will be eligible to “Step-Up” your Standard license to Datacenter. Step-Up is Microsoft’s term for an upgrade. This Step-Up license will allow you to upgrade from your Standard edition license to Datacenter edition, thus providing you unlimited VMs on that host. Again, the Standard license would have to have current Software Assurance and be purchased through the aforementioned licensing programs.

Obviously this is big news that will create many more questions. We’re here to assist and guide you through the purchase process, so feel free to reach out to your GreenPages Account Executive for more details.

Automation & Orchestration Part 1: What’s In A Name? That Which We Call a “Service”…

The phrases “service,” “abstraction,” & “automation & orchestration” are used a lot these days. Over the course of the next few blogs, I am going to describe what I think each phrase means and in the final blog I will describe how they all tie in together.

Let’s look at “service.” To me, when you trim off all the fat, that word means “something (from whom) that provides a benefit to something (to whom).” The first thing that comes to mind when I think of who provides me a service is a bartender. I like wine. They have wine behind the bar. I will pay them the price of a glass + 20% for them to fill that glass & move it from behind the bar to in front of me. It’s all about services these days. Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service. Professional services. Service level agreements. No shirt, no shoes, no service.

Within a company, there are many people working together to deliver a service. Some to external people & some to internal people. I want to examine an internal service because those tend to be much more loosely defined & documented. If a company sells an external service to a customer, chances are that service is very well defined, because that company needs to describe to the customer, in very clear terms, exactly what they are getting when they shell out money. If that service changes, careful consideration needs to be paid to the ways that service can add more benefit (i.e., make the company more money) and to which parts of that service will change or be removed. Think about how many “Terms of Service & Conditions” pamphlets you get from a credit card company and how many pages each one is.

As a consultant, it can take many, many hours to understand a service as it exists in a company today. Typically, the “something” that provides a benefit is the many people who work together to deliver that service. In order to define the service and its scope, you need to break it down into manageable pieces…let’s call them “tasks.” And those tasks can be complex, so you can break those down into “steps.” You will find that each task, with its one or more steps, which is part of a service, is usually performed by the same person over and over again. Or, if the task is performed a lot (many times per day), then that task can usually be executed by a member of a team and not just a single person. Having the capability internally for more than one person to perform a task also protects the company when Bob in accounting takes a sick day or when Bob in accounting takes home a pink slip. I’ll throw in a teaser for when I cover automation and orchestration…it would be ideal that not only Bob can do a task, but a computer as well (automation). That also may play into Bob getting a pink slip…but, again, more on that later. For now Bob doesn’t need to update his resume.
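
As a rough illustration of that decomposition, here is a minimal Python sketch of a service broken into tasks and steps, with the people who can perform each task recorded alongside it; the service, tasks, and names are invented for illustration.

```python
# Illustrative only: a service decomposed into tasks, each with steps and the
# people able to perform it. A task with a single performer is the "only Bob
# knows how" situation described above.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    steps: list[str]
    performers: list[str]   # more than one performer means no single point of failure

@dataclass
class Service:
    name: str
    tasks: list[Task] = field(default_factory=list)

    def single_points_of_failure(self) -> list[str]:
        """Tasks only one person knows how (or is allowed) to do."""
        return [t.name for t in self.tasks if len(t.performers) < 2]

onboarding = Service(
    name="New employee onboarding",
    tasks=[
        Task("Create accounts", ["request ID", "provision mailbox", "grant VPN"], ["Bob", "Alice"]),
        Task("Issue laptop", ["image machine", "enroll in management", "ship"], ["Bob"]),
    ],
)
print(onboarding.single_points_of_failure())   # ['Issue laptop']
```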

A lot of companies have not documented many, if any, of the internal services they deliver. I’m sure there is someone who knows the service from soup to nuts, but it’s likely they don’t know how to do every task (can’t) or may not have the authority/permission to do every task (shouldn’t). Determining who in a company performs what task(s) can be a big undertaking in and of itself. And then, once you find Bob (sorry to pick on you, Bob), it takes a lot of time for him to describe all the steps he does to complete a task. And once you put it on paper & show Bob, he remembers that he missed a step. And once you’ve pieced it all together and Bob says, “Yup, that about covers it,” you ask Bob what happens when something goes wrong and he looks at you and says, “Oh man, where do I begin?”

That last part is key. When things go well I call it the “Happy Day Scenario.” But things don’t always go well (ask the Yankees after the 2004 season) and just as, if not more, important in understanding a service is to know what to do when the Bob hits the fan. This part is almost never documented. Documentation is boring to lots of people and it’s hard enough for people to capture what the service *should* do let alone what it *could* do if something goes awry. So it’s a challenge to get people to recall and also predict what could go wrong. Documenting and regurgitating the steps of a business service “back” to the company is a big undertaking and very valuable to that company. Without knowing what Bob does today, it’s extremely hard to tell him how he can do it better.

Fun with Neologism in the Cloud Era

Having spent the last several blog posts on more serious considerations about cloud computing and the new IT era, I decided to lighten things up a bit. The term “cloud” has bothered me from the first time I heard it uttered, as the concept and definition are as nebulous as, well, a cloud. In the intervening years, while thoroughly boring my wife and friends with shop talk about the “cloud,” I came to realize that in order for cloud computing to become mainstream, “it” needs to have some way to translate to the masses.

Neologism is the process of creating new words using existing words, or combinations of existing words, to form a more descriptive term. In our industry neologisms have been used extensively, although many of us do not realize how these terms got coined. For example, the word “blog” is a combination of web and log. “Blog” was formed over time as the lexicon was adopted. It began with a new form of communicating across the Internet, known as a web log. “Web log” became “we blog” simply by moving the space between words one to the left. Now, regardless of who you talk to, the term “blog” is pretty much a fully formed concept. Similarly, the term “Internet” is a combination of “inter” (between) and “network,” hence meaning between networks.

Today, the term “cloud” has become so overused that confusion reigns (get it?) over everyone. So, in the spirit of clarity, here are a few cloud-era neologisms of my own:

Cloudable – meaning something that is conducive to leveraging cloud.  As in:  “My CRM application is cloudable “ or “We want to leverage data protection that includes cloudable capabilities”

Cloudiac – someone who is a huge proponent of cloud services. A combination of “Cloud” and “Maniac,” as in: “There were cloudiacs everywhere at Interop.” In the not too distant future, we very well may see parallels to the “Trekkie” phenomenon. Imagine a bunch of middle-aged IT professionals running around in costumes made of giant cotton balls and cardboard lightning bolts.

Cloudologist – an expert in cloud solutions.  Different from a Cloudiac, the Cloudologist actually has experience in developing and utilizing cloud based services.   This will lead to master’s degree programs in Cloudology.

Cloutonomous –  maintaining your autonomy over your systems and data in the cloud.  “I may be in the Cloud but I make sure I’m cloutonomous.”  Could refer to the consumer of the cloud services not being tied into long term services commitments that may inhibit their ability to move services in the event of a vendor failing to hit SLAs.

Cloud crawl – actions related to monitoring or reviewing your various cloud services. “I went cloud crawling today and everything was sweet.” A take-off on the common “pub crawl,” just not as fun and with no lingering after-effects.

Counter-cloud – a reference to the concept of “counter culture,” which dates back to hippie days of the 60s and 70s.  In this application, it would describe a person or business that is against utilizing cloud services mainly because it is the new trend, or because they feel that it’s the latest government conspiracy to control the world.

Global Clouding – IT’s version of Global Warming, except in this case the world isn’t becoming uninhabitable, IT is just becoming a bit fuzzy around the edges.  What will IT be like with the advent of Global Clouding?

Clackers – Cloud and Hacker.  Clackers are those nefarious, shadowy figures that focus on disruption of cloud services.  This “new” form of hacker will concentrate on capturing data in transit, traffic disruption/re-direction (i.e. DNS Changer anyone?), and platform incursion.

Because IT is so lexicon heavy, building up a stable of Cloud-based terminology is inevitable, and potentially beneficial in focusing the terminology further.  Besides, as Cloudiacs will be fond of saying… “resistance is futile.”

Do you have any Neologisms of your own? I’d love to hear some!

RECAP: HP Discover 2012 Event

If you are going to do something, make it matter.  That was the key phrase that was posted throughout the conference at HP Discover 2012 in Las Vegas a couple weeks ago.  With some of the new announcements, HP did just that.

One of the biggest announcements, in my opinion, is HP Virtual Connect Direct-Attached Fibre Channel Storage for 3PAR. In a nutshell, it helps to reduce your SAN infrastructure by eliminating switches and HBAs: you connect your BladeSystem servers directly to the 3PAR array. This gives you a single-layer FC storage network. Since you won’t have a fabric to manage, you can speed up your provisioning process by as much as 2.5X. Also, by removing the fabric layer, you can reduce latency by up to 55%.

This will allow organizations to reduce costs by eliminating the SAN fabric. It will save on operating costs while also cutting down on capital expenditure. It also scales with the “pay as you grow” methodology, allowing you to purchase only what you need.

Complexity is greatly decreased with the wire-once strategy.  If new servers are added to the Blade Chassis, they simply access the storage through the already connected cabling.

Virtual Connect Manager allows for a single pane of glass approach.  It can be used through a web interface or CLI, for those UNIX lovers.

The new trend in IT is Big Data. Some of the biggest customer challenges are the velocity and volume of data, the large variety and disparate sources of data, and the complex analytics required for maximizing the value of information. HP introduced Vertica 6, which addresses all of these.

Vertica 6 FlexStore has been expanded to allow access to any data, stored at any location, through any interface.  You can connect to Hadoop File Systems, existing databases, and data warehouses.  You can also access unstructured analysis platforms such as HP/Autonomy IDOL.

It also includes high-performance data analytics for the R statistical tool, natively and in parallel, without the in-memory and single-threaded limitations of R. Vertica 6 has also expanded its C++ SDK to add secure sandboxing of user-defined code.

Workload Management simplifies the user experience by enabling more diverse workloads.  Some users experienced up to a 40X speed increase on their queries.  Regardless of size, Workload Management balances all system resources to meet SLAs.

Vertica 6 software will run on the HP public cloud.  Web and mobile applications generate a ton of data.  This will allow business intelligence to quickly spot any trends that are developing and act accordingly.

Not to be overlooked are the enhancements made to the core components that are already part of the system.

Over the past few years, there has been a big interest in disk-to-disk backup and deduplication. HP’s latest solution in this space is the B6200 with StoreOnce Catalyst software. It has over 50 patents behind it and delivers world-record performance of 100TB/hr backups and 40TB/hr restores, which HP claims is 3X and 5X faster, respectively, than the next leading competitor.

The hardware is scalable.  It starts at 48TB (32TB usable) and can grow to 768TB (512TB usable).  With a typical deduplication rate of 20X, the system can provide extended data protection for up to 10PBs.
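
For the curious, the 10PB figure is just the usable capacity multiplied by the typical deduplication ratio; a quick Python check using the numbers above:

```python
# Sanity-checking the capacity claim quoted above.
usable_tb = 512      # maximum usable capacity
dedup_ratio = 20     # typical 20:1 deduplication
protected_pb = usable_tb * dedup_ratio / 1024
print(f"{usable_tb} TB usable x {dedup_ratio}:1 dedup = {protected_pb:.0f} PB protected")  # 10 PB
```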

This is a federated backup solution that allows you to move data from remote sites to multiple datacenters without having to deduplicate it again. It integrates with HP Data Protector, Symantec NetBackup, and Symantec Backup Exec, giving the administrator one console to manage all deduplication, backup, and disaster recovery operations.

The portfolio also includes smaller units for SMB customers. They take advantage of the same type of technologies allowing companies to meet those pesky backup windows.

As a leading HP Partner, GreenPages can assist you with these or any of the products in the HP portfolio.

By Mark Mychalczuk

The Private Cloud Strikes Back

Having read JP Rangaswami’s argument against private clouds (and his obvious promotion of his version of cloud), I have only to say that he’s looking for oranges in an apple tree. His entire premise is based on the idea that enterprises are wholly concerned with cost and sharing risk, when that couldn’t be further from the truth. Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility…something that the monolithic, leviathan-like enterprise IT systems of today definitely are not. He then jumps from cost to social enterprise as if there is a causal relationship there when, in fact, they are two separate discussions. I don’t doubt that if you are a consumer-facing (not just customer-facing) organization, it’s best to get on that social enterprise bandwagon, but if your main concern is how to better equip and provide the environment and tools necessary to innovate within your organization, the whole social thing is a red herring for selling you things that you don’t need.

The traditional status quo within IT is deeply encumbered by mostly manual processes—optimized for people carrying out commodity IT tasks such as provisioning servers and OSes—that cannot be optimized any further; therefore, a different, much better way had to be found. That way is the private cloud, which takes those commodity IT tasks, elevates them to automated and orchestrated, well-defined workflows, and then utilizes a policy-driven system to carry them out. Whether these workflows are initiated by a human or as a result of a specific set of monitored criteria, the system dynamically creates and recreates itself based on actual business and performance need—something that is almost impossible to translate into the public cloud scenario.
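
As a toy sketch of the policy-driven loop described here (the metrics, thresholds, and actions are invented for illustration, not drawn from any particular product), monitored criteria are evaluated against a policy and a workflow fires when the policy is breached:

```python
# Toy policy-driven automation loop: sample monitored criteria, compare to
# policy thresholds, and trigger an orchestrated workflow when exceeded.
# Everything here is a stand-in for real monitoring and orchestration systems.
import random

POLICY = {"cpu_percent": 80, "response_time_ms": 500}   # provision more capacity when exceeded

def sample_metrics() -> dict:
    # Stand-in for a real monitoring feed.
    return {"cpu_percent": random.randint(40, 100),
            "response_time_ms": random.randint(100, 900)}

def provision_workflow(reason: str) -> None:
    # Stand-in for an orchestrated workflow (clone template, configure, join pool, ...).
    print(f"workflow triggered: provision capacity ({reason})")

def evaluate_once() -> None:
    metrics = sample_metrics()
    for name, limit in POLICY.items():
        if metrics[name] > limit:
            provision_workflow(f"{name}={metrics[name]} exceeds {limit}")
            return
    print(f"within policy: {metrics}")

for _ in range(3):   # in practice this loop would run continuously
    evaluate_once()
```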

Not that public cloud cannot be leveraged where appropriate, but the enterprise’s requirement is much more granular and specific than any public cloud can or should allow…precisely because of JP’s point that providers must share the risk among many players, and that risk is generic by definition within the public cloud. Once you start creating one-off specific environments, the commonality is lost and so are the cost benefits, because now you are simply utilizing a private cloud whose assets are owned by someone else…sound like co-lo?

Finally, I wouldn’t expect someone whose main revenue source is based on the idea that a public cloud is better than a private cloud to say anything different than what JP has said, but I did expect some semblance of clarity as to where his loyalties lie…and it looks like they do not lie with the best interests of the enterprise customer.

Translating a Vision for IT Amid a “Severe Storm Watch”

IT departments adopt technology from two directions: from a directive handed down by the CIO, to a “rogue IT” suggestion or project from an individual user. The former represents top-down adoption, while the latter represents adoption from the bottom up. Oftentimes, there seems to be confusion somewhere in the middle, resulting in a smorgasbord of tools at one end, and a grand, ambitious strategy at the other end. This article suggests a framework to implement a vision through strategy, policy, process, procedure, and ultimately tools.

Vision for IT -> Strategies -> Policies -> Processes -> Procedures -> Tools and Automation

Revenue Generating Activities -> Business Process -> IT Services

As a solutions architect and consultant, I’ve met with many clients in the past few years. From director-level staff to engineers to support staff in the trenches, IT has taken on a language of its own. Every organization has its own acronyms, sure. Buzzwords and marketing hype strangle the English language inside the datacenter. Consider the range of experience present in many shops, and it is easy to imagine the confusion. The seasoned, senior executive talks about driving standards and reducing spend for datacenter floor space, and the excited young intern responds with telecommuting, tweets, and cloud computing, all in a proof-of-concept that is already in progress. What the…? Who’s right?

 

It occurred to me a while ago that there is a “severe storm watch” for IT. According to the National Weather Service, a “watch” is issued when conditions are favorable for [some type of weather chaos]. Well, in IT, more than in other departments, one can make these observations:

  • Generationally-diverse workforce
  • Diverse backgrounds of workers
  • Highly variable experience of workers
  • Rapidly changing products and offerings
  • High complexity of subject matter and decisions

My colleague, Geoff Smith, recently posted a five-part series (The Taxonomy of IT) describing the operations of IT departments. In the series, Geoff points out that IT departments take on different shapes and behaviors based on a number of factors. The series presents a thoughtful classification of IT departments and how they develop, with a framework borrowed from biology. This post presents a somewhat more tactical suggestion on how IT departments can deal with strategy and technology adoption.

Yet Another Framework

A quick search on Google shows a load of articles on Business and IT Alignment. There’s even a Wikipedia article on the topic. I hear it all the time, and I hate the term. This term suggests that “IT” simply does the bidding of “The Business,” whatever that may be. I prefer to see Business and IT Partnership. But anyway, let’s begin with a partnership within IT departments. Starting with tools, do you know the value proposition of all of the tools in your environment? Do you know about all of the tools in your environment?

 

A single Vision for IT should first translate into one or more Strategies. I’m thinking of a Vision statement for IT that looks something like the following:

“Acme IT exists as a competitive, prime provider of information technology services to enable Acme Company to generate revenue by developing, marketing, and delivering its products and services to its customers. Acme IT stays competitive by providing Acme Company with relevant services that are delivered with the speed, quality and reliability that the company expects. Acme IT also acts as a technology thought leader for the company, proactively providing services that help Acme Company increase revenue, reduce costs, attract new customers, and improve brand image.”

Wow, that’s quite a vision for an IT department. How would a CIO begin to deliver on a vision like that? Just start using VMware, and you’re all set! Not quite! Installing VMware might come all the way at the end of the chain… at “Tool A” in the diagram above.

First, we need one or more Strategies. One valid Strategy may indeed be to leverage virtualization to improve time to market for IT services, and reduce infrastructure costs by reducing the number of devices in the datacenter. Great ideas, but a couple of Policies might be needed to implement this strategy.

One Policy, Policy A in the above diagram, might be that all application development should use a virtual server. Policy B might mandate that all new servers will be assessed as virtualization candidates before physical equipment is purchased.

Processes then flow from Policies. Since I have a policy that mandates that new development should happen on a virtual infrastructure, eventually I should be able to make a good estimate of the infrastructure needed for my development efforts. My Capacity Management process could then requisition and deploy some amount of infrastructure in the datacenter before it is requested by a developer. You’ll notice that this process, Capacity Management, enables a virtualization policy for developers, and neatly links up with my strategy to improve time to market for IT services (through reduced application development time). Eventually, we could trace this process back to our single Vision for IT.

But we’re not done! Processes need to be implemented by Procedures. In order to implement a capacity management process properly, I need to estimate demand from my customers. My customers will be application developers if we’re talking about the policy that developers must use virtualized equipment. Most enterprises have some sort of way to handle this, so we’d want to look at the procedure that developer customers use to request resources. To enable all of this, the request and the measurement of demand, I may want to implement some sort of Tool, like a service catalog or a request portal. That’s the end of the chain – the Tool.

Following the discussion back up to Vision, we can see how the selection of a tool is justified by following the chain back to procedure, process, policy, strategy, and ultimately vision.
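
To make that traceability concrete, here is a minimal Python sketch in which each element records what it supports, so any tool can be justified by walking the chain back to the Vision; the entries mirror the Acme examples above and are purely illustrative.

```python
# Illustrative traceability chain: Tool -> Procedure -> Process -> Policy -> Strategy -> Vision.
chain = {
    "Tool: request portal / service catalog": "Procedure: developer resource request",
    "Procedure: developer resource request": "Process: Capacity Management",
    "Process: Capacity Management": "Policy A: new development uses virtual servers",
    "Policy A: new development uses virtual servers": "Strategy: leverage virtualization",
    "Strategy: leverage virtualization": "Vision: Acme IT, competitive prime provider of IT services",
}

def justify(item: str) -> None:
    """Walk from a tool (or any element) back up to the Vision."""
    while item in chain:
        print(f"{item}\n  -> supports {chain[item]}")
        item = chain[item]

justify("Tool: request portal / service catalog")
```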

This framework provides a simple alignment that can be used in IT departments for a number of advantages. One significant advantage is that it provides a common language for everyone in the IT department to understand the reasoning behind the design of a particular process, the need for a particular procedure, or the selection of a particular tool over another.

In a future blog post, I’ll cover the various other advantages of using this framework.

Food for Thought

  1. Do you see a proliferation of tools and a corresponding disconnect with strategy in your department?
  2. Who sets the vision and strategy for IT in your department?
  3. Is your IT department using a similar framework to rationalize tools?
  4. Do your IT policies link to processes and procedures?
  5. Can you measure compliance to your IT policies?

Where Is the Cloud Going? Try Thinking “Minority Report”

I read a news release (here) recently where NVidia is proposing to partition processing between on-device and cloud-located graphics hardware…here’s an excerpt:

“Kepler cloud GPU technologies shifts cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realizing their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”

As well as the split processing handled by the Silk browser on the Kindle Fire (see here), I started thinking about that “processing partitioning” strategy in relation to other aspects of computing, and cloud computing in particular. My thinking is that, over the next five to seven years (by 2020 at most), there will be several very important seismic shifts in computing dealing with at least four separate events: 1) user data becomes a centralized commodity that’s brokered by a few major players, 2) a new cloud-specific programming language is developed, 3) processing becomes “completely” decoupled from hardware and location, and 4) end user computing becomes based almost completely on SoC technologies (see here). The end result will be a world of data and processing independence never seen before that will allow us to live in that Minority Report world. I’ll describe the events and then will describe how all of them will come together to create what I call “pervasive personal processing” or P3.

User Data

Data about you, your reading preferences, what you buy, what you watch on TV, where you shop, etc. exist in literally thousands of different locations and that’s a problem…not for you…but for merchants and the companies that support them.  It’s information that must be stored and maintained and regularly refreshed for it to remain valuable, basically, what is being called “big data.” The extent of this data almost cannot be measured because it is so pervasive and relevant to everyday life. It is contained within so many services we access day in and day out and businesses are struggling to manage it. Now the argument goes that they do this, at great cost, because it is a competitive advantage to hoard that information (information is power, right?) and eventually, profits will arise from it.  Um, maybe yes and maybe no but it’s extremely difficult to actually measure that “eventual” profit…so I’ll go along with “no.” Now even though big data-focused hardware and software manufacturers are attempting to alleviate these problems of scale, the businesses who house these growing petabytes…and yes, even exabytes…of data are not seeing the expected benefits—relevant to their profits—as it costs money, lots of it.  This is money that is taken off the top line and definitely affects the bottom line.

Because of these imaginary profits (and the real loss), more and more companies will start outsourcing the “hoarding” of this data until the eventual state is that there are 2 or 3 big players who will act as brokers. I personally think it will be either the credit card companies or the credit rating agencies…both groups have the basic frameworks for delivering consumer profiles as a service (CPaaS) and charging for access rights. A big step toward this will be when Microsoft unleashes IDaaS (Identity as a Service) as part of integrating Active Directory into its Azure cloud. It’ll be a hurdle for them to convince the public to trust them, but I think they will eventually prevail.

These profile brokers will start using IDaaS because then they don’t have to have separate internal identity management systems (for separate data repositories of user data) for other businesses to access their CPaaS offerings. Once this starts to gain traction you can bet that the real data mining begins on your online, and offline, habits because your loyalty card at the grocery store will be part of your profile…as will your credit history and your public driving record and the books you get from your local library and…well, you get the picture. Once your consumer profile is centralized, all kinds of data feeds will appear because the profile brokers will pay for them. Your local government, always strapped for cash, will sell you out in an instant for some recurring monthly revenue.

Cloud-specific Programming

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely but, to-date, they have been entirely encapsulated within the local machine (or in some cases the nodes of a super computer or HPC cluster which, for our purposes, really is just a large single machine).  What this means is that the programs written for those systems need to know precisely where the functions will be run, what subsystems will run them, the exact syntax and context, etc.  One slight error or a small lag in the response time and the whole thing could crash or, at best, run slowly or produce additional errors.

But, what if you had a computer language that understood the cloud and took into account latency, data errors and even missing data?  A language that was able to partition processing amongst all kinds of different processing locations, and know that the next time, the locations may have moved?  A language that could guess at the best place to process (i.e. lowest latency, highest cache hit rate, etc.) but then change its mind as conditions change?

That language would allow you to specify a type of processing and then actively seek the best place for that processing to happen based on many different details…processing intensity, floating point, entire algorithm or proportional, subset or superset…and fully understand that, in some cases, it will have to make educated guesses about what the returned data will be (in case of unexpected latency).  It will also have to know that the data to be processed may exist in a thousand different locations such as the CPaaS providers, government feeds, or other providers for specific data types.  It will also be able to adapt its processing to the available processing locations such that it elegantly deprecates functionality…maybe based on a probability factor included in the language that records variables over time and uses that to guess where it will be next and line up the processing needed beforehand.  The possibilities are endless, but not impossible…which leads to…
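
As a toy illustration of the kind of placement decision such a language’s runtime would have to make, the Python sketch below scores candidate processing locations on measured latency and cache hit rate and picks the best one; the locations, numbers, and weights are all invented.

```python
# Toy placement decision: score candidate processing locations and choose the
# best one for a unit of work. A real runtime would re-run this continuously
# as latency, cache contents, and availability change.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    latency_ms: float       # measured round-trip latency
    cache_hit_rate: float   # 0.0-1.0, fraction of needed data already local

def score(loc: Location) -> float:
    # Lower latency and higher cache hit rate are better; weights are arbitrary.
    return loc.cache_hit_rate * 100 - loc.latency_ms

def choose(locations: list[Location]) -> Location:
    return max(locations, key=score)

observations = [
    Location("on-device SoC", latency_ms=1, cache_hit_rate=0.10),
    Location("nearby edge GPU", latency_ms=15, cache_hit_rate=0.60),
    Location("regional cloud", latency_ms=80, cache_hit_rate=0.95),
]
print("run this partition at:", choose(observations).name)
```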

Decoupled Processing and SoC

As can be seen by the efforts NVidia is making in this area, it will soon be that the processing of data will become completely decoupled from where that data lives or is used. What this is and how it will be done will rely on other events (see previous section), but the bottom line is that once it is decoupled, a whole new class of device will appear, in both static and mobile versions, based on System on a Chip (SoC) technology, which will allow deep processing density with very, very low power consumption. These devices will support multiple code sets across hundreds of cores and be able to intelligently communicate their capabilities in real time to distributed processing services that request their local processing services…whether over Wi-Fi, Bluetooth, IrDA, GSM, CDMA, or whatever comes next, the devices themselves will make the choice based on best use of bandwidth, processing request, location, etc. These devices will take full advantage of the cloud-specific computing languages to distribute processing across dozens and possibly hundreds of processing locations, and will hold almost no data because they don’t have to; everything exists someplace else in the cloud. In some cases these devices will be very small, the size of a thin watch for example, but they will be able to process the equivalent of what a supercomputer can do because they don’t do all of the processing, only what makes sense for their location and capabilities, etc.

These decoupled processing units, Pervasive Personal Processing or P3 units, will allow you to walk up to any workstation or monitor or TV set…anywhere in the world…and basically conduct your business as if you were sitting in front of your home computer. All of your data, your photos, your documents, and your personal files will be instantly available in whatever way you prefer. All of your history for whatever services you use, online and offline, will be directly accessible. The memo you left off writing that morning in the Houston office will be right where you left it, on that screen you just walked up to in the hotel lobby in Tokyo the next day, with the cursor blinking in the middle of the word you stopped on.

Welcome to Minority Report.

Cincinnati Bell Launches Cloud Services With Apptix and Parallels

Cincinnati Bell (NYSE: CBB) today announced in a press release the expansion of its portfolio of telecommunications and IT services for businesses with the addition of cloud-based Microsoft® Communication and Collaboration Solutions powered by hosted business services provider Apptix® (OSE: APP) and Parallels, the leading provider of cloud service delivery software. These new solutions will allow Cincinnati Bell to more effectively serve small & medium businesses, as well as key industries including healthcare, government, and education.

 

According to the 2012 Parallels SMB Cloud Insights™ report, businesses are increasingly turning to cloud solutions such as hosted communications and collaboration services. In the past year, it is estimated that more than one million SMBs in the United States have started using some form of cloud services.

 

“Purchasing and maintaining software and hardware can be daunting and expensive for many SMB customers,” said Stuart Levinsky, General Manager of Cloud Computing at Cincinnati Bell. “Cloud Solutions from Cincinnati Bell allow businesses to focus on what’s important to them – their customers – while letting us do the heavy lifting to provide a proven, reliable communications network and the top cloud-based services available anywhere.”

 

Cincinnati Bell’s new Cloud Solutions – including hosted Microsoft Exchange email with mobile synchronization and hosted Microsoft SharePoint – keep employees connected on the go, enhance productivity, and reduce the cost of IT services. Optional archiving and compliance features help businesses meet stringent regulatory requirements, such as HIPAA, PCI, FRCP, and SOX.

“We’re pleased that Cincinnati Bell selected Apptix to support their strategic move into the cloud,” said David Ehrhardt, president and chief executive officer of Apptix. “Our partner program reflects Apptix’s extensive experience in the hosted services market, providing everything our partners need to successfully transition into the cloud market. Apptix offers our channel partners flexible business models, diversified solution offerings, and dedicated sales, marketing, and support resources and staff to fast-track their revenue growth from cloud-based solutions. ”

 

“Cloud services represent a significant growth opportunity for communication service providers such as Cincinnati Bell,” said Birger Steen, CEO of Parallels. “We are pleased to join with our valued partners Cincinnati Bell and Apptix as they use Parallels Automation to rapidly syndicate and deliver cloud services.”

 

For more information about Cincinnati Bell’s new Cloud Solutions for business customers, visit www.cincinnatibell.com/cloud.

The Encrypted Elephant in the Cloud Room

Encrypting data in the cloud is tricky and defies long-held best practices regarding key management. New kid on the block Porticor aims to change that.


Anyone who’s been around cryptography for a while understands that secure key management is a critical foundation for any security strategy involving encryption. Back in the day it was SSL, and an entire industry of solutions grew up specifically aimed at protecting the key to the kingdom – the master key. Tamper-resistant hardware devices are still required for some US Federal security standards under the FIPS banner, with specific security protections at the network and software levels providing additional assurance that the ever important key remains safe.

In many cases it’s advised that the master key is not even kept on the same premises as the systems that use it. It must be locked up, safely, offsite; transported via a secure briefcase, handcuffed to a security officer and guarded by dire wolves. With very, very big teeth.

No, I am not exaggerating. At least not much. The master key really is that important to the security of cryptography.

That’s why encryption in the cloud is such a tough nut to crack. Where, exactly, do you store the keys used to encrypt those Amazon S3 objects? Where, exactly, do you store the keys used to encrypt disk volumes in any cloud storage service?

Start-up Porticor has an answer, one that breaks (literally and figuratively) traditional models of key management and offers a pathway to a more secure method of managing cryptography in the cloud.

SPLIT-KEY ENCRYPTION

Porticor is a combination SaaS / IaaS solution designed to enable encryption of data at rest in IaaS environments, currently available on AWS and other clouds. It’s a combination not just in deployment model – which is rapidly becoming the norm for cloud-based services – but in architecture as well.

To avoid violating best practices with respect to key management – i.e., you don’t store the master key right next to the data it’s been used to encrypt – Porticor has developed a technique it calls “Split-Key Encryption.”

Data encryption comprises, you’ll recall, the execution of an encryption algorithm on the data using a secret key, the result of which is ciphertext. The secret key is the, if you’ll pardon the pun, secret to gaining access to that data once it has been encrypted. Storing it next to the data, then, is obviously a Very Bad Idea™ and as noted above the industry has already addressed the risk of doing so with a variety of solutions. Porticor takes a different approach by focusing on the security of the key not only from the perspective of its location but of its form.

The secret master key in Porticor’s system is actually a mathematical combination of a master key generated on a per-project (disk volumes or S3 objects) basis and a unique key created by the Porticor Virtual Key Management™ (PVKM™) system. The master key is half of the real key, and the PVKM-generated key is the other half. Only by combining the two – mathematically – can you discover the true secret key needed to work with the encrypted data.

The PVKM-generated key is stored in Porticor’s SaaS-based key management system, while the master keys are stored in the Porticor virtual appliance, deployed in the cloud along with the data it’s protecting.

The fact that the secret key can only be derived algorithmically from the two halves enhances security by making it impossible to find the actual encryption key from just one of the halves, since the math used removes all hints to the value of that key. It removes the risk of someone recreating the secret key unless they have both halves at the same time. The math could be a simple concatenation, but it could also be a more complicated algebraic equation; it could ostensibly even be different for each set of keys, depending on the lengths to which Porticor wants to go to minimize that risk.
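
Porticor hasn’t published the exact math, but a simple way to see the split-key principle is XOR-based secret splitting: either half alone looks random and reveals nothing, and only combining both recovers the real key. The Python sketch below is purely illustrative and is not Porticor’s implementation.

```python
# Illustrative split-key scheme (NOT Porticor's actual math): split a 256-bit
# data-encryption key into two shares with XOR. One share could live with the
# virtual appliance, the other in the PVKM service; only both together recover the key.
from __future__ import annotations
import secrets

def split(secret_key: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(secret_key))                # "master key" half
    share_b = bytes(x ^ y for x, y in zip(secret_key, share_a))   # "PVKM" half
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(share_a, share_b))

key = secrets.token_bytes(32)      # 256-bit symmetric key
a, b = split(key)
assert combine(a, b) == key        # both halves together recover the key
assert a != key and b != key       # neither half alone is the key
print("split-key round trip OK")
```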

Still, some folks might be concerned that the master key exists in the same environment as the data it ultimately protects. Porticor intends to address that by moving to a partially homomorphic key encryption scheme.

HOMOMORPHIC KEY ENCRYPTION

If you aren’t familiar with homomorphic encryption, there are several articles I’d encourage you to read, beginning with “Homomorphic Encryption” by Technology Review followed by Craig Stuntz’s “What is Homomorphic Encryption, and Why Should I Care?”  If you can’t get enough of equations and formulas, then wander over to Wikipedia and read its entry on Homomorphic Encryption as well.

Porticor itself has a brief discussion of the technology, but it is not nearly as deep as the aforementioned articles.

In a nutshell (in case you can’t bear to leave this page) homomorphic encryption is the fascinating property of some algorithms to work both on plaintext as well as on encrypted versions of the plaintext and come up with the same result. Executing the algorithm against encrypted data and then decrypting it gives the same result as executing the algorithm against the unencrypted version of the data. 
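
A compact way to see that property is textbook (unpadded) RSA, which is multiplicatively homomorphic. The toy Python sketch below uses tiny, insecure numbers and is not Porticor’s scheme; it just demonstrates that operating on ciphertexts and then decrypting gives the same answer as operating on the plaintexts.

```python
# Toy demonstration of a homomorphic property: with textbook RSA,
# E(m1) * E(m2) mod n decrypts to (m1 * m2) mod n. Insecure, tiny parameters.
n, e, d = 3233, 17, 2753   # classic small example: p=61, q=53

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 9
c_product = (encrypt(m1) * encrypt(m2)) % n    # operate only on ciphertexts
assert decrypt(c_product) == (m1 * m2) % n     # same result as multiplying plaintexts
print("decrypt(E(7) * E(9)) =", decrypt(c_product))   # 63
```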

So, what Porticor plans to do is apply homomorphic encryption to the keys, ensuring that the actual keys are no longer stored anywhere – unless you remember to tuck them away someplace safe or write them down. The algorithms for joining the two keys are performed on the encrypted versions of the keys, resulting in an encrypted symmetric key specific to one resource – a disk volume or S3 object.

The resulting system ensures that:

  • No keys are ever on a disk in plain form
  • Master keys are never decrypted, and so they are never known to anyone outside the application owner themselves
  • The “second half” of each key (PVKM stored) is also never decrypted, and is never even known to anyone (not even Porticor)
  • Symmetric keys for a specific resource exist in memory only, and are decrypted for use only when the actual data is needed, then they are discarded

This effectively eliminates one more argument against cloud – that keys cannot adequately be secured.

In a traditional data encryption solution, the only thing you need is the secret key to unlock the data. Using Porticor’s split-key technology, you need the PVKM key, the master key, and the math used to recombine them. Layer atop that homomorphic key encryption to ensure the keys don’t actually exist anywhere, and you have a rejoinder to the claim that secure data and cloud simply cannot coexist.

In addition to the relative newness of the technique (and the nature of being untried at this point) the argument against homomorphic encryption of any kind is a familiar one: performance. Cryptography in general is by no means a fast operation and there is more than a decade’s worth of technology in the form of hardware acceleration (and associated performance tests) specifically designed to remediate the slow performance of cryptographic functions. Homomorphic encryption is noted to be excruciatingly slow and the inability to leverage any kind of hardware acceleration in cloud computing environments offers no relief. Whether this performance penalty will be worth the additional level of security such a system adds is largely a matter of conjecture and highly dependent upon the balance between security and performance required by the organization.
