Monthly Archives: June 2012
Cloud Computing: The Compliant Cloud at Cloud Expo New York
Many organizations have embraced, or are considering, the benefits of cloud computing – speed, flexibility, increased expertise, shared workload, reduced costs, etc. The benefits are many – but so are the risks. What are the threats to Cloud security? Which parties assume responsibility for securing the environment? What about the data? Which type of cloud deployment offers superior security benefits?
In his session at the 10th International Cloud Expo, Dr. Nataraj (Raj) Nagaratnam, IBM Distinguished Engineer and CTO for Security Solutions in IBM Security Systems, will examine Cloud Computing from a security and compliance perspective and will help Cloud Expo delegates to understand the three major types of cloud deployment.
Dell’s Testing ARM Servers
Not waiting for Calxeda, Dell is developing its own low-power ARM-based microservers.
The dense, cheap widgets aren’t generally available; they aren’t ready for prime time yet.
Instead, Dell has a seed program called Copper – one that won’t brighten Intel’s mood any, since Dell is the second-largest maker of x86 servers behind HP, and HP is also skipping down the ARM path. What’s more, Dell, at least, is ultimately contemplating the enterprise mainstream despite the risk of cannibalization.
Dell said Tuesday morning that it has shipped ARM-based clusters to a few “hyperscale” customers for evaluation. It is also putting demonstration clusters at Dell Solution Centers worldwide – as well as at the Texas Advanced Computing Center, the supercomputing center at the University of Texas at Austin – where ISVs can access them remotely and develop the nascent ARM server ecosystem.
The Likelihood Theorem
When deciding where and how to spend your IT dollars, one question that comes up consistently is how far down the path of redundancy and resiliency you should build your solution, and where it crosses the threshold from a necessity to a nice-to-have because-it’s-cool. Defining your relative position on this path has impacts in all areas of IT, including technology selection, implementation design, policy and procedure definition, and management requirements. Therefore, I’ve developed the Likelihood (LH) Theorem to assist with identifying where that position is relative to your specific situation. The LH is not a financial criterion, nor is it directly an ROI metric. However, it can be used to help determine the impact of making certain decisions in the design process.
Prior to establishing the components that make up your LH ratio, consider that at the start, with a completely blank slate, we all have the same LH. True, you could argue that someone establishing a system in Kansas doesn’t have to worry about a tsunami, but they do have to consider tornadoes. Besides, the preparation for that level of regional, long-term impact would be very similar regardless of the root cause.
The Likelihood Theorem starts with the concept of an Event (E). Each E has its own unique LH. So initially:
LH = E
Next, apply any minimum standards that you define for systems included in your environment. Call this the Foundation Factor (FF). If you define an FF, then you can reduce LH by some factor, eliminating certain events from consideration. For example, your FF for server hardware may be redundant power supplies, NICs, and RAID. For network connectivity, it may be redundant paths. If using SaaS for business-critical functions, it may be ISP redundancy via multi-homing and link load balancing. Therefore:
LH = E - FF
Anyone who has been in this industry (or been a consumer of IT) for more than five minutes knows that even with a baseline established, things happen. This is known as the Wild Card Effect (WCE). One key note here is that all WCEs are in some form potentially controllable by the business. For hardware, this may be the difference between purchasing from Tier 1 and Tier 2 vendors (i.e., lower-quality components or lower mean-time-to-failure rates). Another WCE may be the available budget for the solution. There may be multiple WCEs in any scenario, and all WCEs add back to the LH ratio:
WCE1 + WCE2 + WCE3 + … = WCEn
And so:
LH = E - FF + WCEn
At this point, we have accounted for the event in question, reduced our risk profile by our minimum standards, and adjusted for wild cards that are beyond our minimum standards but that we could address if we had the authority to make certain decisions. Now we need to consider the impacts associated with the event in question. Is the event singular in nature, or is it potentially repetitive? The LH related to a regional disaster would be singular; however, if we are considering telecommunication outages, then repetitive is more reasonable. So we take the equation and multiply it by the potential frequency (FQ):
LH = (E - FF + WCEn) * FQ
The last factor in LH is the length of time that the event in question could impact the environment. This comes into play if the system in question is transitory, is an interim step to a new solution, or has a limited expected lifecycle. The length of time that the event is possible can shape how much we invest in preventing it:
LH = ((E - FF + WCEn) * FQ) / Time
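Before weighing the design questions below, it may help to see the whole formula in one place. Here is a minimal sketch in Python; the function name, the scores, and the telecom-outage scenario are all hypothetical, chosen only to illustrate how the factors interact rather than to suggest real values.

# A minimal, hypothetical sketch of the full LH formula:
# LH = ((E - FF + WCEn) * FQ) / Time
# All values are relative scores, not precise measurements.

def likelihood(event_risk, foundation_factor, wild_cards, frequency, time_horizon):
    """Return a relative LH score for a single event.

    event_risk        -- baseline risk of the event (E)
    foundation_factor -- risk eliminated by your minimum standards (FF)
    wild_cards        -- iterable of wild-card adjustments (WCE1..WCEn)
    frequency         -- how often the event could recur (FQ)
    time_horizon      -- how long the system is exposed (Time)
    """
    wce_n = sum(wild_cards)  # WCE1 + WCE2 + ... = WCEn
    return ((event_risk - foundation_factor + wce_n) * frequency) / time_horizon

# Example: a telecom outage (a repetitive event) for a five-year solution,
# with a budget cut (WCE1) and a weaker support contract (WCE2) adding risk back.
lh = likelihood(
    event_risk=10,        # E: relative severity of the outage
    foundation_factor=6,  # FF: multi-homing and link load balancing
    wild_cards=[2, 1],    # WCE1 and WCE2
    frequency=4,          # FQ: could recur roughly four times over the lifecycle
    time_horizon=5,       # Time: five-year expected lifecycle
)
print(lh)  # (10 - 6 + 3) * 4 / 5 = 5.6

Note how the score collapses toward zero when the Foundation Factor fully covers the event and no wild cards apply – the same outcome the first two questions below are probing for.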
So, in thinking about how to approach your design, consider these factors: What event are you trying to avoid? Do your minimum specifications eliminate the possibility of the event occurring (E = FF)? What if you had to reduce your specifications to meet a lower budget (WCE1) or use a solution with an inherently higher ratio of failures or lackluster support (WCE2 and WCE3)? Can you reduce those wild cards if the event is not fully covered by your minimum standards (lower total WCEn)? Will the event be a one-time thing, or could it happen repeatedly over the lifecycle of the solution?
I’m not suggesting that you can assign precise numerical values to these factors, but in terms of raising or lowering the likelihood of an event happening, these criteria are key indicators. Using this formula is a way to ensure that, working within the known constraints placed on us by the business, we have maximized our ability to avoid specific events and reduced the likelihood of those we can realistically address.
Why did you resist the name “cloud computing”?
The question, of course, could be directed to none other than Larry Ellison. While Salesforce.com founder Marc Benioff took the term and made it a cornerstone of the company’s CRM marketing, Ellison and Oracle took umbrage at what they saw as the re-invention of an operating model that had been around for a while.
At the recent D10 conference, Kara Swisher interviewed him about this famous brush-off of the term cloud computing.
“I objected to people saying, ‘Oh my God, we just invented cloud computing,’” said Ellison.
He also said that while he resisted the term, he did agree consumers and the general public needed a simpler way to understand and use computing resources. He thought the term was overly hyped and very promising at the same time.
“People said the PC would replace the mainframe. But IBM still does mainframes. PCs are more important than mainframes. I would argue that smartphones are …
Healthcare Cloud Security Needs
During the past few months, we’ve seen more and more healthcare-oriented organizations reviewing cloud capabilities, migrating applications, and planning to migrate additional applications to the cloud. The challenge is not a simple one – on one hand, the cloud approach is very appealing. Its flexibility (and cost) increases accessibility to information and […]
CloudCIO.ca – Cloud CIO Canada
We’re finalizing our membership options for the CCN, including one for ‘Cloud CIO Canada’ – CloudCIO.ca.
Cloud CIO is a global set of best practices we’re building, and this will localize them for Canada, providing the ideal kick-starter knowledge base for Canadian CIOs who want a learning exercise as much as an industry engagement.
It also acts as a showcase of pioneering Canadian CIOs who are embracing Cloud Computing in a wide variety of ways. This will share best practices in key areas like contracting and data privacy, and it’s also pivotal to efforts to boost the Digital Economy.