Tag Archives: cloud

RECAP: HP Discover 2012 Event

If you are going to do something, make it matter.  That was the key phrase that was posted throughout the conference at HP Discover 2012 in Las Vegas a couple weeks ago.  With some of the new announcements, HP did just that.

One of the biggest announcements, in my opinion, is HP Virtual Connect Direct-Attached Fibre Channel Storage for 3PAR. In a nutshell, it reduces your SAN infrastructure by eliminating switches and HBAs: you connect your BladeSystem servers directly to the 3PAR array, giving you a single-layer FC storage network.  Since there is no fabric to manage, provisioning can be as much as 2.5X faster, and removing the fabric layer can cut latency by up to 55%.

This allows organizations to reduce costs by eliminating the SAN fabric: capital expenditure drops because there is less hardware to buy, and operating costs drop because there is less to manage.  It also scales with a “pay as you grow” model, letting you purchase only what you need.

Complexity is greatly decreased with the wire-once strategy.  If new servers are added to the Blade Chassis, they simply access the storage through the already connected cabling.

Virtual Connect Manager allows for a single pane of glass approach.  It can be used through a web interface or CLI, for those UNIX lovers.

The new trend in IT is Big Data.  Some of the biggest customer challenges are the velocity and volume of data, the large variety and disparate sources of data, and the complex analytics that are required for maximizing the value of information.  HP introduced Vertica 6, which addresses all of these challenges.

Vertica 6 FlexStore has been expanded to allow access to any data, stored at any location, through any interface.  You can connect to Hadoop File Systems, existing databases, and data warehouses.  You can also access unstructured analysis platforms such as HP/Autonomy IDOL.

It also includes high-performance data analytics for the R statistical tool, run natively and in parallel without R’s in-memory and single-threaded limitations. Vertica 6 has also expanded its C++ SDK to add secure sandboxing of user-defined code.

Workload Management simplifies the user experience by enabling more diverse workloads.  Some users experienced up to a 40X speed increase on their queries.  Regardless of size, Workload Management balances all system resources to meet SLAs.

Vertica 6 software will run on the HP public cloud.  Web and mobile applications generate a ton of data.  This will allow business intelligence to quickly spot any trends that are developing and act accordingly.

Not to be overlooked are the enhancements made to the core components that are already part of the system.

Over the past few years, there has been a big interest in disk-to-disk backup and deduplication.  HP’s latest solution in this space is the B6200 with StoreOnce Catalyst software.  Backed by over 50 patents, it delivers world-record performance of 100TB/hr backups and 40TB/hr restores, which HP claims is 3X and 5X faster, respectively, than the next leading competitor.

The hardware is scalable.  It starts at 48TB (32TB usable) and can grow to 768TB (512TB usable).  With a typical deduplication rate of 20X, the system can provide extended data protection for up to 10PB.
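As a quick sanity check on those numbers, the advertised protected capacity works out as follows (a back-of-the-envelope sketch using only the quoted figures):

```python
# Back-of-the-envelope check of the quoted StoreOnce B6200 capacity figures.
usable_tb = 512        # maximum usable capacity, in TB
dedupe_ratio = 20      # "typical" deduplication rate quoted above

protected_tb = usable_tb * dedupe_ratio   # logical data protected
protected_pb = protected_tb / 1024        # convert TB to PB

print(f"{protected_tb} TB protected, roughly {protected_pb:.0f} PB")
# -> 10240 TB protected, roughly 10 PB
```

Actual dedupe ratios vary widely by data type, so treat the 20X figure (and therefore the 10PB number) as a best-case marketing average.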

This is a federated backup solution that lets you move data from remote sites to multiple datacenters without having to deduplicate it again.  It integrates with HP Data Protector, Symantec NetBackup, and Symantec Backup Exec, giving the administrator one console to manage all deduplication, backup, and disaster recovery operations.

The portfolio also includes smaller units for SMB customers. They take advantage of the same type of technologies allowing companies to meet those pesky backup windows.

As a leading HP Partner, GreenPages can assist you with these or any of the products in the HP portfolio.

By Mark Mychalczuk

The Private Cloud Strikes Back

Having read JP Rangaswami’s argument against private clouds (and his obvious promotion of his own version of cloud), I have only to say that he’s looking for oranges in an apple tree.  His entire premise is that enterprises are wholly concerned with cost and sharing risk, which couldn’t be further from the truth.  Yes, cost is indeed a factor, as is sharing risk, but a bigger and more important factor facing the enterprise today is agility and flexibility…something the monolithic, leviathan-like enterprise IT systems of today definitely are not. He then jumps from cost to social enterprise as if there were a causal relationship there when, in fact, they are two separate discussions.  I don’t doubt that if you are a consumer-facing (not just customer-facing) organization, it’s best to get on that social enterprise bandwagon, but if your main concern is how to better equip your organization and provide the environment and tools necessary to innovate, the whole social thing is a red herring for selling you things you don’t need.

The traditional status quo within IT is deeply encumbered by mostly manual processes—optimized for people carrying out commodity IT tasks such as provisioning servers and operating systems—that cannot be optimized any further, so a different, much better way had to be found.  That way is the private cloud, which takes those commodity IT tasks, elevates them into automated, orchestrated, well-defined workflows, and then uses a policy-driven system to carry them out.  Whether these workflows are initiated by a human or by a specific set of monitored criteria, the system dynamically creates and recreates itself based on actual business and performance need—something that is almost impossible to translate into the public cloud scenario.

Not that the public cloud cannot be leveraged where appropriate, but the enterprise’s requirements are much more granular and specific than any public cloud can or should allow…which goes to JP’s own point that public cloud providers must share risk among many players, and that risk is generic by definition.  Once you start creating one-off specific environments, the commonality is lost and the cost benefit with it, because now you are simply utilizing a private cloud whose assets are owned by someone else…sound like co-lo?

Finally, I wouldn’t expect someone whose main revenue source is based on the idea that a public cloud is better than a private cloud to say anything different than what JP has said, but I did expect some semblance of clarity as to where his loyalties lie…and it looks like it’s not with the best interests of the enterprise customer.

Translating a Vision for IT Amid a “Severe Storm Watch”

IT departments adopt technology from two directions: a directive from the CIO, or a “rogue IT” suggestion or project from an individual user. The former is top-down adoption; the latter is bottom-up. Oftentimes there is confusion somewhere in the middle, resulting in a smorgasbord of tools at one end and a grand, ambitious strategy at the other. This article suggests a framework to implement a vision through strategy, policy, process, and ultimately tools.

Vision for IT -> Strategies -> Policies -> Processes -> Procedures -> Tools and Automation

Revenue Generating Activities -> Business Process -> IT Services

As a solutions architect and consultant, I’ve met with many clients in the past few years. From director-level staff to engineers to support staff in the trenches, IT has taken on a language of its own. Every organization has its own acronyms, sure. Buzzwords and marketing hype strangle the English language inside the datacenter. Consider the range of experience present in many shops, and it is easy to imagine the confusion. The seasoned, senior executive talks about driving standards and reducing spend for datacenter floor space, and the excited young intern responds with telecommuting, tweets, and cloud computing, all in a proof-of-concept that is already in progress. What the…? Who’s right?

 

It occurred to me a while ago that there is a “severe storm watch” for IT. According to the National Weather Service, a “watch” is issued when conditions are favorable for [some type of weather chaos]. Well, in IT, more than in other departments, one can make these observations:

  • Generationally-diverse workforce
  • Diverse backgrounds of workers
  • Highly variable experience of workers
  • Rapidly changing products and offerings
  • High complexity of subject matter and decisions

My colleague, Geoff Smith, recently posted a five-part series (The Taxonomy of IT) describing the operations of IT departments. In the series, Geoff points out that IT departments take on different shapes and behaviors based on a number of factors. The series presents a thoughtful classification of IT departments and how they develop, with a framework borrowed from biology. This post presents a somewhat more tactical suggestion on how IT departments can deal with strategy and technology adoption.

Yet Another Framework

A quick search on Google shows a load of articles on Business and IT Alignment. There’s even a Wikipedia article on the topic. I hear it all the time, and I hate the term. This term suggests that “IT” simply does the bidding of “The Business,” whatever that may be. I prefer to see Business and IT Partnership. But anyway, let’s begin with a partnership within IT departments. Starting with tools, do you know the value proposition of all of the tools in your environment? Do you know about all of the tools in your environment?

 

A single Vision for IT should first translate into one or more Strategies. I’m thinking of a Vision statement for IT that looks something like the following:

“Acme IT exists as a competitive, prime provider of information technology services to enable Acme Company to generate revenue by developing, marketing, and delivering its products and services to its customers. Acme IT stays competitive by providing Acme Company with relevant services that are delivered with the speed, quality and reliability that the company expects. Acme IT also acts as a technology thought leader for the company, proactively providing services that help Acme Company increase revenue, reduce costs, attract new customers, and improve brand image.”

Wow, that’s quite a vision for an IT department. How would a CIO begin to deliver on a vision like that? Just start using VMware, and you’re all set! Not quite! Installing VMware might come all the way at the end of the chain… at “Tool A” in the diagram above.

First, we need one or more Strategies. One valid Strategy may indeed be to leverage virtualization to improve time to market for IT services, and reduce infrastructure costs by reducing the number of devices in the datacenter. Great ideas, but a couple of Policies might be needed to implement this strategy.

One Policy, Policy A in the above diagram, might be that all application development should use a virtual server. Policy B might mandate that all new servers will be assessed as virtualization candidates before physical equipment is purchased.

Processes then flow from Policies. Since I have a policy that mandates that new development should happen on a virtual infrastructure, eventually I should be able to make a good estimate of the infrastructure needed for my development efforts. My Capacity Management process could then requisition and deploy some amount of infrastructure in the datacenter before it is requested by a developer. You’ll notice that this process, Capacity Management, enables a virtualization policy for developers, and neatly links up with my strategy to improve time to market for IT services (through reduced application development time). Eventually, we could trace this process back to our single Vision for IT.

But we’re not done! Processes need to be implemented by Procedures. In order to implement a capacity management process properly, I need to estimate demand from my customers. My customers will be application developers if we’re talking about the policy that developers must use virtualized equipment. Most enterprises have some sort of way to handle this, so we’d want to look at the procedure that developer customers use to request resources. To enable all of this, the request and the measurement of demand, I may want to implement some sort of Tool, like a service catalog or a request portal. That’s the end of the chain – the Tool.

Following the discussion back up to Vision, we can see how the selection of a tool is justified by following the chain back to procedure, process, policy, strategy, and ultimately vision.
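As a toy illustration of that traceability (all of the names here are hypothetical, echoing the Acme example above), the chain can be modeled as simple parent links, so any tool can be walked back up to the vision:

```python
# Hypothetical traceability chain: each item points to the thing that justifies it.
justified_by = {
    "Tool: request portal": "Procedure: developer resource request",
    "Procedure: developer resource request": "Process: capacity management",
    "Process: capacity management": "Policy: develop on virtual servers",
    "Policy: develop on virtual servers": "Strategy: leverage virtualization",
    "Strategy: leverage virtualization": "Vision for IT",
}

def trace(item):
    """Walk an item back up the chain until we reach the Vision."""
    chain = [item]
    while item in justified_by:
        item = justified_by[item]
        chain.append(item)
    return chain

print(" -> ".join(trace("Tool: request portal")))
```

A tool (or process, or policy) that cannot be traced back to the vision this way is a candidate for the “proliferation of tools” problem raised in the questions below.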

This framework provides a simple alignment that can be used in IT departments for a number of advantages. One significant advantage is that it provides a common language for everyone in the IT department to understand the reasoning behind the design of a particular process, the need for a particular procedure, or the selection of a particular tool over another.

In a future blog post, I’ll cover the various other advantages of using this framework.

Food for Thought

  1. Do you see a proliferation of tools and a corresponding disconnect with strategy in your department?
  2. Who sets the vision and strategy for IT in your department?
  3. Is your IT department using a similar framework to rationalize tools?
  4. Do your IT policies link to processes and procedures?
  5. Can you measure compliance to your IT policies?

Where Is the Cloud Going? Try Thinking “Minority Report”

I read a news release (here) recently where NVidia is proposing to partition processing between on-device and cloud-located graphics hardware…here’s an excerpt:

“Kepler cloud GPU technologies shifts cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realizing their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”

As well as the split processing handled by the Silk browser on the Kindle Fire (see here), I started thinking about that “processing partitioning” strategy in relation to other aspects of computing, and cloud computing in particular.  My thinking is that, over the next five to seven years (by 2020 at most), there will be several very important seismic shifts in computing, driven by at least four separate events:  1) user data becomes a centralized commodity brokered by a few major players,  2) a new cloud-specific programming language is developed, 3) processing becomes “completely” decoupled from hardware and location, and 4) end-user computing becomes based almost completely on SoC technologies (see here).  The end result will be a level of data and processing independence never seen before, one that will allow us to live in that Minority Report world.  I’ll describe the events and then describe how they will all come together to create what I call “pervasive personal processing,” or P3.

User Data

Data about you (your reading preferences, what you buy, what you watch on TV, where you shop, etc.) exists in literally thousands of different locations, and that’s a problem…not for you, but for merchants and the companies that support them.  It’s information that must be stored, maintained, and regularly refreshed to remain valuable: basically, what is being called “big data.” The extent of this data almost cannot be measured, because it is so pervasive and relevant to everyday life. It is contained within so many services we access day in and day out, and businesses are struggling to manage it. Now, the argument goes that they do this, at great cost, because it is a competitive advantage to hoard that information (information is power, right?) and eventually profits will arise from it.  Um, maybe yes and maybe no, but it’s extremely difficult to actually measure that “eventual” profit…so I’ll go along with “no.” Even though big-data-focused hardware and software manufacturers are attempting to alleviate these problems of scale, the businesses that house these growing petabytes…and yes, even exabytes…of data are not seeing the expected benefits relative to their profits, because it costs money, lots of it.  This is money that comes off the top line and definitely affects the bottom line.

Because of these imaginary profits (and the real losses), more and more companies will start outsourcing the “hoarding” of this data until, eventually, two or three big players act as brokers. I personally think it will be either the credit card companies or the credit rating agencies; both groups have the basic frameworks for delivering consumer profiles as a service (CPaaS) and charging for access rights.  A big step toward this will be when Microsoft unleashes IDaaS (Identity as a Service) as part of integrating Active Directory into its Azure cloud. It’ll be a hurdle for them to convince the public to trust them, but I think they will eventually prevail.

These profile brokers will adopt IDaaS because then they won’t need separate internal identity management systems (for separate repositories of user data) for other businesses to access their CPaaS offerings.  Once this starts to gain traction, you can bet that the real data mining begins on your online and offline habits, because your loyalty card at the grocery store will be part of your profile…as will your credit history, your public driving record, the books you get from your local library, and…well, you get the picture.  Once your consumer profile is centralized, all kinds of data feeds will appear, because the profile brokers will pay for them.  Your local government, always strapped for cash, will sell you out in an instant for some recurring monthly revenue.

Cloud-specific Programming

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or express algorithms precisely, but to date they have been entirely encapsulated within the local machine (or, in some cases, the nodes of a supercomputer or HPC cluster, which for our purposes is really just one large machine).  What this means is that programs written for those systems need to know precisely where each function will run, what subsystems will run it, the exact syntax and context, and so on.  One slight error or a small lag in response time and the whole thing could crash or, at best, run slowly or produce additional errors.

But, what if you had a computer language that understood the cloud and took into account latency, data errors and even missing data?  A language that was able to partition processing amongst all kinds of different processing locations, and know that the next time, the locations may have moved?  A language that could guess at the best place to process (i.e. lowest latency, highest cache hit rate, etc.) but then change its mind as conditions change?

That language would allow you to specify a type of processing and then actively seek the best place for that processing to happen based on many different details…processing intensity, floating point, entire algorithm or proportional, subset or superset…and fully understand that, in some cases, it will have to make educated guesses about what the returned data will be (in case of unexpected latency).  It will also have to know that the data to be processed may exist in a thousand different locations such as the CPaaS providers, government feeds, or other providers for specific data types.  It will also be able to adapt its processing to the available processing locations such that it elegantly deprecates functionality…maybe based on a probability factor included in the language that records variables over time and uses that to guess where it will be next and line up the processing needed beforehand.  The possibilities are endless, but not impossible…which leads to…

Decoupled Processing and SoC

As can be seen from the efforts NVidia is making in this area, the processing of data will soon become completely decoupled from where that data lives or is used. How this will be done depends on other events (see the previous section), but the bottom line is that once processing is decoupled, a whole new class of device will appear, in both static and mobile versions, based on System on a Chip (SoC) designs that allow deep processing density with very, very low power consumption. These devices will support multiple code sets across hundreds of cores and will intelligently communicate their capabilities in real time to the distributed processing services that request their local processing. Whether over Wi-Fi, Bluetooth, IrDA, GSM, CDMA, or whatever comes next, the devices themselves will choose based on best use of bandwidth, the processing request, location, etc.  They will take full advantage of cloud-specific programming languages to distribute processing across dozens, possibly hundreds, of locations, and they will hold almost no data, because they don’t have to: everything exists someplace else in the cloud.  In some cases these devices will be very small, the size of a thin watch for example, yet they will be able to process the equivalent of what a supercomputer can, because they don’t do all of the processing themselves, only what makes sense for their location and capabilities.

These decoupled processing units, Pervasive Personal Processing (P3) units, will allow you to walk up to any workstation, monitor, or TV set…anywhere in the world…and conduct your business as if you were sitting in front of your home computer.  All of your data, your photos, your documents, and your personal files will be instantly available in whatever way you prefer.  All of your history for whatever services you use, online and offline, will be directly accessible.  The memo you left off writing that morning in the Houston office will be right where you left it, on the screen you just walked up to in a hotel lobby in Tokyo the next day, with the cursor blinking in the middle of the word you stopped on.

Welcome to Minority Report.

Cincinnati Bell Launches Cloud Services With Apptix and Parallels

Cincinnati Bell (NYSE: CBB) today announced in a press release the expansion of its portfolio of telecommunications and IT services for businesses with the addition of cloud-based Microsoft® Communication and Collaboration Solutions powered by hosted business services provider Apptix® (OSE: APP) and Parallels, the leading provider of cloud service delivery software. These new solutions will allow Cincinnati Bell to more effectively serve small & medium businesses, as well as key industries including healthcare, government, and education.

 

According to the 2012 Parallels SMB Cloud Insights™ report, businesses are increasingly turning to cloud solutions such as hosted communications and collaboration services. In the past year, it is estimated that more than one million SMBs in the United States have started using some form of cloud services.

 

“Purchasing and maintaining software and hardware can be daunting and expensive for many SMB customers,” said Stuart Levinsky, General Manager of Cloud Computing at Cincinnati Bell. “Cloud Solutions from Cincinnati Bell allow businesses to focus on what’s important to them – their customers – while letting us do the heavy lifting to provide a proven, reliable communications network and the top cloud-based services available anywhere.”

 

Cincinnati Bell’s new Cloud Solutions – including hosted Microsoft Exchange email with mobile synchronization and hosted Microsoft SharePoint – keep employees connected on the go, enhance productivity, and reduce the cost of IT services. Optional archiving and compliance features help businesses meet stringent regulatory requirements, such as HIPAA, PCI, FRCP, and SOX.

“We’re pleased that Cincinnati Bell selected Apptix to support their strategic move into the cloud,” said David Ehrhardt, president and chief executive officer of Apptix. “Our partner program reflects Apptix’s extensive experience in the hosted services market, providing everything our partners need to successfully transition into the cloud market. Apptix offers our channel partners flexible business models, diversified solution offerings, and dedicated sales, marketing, and support resources and staff to fast-track their revenue growth from cloud-based solutions. ”

 

“Cloud services represent a significant growth opportunity for communication service providers such as Cincinnati Bell,” said Birger Steen, CEO of Parallels. “We are pleased to join with our valued partners Cincinnati Bell and Apptix and as they use Parallels Automation to rapidly syndicate and deliver cloud services.”

 

For more information about Cincinnati Bell’s new Cloud Solutions for business customers, visit www.cincinnatibell.com/cloud.

The Encrypted Elephant in the Cloud Room

Encrypting data in the cloud is tricky and defies long held best practices regarding key management. New kid on the block Porticor aims to change that.


Anyone who’s been around cryptography for a while understands that secure key management is a critical foundation for any security strategy involving encryption. Back in the day it was SSL, and an entire industry of solutions grew up specifically aimed at protecting the key to the kingdom – the master key. Tamper-resistant hardware devices are still required for some US Federal security standards under the FIPS banner, with specific security protections at the network and software levels providing additional assurance that the ever important key remains safe.

In many cases it’s advised that the master key is not even kept on the same premises as the systems that use it. It must be locked up, safely, offsite; transported via a secure briefcase, handcuffed to a security officer and guarded by dire wolves. With very, very big teeth.

No, I am not exaggerating. At least not much. The master key really is that important to the security of cryptography.

That’s why encryption in the cloud is such a tough nut to crack. Where, exactly, do you store the keys used to encrypt those Amazon S3 objects? Where, exactly, do you store the keys used to encrypt disk volumes in any cloud storage service?

Start-up Porticor has an answer, one that breaks (literally and figuratively) traditional models of key management and offers a pathway to a more secure method of managing cryptography in the cloud.

SPLIT-KEY ENCRYPTION

Porticor is a combination SaaS/IaaS solution designed to enable encryption of data at rest in IaaS environments, currently available on AWS and other clouds. It’s a combination not just in deployment model – which is rapidly becoming the norm for cloud-based services – but in architecture as well.

To avoid violating best practices with respect to key management – i.e., don’t store the master key right next to the data it’s been used to encrypt – Porticor has developed a technique it calls “Split-Key Encryption.”

Data encryption comprises, you’ll recall, the execution of an encryption algorithm on the data using a secret key; the result is ciphertext. The secret key is, if you’ll pardon the pun, the secret to gaining access to that data once it has been encrypted. Storing it next to the data, then, is obviously a Very Bad Idea™, and as noted above, the industry has already addressed that risk with a variety of solutions. Porticor takes a different approach by focusing on the security of the key not only from the perspective of its location but also of its form.

The secret master key in Porticor’s system is actually a mathematical combination of two halves: a master key generated on a per-project (disk volumes or S3 objects) basis, and a unique key created by the Porticor Virtual Key Management™ (PVKM™) system. The master key is half of the real key, and the PVKM-generated key is the other half. Only by combining the two – mathematically – can you recover the true secret key needed to work with the encrypted data.

The PVKM-generated key is stored in Porticor’s SaaS-based key management system, while the master keys are stored in the Porticor virtual appliance, deployed in the cloud along with the data it’s protecting.

Because the secret key can only be derived algorithmically from the two halves, security is enhanced: it is impossible to find the actual encryption key from just one half, since the math used removes all hints to its value, and no one can recreate the secret key without having both halves at the same time. The combining math could be a simple concatenation, but it could also be a more complicated algebraic equation, and it could ostensibly differ for each set of keys, depending on the lengths to which Porticor wants to go to minimize that risk.
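Porticor hasn’t published its exact combination function, so as a purely illustrative sketch, here is what a split-key scheme looks like with XOR as the combining math (XOR has the nice property that either half alone is statistically independent of the secret key; the names are mine, not Porticor’s):

```python
import secrets

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret_key):
    """Split a secret key into two halves that reveal nothing individually.

    In the Porticor model, one half would live in the PVKM service and
    the other in the customer's virtual appliance.
    """
    pvkm_half = secrets.token_bytes(len(secret_key))       # random half
    master_half = xor_bytes(secret_key, pvkm_half)          # other half
    return master_half, pvkm_half

def combine(master_half, pvkm_half):
    """Recover the true secret key; requires both halves."""
    return xor_bytes(master_half, pvkm_half)

secret = secrets.token_bytes(32)           # a 256-bit data-encryption key
master, pvkm = split(secret)
assert combine(master, pvkm) == secret     # both halves recover the key
```

Note that plain concatenation would not have this property, since each half would directly expose part of the key; an XOR-style (one-time-pad) split is the simplest scheme where one half truly leaks nothing.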

Still, some folks might be concerned that the master key exists in the same environment as the data it ultimately protects. Porticor intends to address that by moving to a partially homomorphic key encryption scheme.

HOMOMORPHIC KEY ENCRYPTION

If you aren’t familiar with homomorphic encryption, there are several articles I’d encourage you to read, beginning with “Homomorphic Encryption” by Technology Review followed by Craig Stuntz’s “What is Homomorphic Encryption, and Why Should I Care?”  If you can’t get enough of equations and formulas, then wander over to Wikipedia and read its entry on Homomorphic Encryption as well.

Porticor itself has a brief discussion of the technology, but it is not nearly as deep as the aforementioned articles.

In a nutshell (in case you can’t bear to leave this page) homomorphic encryption is the fascinating property of some algorithms to work both on plaintext as well as on encrypted versions of the plaintext and come up with the same result. Executing the algorithm against encrypted data and then decrypting it gives the same result as executing the algorithm against the unencrypted version of the data. 
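A classic small demonstration of this property is textbook (unpadded) RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. Production systems would use far stronger schemes, so treat this only as an illustration of the property itself:

```python
# Textbook RSA with toy parameters: dec(enc(a) * enc(b) mod n) == (a * b) mod n
p, q = 61, 53
n = p * q                # modulus, 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 11
combined_ciphertext = (enc(a) * enc(b)) % n
assert dec(combined_ciphertext) == (a * b) % n   # -> 77, computed "blind"
```

The point is that the multiplication happened entirely on encrypted values; the decryptor never saw `a` or `b` individually, which is exactly the trick Porticor wants to exploit for combining key halves.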

So, what Porticor plans to do is apply homomorphic encryption to the keys, ensuring that the actual keys are no longer stored anywhere – unless you remember to tuck them away someplace safe or write them down. The algorithms for joining the two key halves are performed on encrypted versions of the keys, resulting in an encrypted symmetric key specific to one resource – a disk volume or S3 object.

The resulting system ensures that:

  • No keys are ever on a disk in plain form
  • Master keys are never decrypted, so they are never known to anyone outside the application owner themselves
  • The “second half” of each key (stored in the PVKM) is also never decrypted, and is never known to anyone (not even Porticor)
  • Symmetric keys for a specific resource exist in memory only, are decrypted for use only when the actual data is needed, and are then discarded

This effectively eliminates one more argument against cloud – that keys cannot adequately be secured.

In a traditional data encryption solution, the only thing you need to unlock the data is the secret key. With Porticor’s split-key technology, you need both the PVKM key and the master key, recombined. Layer homomorphic key encryption atop that, so the keys never actually exist anywhere in complete form, and you have a rejoinder to the claim that secure data and cloud simply cannot coexist.

Beyond the relative newness of the technique (and its largely untried status), the argument against homomorphic encryption of any kind is a familiar one: performance. Cryptography in general is by no means a fast operation, and there is more than a decade’s worth of hardware acceleration (and associated performance testing) specifically designed to remediate the slow performance of cryptographic functions. Homomorphic encryption is notoriously, excruciatingly slow, and the inability to leverage hardware acceleration in cloud computing environments offers no relief. Whether the performance penalty is worth the additional security such a system adds is largely a matter of conjecture, and highly dependent on the balance between security and performance the organization requires.


Avoid the Security Umpire Problem

Have you ever been part of a team or committee working on an initiative and found that the security or compliance person seemed to be holding up your project? They found fault with anything and everything and didn’t add much value to the initiative. If your security staff behave like this all the time, that’s a bigger issue than this article can solve. Most of the time, though, it’s because the person was brought in very late in the project and had a pile of decisions thrown at them all at once, forcing quick calls.

A common scenario is that people feel there is no need to involve the security folks until after the team has come up with a solution. Only then does the team pull in security or compliance to validate that the solution doesn’t run afoul of the organization’s standards. Instead of a team member who can help with the security and compliance aspects of your project, you have ended up with an umpire.

Now think back to when you were a kid picking teams to play baseball.  If you had an odd number of kids then more than likely there would be one person left who would end up being the umpire. When you bring in the security or compliance team member late in the game, you may end up with someone that takes on the role of calling balls and strikes instead of being a contributing member of the team.

Avoid this situation by involving your Security and Compliance staff early on, when the team is being assembled.  Your security SMEs should be part of these conversations.  They should know the business and what the business requirements are.  They should be involved in the development of solutions.  They should know how to work within a team through the whole project lifecycle. Working this way ensures that the security SME has full context and is a respected member of the team, not a security umpire.

This is even more important when the initiative is related to virtualization or cloud. There are so many new things happening in this specific area that everyone on the team needs as much context, background, and lead time as possible so that they can work as a team to come up with solutions that make sense for the business.


The Taxonomy of IT – Part 4: Order and Family

The Order level of IT classification builds upon the previous Kingdom, Phylum and Class levels. In biology, Order is used to further group like organisms by traits that define their nature or character. In the Mammalia Class, Orders include Primates, Carnivora, Insectivora, and Cetacea. Carnivora is pretty self-explanatory and includes a wide range of animal species. However, Cetacea is restricted to whales, dolphins and porpoises and indicates more of an evolutionary development path that is consistent between them.

In IT, the concept of what we consume and how we got to that consumption model correlates to the concept of Order. So, Order focuses on how IT is consumed and why it’s consumed that way.

Business needs drive IT models, and as business needs change so does the way we leverage IT. An organization may have started out with a traditional on-premise solution that met all needs, and over time has morphed into a hybrid solution of internal and external resources. Likewise, the way users consume IT changes over time. This may be due to underlying business change, or possibly due to “generational” changes in the workforce. In either case, where IT is today does not always reflect its true nature.

Using consumption as a metric, we can group IT environments to bring to light how they have evolved, and expose their future needs. Some examples of different Orders might be:

Contra-Private – IT is mostly a private resource and is not specifically consumption driven. The IT organization uses their own internalized set of standards in order to identify the technical direction of the platforms. Shunning industry standards and trends, they often take a less-is-more approach to the tools and services they provide to the business. Ironically, their platforms tend to be oversized and underutilized.

Mandatorily-Mixed – here IT leverages a mix of internal, external, hard-built and truly consumed resources because the business demands it. IT may have less power to make foundational decisions or affect policy, but they typically will be better funded and be encouraged to work with outside groups. Often the internal/external moat is drawn around the LOB application stack, and these tend to be overly scaled.

Scale-Sourced – In this Order, IT would be incented to make efficiency and flexibility their guiding principles for decision-making. The business allows IT to determine use of and integration with outside services and solutions and relies on them to make the intelligent decisions. This Order is also user driven, with the ability to adopt new services and policies that drive user effectiveness.

The Family classification is the first real grouping of organisms where their external appearance is the primary factor. Oddly, what is probably the most visually apparent comes this deep in the classification model. Similarly within IT, we can now start grouping environments by their IT “appearance,” or more fundamentally, their core framework.

If you dissect a Honey Badger, it would probably be evident that it’s very much like other animals in the weasel family. Its overall shape and proportions are similar to other weasels, from the smallest Least Weasel to the largest Wolverine. So size is not the factor here; what matters is the structure, and the type of lifestyle that structure has evolved to support. In IT, therefore, Family refers to the core structure of data flow within IT systems.

Here are some examples:

Linear – IT is built along a pathway that conforms to a linear work flow. Systems are built to address specific point functions such as marketing, financials, manufacturing, etc. Each system has a definitive start and stop point, with end to end integration only. Input/output is translated between them, often by duplicated entry, scripted processes, or 3rd party translation. One function cannot begin until another has completed, thus creating a chain of potential break-points and inefficiencies.

Parallel – Workstreams can be completed concurrently, with some form of data-mashing at the end of each function. While this structure allows for users to work without waiting on others to complete their functions, it does require additional effort to combine the streams at the end.

Linked – Here, systems are linked at key intersections of workflow. Data crosses these intersections in a controlled and orderly fashion, and the data conversions are often transparent or at least simplified. Efficiency increases, since dynamic information can be used by more than one person; however, the complexity of this approach brings underlying dangers and support challenges.

Möbius – If you know the form of a Möbius strip, you get the idea here. In this form, it doesn’t matter what side of the workflow you are on; everything flows without interruption or collision. If this is delivered by more than one integrated system, then the integration is well tested and supported by all parties involved. More likely, this form is enabled by a singular system that receives, correlates, and forwards the data along its merry way.

Both the Order and Family are where we start to see the benefits of a Cloud IT architecture. Built to specification, consumed in a flexible, on-demand way, and enabling the true flow of information across all required systems may sound like nirvana. But, consider that our limiting factor in achieving this goal is not technology per se, but our ability to visualize and accept it.


Is Your Data Really Safe in a Cloud Service?

Many users see the cloud as something infallible, where their data will never disappear and their service will always be online. But is that really true?

Contrary to popular opinion, the term “cloud services” is not in any way synonymous with a service guarantee. The quality and reliability of a service do not depend on its name; they depend directly on the quality, expertise, and investment of the provider offering it.
