Tag Archives: security

Introducing the F5 Technical Certification Program


Can you explain the role of the Cache-Control HTTP header? How about the operational flow of data during an SMTP authentication exchange? Are you well-versed in the anatomy of an SSL handshake and the implications of encrypting data as it flows across the network?
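If that first question gives you pause: a Cache-Control header tells caches whether and for how long a response may be reused. Here is a deliberately simplified freshness check – real HTTP caching, per RFC 7234, has many more rules (s-maxage, Expires, revalidation semantics), so treat this only as a sketch of the core max-age logic:

```python
def is_fresh(cache_control: str, age_seconds: int) -> bool:
    """Simplified freshness check: parse Cache-Control, compare age to max-age."""
    directives = {}
    for part in cache_control.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = value
    if "no-store" in directives or "no-cache" in directives:
        return False  # must revalidate (or never cache) rather than serve from cache
    max_age = int(directives.get("max-age", 0))
    return age_seconds < max_age

print(is_fresh("public, max-age=3600", 120))  # True: 2 minutes into a 1-hour lifetime
print(is_fresh("no-cache", 0))                # False: revalidate every time
```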

Can you explain the features and functionalities of protocols and technologies specific to the Transport layer?

If so, then you won’t need to study nearly as much as many of your compatriots when you take the test to become an F5 Certified™ professional.

Introducing the F5 Technical Certification Program (F5-TCP)

F5 Certified™ individuals represent a new breed of technologist – capable of manipulating the entire application stack, from traditional networking knowledge all the way to advanced application-layer understanding, with a unique capability to integrate the two. Never before has any company created a program designed to bridge these worlds; a capability critical to the increasingly mobile and cloud-based solutions being implemented around the world today.

The need has always existed, but with the increasing focus on the abstraction of infrastructure through cloud computing and virtualization, the need for basic application delivery skills is greater today than ever. Consider that at the heart of the elasticity promised by cloud computing is load balancing, and yet there is no general course or certification program through which a basic understanding of the technology can be achieved. There are no university courses in application delivery, no well-defined learning paths for new hires, no standard skills assessments. Vendors traditionally provide training, but it is focused on product, not technology or general knowledge, leaving employees with highly specific skills that are not necessarily transferable. This makes the transition to cloud more difficult as organizations struggle with integrating disparate application delivery technologies to ensure an operationally consistent environment without compromising on security or performance.

The F5-TCP focuses on both basic application delivery knowledge as well as a learning path through its application delivery products.

Starting with a core foundation in application delivery fundamentals, F5 Certified™ individuals will be able to focus on specific application delivery tracks through a well-defined learning path that leads to application delivery mastery.

Fundamentals being what they are – fundamental – the first step is to build a strong foundation in the technologies required to deploy and manage application delivery regardless of vendor or environment. Understanding core concepts such as the entire OSI model – including the impact of transport and application layer protocols and technologies on the network – is an invaluable skill today given the increasing focus on these layers over others when moving to highly virtualized and cloud computing environments.

As technologies continue to put pressure on IT to integrate more devices, more applications, and more environments, the application delivery tier becomes more critical to the ability of organizations not just to successfully integrate the technology, but to manage it, secure it, and deliver it in an operationally efficient way. Doing that requires skills; skills that IT organizations often lack. With no strong foundation in how to leverage such technology, it makes sense that organizations are simply not seeing the benefits of application delivery they could if they were able to fully take advantage of it.


Application delivery solutions are often underutilized and not well-understood in many IT organizations. According to research by Gartner, up to three-quarters of IT organizations that have deployed advanced application delivery controllers (ADCs) use them only for basic load balancing. When faced with performance or availability challenges, these organizations often overlook the already-deployed ADC, because it was purchased to solve basic server load balancing and is typically controlled by the network operations team.

Gartner: Three Phases to Improve Application Delivery Teams 

F5 is excited to embark on this effort and provide not just a “BIG-IP” certification, but the fundamental skills and knowledge organizations need to incorporate application delivery as a first-class citizen in their data center architectures and fully realize its benefits.


Keynote Announces New 24/7 Web Privacy Tracking, Compliance Monitoring


Keynote Systems today announced a new on-demand service for addressing growing Web privacy issues stemming from online behavioral targeting. The new service, called Keynote Web Privacy Tracking, goes beyond traditional monitoring and identifies third party tracking in violation of a site’s own stated privacy policy.

Keynote Web Privacy Tracking provides comprehensive insight into third parties that violate a company’s privacy policies across a website. Using a real browser, Keynote’s service monitors websites and records all of the tracking activity present – for example, cookies being placed on the browser. Keynote then matches that activity against a database of over 600 tracking companies and over 1,000 tracking domains, providing details on which privacy policies are being violated. Additionally, the Keynote Referrer Chain feature provides a detailed record of how the third-party violator came to be on the site, and an audit trail of each handoff in the ad request.

While there are already website privacy testing solutions on the market, Keynote Web Privacy Tracking is the first to apply a proven 24/7 monitoring technology to address the growing concerns over the impact of third party trackers on Internet privacy.

By monitoring websites around the clock from up to 70 geographic locations covering 28 countries across the United States and Europe, Keynote Web Privacy Tracking provides an unmatched breadth of coverage for understanding the precise location and size of potential privacy issues, including risks arising from variations in how ad networks deliver geo-targeted content. Once privacy violations are found, Keynote goes one step further by providing detailed and actionable records that enable a site owner to manage policy violations directly with the ad network responsible for bringing a violator to the website. Keynote’s solution also features one-click analysis and reporting: once a site operator finds a third party violating the company’s stated privacy policy, a single click drills down for further information.

Keynote Web Privacy Tracking has a comprehensive tracking database that provides site operators with detailed information for each third party tracker on their site. Site owners can then export the Keynote Web Privacy Tracking Report and share with co-workers and ad network partners to take immediate corrective action that reduces their exposure to privacy violations.

“Keynote Web Privacy Tracking is an ideal solution that site operators can begin leveraging immediately to address their lack of visibility into which third parties are violating the site’s own stated privacy policies,” said Vik Chaudhary, vice president of product management and corporate development at Keynote. “Our data will allow them to take very fast remedial action. Also, we believe our cutting edge 24/7 privacy compliance monitoring service will help address the increasing concerns of the many U.S. government agencies examining the issue. This includes the FTC, as well as government agencies in Europe, which may soon hold site operators legally accountable for ensuring consumer privacy on their website.”

“Online websites know that they need to publicize and enforce a strong privacy policy in order to comply with regulations, maintain goodwill with users, and ensure repeat traffic,” said Ian Glazer, research vice president at Gartner, Inc. “However those tasked with managing privacy within the organization often lack visibility into their potential privacy risk. Privacy professionals are engaging a new breed of tools to help them identify the continued risk that comes with third party cookies.”

Scott Crawford, research director with Enterprise Management Associates said, “With regulators and individuals alike becoming increasingly vocal about the responsible handling of sensitive personal data, organizations that develop and deploy Web applications must take those concerns more seriously than ever before.” Crawford continued, “Keynote’s new product provides organizations with more granular and precise insight into how sensitive information is used and privacy requirements met, not only by a business’s own applications, but also by those who provide services such as advertising placement, which could jeopardize the business’s relationships with its customers if private data is not handled properly.”

The results of an in-depth and comprehensive analysis of the online behavioral tracking on 269 Websites, to be publicly released by Keynote in the near future, found that 86 percent of the sites analyzed included third-party tracking of site visitors and, as a consequence of these third parties, over 60 percent of those sites violated one or more of the industry’s most common tracking-related privacy standards.

“The number of websites that allow visitors to be tracked by third parties may be surprising to some, but as consumers begin to understand that their online behavior can be recorded, website publishers will have to work even harder to ensure consumers’ privacy expectations are met,” said Ray Everett, Keynote’s director of privacy services.

Keynote Web Privacy Tracking detects the third parties collecting user information on each company’s site across all pages monitored by Keynote. Keynote then cross-checks each tracker against a database of over 600 ad networks and 1,000 tracking domains. Tracking companies that do not commit to an industry best practice for Web privacy are then flagged as a violator of the selected policy.
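Mechanically, that cross-check is a set-membership test: record the third-party domains observed during a page load, look each one up in the tracker database, and flag any that fall short of the required policies. The sketch below uses an invented two-entry database and invented policy names purely for illustration – Keynote’s actual database and policy taxonomy are proprietary:

```python
# Hypothetical tracker database: domain -> set of policies the tracker commits to.
TRACKER_DB = {
    "ads.example-network.com": {"opt-out", "anonymize"},
    "pixel.example-tracker.net": set(),  # commits to no recognized policy
}

REQUIRED_POLICIES = {"opt-out", "anonymize", "industry-oversight"}

def flag_violations(observed_domains):
    """Flag each observed third-party domain that fails a required policy."""
    violations = {}
    for domain in observed_domains:
        committed = TRACKER_DB.get(domain, set())  # unknown trackers commit to nothing
        missing = REQUIRED_POLICIES - committed
        if missing:
            violations[domain] = sorted(missing)
    return violations

print(flag_violations(["ads.example-network.com", "pixel.example-tracker.net"]))
```

A real implementation would also walk the referrer chain to attribute each flagged tracker to the ad network that introduced it.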

Policies checked by Keynote Web Privacy Tracking include:

  • Provide customers an Opt-out
  • Promise to Anonymize Data
  • Subject to Industry Oversight from Recognized Organizations

“Ultimately, the burden of policing third-party trackers falls on the shoulders of website publishers,” Keynote’s Everett concluded. “A publisher is responsible for the content of their website, including the practices of the advertisers appearing on it. Monitoring the constantly changing advertising ecosystem is a daunting task, but the consequence of failure is placing your brand’s reputation at tremendous risk.”


Three Reasons to Use Cloud Anti-Spam

Guest Post by Emmanuel Carabott, Security Research Manager at GFI Software Ltd.

GFI Software helps network administrators with network security, content security and messaging needs

Budgets are stretched thin, you already work too many hours, and you’re always trying to find a server that can run the latest requested workload.

For companies with the flexibility to take advantage of cloud-based technologies, there’s a quick and simple way to win back some time, save some money, and free up resources on your email servers – including the resources running your current anti-spam solution: cloud anti-spam. Here’s how:

Money

Cloud anti-spam solutions require no up-front costs, no hardware, operating system, or software investments, and operate on a simple per-user subscription model. They are a great solution for companies looking to implement anti-spam technologies without a major investment. They keep your costs low, predictable, and easy to allocate. The subscription model means you even have the option to take what has always been considered a capital expense and turn it into an operational expense, which may make your CFO as happy as your CIO would be about the budget you save.
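The CapEx-versus-OpEx point is simple arithmetic. Every figure below is invented for illustration only – real pricing varies widely by vendor, user count, and contract term:

```python
# Invented figures: on-premises anti-spam appliance vs. per-user cloud subscription.
users = 500
appliance_capex = 20_000.0            # hardware + software license, year one
appliance_annual_support = 4_000.0    # maintenance contract
subscription_per_user_month = 1.50    # cloud anti-spam, per user per month

def three_year_cost_appliance() -> float:
    return appliance_capex + 3 * appliance_annual_support

def three_year_cost_cloud() -> float:
    return users * subscription_per_user_month * 36  # 36 months

print(three_year_cost_appliance())  # 32000.0 - mostly up-front capital expense
print(three_year_cost_cloud())      # 27000.0 - spread evenly as operational expense
```

The point is less about which total is lower (that depends entirely on the made-up inputs) and more about the shape of the spend: a predictable monthly operational expense versus a lumpy up-front capital one.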

Time

Cloud anti-spam solutions will give you back hours in your week by taking care of the infrastructure for you, but that’s not all. The best cloud anti-spam solutions offer a user self-service model, where each user can get a daily summary of messages that were filtered out, click a link in that summary to release a false positive, or log onto a web portal at any time to check for missing or delayed messages themselves. Users get instant gratification, and your help desk works fewer tickets related to spam. Everyone wins – except, of course, the spammers.

Resources

Spam, malware, and phishing messages don’t just cost time and money, they can consume significant server resources. Anti-spam solutions running on your email server take a lot of CPU cycles to run filter lists and scan for malware, RAM to expand all those attachments before they can be scanned, and disk space to quarantine what inevitably will be deleted. Moving that entire load to the cloud anti-spam solution frees up resources on your servers, can free up space in your racks, and will save you tons of bandwidth you can put to better use since spam is stopped before it ever reaches your border.

Companies that, for legal and compliance reasons or by preference, need to maintain complete control of all aspects of the email system may not find cloud anti-spam solutions the best fit. But for companies with the flexibility to adopt them, they are the right choice for IT teams looking to save money, time, and resources while providing end users with a great email experience. You’re already stretched thin; give yourself, your team, and your budget a break by choosing a cloud anti-spam solution today.


The Encrypted Elephant in the Cloud Room

Encrypting data in the cloud is tricky and defies long held best practices regarding key management. New kid on the block Porticor aims to change that.


Anyone who’s been around cryptography for a while understands that secure key management is a critical foundation for any security strategy involving encryption. Back in the day it was SSL, and an entire industry of solutions grew up specifically aimed at protecting the key to the kingdom – the master key. Tamper-resistant hardware devices are still required for some US Federal security standards under the FIPS banner, with specific security protections at the network and software levels providing additional assurance that the ever important key remains safe.

In many cases it’s advised that the master key is not even kept on the same premises as the systems that use it. It must be locked up, safely, offsite; transported via a secure briefcase, handcuffed to a security officer and guarded by dire wolves. With very, very big teeth.

No, I am not exaggerating. At least not much. The master key really is that important to the security of cryptography.

That’s why encryption in the cloud is such a tough nut to crack. Where, exactly, do you store the keys used to encrypt those Amazon S3 objects? Where, exactly, do you store the keys used to encrypt disk volumes in any cloud storage service?

Start-up Porticor has an answer, one that breaks (literally and figuratively) traditional models of key management and offers a pathway to a more secure method of managing cryptography in the cloud.

SPLIT-KEY ENCRYPTION

Porticor is a combination SaaS / IaaS solution designed to enable encryption of data at rest in IaaS environments, currently available on AWS and other clouds. It’s a combination not just in deployment model – which is rapidly becoming the norm for cloud-based services – but in architecture as well.

To avoid violating best practices with respect to key management – i.e., you don’t store the master key right next to the data it encrypts – Porticor has developed a technique it calls “Split-Key Encryption.”

Data encryption comprises, you’ll recall, the execution of an encryption algorithm on the data using a secret key, the result of which is ciphertext. The secret key is the, if you’ll pardon the pun, secret to gaining access to that data once it has been encrypted. Storing it next to the data, then, is obviously a Very Bad Idea™ and as noted above the industry has already addressed the risk of doing so with a variety of solutions. Porticor takes a different approach by focusing on the security of the key not only from the perspective of its location but of its form.

The secret master key in Porticor’s system is actually a mathematical combination of two halves: a master key generated on a per-project (disk volume or S3 object) basis, and a unique key created by the Porticor Virtual Key Management™ (PVKM™) system. The master key is half of the real key, and the PVKM-generated key is the other half. Only by combining the two – mathematically – can you discover the true secret key needed to work with the encrypted data.

The PVKM-generated key is stored in Porticor’s SaaS-based key management system, while the master keys are stored in the Porticor virtual appliance, deployed in the cloud along with the data it’s protecting.

Because the secret key can only be derived algorithmically from the two halves, security is enhanced: it is impossible to recover the actual encryption key from just one half, since the math used removes all hints to its value. No one can recreate the secret key without both halves at the same time. The combining math could be a simple concatenation, but it could also be a more complicated algebraic equation – and it could ostensibly differ for each set of keys, depending on the lengths to which Porticor wants to go to minimize the risk of the secret key being reconstructed.
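Porticor hasn’t published its combining function, but the simplest combiner with the property described – either half alone reveals nothing about the real key – is XOR, which is exactly two-of-two secret sharing. The sketch below uses XOR purely as a stand-in for whatever algebra Porticor actually employs (that choice is my assumption, not their documented algorithm):

```python
import os

KEY_LEN = 32  # 256-bit secret key

def split_key(secret_key: bytes):
    """Split a secret key into two shares; each share alone looks like random noise."""
    master_half = os.urandom(KEY_LEN)  # analogous to the half kept in the appliance
    pvkm_half = bytes(a ^ b for a, b in zip(secret_key, master_half))  # half kept in the PVKM
    return master_half, pvkm_half

def join_key(master_half: bytes, pvkm_half: bytes) -> bytes:
    """Recombine the two shares to recover the secret key (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(master_half, pvkm_half))

secret = os.urandom(KEY_LEN)
master, pvkm = split_key(secret)
assert join_key(master, pvkm) == secret  # both halves together recover the key
```

Because `master_half` is uniformly random, each share is information-theoretically independent of the secret: an attacker holding only one half learns nothing at all about the key.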

Still, some folks might be concerned that the master key exists in the same environment as the data it ultimately protects. Porticor intends to address that by moving to a partially homomorphic key encryption scheme.

HOMOMORPHIC KEY ENCRYPTION

If you aren’t familiar with homomorphic encryption, there are several articles I’d encourage you to read, beginning with “Homomorphic Encryption” by Technology Review followed by Craig Stuntz’s “What is Homomorphic Encryption, and Why Should I Care?”  If you can’t get enough of equations and formulas, then wander over to Wikipedia and read its entry on Homomorphic Encryption as well.

Porticor itself has a brief discussion of the technology, but it is not nearly as deep as the aforementioned articles.

In a nutshell (in case you can’t bear to leave this page) homomorphic encryption is the fascinating property of some algorithms to work both on plaintext as well as on encrypted versions of the plaintext and come up with the same result. Executing the algorithm against encrypted data and then decrypting it gives the same result as executing the algorithm against the unencrypted version of the data. 
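A toy demonstration of that property, using textbook RSA – which happens to be multiplicatively homomorphic – with deliberately tiny, insecure parameters:

```python
# Textbook RSA with toy parameters (wildly insecure; illustration only).
p, q = 61, 53
n = p * q                            # modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 11
# Multiply the *ciphertexts*, then decrypt: same result as multiplying the plaintexts.
c_product = (encrypt(m1) * encrypt(m2)) % n
print(decrypt(c_product))  # 77
print((m1 * m2) % n)       # 77 - identical, without ever decrypting m1 or m2
```

Fully and partially homomorphic schemes generalize this idea to richer operations, which is what would let Porticor combine key halves without ever decrypting them.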

So, what Porticor plans to do is apply homomorphic encryption to the keys, ensuring that the actual keys are no longer stored anywhere – unless, of course, you tuck them away someplace safe or write them down yourself. The algorithms for joining the two keys are performed on the encrypted versions of the keys, resulting in an encrypted symmetric key specific to one resource – a disk volume or S3 object.

The resulting system ensures that:

  • No keys are ever on a disk in plain form
  • Master keys are never decrypted, and so are never known to anyone outside the application owner
  • The “second half” of each key (stored in the PVKM) is likewise never decrypted, and is never known to anyone – not even Porticor
  • Symmetric keys for a specific resource exist in memory only, are decrypted for use only when the actual data is needed, and are then discarded

This effectively eliminates one more argument against cloud – that keys cannot adequately be secured.

In a traditional data encryption solution, the only thing you need to unlock the data is the secret key. With Porticor’s split-key technology you need both the PVKM key and the master key, recombined. Layer homomorphic key encryption atop that to ensure the keys don’t actually exist anywhere in plain form, and you have a rejoinder to the claim that secure data and cloud simply cannot coexist.

In addition to the relative newness of the technique (and its untried nature at this point), the argument against homomorphic encryption of any kind is a familiar one: performance. Cryptography in general is by no means a fast operation, and there is more than a decade’s worth of technology in the form of hardware acceleration (and associated performance tests) specifically designed to remediate the slow performance of cryptographic functions. Homomorphic encryption is notoriously slow, and the inability to leverage hardware acceleration in cloud computing environments offers no relief. Whether this performance penalty is worth the additional level of security such a system adds is largely a matter of conjecture, and highly dependent upon the balance between security and performance required by the organization.


BIG-IP Solutions for Microsoft Private Cloud

Five of the top six services critical to cloud are application delivery services and available with F5 BIG-IP.


The big news at MMS 2012 focused on private cloud and Microsoft’s latest solutions in the space with System Center 2012. Microsoft’s news comes on the heels of IBM’s latest foray with its PureSystems launch at its premier conference, IBM Pulse.

As has become common, while System Center 2012 addresses the resource most commonly associated with cloud of any kind – compute – and the means by which operational tasks can be codified, automated, and integrated, it does not delve too deeply into the network, leaving that task to its strategic partners.

One of its long-term partners is F5, and we take the task seriously. The benefits of private cloud are rooted in greater economies of scale through broader aggregation and provisioning of resources, as well as its ability to provide flexible and reliable applications that are always available and rely on many critical infrastructure services. Applications are not islands of business functionality, after all; they rely upon a multitude of network-hosted services such as load balancing, identity and access management, and security services to ensure a consistent, secure end-user experience from anywhere, on any device. Indeed, 5 of the top 6 services seen as most critical to cloud implementations in a 2012 Network World Cloud survey are infrastructure services, all of which are supported by the application delivery tier.

The ability to consistently apply policies governing these aspects of every successful application deployment is critical to keeping the network aligned with the allocation of compute and storage resources. Without the network, applications cannot scale, reliability is variable, and security is compromised through fragmentation and complexity. The lack of a unified infrastructure architecture reduces the performance, scale, security, and flexibility of cloud computing environments, both private and public. Thus, just as we ensure the elasticity and operational benefits associated with a more automated and integrated application delivery strategy for IBM, so have we done with respect to a Microsoft private cloud solution.

BIG-IP Solutions for Microsoft Private Cloud

BIG-IP solutions for Microsoft private cloud take advantage of key features and technologies in BIG-IP version 11.1, including F5’s virtual Clustered Multiprocessing™ (vCMP™) technology, iControl®, F5’s web services-enabled open application programming interface (API), administrative partitioning, and server name indication (SNI). Together, these features help reduce the cost and complexity of managing cloud infrastructures in multi-tenant environments. With BIG-IP v11.1, organizations reap the maximum benefits of conducting IT operations and application delivery services in the private cloud. Although these technologies are generally applicable to all cloud implementations – private, public, or hybrid – we also announced Microsoft-specific integration and support that enables organizations to extend automation and orchestration into the application delivery tier for maximum return on investment.

F5 Monitoring Pack for System Center
Provides two-way communication between BIG-IP devices and the System Center management console. Health monitoring, failover, and configuration synchronization of BIG-IP devices, along with customized alerting, Maintenance Mode, and Live Migration, occur within the Operations Manager component of System Center.

F5 Load Balancing Provider for System Center
Enables one-step, automated deployment of load balancing services through direct interoperability between the Virtual Machine Manager component of System Center 2012 and BIG-IP devices. BIG-IP devices are managed through the System Center user interface, and administrators can custom-define load balancing services.

The Orchestrator component of System Center 2012
Provides F5 traffic management capabilities and takes advantage of workflows designed using the Orchestrator Runbook Designer. These custom workflows can then be published directly into System Center 2012 service catalogs and presented as a standard offering to the organization. This is made possible using the F5 iControl SDK, which gives customers the flexibility to choose a familiar development environment such as the Microsoft .NET Framework programming model or Windows PowerShell scripting.

 

[Diagram: F5 BIG-IP solutions for Microsoft private cloud]

Private cloud – as an approach to IT operations – calls for the transformation of data centers, leveraging a few specific strategic points of control to aggregate and continuously re-allocate IT resources as needed, in such a way as to make software applications more like services that are always on and secured across users and devices. Private cloud itself is not a single, tangible solution today. It is a solution comprised of several key components: power and cooling, compute, storage and network, management and monitoring tools, and the software applications and databases that end users need.

We’ve moved past the hype of private cloud and its potential benefits. Now organizations need a path, clearly marked, to help them build and deploy private clouds.

That’s part of F5’s goal – to provide the blueprints necessary to build out the application delivery tier to ensure a flexible, reliable and scalable foundation for the infrastructure services required to build and deploy private clouds.

Availability

The F5 Monitoring Pack for System Center and the F5 PRO-enabled Monitoring Pack for System Center are now available. The F5 Load Balancing Provider for System Center is available as a free download from the F5 DevCentral website. The Orchestrator integration for System Center 2012 is based on F5 iControl and Windows PowerShell, and is also free.


Avoid the Security Umpire Problem

Have you ever been part of a team or committee working on an initiative and found that the security or compliance person seemed to be holding up your project? They seemed to find fault with anything and everything and didn’t add much value to the initiative? If you are stuck with security staff who are like this all the time, that’s a bigger issue, and one beyond the scope of this article to solve. But most of the time, it’s because this person was brought in very late in the project and had a bunch of things thrown at them at once, forcing them to make quick calls or decisions.

A common scenario is that people feel that there is no need to involve the security folks until after the team has come up with a solution.  Then the team pulls in the security or compliance folks to validate that the solution doesn’t go afoul of the organization’s security or compliance standards. Instead of a team member who can help with the security and compliance aspects of your project, you have ended up with an umpire.

Now think back to when you were a kid picking teams to play baseball.  If you had an odd number of kids then more than likely there would be one person left who would end up being the umpire. When you bring in the security or compliance team member late in the game, you may end up with someone that takes on the role of calling balls and strikes instead of being a contributing member of the team.

Avoid this situation by involving your Security and Compliance staff early on, when the team is being assembled.  Your security SMEs should be part of these conversations.  They should know the business and what the business requirements are.  They should be involved in the development of solutions.  They should know how to work within a team through the whole project lifecycle. Working this way ensures that the security SME has full context and is a respected member of the team, not a security umpire.

This is even more important when the initiative is related to virtualization or cloud. There are so many new things happening in this specific area that everyone on the team needs as much context, background, and lead time as possible so that they can work as a team to come up with solutions that make sense for the business.