Tag Archives: F5

F5 launches distributed cloud app infrastructure protection

F5 has launched the F5 Distributed Cloud App Infrastructure Protection (AIP), a cloud workload protection solution that expands application observability and protection to cloud-native infrastructures. Powered by technology acquired with Threat Stack, AIP is the newest addition to the F5 Distributed Cloud Services portfolio of cloud-native SaaS-based application security and delivery services. Organisations of all… Read more »

The post F5 launches distributed cloud app infrastructure protection appeared first on Cloud Computing News.

Hybrid environments and IoT pose biggest threats to infosec – F5

Service providers and enterprises face an insecure networking environment in coming years as more applications, data and services are sent to the cloud, according to networking vendor F5, writes Telecoms.com.

Speaking at the F5 Forum in London, VP of UK and Ireland Keith Bird stressed security is now front and centre not only to the CTO and CEO, but to consumers as intrusion or security breaches regularly make headlines. Bird pointed to the hybrid on-premise/cloud-based environment, in which an increasing number of enterprise and service providers operate, as a huge challenge looming for the information security industry.

“Not so long ago, we looked at just single points of entry. In today’s hybrid world, we’ve got apps in the data centre or in the cloud as SaaS and this is only increasing,” he said. “What we know for sure is that there is no longer a perimeter to the network – that’s totally disappeared.”

“81% of people we recently surveyed said they plan on operating in a hybrid environment, while 20% said they’re now moving over half of their corporate applications to the cloud. Even some of the largest companies in the world are taking up to 90% of their applications to the cloud.”

Given the volume and nature of data being hosted in the cloud, firms are far more accountable and held to tighter information security standards today than they have ever been. The average financial impact of an information security breach is now in the region of $7.2 million, according to F5 research.

“The average cost of a security breach consists of $110,000 lost revenue per hour of downtime – but the effect on a company’s website or application is costing potential business,” said Bird. “The average customer will abandon an attempted session after roughly four seconds of inactivity, so there’s new business being lost as well.”

F5 said that of the threats it is seeing at the moment, according to customer surveys, the evolving nature and sophistication of attacks ranks highest, with the internal threat of employee ignorance a close second.

“So what are the top security challenges our customers are seeing?” said Bird. “58% are seeing increasingly sophisticated attacks on their networks, from zero-day to zero-second. 52% were concerned that their own employees don’t realise the impact of not following security policies. Obviously plenty of people said they don’t have enough budget, but that’s not quite the biggest problem facing security departments today.”

F5’s Technical Director Gary Newe, who’s responsible for field systems engineering, said the looming prospect of IoT “scares the bejesus” out of him.

“We’ve all heard about the IoT,” he said before pointing to the connected fridge as a farcically insecure IoT device. “There are 3 billion devices which run Java, which makes it 3 billion hackable devices, and that scares the bejesus out of me. This isn’t just a potential impact to the enterprise, but it could have a massive impact on consumers and families. Fitness trackers, for example, just encourage people to give a tonne of data over to companies we don’t know about, and we don’t know how good their security is.”

The scariest bit, Newe emphasised, is the growing knowledge and intelligence of more technically adept youngsters today, and how the rate of technological change will only exacerbate the requirement for a fresh approach to network security.

“Change is coming at a pace, the likes of which we’ve never seen nor ever anticipated,” he said. “We’re building big walls around our networks, but hackers are just walking through the legitimate front doors we’re putting in instead.

“The scariest thing is that the OECD [Organisation for Economic Cooperation and Development] has said the average IQ today is 10 points higher than it was 20 years ago. So teenagers today are smarter than we ever were, they’ve got more compute power than we ever had, and they’re bored. That, to me, is terrifying.”

F5 Acquires LineRate

F5 Networks, Inc. today announced that it has agreed to acquire LineRate Systems, a developer of software defined networking (SDN) services. Through this acquisition, F5 gains access to LineRate’s layer 7+ networking services technology, intellectual property, and engineering talent, further building on F5’s commitment to bring superior service agility, application intelligence, and programmability to software defined data centers.

“F5’s vision is to enable application-layer SDN services for software defined data centers, providing our customers with unprecedented automation, orchestration, and control,” said Karl Triebes, EVP of Product Development and CTO at F5. “Extreme scale of applications, infrastructure, and services will be vital to SDN, core to F5, and the key to customers realizing the benefits of software defined data centers. LineRate’s programmable network capabilities and innovations in layer 7 will bolster our efforts to extend F5’s market leadership in these areas, and we are very pleased to welcome the LineRate team to F5.”

Critical to the success of SDN architectures are security, acceleration, optimization, and intelligent traffic management services at the application layer. Application-layer services will be vital to meet customers’ expectations of flexibility, scale, and performance, and to provide significant competitive advantages. The LineRate acquisition aligns with F5’s vision for application control plane architecture and goal of delivering application-layer SDN services to bring superior agility, application intelligence, and programmability to software defined data centers.

LineRate Systems, based near Boulder, Colorado, provides software that offers layer 7+ network services. LineRate’s software defined solution is an early-stage product, which will become an important part of F5’s broad range of development initiatives focused on software defined data centers.

“In order to reap the benefits of software defined data centers, programmable, scalable network services must be present throughout the entire network stack—up to the application layer,” said Manish Vachharajani, Co-Founder and Chief Software Architect at LineRate Systems. “We recognize that the SDN fabric may be good at layer 2-4, but ultimately customers care most about the applications. The LineRate team is excited to join F5 and looks forward to helping F5 accelerate development initiatives in this important market.”

A More Practical View of Cloud Brokers

#cloud The conventional view of cloud brokers misses the need to enforce policies and ensure compliance

During a dinner at VMworld organized by Lilac Schoenbeck of BMC, we had the chance to chat up cloud and related issues with Kia Behnia, CTO at BMC. Discussion turned, naturally I think, to process. That could be because BMC is heavily invested in automating and orchestrating processes. Despite the nomenclature (business process management), for IT this is a focus on operational process automation, though eventually IT will have to raise the bar and focus on the more businessy aspects of IT and operations.

Alex Williams postulated the decreasing need for IT in an increasingly cloudy world. On the surface this generally seems to be an accurate observation. After all, when business users can provision applications a la SaaS to serve their needs do you really need IT? Even in cases where you’re deploying a fairly simple web site, the process has become so abstracted as to comprise the push of a button, dragging some components after specifying a template, and voila! Web site deployed, no IT necessary.

While from a technical difficulty perspective this may be true (and if it is, it is only for the smallest of organizations), there are many responsibilities of IT that are simply overlooked and, as we all know, underappreciated for what they provide – not the least of which is being able to understand the technical implications of regulations and requirements like HIPAA, PCI-DSS, and SOX, all of which have some technical aspect to them and need to be enforced, well, with technology.

See, choosing a cloud deployment environment is not just about “will this workload run in cloud X”. It’s far more complex than that, with many more variables that are often hidden from the end-user, a.k.a. the business peoples. Yes, cost is important. Yes, performance is important. And these are characteristics we may be able to gather with a cloud broker. But what we can’t know is whether or not a particular cloud will be able to enforce other policies – those handed down by governments around the globe and those put into writing by the organization itself.

Imagine the horror of a CxO upon discovering an errant employee with a credit card has just violated a regulation that will result in Severe Financial Penalties or worse – jail. These are serious issues that conventional views of cloud brokers simply do not take into account. It’s one thing to violate an organizational policy regarding e-mailing confidential data to your Gmail account, it’s quite another to violate some of the government regulations that govern not only data at rest but in flight.

A PRACTICAL VIEW OF CLOUD BROKERS

Thus, it seems a more practical view of cloud brokers is necessary; a view that enables such solutions to consider not only performance and price, but the ability to adhere to and enforce corporate and regulatory policies. Such a data center hosted cloud broker would be able to take into consideration these very important factors when making decisions regarding the optimal deployment environment for a given application. That may be a public cloud, it may be a private cloud – it may be a dynamic data center. The resulting decision (and options) are not nearly as important as the ability for IT to ensure that the technical aspects of policies are included in the decision-making process.

And it must be IT that codifies those requirements into a policy that can be leveraged by the broker and ultimately the end-user to help make deployment decisions. Business users, when faced with requirements for web application firewalls in PCI-DSS, for example, or ensuring a default “deny all” policy on firewalls and routers, are unlikely to be able to evaluate public cloud offerings for their ability to meet such requirements. That’s the role of IT, and even wearing rainbow-colored cloud glasses can’t eliminate the very real and important role IT has to play here.
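The broker logic described here is easy to sketch: filter candidate environments against the codified policy first, and only then optimize on cost. The cloud entries and capability flags below are invented for illustration – no real broker or provider API is being modeled:

```python
# Hypothetical sketch of a policy-aware cloud broker decision.
# Cloud names, rates, and capability flags are invented for illustration;
# a real broker would source these from provider audits and compliance data.

CLOUDS = [
    {"name": "public-a", "cost_per_hr": 0.12, "waf": True,  "default_deny": False},
    {"name": "public-b", "cost_per_hr": 0.09, "waf": False, "default_deny": True},
    {"name": "private",  "cost_per_hr": 0.30, "waf": True,  "default_deny": True},
]

def eligible(clouds, policy):
    """Keep only environments that satisfy every required capability."""
    return [c for c in clouds if all(c.get(req) for req in policy)]

def cheapest(clouds, policy):
    """Pick the lowest-cost environment that still meets policy, or None."""
    ok = eligible(clouds, policy)
    return min(ok, key=lambda c: c["cost_per_hr"]) if ok else None

# A PCI-DSS-style policy: web application firewall plus default-deny firewalls.
pci_policy = ["waf", "default_deny"]
print(cheapest(CLOUDS, pci_policy)["name"])  # "private" -- the only compliant option
```

Note the order of operations: cost never enters the picture until the policy filter has run, which is exactly the step a price-and-performance-only broker omits.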

The role of IT may be changing, transforming, but it is in no way being eliminated or decreasing in importance. In fact, given the nature of today’s environments and threat landscape, the importance of IT in helping to determine deployment locations that at a minimum meet organizational and regulatory requirements is paramount to enabling business users to have more control over their own destiny, as it were.

So while cloud brokers currently appear to be external services, often provided by SIs with a vested interest in cloud migration and the services they bring to the table, ultimately these beasts will become enterprise-deployed services capable of making policy-based decisions that include the technical details and requirements of application deployment along with the more businessy details such as costs.

The role of IT will never really be eliminated. It will morph, it will transform, it will expand and contract over time. But business and operational regulations cannot be encapsulated into policies without IT. And for those applications that cannot be deployed into public environments without violating those policies, there needs to be a controlled, local environment into which they can be deployed.



Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5’s entire product suite.

Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.

She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

 


Cloud Isn’t Social, It’s Business

Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS.

Much like devops is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is technology. Even as service providers (that includes cloud providers) need to look toward a business model based on revenue per application (as opposed to revenue per user) enterprise organizations need to look hard at their business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a “service oriented” approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the “tenant” of most enterprise (private) cloud implementations is an application, not a business entity. As a “provider of services”, IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization.

By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

  • Makes it easier to project costs across entire infrastructure
    Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate the costs based on usage for not just application infrastructure, but network infrastructure, as well, providing management and executives with a clearer view of what actual operating costs are for given projects, and enabling them to essentially line item veto services based on projected value added to the business by the project.
  • Easier to justify cost of infrastructure
    Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by applications means being able to tie services to projects in earlier phases and clearly show value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that business can understand what’s working and what’s not.
  • Enables business to manage costs over time 
    Instituting a “fee per hour” enables business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time. IT that adopts such a business model will not only encourage business stakeholders to take advantage of such functionality, but will offer more awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what’s really needed versus what’s not.
  • Easier to start up a project/application and ramp up over time as associated revenue increases
    Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green field start-up projects manage growth. If IT operations is service-based, then projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.
  • Enables consistent comparison with off-premise cloud computing
    A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice-versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on services required.

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand.

That’s cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised benefits in terms of efficiency it hasn’t necessarily taken the next step and realized the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to prove its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it’s the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.



The Operational Consistency Proxy

#devops #management #webperf Cloud makes more urgent the need to consistently manage infrastructure and its policies regardless of where that infrastructure might reside


While the potential for operational policy (performance, security, reliability, access, etc.) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.

F5 has long offered a single-pane-of-glass management solution for F5 systems with Enterprise Manager (EM), and recently introduced significant updates that increase its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.

AUTOMATE COMMON TASKS

The latest release of F5 EM includes enhancements to its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling/disabling resources, which assists in automating provisioning and de-provisioning processes as well as what many might consider mundane – and yet critical – maintenance window operations.
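As a feel for one of those mundane-but-critical tasks – tracking SSL certificate lifetimes – here is a hedged, standalone sketch using Python’s `ssl` module. This is not how EM implements it; it only illustrates the check being automated:

```python
# Standalone sketch: how many days until a server's certificate expires.
# Illustration only -- F5 EM performs this kind of tracking natively.
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(cert):
    """Parse the notAfter field from ssl.getpeercert()'s dict form."""
    expiry = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return expiry.replace(tzinfo=timezone.utc)

def days_until_expiry(host, port=443):
    """Connect, fetch the peer certificate, and count days to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return (parse_not_after(cert) - datetime.now(timezone.utc)).days

# Example (requires network access):
# print(days_until_expiry("example.com"))
```

An automated maintenance job would run a check like this across an inventory and raise an alert when the count drops below a threshold – which is precisely the “SSL certificate expiration” alert EM exposes.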

Updating policies, too, assists in maintaining operational consistency across all F5 solutions – whether in the data center or in the cloud. This is particularly important in the realm of security, where control over access to applications is often far less under the control of IT than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with the ability for F5 EM to manage such distributed instances in conjunction with data center deployed instances provides for consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across enterprise concerns.

The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.

INTEGRATED PERFORMANCE METRICS

F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5’s visibility into application performance based on its strategic location in the architecture – residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a plethora of application performance related statistics, which are then aggregated and reported on by application in EM’s Centralized Analytics Module.

These metrics enable capacity planning and troubleshooting, and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and its related impact, whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand.

Monitoring includes:

  • Device Level Visibility & Monitoring
  • Capacity Planning
  • Virtual Level & Pool Member Statistics
  • Object Level Visibility
  • Near Real-Time Graphics
  • Reporting

In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.

Alerts include:

  • Device status change
  • SSL certificate expiration
  • Software install complete
  • Software copy failure
  • Statistics data threshold
  • Configuration synchronization
  • Attack signature update
  • Clock skew

When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
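The threshold-then-trigger pattern described above is straightforward to sketch. The metric names and limits below are invented, and EM’s real alerting is configured in the product rather than coded – this only illustrates the control flow:

```python
# Hedged sketch of the threshold-and-alert pattern: compare a collected
# statistic against a configured limit and emit a syslog-style message.
# Metric names and thresholds are invented for illustration.
import logging

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("em-alerts")

THRESHOLDS = {"ssl_tps": 9000, "connections": 500_000}

def check(metric, value):
    """Return True (and log a warning) when the threshold is reached."""
    limit = THRESHOLDS[metric]
    if value >= limit:
        log.warning("threshold reached: %s=%d (limit %d)", metric, value, limit)
        return True
    return False

check("ssl_tps", 9500)    # at/above limit: logs a warning
check("connections", 12)  # below limit: no alert
```

In a real deployment the warning branch would hand off to email, an SNMP trap, a syslog collector, or an iControl-driven automation system rather than a local logger.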

OPERATIONAL CONSISTENCY PROXY

By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents whether performance or security related.

F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.


Happy Managing!



Introducing the F5 Technical Certification Program

#F5TCP #interop You are now. Introducing the F5 Technical Certification Program.


Can you explain the role of the Cache-Control HTTP header? How about the operational flow of data during an SMTP authentication exchange? Are you well-versed in the anatomy of an SSL handshake and the implications of encrypting data as it flows across the network?

Can you explain the features and functionalities of protocols and technologies specific to the Transport layer?

If so, then you won’t need to study nearly as much as many of your compatriots when you take the test to become an F5 Certified™ professional.
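As a warm-up for the first question above: Cache-Control is a comma-separated list of directives, some bare flags and some name=value pairs. A simplified sketch of parsing one (deliberately not a full RFC-compliant parser):

```python
# Simplified Cache-Control parser for illustration only; real header parsing
# must also handle quoted strings and other RFC edge cases.

def parse_cache_control(header):
    """Split a Cache-Control header into a dict of directives."""
    directives = {}
    for part in header.split(","):
        name, _, value = part.strip().partition("=")
        directives[name.lower()] = value or True  # bare flags become True
    return directives

cc = parse_cache_control("public, max-age=3600, must-revalidate")
print(cc["max-age"])  # 3600 -- responses may be cached for an hour
```

Knowing what each of those directives means for intermediaries and browsers – not just how to split the string – is the kind of application-layer understanding the certification targets.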

Introducing the F5 Technical Certification Program (F5-TCP)

F5 Certified™ individuals represent a new breed of technologist – capable of manipulating the entire application stack, from traditional networking knowledge all the way to advanced application-layer understanding, with a unique capability to integrate the two. Never before has any company created a program designed to bridge these worlds; a capability critical to the increasingly mobile and cloud-based solutions being implemented around the world today.

The need has always existed, but with the increasing focus on the abstraction of infrastructure through cloud computing and virtualization the need is greater today than ever for basic application delivery skills. Consider that at the heart of the elasticity promised by cloud computing is load balancing, and yet there is no general course or certification program through which a basic understanding of the technology can be achieved. There are no university courses in application delivery, no well-defined learning paths for new hires, no standard skills assessments. Vendors traditionally provide training but it is focused on product, not technology or general knowledge, leaving employees with highly specific skills that are not necessarily transferrable. This makes the transition to cloud more difficult as organizations struggle with integrating disparate application delivery technologies to ensure an operationally consistent environment without compromising on security or performance.

The F5-TCP focuses on both basic application delivery knowledge as well as a learning path through its application delivery products.

Starting with a core foundation in application delivery fundamentals, F5 Certified™ individuals will be able to focus on specific application delivery tracks through a well-defined learning path that leads to application delivery mastery.

Fundamentals being what they are – fundamental – the first step is to build a strong foundation in the technologies required to deploy and manage application delivery regardless of vendor or environment. Understanding core concepts such as the entire OSI model – including the impact of transport and application layer protocols and technologies on the network – is an invaluable skill today given the increasing focus on these layers over others when moving to highly virtualized and cloud computing environments.

As technologies continue to put pressure on IT to integrate more devices, more applications, and more environments, the application delivery tier becomes more critical to the ability of organizations not just to successfully integrate the technology, but to manage it, secure it, and deliver it in an operationally efficient way. Doing that requires skills; skills that IT organizations often lack. With no strong foundation in how to leverage such technology, it makes sense that organizations are simply not seeing the benefits of application delivery they could if they were able to fully take advantage of it.


Application delivery solutions are often underutilized and not well-understood in many IT organizations. According to research by Gartner, up to three-quarters of IT organizations that have deployed advanced application delivery controllers (ADCs) use them only for basic load balancing. When faced with performance or availability challenges, these organizations often overlook the already-deployed ADC, because it was purchased to solve basic server load balancing and is typically controlled by the network operations team.

Gartner: Three Phases to Improve Application Delivery Teams 

F5 is excited to embark on this effort and provide not just a “BIG-IP” certification, but the fundamental skills and knowledge necessary for organizations to incorporate application delivery as a first class citizen in its data center architecture and fully realize the benefits of application delivery.



The Encrypted Elephant in the Cloud Room

Encrypting data in the cloud is tricky and defies long-held best practices regarding key management. New kid on the block Porticor aims to change that.


Anyone who’s been around cryptography for a while understands that secure key management is a critical foundation for any security strategy involving encryption. Back in the day it was SSL, and an entire industry of solutions grew up specifically aimed at protecting the key to the kingdom – the master key. Tamper-resistant hardware devices are still required for some US Federal security standards under the FIPS banner, with specific security protections at the network and software levels providing additional assurance that the ever important key remains safe.

In many cases it’s advised that the master key is not even kept on the same premises as the systems that use it. It must be locked up, safely, offsite; transported via a secure briefcase, handcuffed to a security officer and guarded by dire wolves. With very, very big teeth.

No, I am not exaggerating. At least not much. The master key really is that important to the security of cryptography.

That’s why encryption in the cloud is such a tough nut to crack. Where, exactly, do you store the keys used to encrypt those Amazon S3 objects? Where, exactly, do you store the keys used to encrypt disk volumes in any cloud storage service?

Start-up Porticor has an answer, one that breaks (literally and figuratively) traditional models of key management and offers a pathway to a more secure method of managing cryptography in the cloud.

SPLIT-KEY ENCRYPTION

Porticor is a combination SaaS / IaaS solution designed to enable encryption of data at rest in IaaS environments with a focus on cloud, currently available on AWS and other clouds. It’s a combination not just in deployment model – which is rapidly becoming the norm for cloud-based services – but in architecture as well.

To avoid violating best practices with respect to key management – i.e., you don’t store the master key right next to the data it’s been used to encrypt – Porticor has developed a technique it calls “Split-Key Encryption.”

Data encryption comprises, you’ll recall, the execution of an encryption algorithm on the data using a secret key, the result of which is ciphertext. The secret key is the, if you’ll pardon the pun, secret to gaining access to that data once it has been encrypted. Storing it next to the data, then, is obviously a Very Bad Idea™ and as noted above the industry has already addressed the risk of doing so with a variety of solutions. Porticor takes a different approach by focusing on the security of the key not only from the perspective of its location but of its form.

The secret master key in Porticor’s system is actually a mathematical combination of the master key generated on a per-project (disk volume or S3 object) basis and a unique key created by the Porticor Virtual Key Management™ (PVKM™) system. The master key is half of the real key, and the PVKM-generated key the other half. Only by combining the two – mathematically – can you discover the true secret key needed to work with the encrypted data.

The PVKM-generated key is stored in Porticor’s SaaS-based key management system, while the master keys are stored in the Porticor virtual appliance, deployed in the cloud along with the data it’s protecting.

Because the secret key can only be derived algorithmically from the two halves, security is enhanced: the math used removes all hints to the value of the full key, making it impossible to recover the actual encryption key from just one half. No one can recreate the secret key without both halves at the same time. The combining function could be a simple concatenation, but it could also be a more complicated algebraic equation – ostensibly different for each set of keys, depending on the lengths to which Porticor wants to go to minimize the risk of the secret key being reconstructed.
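Porticor has not published its exact combining math, but the classic two-party split it describes can be sketched with a simple XOR scheme: one half is a random mask, the other is the key XORed with that mask, and either half alone is indistinguishable from random noise. The names below are purely illustrative.

```python
import secrets

def split_key(secret_key: bytes) -> tuple:
    """Split a key into two halves; either half alone is just random noise."""
    half_a = secrets.token_bytes(len(secret_key))              # random mask
    half_b = bytes(x ^ y for x, y in zip(secret_key, half_a))  # masked key
    return half_a, half_b

def join_key(half_a: bytes, half_b: bytes) -> bytes:
    """XOR the halves back together to recover the original key."""
    return bytes(x ^ y for x, y in zip(half_a, half_b))

key = secrets.token_bytes(32)               # e.g. a 256-bit volume key
appliance_half, pvkm_half = split_key(key)  # one half per storage location
assert join_key(appliance_half, pvkm_half) == key
```

Note that holding one half in the virtual appliance and the other in the PVKM service is what makes a compromise of either location alone useless to an attacker.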

Still, some folks might be concerned that the master key exists in the same environment as the data it ultimately protects. Porticor intends to address that by moving to a partially homomorphic key encryption scheme.

HOMOMORPHIC KEY ENCRYPTION

If you aren’t familiar with homomorphic encryption, there are several articles I’d encourage you to read, beginning with “Homomorphic Encryption” by Technology Review followed by Craig Stuntz’s “What is Homomorphic Encryption, and Why Should I Care?”  If you can’t get enough of equations and formulas, then wander over to Wikipedia and read its entry on Homomorphic Encryption as well.

Porticor itself has a brief discussion of the technology, but it is not nearly as deep as the aforementioned articles.

In a nutshell (in case you can’t bear to leave this page) homomorphic encryption is the fascinating property of some algorithms to work both on plaintext as well as on encrypted versions of the plaintext and come up with the same result. Executing the algorithm against encrypted data and then decrypting it gives the same result as executing the algorithm against the unencrypted version of the data. 
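The property is easy to see with textbook (unpadded) RSA, which is multiplicatively homomorphic: multiplying two ciphertexts and decrypting yields the product of the plaintexts. A toy-sized sketch follows – real deployments use 2048-bit moduli and padding, which deliberately destroys this property.

```python
# Textbook RSA: Enc(a) * Enc(b) mod n decrypts to (a * b) mod n.
p, q = 61, 53            # toy primes; real RSA uses 2048-bit moduli
n = p * q                # 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 12
product_cipher = (enc(a) * enc(b)) % n     # operate on ciphertexts only
assert dec(product_cipher) == (a * b) % n  # same result as plaintext multiply
```

Decrypting the product of the ciphertexts gives 84, exactly as if the multiplication had been performed on the plaintexts.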

So, what Porticor plans to do is apply homomorphic encryption to the keys, ensuring that the actual keys are no longer stored anywhere – unless, of course, you tuck them away someplace safe or write them down. The algorithms for joining the two keys are performed on the encrypted versions of the keys, resulting in an encrypted symmetric key specific to one resource – a disk volume or S3 object.
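As an illustration of that plan – not Porticor's actual, unpublished construction – suppose the joining function were multiplication modulo n. The toy RSA homomorphism above would then let a combining party produce the encrypted joined key from two encrypted halves without ever seeing either half in plain form.

```python
# Sketch: join two key halves in the encrypted domain, so the plain halves
# never coexist at the combining party. Toy textbook-RSA parameters; the
# halves and the join-by-multiplication rule are illustrative assumptions.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

master_half = 123        # held by the virtual appliance (toy value)
pvkm_half = 17           # held by the PVKM service (toy value)

# Each party submits only an encrypted half.
c_master = pow(master_half, e, n)
c_pvkm = pow(pvkm_half, e, n)

# The combiner multiplies ciphertexts; it never sees a plain half.
c_joined = (c_master * c_pvkm) % n

# Decryption yields the joined key (here: product of the halves mod n).
joined_key = pow(c_joined, d, n)
assert joined_key == (master_half * pvkm_half) % n
```

Only the final decryption, performed in memory at the moment the data is needed, ever materializes the joined key.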

The resulting system ensures that:

- No keys are ever on a disk in plain form.
- Master keys are never decrypted, so they are never known to anyone outside the application owner.
- The "second half" of each key (stored by the PVKM) is also never decrypted, and is never known to anyone – not even Porticor.
- Symmetric keys for a specific resource exist in memory only, are decrypted for use only when the actual data is needed, and are then discarded.

This effectively eliminates one more argument against cloud – that keys cannot adequately be secured.

In a traditional data encryption solution, the only thing you need to unlock the data is the secret key. With Porticor's split-key technology, you need both the PVKM-stored key and the master key, recombined, to derive it. Layer homomorphic key encryption on top of that, so the keys never actually exist anywhere in plain form, and you have a rejoinder to the claim that secure data and cloud simply cannot coexist.

In addition to the relative newness of the technique (and its untried nature at this point), the argument against homomorphic encryption of any kind is a familiar one: performance. Cryptography in general is by no means a fast operation, and there is more than a decade's worth of technology in the form of hardware acceleration (and associated performance tests) specifically designed to remediate the slow performance of cryptographic functions. Homomorphic encryption is notoriously slow, and the inability to leverage any kind of hardware acceleration in cloud computing environments offers no relief. Whether this performance penalty is worth the additional level of security such a system adds is largely a matter of conjecture, and highly dependent upon the balance between security and performance required by the organization.
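The scale of the penalty is easy to demonstrate: a single modular exponentiation on a 2048-bit modulus, the basic operation behind most public-key and homomorphic schemes, dwarfs a symmetric-style bit operation. A rough, machine-dependent micro-benchmark (the modulus here is an arbitrary odd number, not a real key):

```python
import timeit

# Arbitrary odd 2048-bit modulus for illustration only; not a real RSA key.
n = (1 << 2048) - 159
base, exp = 0xDEADBEEF, 65537   # 65537 is the customary public exponent

modexp_t = timeit.timeit(lambda: pow(base, exp, n), number=100)
xor_t = timeit.timeit(lambda: 0xDEADBEEF ^ 0x12345678, number=100)

# The gap is typically several orders of magnitude.
print(f"100 modexps: {modexp_t:.4f}s, 100 XORs: {xor_t:.6f}s")
assert modexp_t > xor_t
```

Fully homomorphic schemes sit orders of magnitude above even this, which is why Porticor's design applies the technique to keys only, not to bulk data.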



BIG-IP Solutions for Microsoft Private Cloud

Five of the top six services critical to cloud are application delivery services, all available with F5 BIG-IP.


The big news at MMS 2012 was focused on private cloud and Microsoft's latest solutions in the space with System Center 2012. Microsoft's news comes on the heels of IBM's latest foray with its PureSystems launch at its premier conference, IBM Pulse.

As has become common, while System Center 2012 addresses compute – the resource most commonly associated with cloud of any kind – and the means by which operational tasks can be codified, automated, and integrated, it does not delve too deeply into the network, leaving that task to its strategic partners.

One of those long-term partners is F5, and we take the task seriously. The benefits of private cloud are rooted in greater economies of scale through broader aggregation and provisioning of resources, as well as in its ability to provide flexible, reliable applications that are always available and rely on many critical infrastructure services. Applications are not islands of business functionality, after all; they rely upon a multitude of network-hosted services such as load balancing, identity and access management, and security services to ensure a consistent, secure end-user experience from anywhere, on any device. Five of the top six services seen as most critical to cloud implementations in a 2012 Network World Cloud survey are infrastructure services, all of which are supported by the application delivery tier.

The ability to consistently apply policies governing these aspects of every successful application deployment is critical to keeping the network aligned with the allocation of compute and storage resources. Without the network, applications cannot scale, reliability is variable, and security is compromised through fragmentation and complexity. The lack of a unified infrastructure architecture reduces the performance, scale, security and flexibility of cloud computing environments, both private and public. Thus, just as we ensured the elasticity and operational benefits associated with a more automated and integrated application delivery strategy for IBM, so have we done with respect to a Microsoft private cloud solution.

BIG-IP Solutions for Microsoft Private Cloud

BIG-IP solutions for Microsoft private cloud take advantage of key features and technologies in BIG-IP version 11.1, including F5's virtual Clustered Multiprocessing™ (vCMP™) technology, iControl®, F5's web services-enabled open application programming interface (API), administrative partitioning, and server name indication (SNI). Together, these features help reduce the cost and complexity of managing cloud infrastructures in multi-tenant environments. With BIG-IP v11.1, organizations reap the maximum benefits of conducting IT operations and application delivery services in the private cloud. Although these technologies are generally applicable to all cloud implementations – private, public, or hybrid – we also announced Microsoft-specific integration and support that enables organizations to extend automation and orchestration into the application delivery tier for maximum return on investment.

F5 Monitoring Pack for System Center
Provides two-way communication between BIG-IP devices and the System Center management console. Health monitoring, failover, and configuration synchronization of BIG-IP devices, along with customized alerting, Maintenance Mode, and Live Migration, occur within the Operations Manager component of System Center.

F5 Load Balancing Provider for System Center
Enables one-step, automated deployment of load balancing services through direct interoperability between the Virtual Machine Manager component of System Center 2012 and BIG-IP devices. BIG-IP devices are managed through the System Center user interface, and administrators can custom-define load balancing services.

Orchestrator component of System Center 2012
Provides F5 traffic management capabilities and takes advantage of workflows designed using the Orchestrator Runbook Designer. These custom workflows can then be published directly into System Center 2012 service catalogs and presented as a standard offering to the organization. This is made possible using the F5 iControl SDK, which gives customers the flexibility to choose a familiar development environment such as the Microsoft .NET Framework programming model or Windows PowerShell scripting.


[Diagram: F5 BIG-IP solutions for Microsoft private cloud]

Private cloud – as an approach to IT operations – calls for transformation of data centers, leveraging a few specific strategic points of control to aggregate and continuously re-allocate IT resources as needed, in such a way as to make software applications more like services that are always on and secured across users and devices. Private cloud itself is not a single, tangible solution today. It is a solution comprised of several key components, including power and cooling, compute, storage and network, management and monitoring tools, and the software applications and databases that end users need.

We’ve moved past the hype of private cloud and its potential benefits. Now organizations need a path, clearly marked, to help them build and deploy private clouds.

That’s part of F5’s goal – to provide the blueprints necessary to build out the application delivery tier to ensure a flexible, reliable and scalable foundation for the infrastructure services required to build and deploy private clouds.

Availability

The F5 Monitoring Pack for System Center and the F5 PRO-enabled Monitoring Pack for System Center are now available. The F5 Load Balancing Provider for System Center is available as a free download from the F5 DevCentral website. The F5 integration with the Orchestrator component of System Center 2012 is based on F5 iControl and Windows PowerShell, and is also free.

