Tag Archives: security
Four Things You Need to Know About PCI Compliance in the Cloud
By Andrew Hay, Chief Evangelist, CloudPassage
Andrew Hay is the Chief Evangelist at CloudPassage, Inc., where he is the lead advocate for its SaaS server security product portfolio. Prior to joining CloudPassage, Andrew was a Senior Security Analyst for 451 Research, where he provided technology vendors, private equity firms, venture capitalists and end users with strategic advisory services.
Anyone who’s done it will tell you that implementing controls that will pass a PCI audit is challenging enough in a traditional data center where everything is under your complete control. Cloud-based application and server hosting makes this even more complex. Cloud teams often hit a wall when it’s time to select and deploy PCI security controls for cloud server environments. Quite simply, the approaches we’ve come to rely on just don’t work in highly dynamic, less-controlled cloud environments. Things were much easier when all computing resources were behind the firewall with layers of network-deployed security controls between critical internal resources and the bad guys on the outside.
Addressing PCI DSS in cloud environments isn’t an insurmountable challenge. Luckily, there are ways to address the key challenges of operating a PCI DSS in-scope server in a cloud environment. The first step towards embracing cloud computing, however, is admitting (or in some cases learning) that your existing tools might not be capable of getting the job done.
Traditional security strategies were created at a time when cloud infrastructures did not exist and the only use of public, multi-tenant infrastructure was data communication via the Internet. Multi-tenant (and even some single-tenant) cloud hosting environments introduce many nuances, such as dynamic IP addressing of servers, cloud bursting, rapid deployment and equally rapid server decommissioning, that the vast majority of security tools cannot handle.
First Takeaway: The tools that you have relied upon for addressing PCI related concerns might not be built to handle the nuances of cloud environments.
The technical nature of cloud-hosting environments makes them more difficult to secure. A technique sometimes called “cloud-bursting” can be used to increase available compute power extremely rapidly by cloning virtual servers, typically within seconds to minutes. That’s certainly not enough time for manual security configuration or review.
Second Takeaway: Ensure that your chosen tools can be built into your cloud instance images to ensure security is part of the provisioning process.
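To make that concrete, here is a minimal sketch of what baking security into provisioning might look like, assuming an AWS environment managed with boto3; the AMI ID, key values and agent bootstrap URL are placeholders, and your cloud platform and security tooling will differ:

```python
# Minimal sketch: make security part of provisioning rather than an afterthought.
# Assumes a pre-hardened AMI and a hypothetical agent install script; the AMI ID
# and install URL below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data runs at first boot, so every burst/cloned instance registers its
# security agent before it ever serves traffic.
user_data = """#!/bin/bash
set -e
curl -sSL https://example.com/install-security-agent.sh | bash  # hypothetical agent bootstrap
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: your hardened, patched base image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "pci-scope", "Value": "true"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```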
While highly beneficial, high-speed scalability also means high-speed growth of vulnerabilities and attackable surface area. Using poorly secured images for cloud-bursting or failing to automate security in the stack means a growing threat of server compromise and nasty compliance problems during audits.
Third Takeaway: Vulnerabilities should be addressed prior to bursting or cloning your cloud servers and changes should be closely monitored to limit the expansion of your attackable surface area.
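As a rough illustration of “monitor changes closely,” the following Linux-only sketch records a baseline of listening TCP ports and flags anything new, one simple signal that a server’s attackable surface area has grown after cloning or bursting:

```python
# Sketch, Linux/IPv4 only: record a baseline of listening TCP ports and flag anything
# new, a simple way to watch attackable surface area grow after cloning/bursting.
import json
import pathlib

def listening_ports() -> set:
    """Parse /proc/net/tcp and return the set of ports in the LISTEN state."""
    ports = set()
    for line in pathlib.Path("/proc/net/tcp").read_text().splitlines()[1:]:
        fields = line.split()
        local_addr, state = fields[1], fields[3]
        if state == "0A":  # hex state 0A == LISTEN
            ports.add(int(local_addr.split(":")[1], 16))
    return ports

BASELINE = pathlib.Path("port-baseline.json")
current = listening_ports()
if BASELINE.exists():
    baseline = set(json.loads(BASELINE.read_text()))
    new = current - baseline
    if new:
        print(f"New listening ports since baseline: {sorted(new)}")
else:
    BASELINE.write_text(json.dumps(sorted(current)))
    print("Baseline recorded.")
```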
Traditional firewall technologies present another challenge in cloud environments. Network address assignment is far more dynamic in clouds, especially in public clouds. There is rarely a guarantee that your server will spin up with the same IP address every time. Current host-based firewalls can usually handle changes of this nature but what about firewall policies defined with specific source and destination IP addresses? How will you accurately keep track of cloud server assets or administer network access controls when IP addresses can change to an arbitrary address within a massive IP address space?
Fourth Takeaway: Ensure that your chosen tools can handle the dynamic nature of cloud environments without disrupting operations or administrative access.
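One hedged illustration of what that can look like, again assuming AWS and boto3: derive firewall rules from instance tags at runtime instead of hard-coding source IPs that change on every relaunch. The tag values and security group ID below are placeholders, and a production version would also reconcile or revoke stale rules:

```python
# Minimal sketch: resolve "which servers may talk to the database" from tags at
# runtime instead of hard-coding source IPs that change whenever servers respawn.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find the current private IPs of all app-tier instances, whatever they are today.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:role", "Values": ["app-tier"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

app_ips = [
    inst["PrivateIpAddress"]
    for res in reservations
    for inst in res["Instances"]
    if "PrivateIpAddress" in inst
]

# Rebuild the DB security group ingress from the live inventory rather than static
# addresses (a real tool would also remove entries for servers that no longer exist).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: database security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": f"{ip}/32", "Description": "app-tier"} for ip in app_ips],
    }],
)
```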
The auditing and assessment of deployed servers is an addressable challenge presented by cloud architectures. Deploying tools purpose-built for dynamic public, private and hybrid cloud environments will also ensure that your security scales alongside your cloud server deployments. Also, if you think of cloud servers as semi-static entities deployed on a dynamic architecture, you will be better prepared to help educate internal stakeholders, partners and assessors on the aforementioned cloud nuances – and how your organization has implemented safeguards to ensure adherence to PCI-DSS.
Woz on Cloud Dangers Started a Useful Conversation
When Apple co-founder and all-around tech icon Steve Wozniak was quoted as saying he expected horror stories from the cloud, and in the wake of a cautionary tale of total cloud hack horror from xxxxxx, it set off a useful round of comment.
Yesterday we had a guest post on the topic.
Today you might read the I, Cringely take, which, as can be expected, is full of his usual cobbled-together, but pretty effective, roll-your-own solutions.
Woz is Worried About “Everything Going to the Cloud” — the Real Issue is Giving Up Control
Guest Post By Nati Shalom, CTO and Founder of GigaSpaces
In a recent article, Steve Wozniak, who co-founded Apple with the late Steve Jobs, predicted “horrible problems” in the coming years as cloud-based computing takes hold.
“I really worry about everything going to the cloud,” he said. “I think it’s going to be horrendous. I think there are going to be a lot of horrible problems in the next five years. … With the cloud, you don’t own anything. You already signed it away.”
When I first read the title I thought Wozniak sounded like Larry Ellison two years ago, when he pitched that the cloud was pure hype, before making a 180-degree turn to acknowledge that Oracle wished to be a cloud vendor too.
Reading it more carefully, I realized the framing of the topic is just misleading. Wozniak actually touches on something that I hear more often as the cloud hype cycle moves from the Peak of Inflated Expectations into the Trough of Disillusionment.
Wozniak echoes an important lesson that, IMO, is a major part of the reason many of the companies that moved to the cloud have experienced lots of outages during the past months. I addressed several of these aspects in a recent blog post: Lessons from the Heroku/Amazon Outage.
When we move our operations to the cloud, we often assume that we’re outsourcing our data center operation completely, including our disaster recovery procedures. The truth is that when we move to the cloud we’re only outsourcing the infrastructure, not our operations, and the responsibility for how we use this infrastructure remains ours.
Choosing better tradeoffs between productivity and control
For most companies today, the main reason to move to the cloud in the first place was to gain better agility and productivity. But in starting this cloud journey, many of us found that we had to give up some measure of control to achieve that agility and productivity.
The good news is that as the industry matures there are more choices that provide better tradeoffs between productivity and control:
- Open source cloud such as OpenStack and CloudStack
- Private cloud offerings
- DevOps and automation tools such as Chef and Puppet
- Open source PaaS such as Cloudify, OpenShift and CloudFoundry
- DevOps and PaaS combined, such as Cloudify
As businesses look at cloud strategy today, there is no need to give up control to gain productivity. With technologies like Cloudify, businesses can get the best of both worlds.
- Apple Co-Founder Steve Wozniak Distrusts the Cloud: Is He Right?
- Mapping the Cloud/PaaS Stack
- Why Larry Don’t Get It
- Lessons from Zynga & Sony on moving from Amazon AWS
- Public vs Private clouds (Again!)- it’s not about the cost
- Putting DevOps and PaaS together with Cloudify
Nati Shalom is the CTO and founder of GigaSpaces and founder of the Israeli cloud.org consortium.
The Operational Consistency Proxy
#devops #management #webperf Cloud makes more urgent the need to consistently manage infrastructure and its policies regardless of where that infrastructure might reside
While the potential for operational policy (performance, security, reliability, access, etc.) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.
F5 has long offered a single-pane-of-glass management solution for F5 systems with Enterprise Manager (EM), and recently introduced significant updates that extend its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.
AUTOMATE COMMON TASKS
The latest release of F5 EM includes enhancements to its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling/disabling resources which assists in automating provisioning and de-provisioning processes as well as automating what many might consider mundane – and yet critical – maintenance window operations.
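As a generic illustration (not F5 EM’s implementation), the sort of certificate-expiration check that centralized tooling automates across a fleet might look like the sketch below; the hostnames are placeholders:

```python
# Generic illustration: check how long each managed endpoint's TLS certificate has
# left before expiry, the kind of task centralized management automates at scale.
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect, read the presented certificate, and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

for host in ["app1.example.com", "app2.example.com"]:  # placeholder inventory
    remaining = days_until_expiry(host)
    if remaining < 30:
        print(f"WARNING: {host} certificate expires in {remaining} days")
```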
Updating policies, too, assists in maintaining operational consistency across all F5 solutions – whether in the data center or in the cloud. This is particularly important in the realm of security, where access to applications is often far less under the control of IT than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with the ability for F5 EM to manage such distributed instances in conjunction with data center deployed instances provides for consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across enterprise concerns.
The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.
INTEGRATED PERFORMANCE METRICS
F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5’s visibility into application performance based on its strategic location in the architecture – residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a plethora of application performance related statistics, which are then aggregated and reported on by application in EM’s Centralized Analytics Module.
These metrics enable capacity planning and troubleshooting, and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and their related impact, whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand.
Monitoring includes:
- Device Level Visibility & Monitoring
- Capacity Planning
- Virtual Level & Pool Member Statistics
- Object Level Visibility
- Near Real-Time Graphics
- Reporting
In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.
Alerts include:
- Device status change
- SSL certificate expiration
- Software install complete
- Software copy failure
- Statistics data threshold
- Configuration synchronization
- Attack signature update
- Clock skew
When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
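For a rough sense of the syslog path, here is a generic sketch (not the EM implementation) of emitting a syslog alert to a central collector when a statistic crosses its configured threshold; the collector address, device name, metric and threshold values are illustrative:

```python
# Generic sketch of the syslog side of a threshold alert: when a monitored metric
# crosses its configured threshold, emit a syslog event that a central logging or
# alerting system can pick up.
import logging
import logging.handlers

logger = logging.getLogger("capacity-alerts")
logger.setLevel(logging.WARNING)
# Point at the central syslog collector; 514/UDP is the conventional syslog port.
logger.addHandler(logging.handlers.SysLogHandler(address=("logs.example.com", 514)))

def check_threshold(device: str, metric: str, value: float, threshold: float) -> None:
    """Raise a syslog alert when a statistic crosses its threshold."""
    if value >= threshold:
        logger.warning("device=%s metric=%s value=%.1f threshold=%.1f",
                       device, metric, value, threshold)

# Example: alert when connection count on a device exceeds its planned capacity.
check_threshold("bigip-01.example.com", "active_connections", 98500, 90000)
```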
OPERATIONAL CONSISTENCY PROXY
By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents whether performance or security related.
F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.
Additional Resources:
- F5 Enterprise Manager Overview
- In 5 Minutes or Less – Enterprise Manager v3.0
- Application Delivery Network Platform Management
Happy Managing!
Related blogs & articles:
- Devops Proverb: Process Practice Makes Perfect
- Persistent Threat Management
- F5 Friday: ADN = SDN at Layer 4-7
- Applying ‘Centralized Control, Decentralized Execution’ to Network Architecture
- F5 Friday: Avoiding the Operational Debt of Cloud
Dropbox Employee Account Hack Led to Customers being Spammed
Dropbox this week fessed up to having been hacked, most notably an employee account that contained a project document including a list of customer email addresses (at least it shows they use their own product). That resulted in a rash of spam that eventually led to the discovery of the compromised passwords.
A couple weeks ago, we started getting emails from some users about spam they were receiving at email addresses used only for Dropbox. We’ve been working hard to get to the bottom of this, and want to give you an update.
Our investigation found that usernames and passwords recently stolen from other websites were used to sign in to a small number of Dropbox accounts. We’ve contacted these users and have helped them protect their accounts.
A stolen password was also used to access an employee Dropbox account containing a project document with user email addresses. We believe this improper access is what led to the spam. We’re sorry about this, and have put additional controls in place to help make sure it doesn’t happen again.
They claim it was usernames and passwords stolen from other sites that led to the trickle-down effects on Dropbox accounts. Another reason to use a different password for every site you sign up for.
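For what it’s worth, generating a strong, unique password per site takes only a few lines (a password manager does the same thing for you); this is a minimal sketch, not a recommendation of any particular tool:

```python
# Minimal sketch of the advice above: a cryptographically random, unique password
# for every site, so one site's breach can't cascade into your other accounts.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 20) -> str:
    """Return a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for site in ["dropbox.com", "example-forum.net"]:  # every site gets its own credential
    print(site, new_password())
```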
Their post on the topic includes news of a new page that lets you examine all active logins to your account.
LinkedIn Leaked Password Statistics (Infographic)
Secure Remote Access for Businesses with Limited IT Staff and Budgets
With some of the recent breaches of restaurant chains, I’ve got to think that many of them were related to poor remote access practices. I say this because in all of my years of consulting, I have found that very weak controls around remote access are a lot more common than one would think. Even today you will commonly find things like POS servers directly accessible on the Internet via VNC, RDP, or pcAnywhere. I have even seen SQL databases that contain credit card data made directly accessible over the Internet.
Sometimes the organization itself is to blame, usually because they just don’t know any better. For many, this has been the standard way to connect with their restaurants or stores remotely, and they may lack the skills needed to set up secure remote access. Other times, and this is also very common, a vendor or service provider is responsible. I can’t tell you how many times I have found completely unsecured remote access set up and enabled by the POS vendor or service provider that the merchant didn’t even know about—or at least wasn’t told about the risks and compliance issues it creates. In one case I even found that the service provider had opened up a port on the firewall so they could connect directly to the POS SQL database across the Internet. No matter who is to blame, this needs to be fixed right away.
First, these organizations need to stop allowing systems in their restaurants/stores to be directly accessible across the Internet. It’s actually quite an easy fix if you have fairly recent firewall hardware. Set up an IPsec site-to-site VPN tunnel between each of your stores and the central office using some form of two-factor authentication. Certificate-based authentication along with a pre-shared key isn’t that hard to set up and meets PCI DSS requirements. Now you can provide vendors and service providers with remote access into your central office, where you can centrally log their activities and implement restrictions on what they will have access to at each of the stores. And remember that they also need to be using some form of two-factor authentication to access your environment.
If you are the type of business that doesn’t have full time connectivity from your stores back to your central office then remote access is a bit more complex to manage. Each of your locations needs to be configured to support client-to-site VPN connections from your own IT department as well as from your service providers and vendors. IPSEC or SSL VPNs can be set up on most of today’s small firewalls and UTM devices without much fuss. But remember that two-factor authentication is a requirement and some of these devices don’t support such strong authentication methods. For this type of connectivity, some form of hardware or software token or even SMS-based token code authentication is a good choice. Sometimes this involves the implementation of a separate two-factor authentication solution, but some firewall/UTM devices have two-factor authentication features built in. This is a big plus and makes setting up secure remote access less complex and less expensive. If you go with these types of remote access connections—direct connections to the stores—it’s very important to get the logs from remote access activity (as well as all other logs of course) from the firewalls pulled back into a central logging server for analysis and audit purposes.
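For readers curious what a “software token” actually does, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238) derives its six-digit code; the base32 shared secret is a placeholder, and real deployments use a vetted authenticator or vendor solution rather than hand-rolled code:

```python
# Sketch of how a time-based software token (TOTP, RFC 6238) derives its six-digit
# code from a shared secret and the current time window.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder base32 secret
```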
To get started, your first step should be to review your external PCI ASV scans to see if any remote console services are accessible from the Internet. Look for RDP (tcp port 3389), VNC (tcp port 5900), or PCAnywhere (tcp port 5631 and udp port 5632). Also look for databases such as MS SQL (tcp port 1433), MySQL (tcp port 3306), or PostgreSQL (tcp port 5432). If any of these show up then you should get working on a plan to implement secure and compliant remote access.
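If you want a quick spot check between ASV scans, a few lines of Python can probe those same TCP ports from outside your network; the store hostnames below are placeholders, and note that this simple check covers TCP only, so the pcAnywhere UDP status port (udp/5632) isn’t tested:

```python
# Quick spot check: do any of the risky remote-console or database ports answer
# from the Internet? Run from outside your network; your ASV scan should agree.
import socket

RISKY_PORTS = {
    3389: "RDP",
    5900: "VNC",
    5631: "pcAnywhere",
    1433: "MS SQL",
    3306: "MySQL",
    5432: "PostgreSQL",
}

def exposed_services(host: str, timeout: float = 2.0) -> list:
    """Return the names of risky TCP services that accept a connection on this host."""
    found = []
    for port, name in RISKY_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(f"{name} (tcp/{port})")
        except OSError:
            pass  # closed or filtered: nothing reachable from our vantage point
    return found

for store in ["store1.example.com", "store2.example.com"]:  # placeholder store hosts
    hits = exposed_services(store)
    if hits:
        print(f"{store}: EXPOSED -> {', '.join(hits)}")
```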
If you’re looking for more information, I’ll be hosting a security webinar on July 18th to cover common security mistakes and how your organization can avoid many of them!
Hosting.com Extends Security Offerings with Cloud Firewall Solution
Hosting.com, a leading provider of enterprise-class, cloud-based application availability and recovery solutions, today extended the security options for cloud customers with the announcement of their Cloud Firewall service. Leveraging Juniper Networks vGW Series Virtual Gateway, a comprehensive virtualization security platform, Cloud Firewall is a hypervisor-based, VMsafe-certified stateful virtual firewall with more than ten times the throughput of firewalls typically deployed in cloud environments. Cloud Firewall meets the needs of cloud customers looking for an easy, affordable way to comply with major regulatory and industry security standards and to lock down their virtual environments.
“Cloud Firewall expands protection for cloud customers who want higher levels of security and VM workload access control. We already provide the highest level of physical firewall protection and now, another option is available at a granular, VM level. This furthers our commitment to enterprise-class, Always Secure cloud solutions,” said Jim Potter, Vice President of Products at Hosting.com.
Cloud Firewall satisfies the dynamic security and compliance needs of IT managers by offering a self-managed firewall that can be deployed in minutes. Through rich instrumentation in the Hosting.com Customer Portal, customers view and administer their complete VM and VM group inventory, including virtual network settings and intra/inter-network traffic monitoring and access controls. Modifications to security rules can be made quickly and enforced nearly instantaneously through the Portal.
Companies with strict compliance mandates get granular control of VM traffic, without impacting the throughput of high-performance applications. Enterprise businesses with hybrid data solutions – those running on dedicated hardware servers in conjunction with workloads on cloud-based VMs – can add granular control and scalability to their virtual environment with Cloud Firewall, extending traditional perimeter-based security to the virtualized realm.
“The vGW platform that powers Cloud Firewall delivers layers of protection without the performance tradeoffs that users typically experience when implementing sophisticated security,” said Johnnie Konstantas, director of product marketing at Juniper Networks. “The innovations inherent in the hypervisor-based Cloud Firewall offer very compelling value to cloud service providers because they are able to maximize security and cloud VM capacity.”
Total Defense Acquires iSheriff
Total Defense, Inc., provider of solutions to combat the growing threat of cybercrime, today announced the company has acquired iSheriff. Together, the companies will offer “one of the most robust cloud security solutions on the market.”
“The days of employees safely accessing the internet from behind a corporate firewall are increasingly history for modern businesses. Today’s workforce is increasingly mobile, connecting through a broad array of devices and adopting cloud services at an accelerating pace. This reality requires a new approach to security,” said Paul Lipman, CEO at Total Defense. “A truly effective security solution requires a multi-layered approach. The cloud enables companies to very easily scale and deploy a powerful additional layer of security that is specifically tailored to today’s ‘de-perimeterized’ environments. As the security industry transitions, our acquisition of iSheriff puts us at the forefront of Internet security firms, by providing customers solid, best of breed, integrated security that’s managed through the cloud. We are thrilled to have the opportunity to truly make an impact and change the dynamic of the security market,” added Lipman.
Recently, Total Defense announced its first cloud product, Total Defense Cloud Security, an integrated cloud-based SaaS (Security as a Service) solution for Web and email protection. The new offering provides a powerful and versatile Web and email security platform that protects users anytime and anywhere. This game-changing solution provides a comprehensive additional layer of security that enhances the company’s existing endpoint solutions, giving Total Defense the advantage of a global cloud for real-time malware protection across multiple platforms.
Oscar Marquez, CEO & Director of the Board of iSheriff, commented, “I have shared a vision for transforming the way businesses consume internet security with Paul Lipman for some time. Becoming part of Total Defense creates an ideal synergy. Total Defense’s large base of customers and extensive network of global partners will quickly accelerate the growth of our cloud offerings giving Total Defense a multi-tenant solution to provision and manage their customers, partners and OEM providers. This coupled with our global cloud infrastructure and cloud security expertise and Total Defense’s complete line of internet security solutions make a formidable company even stronger.”
For more information about Total Defense and its products, please visit: www.totaldefense.com.