Archivo de la etiqueta: security

Tenable Network Security Gets $50 Million for Vulnerability Management

Tenable Network Security, Inc., whose software identifies network security gaps before they are exploited by attackers, today announced $50 million in first-round funding from Accel Partners.

Tenable will use the funds to expand its innovative security offerings and accelerate global growth – while deepening its research into evolving threats that are becoming a critical trust issue for CEOs, regulators and customers worldwide.

“Security is a mainstream issue – especially with the explosion of mobile, cloud and virtual computing,” said Ron Gula, CEO of Tenable. “Serious network attacks are far more common than anyone wants to publicly admit – and our customers count on us to keep their networks safe.”

Tenable is the top choice for businesses of all sizes, governments and universities to manage network threats. The company’s flagship vulnerability management products, Nessus and SecurityCenter, are used by the most demanding security professionals and compliance auditors at 15,000 organizations worldwide, including:

  • The entire U.S. Department of Defense, where Tenable has become the
    vulnerability management standard
  • 12 of the 14 U.S. Federal Civilian Departments
  • Top Financial Services Companies: Barclays, Deloitte, RBS, Morgan
    Stanley, T. Rowe Price, Visa
  • Technology Leaders: Spotify, Dell, Etsy, Google, Intel, Microsoft,
    Skype, Apple, Yahoo
  • Top Universities: Brown, Dartmouth, Michigan, Ohio State, Purdue
  • Healthcare Leaders: Coventry, HealthSouth, Johnson & Johnson,
    Merck, Scripps, Sutter Health
  • Key Energy Players: Chevron, Chesapeake Energy, ConocoPhillips,
    ConEdison, Duke Energy, TXU
  • Retailer Innovators: Chipotle, GSI, Meijer, Diapers.com, Zappos.com
  • Media Visionaries: British Sky Broadcasting, CBS, Comcast, Time
    Warner, 20th Century Fox
  • Telecom Providers: Alcatel Lucent, Bell Canada, British Telecom,
    Softbank Mobile, Verizon

Tenable’s user community numbers one million people who have learned the benefits of automated vulnerability scanning through the viral adoption of Nessus.

“Tenable is the thought leader in the rapidly growing and critical area of vulnerability assessment,” said Ping Li, General Partner at Accel, who will join Tenable’s board of directors. “IT security practitioners fight the constant battle to stay ahead of network attacks, and it’s only getting harder. Many of these practitioners globally rely on Tenable for their vulnerability management platform.”

Tenable’s SecurityCenter is the only security platform combining essential active and innovative passive vulnerability scanning, real-time network monitoring, and configuration and compliance management. Tenable’s Nessus, the industry’s most widely deployed vulnerability scanner, provides the deepest database of known vulnerabilities and compliance risks on the market today.

 


Four Things You Need to Know About PCI Compliance in the Cloud

By Andrew Hay, Chief Evangelist, CloudPassage

Andrew Hay is the Chief Evangelist at CloudPassage, Inc., where he is lead advocate for its SaaS server security product portfolio. Prior to joining CloudPassage, Andrew was a Senior Security Analyst for 451 Research, where he provided technology vendors, private equity firms, venture capitalists and end users with strategic advisory services.

Anyone who’s done it will tell you that implementing controls that will pass a PCI audit is challenging enough in a traditional data center where everything is under your complete control. Cloud-based application and server hosting makes this even more complex. Cloud teams often hit a wall when it’s time to select and deploy PCI security controls for cloud server environments. Quite simply, the approaches we’ve come to rely on just don’t work in highly dynamic, less-controlled cloud environments. Things were much easier when all computing resources were behind the firewall with layers of network-deployed security controls between critical internal resources and the bad guys on the outside.

Addressing PCI DSS in cloud environments isn’t an insurmountable challenge. Luckily, there are ways to address some of the key challenges of operating a PCI DSS in-scope server in a cloud environment. The first step toward embracing cloud computing, however, is admitting (or in some cases learning) that your existing tools might not be capable of getting the job done.

Traditional security strategies were created at a time when cloud infrastructures did not exist and the only use of public, multi-tenant infrastructure was data communications via the Internet. Multi-tenant (and even some single-tenant) cloud hosting environments introduce many nuances, such as dynamic IP addressing of servers, cloud bursting, rapid deployment and equally rapid server decommissioning, that the vast majority of security tools cannot handle.

First Takeaway: The tools that you have relied upon for addressing PCI-related concerns might not be built to handle the nuances of cloud environments.

The technical nature of cloud-hosting environments makes them more difficult to secure. A technique sometimes called “cloud-bursting” can be used to increase available compute power extremely rapidly by cloning virtual servers, typically within seconds to minutes. That’s certainly not enough time for manual security configuration or review.

Second Takeaway: Ensure that your chosen tools can be built into your cloud instance images so that security is part of the provisioning process.
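
To make the takeaway concrete, here is a minimal, hypothetical sketch using the AWS SDK for Python (boto3): a user-data script installs and registers a security agent at first boot, so every instance cloned or burst from the image comes up with security already in place. The AMI ID, installer URL, and agent key are placeholders, and the agent stands in for whatever tooling you choose.

    # Hypothetical sketch: bake security tooling into provisioning via EC2
    # user-data, so burst/cloned instances never boot without the agent.
    # Assumes boto3 credentials are configured; IDs and URLs are placeholders.
    import boto3

    USER_DATA = """#!/bin/bash
    # First-boot bootstrap: install and register a security agent (placeholder URL/key)
    curl -sSL https://packages.example.com/agent/install.sh | bash
    /opt/agent/bin/register --key "PLACEHOLDER_AGENT_KEY"
    """

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder hardened image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,  # runs before the server ever takes traffic
    )
    print(response["Instances"][0]["InstanceId"])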

While highly beneficial, high-speed scalability also means high-speed growth of vulnerabilities and attackable surface area. Using poorly secured images for cloud-bursting or failing to automate security in the stack means a growing threat of server compromise and nasty compliance problems during audits.

Third Takeaway: Vulnerabilities should be addressed prior to bursting or cloning your cloud servers and changes should be closely monitored to limit the expansion of your attackable surface area.

Traditional firewall technologies present another challenge in cloud environments. Network address assignment is far more dynamic in clouds, especially in public clouds. There is rarely a guarantee that your server will spin up with the same IP address every time. Current host-based firewalls can usually handle changes of this nature, but what about firewall policies defined with specific source and destination IP addresses? How will you accurately keep track of cloud server assets or administer network access controls when IP addresses can change to an arbitrary address within a massive IP address space?

Fourth Takeaway: Ensure that your chosen tools can handle the dynamic nature of cloud environments without disrupting operations or administrative access.
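
One way to meet this takeaway, sketched below under assumed AWS conventions, is to define firewall rules in terms of security-group membership rather than IP addresses, so the rules survive address churn. The group IDs are placeholders.

    # Hypothetical sketch: reference security groups, not IPs, so access
    # rules keep working no matter which addresses servers spin up with.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow the app tier to reach the database tier on 5432 (PostgreSQL),
    # regardless of the app servers' current IP addresses.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0fedcba9876543210",  # placeholder: database tier group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0123456789abcdef0"},  # placeholder: app tier group
            ],
        }],
    )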

The auditing and assessment of deployed servers is an addressable challenge presented by cloud architectures. Deploying tools purpose-built for dynamic public, private and hybrid cloud environments will also ensure that your security scales alongside your cloud server deployments. Also, if you think of cloud servers as semi-static entities deployed on a dynamic architecture, you will be better prepared to help educate internal stakeholders, partners and assessors on the aforementioned cloud nuances – and how your organization has implemented safeguards to ensure adherence to PCI-DSS.

 


Woz on Cloud Dangers Started a Useful Conversation

When Apple co-founder and all-around tech icon Steve Wozniak was quoted as saying he expected horror stories from the cloud, in the wake of a cautionary tale of total cloud hack horror from xxxxxx, it set off a useful round of commentary.

Yesterday we had a guest post on the topic.

Today you might read the I, Cringely take, which, as can be expected, is full of his usual cobbled-together, but pretty effective, roll-your-own solutions.


Woz is Worried About “Everything Going to the Cloud” — the Real Issue is Giving Up Control

Guest Post By Nati Shalom, CTO and Founder of GigaSpaces

In a recent article, Steve Wozniak, who co-founded Apple with the late Steve Jobs, predicted “horrible problems” in the coming years as cloud-based computing takes hold. 

“I really worry about everything going to the cloud,” Wozniak said. “I think it’s going to be horrendous. I think there are going to be a lot of horrible problems in the next five years. … With the cloud, you don’t own anything. You already signed it away.”

When I first read the title, I thought Wozniak sounded like Larry Ellison two years ago, when he pitched that the cloud was hype, before making a 180-degree turn and acknowledging that Oracle wished to be a cloud vendor too.

Reading it more carefully, I realized the framing of the topic is just misleading. Wozniak actually touches on something that I hear more often as the cloud hype cycle moves from the Peak of Inflated Expectations into the Trough of Disillusionment.

Wozniak echoes an important lesson that, IMO, is a major part of the reason many of the companies that moved to the cloud have experienced numerous outages during the past months. I addressed several of these aspects in a recent blog post: Lessons from the Heroku/Amazon Outage.

When we move our operations to the cloud, we often assume that we’re outsourcing our data center operations completely, including our disaster recovery procedures. The truth is that when we move to the cloud, we’re only outsourcing the infrastructure, not our operations, and the responsibility for how we use this infrastructure remains ours.

Choosing better tradeoffs between productivity and control

For most companies today, the main reason to move to the cloud in the first place is to gain better agility and productivity. But in starting this cloud journey, we found that we had to give up some measure of control to achieve that agility and productivity.

The good news is that as the industry matures, there are more choices that provide better tradeoffs between productivity and control:

  • Open source clouds such as OpenStack and CloudStack
  • Private cloud offerings
  • DevOps and automation tools such as Chef and Puppet
  • Open source PaaS such as Cloudify, OpenShift and CloudFoundry
  • DevOps and PaaS combined, such as Cloudify

As businesses look at cloud strategy today, there isn’t a need to trade control for productivity. With technologies like Cloudify, businesses can get the best of both worlds.



The Operational Consistency Proxy

#devops #management #webperf Cloud makes more urgent the need to consistently manage infrastructure and its policies regardless of where that infrastructure might reside


While the potential for operational policy (performance, security, reliability, access, etc.) diaspora is often mentioned in conjunction with cloud, it remains a very real issue within the traditional data center as well. Introducing cloud-deployed resources and applications only serves to exacerbate the problem.

F5 has long offered a single-pane-of-glass management solution for F5 systems with Enterprise Manager (EM), and recently introduced significant updates that increase its scope into the cloud and broaden its capabilities to simplify the increasingly complex operational tasks associated with managing security, performance, and reliability in a virtual world.

AUTOMATE COMMON TASKS

The latest release of F5 EM includes enhancements to its ability to automate common tasks such as configuring and managing SSL certificates, managing policies, and enabling/disabling resources. This assists in automating provisioning and de-provisioning processes, as well as what many might consider mundane, and yet critical, maintenance-window operations.
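
As a stand-alone illustration of why centralizing this matters (this is not EM itself, just the kind of check it automates), a certificate-expiry probe takes only a few lines of Python; the hostname below is a placeholder.

    # Illustration only: the sort of SSL-certificate expiry check that is
    # tedious per device but trivial once automated centrally.
    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> int:
        """Return the number of days before host's certificate expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    print(days_until_expiry("www.example.com"))  # placeholder hostname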

Updating policies, too, assists in maintaining operational consistency across all F5 solutions, whether in the data center or in the cloud. This is particularly important in the realm of security, where access to applications is often far less under IT’s control than even the business would like. Combining F5’s cloud-enabled solutions such as F5 Application Security Manager (ASM) and Access Policy Manager (APM) with the ability for F5 EM to manage such distributed instances in conjunction with data center deployed instances provides for consistent enforcement of security and access policies for applications regardless of their deployment location. For F5 ASM specifically, this extends to Live Signature updates, which can be downloaded by F5 EM and distributed to managed instances of F5 ASM to ensure the most up-to-date security across the enterprise.

The combination of centralized management with automation also ensures rapid response to activities such as the publication of CERT advisories. Operators can quickly determine from the centralized inventory the impact of such a vulnerability and take action to redress the situation.

INTEGRATED PERFORMANCE METRICS

F5 EM also includes an option to provision a Centralized Analytics Module. This module builds on F5’s visibility into application performance based on its strategic location in the architecture: residing in front of the applications for which performance is a concern. Individual instances of F5 solutions can be directed to gather a plethora of application performance statistics, which are then aggregated and reported on by application in EM’s Centralized Analytics Module.

These metrics enable capacity planning and troubleshooting, and can be used in conjunction with broader business intelligence efforts to understand the performance of applications and their related impact, whether those applications are in the cloud or in the data center. This global monitoring extends to F5 device health and performance, to ensure infrastructure services scale along with demand.

Monitoring includes:

  • Device Level Visibility & Monitoring
  • Capacity Planning
  • Virtual Level & Pool Member Statistics
  • Object Level Visibility
  • Near Real-Time Graphics
  • Reporting

In addition to monitoring, F5 EM can collect actionable data upon which thresholds can be determined and alerts can be configured.

Alerts include:

  • Device status change
  • SSL certificate expiration
  • Software install complete
  • Software copy failure
  • Statistics data threshold
  • Configuration synchronization
  • Attack signature update
  • Clock skew

When thresholds are reached, triggers send an alert via email, SNMP trap or syslog event. More sophisticated alerting and inclusion in broader automated, operational systems can be achieved by taking advantage of F5’s control-plane API, iControl. F5 EM is further able to proxy iControl-based applications, eliminating the need to communicate directly with each BIG-IP deployed.
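
For example, a monitoring script can query a BIG-IP through iControl’s REST interface (exposed on newer BIG-IP releases; the classic iControl API is SOAP-based) with nothing more than an HTTP client. This is a hedged sketch rather than F5 reference code; the host and credentials are placeholders, and certificate verification is disabled only for lab use.

    # Hypothetical sketch: list virtual servers via iControl REST.
    # Host and credentials are placeholders; verify=False is lab-only.
    import requests

    BIGIP = "https://bigip.example.com"
    AUTH = ("admin", "placeholder-password")

    resp = requests.get(BIGIP + "/mgmt/tm/ltm/virtual",
                        auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()

    for vs in resp.json().get("items", []):
        print(vs["name"], vs.get("destination"))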

OPERATIONAL CONSISTENCY PROXY

By acting as a centralized management and operational console for BIG-IP devices, F5 EM effectively proxies operational consistency across the data center and into the cloud. Its ability to collect and aggregate metrics provides a comprehensive view of application and infrastructure performance across the breadth and depth of the application delivery chain, enabling more rapid response to incidents, whether performance- or security-related.

F5 EM ensures consistency in both infrastructure configuration and operational policies, and actively participates in automation and orchestration efforts that can significantly decrease the pressure on operations when managing the critical application delivery network component of a highly distributed, cross-environment architecture.


Happy Managing!



Related blogs & articles:


read more

Dropbox Employee Account Hack Led to Customers being Spammed


Dropbox this week fessed up to having been hacked, most notably an employee account that contained project data, including a list of customer email addresses (at least it shows they use their own product). That resulted in a rash of spam that eventually led to the discovery of the compromised passwords.

A couple weeks ago, we started getting emails from some users about spam they were receiving at email addresses used only for Dropbox. We’ve been working hard to get to the bottom of this, and want to give you an update.

Our investigation found that usernames and passwords recently stolen from other websites were used to sign in to a small number of Dropbox accounts. We’ve contacted these users and have helped them protect their accounts.

A stolen password was also used to access an employee Dropbox account containing a project document with user email addresses. We believe this improper access is what led to the spam. We’re sorry about this, and have put additional controls in place to help make sure it doesn’t happen again.

They claim it was usernames and passwords stolen from other sites that led to the trickle-down effects on Dropbox accounts. Another reason to use a different password for every site you sign up for.
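
If you want a quick way to act on that advice, here is a minimal Python sketch: generate a long random password for each site and keep it in a password manager rather than in your head.

    # Minimal sketch: a unique, random password per site beats reuse.
    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # generate a fresh one for every signup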

Their post on the topic includes news of a new page that lets you examine all active logins to your account.


Secure Remote Access for Businesses with Limited IT Staff and Budgets

With some of the recent breaches of restaurant chains, I’ve got to think that many of them were related to poor remote access practices. I say this because in all of my years of consulting, I have found that very weak controls around remote access are a lot more common than one would think. Even today you will commonly find things like POS servers directly accessible on the Internet via VNC, RDP, or pcAnywhere. I have even seen SQL databases that contain credit card data made directly accessible over the Internet.

Sometimes the organization itself is to blame, usually because they just don’t know any better. For many, this has been the standard way to connect with their restaurants or stores remotely; they may lack the skills needed to set up secure remote access. Other times, and this is also very common, a vendor or service provider is responsible. I can’t tell you how many times I have found completely insecure remote access set up and enabled by the POS vendor or service provider that the merchant didn’t even know about, or at least wasn’t told about in terms of the risks and compliance issues it creates. In one case I even found that the service provider had opened up a port on the firewall so they could connect directly to the POS SQL database across the Internet. No matter who is to blame, this needs to be fixed right away.

First, these organizations need to stop allowing systems in their restaurants/stores to be directly accessible across the Internet. It’s actually quite an easy fix if you have fairly recent firewall hardware. Set up an IPsec site-to-site VPN tunnel between each of your stores and the central office using some form of two-factor authentication. Certificate-based authentication along with a pre-shared key isn’t that hard to set up and meets PCI DSS requirements. Now you can provide vendors and service providers with remote access into your central office, where you can centrally log their activities and implement restrictions on what they will have access to at each of the stores. And remember that they also need to be using some form of two-factor authentication to access your environment.

If you are the type of business that doesn’t have full-time connectivity from your stores back to your central office, then remote access is a bit more complex to manage. Each of your locations needs to be configured to support client-to-site VPN connections from your own IT department as well as from your service providers and vendors. IPsec or SSL VPNs can be set up on most of today’s small firewalls and UTM devices without much fuss. But remember that two-factor authentication is a requirement, and some of these devices don’t support such strong authentication methods. For this type of connectivity, some form of hardware or software token, or even SMS-based token code authentication, is a good choice. Sometimes this involves the implementation of a separate two-factor authentication solution, but some firewall/UTM devices have two-factor authentication features built in. This is a big plus and makes setting up secure remote access less complex and less expensive. If you go with these types of remote access connections (direct connections to the stores), it’s very important to get the logs from remote access activity (as well as all other logs, of course) pulled back from the firewalls into a central logging server for analysis and audit purposes.

To get started, your first step should be to review your external PCI ASV scans to see if any remote console services are accessible from the Internet. Look for RDP (TCP port 3389), VNC (TCP port 5900), or pcAnywhere (TCP port 5631 and UDP port 5632). Also look for databases such as MS SQL (TCP port 1433), MySQL (TCP port 3306), or PostgreSQL (TCP port 5432). If any of these show up, then you should get working on a plan to implement secure and compliant remote access.
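
As a quick spot check between ASV scans, a short Python sketch can probe a store’s public address for the TCP ports listed above (UDP 5632 for pcAnywhere needs a proper scanner and is omitted here). The hostname is a placeholder; only scan systems you are authorized to test.

    # Spot check for exposed remote-console and database TCP ports.
    # Not a substitute for your ASV scan; hostname is a placeholder.
    import socket

    RISKY_PORTS = {
        3389: "RDP",
        5900: "VNC",
        5631: "pcAnywhere",
        1433: "MS SQL",
        3306: "MySQL",
        5432: "PostgreSQL",
    }

    def check_host(host: str, timeout: float = 2.0) -> None:
        for port, service in sorted(RISKY_PORTS.items()):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:
                    print("OPEN %d/tcp (%s): should not be public!" % (port, service))
                else:
                    print("closed/filtered %d/tcp (%s)" % (port, service))

    check_host("store01.example.com")  # placeholder store address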

If you’re looking for more information, I’ll be hosting a security webinar on July 18th to cover common security mistakes and how your organization can avoid many of them!