Tag Archives: security

Big Data Without Security = Big Risk

Guest Post by C.J. Radford, VP of Cloud for Vormetric

Big Data initiatives are heating up. From financial services and government to healthcare, retail and manufacturing, organizations across most verticals are investing in Big Data to improve the quality and speed of decision making as well as enable better planning, forecasting, marketing and customer service. It’s clear to virtually everyone that Big Data represents a tremendous opportunity for organizations to increase both their productivity and financial performance.

According to Wipro, the leading regions taking on Big Data implementations are North America, Europe and Asia. To date, organizations in North America have amassed over 3,500 petabytes (PBs) of Big Data, organizations in Europe over 2,000 PBs, and organizations in Asia over 800 PBs. And we are still in the early days of Big Data – last year was all about investigation and this year is about execution; given this, it's widely expected that the global stockpile of data used for Big Data will continue to grow exponentially.

Despite all the goodness that can stem from Big Data, one has to consider the risks as well. Big Data confers enormous competitive advantage on organizations able to quickly analyze vast data sets and turn them into business value, yet it can also put sensitive data at risk of a breach or of violating privacy and compliance requirements. Big Data security is fast becoming a front-burner issue for organizations of all sizes. Why? Because Big Data without security = Big Risk.

The fact is, today’s cyber attacks are getting more sophisticated and attackers are changing their tactics in real time to get access to sensitive data in organizations around the globe. The barbarians have already breached your perimeter defenses and are inside the gates. For these advanced threat actors, Big Data represents an opportunity to steal an organization’s most sensitive business data, intellectual property and trade secrets for significant economic gain.

One approach used by these malicious actors to steal valuable data is by way of an Advanced Persistent Threat (APT). APTs are network attacks in which an unauthorized actor gains access to information by slipping in “under the radar” somehow. (Yes, legacy approaches like perimeter security are failing.) These attackers typically reside inside the firewall undetected for long periods of time (an average of 243 days, according to Mandiant’s most recent Threat Landscape Report), slowly gaining access to and stealing sensitive data.

Given that advanced attackers are already using APTs to target the most sensitive data within organizations, it's only a matter of time before attackers start targeting Big Data implementations. Since data is the new currency, it just makes sense for attackers to go after Big Data implementations, because that's where the big value is.

So, what does all this mean for today's business and security professionals? It means that when implementing Big Data, they need to take a holistic approach and ensure the organization can benefit from the results of Big Data in a manner that doesn't degrade its risk posture.

The best way to mitigate the risk of a Big Data breach is to reduce the attack surface and take a data-centric approach to securing Big Data implementations. These are the key steps:

Lock down sensitive data no matter the location.

The concept is simple: ensure your data is locked down regardless of whether it's in your own data center or hosted in the cloud. This means you should use advanced file-level encryption for structured and unstructured data with integrated key management. If you're relying upon a cloud service provider (CSP) and consuming Big Data as a service, it's critical to ensure that your CSP is taking the necessary precautions to lock down sensitive data. If your cloud provider doesn't have the capabilities in place or feels data security is your responsibility, ensure your encryption and key management solution is architecturally flexible enough to protect data both on-premise and in the cloud.
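
To make this concrete, here is a minimal sketch of file-level encryption in Python using the open-source cryptography library, with the key generated and held apart from the data. It illustrates the pattern only; the file paths are placeholders, and it does not represent any particular vendor's product.

```python
# Minimal sketch: file-level encryption with the key held outside the
# data store. A real deployment would use a dedicated key-management
# service rather than generating keys inline.
from cryptography.fernet import Fernet

def encrypt_file(plain_path: str, cipher_path: str, key: bytes) -> None:
    """Encrypt a file at rest; the key never lives beside the data."""
    f = Fernet(key)
    with open(plain_path, "rb") as src:
        token = f.encrypt(src.read())
    with open(cipher_path, "wb") as dst:
        dst.write(token)

def decrypt_file(cipher_path: str, key: bytes) -> bytes:
    """Return the plaintext for an authorized caller holding the key."""
    f = Fernet(key)
    with open(cipher_path, "rb") as src:
        return f.decrypt(src.read())

key = Fernet.generate_key()  # in practice, issued and stored by the key manager
encrypt_file("customers.csv", "customers.csv.enc", key)
```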

Manage access through strong policies.

Access to Big Data should only be granted to those authorized end users and business processes that absolutely need to view it. If the data is particularly sensitive, it is a business imperative to have strong policies in place to tightly govern access. Fine-grained access control is essential, including the ability to block access even by IT system administrators (they may need to do things like back up the data, but they don't need full access to that data as part of their jobs). Blocking access to data by IT system administrators becomes even more crucial when the data is located in the cloud and is not under an organization's direct control.
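
As a toy illustration of such fine-grained control (a deny-by-default sketch, not any specific product's policy engine; the roles and operations are hypothetical), administrators can be granted operational duties like backup without being able to read the data:

```python
# Deny-by-default policy sketch: each role gets only the operations it
# absolutely needs. Roles and operations here are hypothetical.
POLICY = {
    "sysadmin": {"backup", "restore"},  # operational duties, no data access
    "analyst": {"read"},                # may view data, may not administer it
    "etl_job": {"read", "write"},       # automated business process
}

def is_allowed(role: str, operation: str) -> bool:
    """Grant nothing unless the policy explicitly allows it."""
    return operation in POLICY.get(role, set())

assert is_allowed("sysadmin", "backup")
assert not is_allowed("sysadmin", "read")  # admins are blocked from the data itself
```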

Ensure ongoing visibility into user and IT process access to the data.

Security Intelligence is a "must have" when defending against APTs and other security threats. The intelligence gained can inform the actions needed to safeguard and protect what matters – an organization's sensitive data. End-user and IT processes that access Big Data should be logged and reported to the organization on a regular basis. And this level of visibility must apply whether your Big Data implementation is within your own infrastructure or in the cloud.
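
A minimal sketch of what that logging can look like (illustrative only; the principal and resource names are made up):

```python
# Record every access attempt to sensitive data, allowed or denied, so
# user and IT-process activity can be reported and reviewed regularly.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("data_access_audit")

def log_access(principal: str, resource: str, operation: str, allowed: bool) -> None:
    """Emit one structured audit record per access attempt."""
    audit.info(
        "time=%s principal=%s resource=%s op=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(),
        principal, resource, operation, allowed,
    )

log_access("backup_svc", "customers.csv.enc", "backup", True)
log_access("jsmith", "customers.csv.enc", "read", False)
```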

The bottom line: to manage that risk effectively, you need to lock down your sensitive data, manage access to it through policy, and ensure ongoing visibility into both user and IT processes that access it. Big Data is a tremendous opportunity for organizations like yours to reap big benefits, as long as you proactively manage the business risks.


You can follow C.J. Radford on Twitter @CJRad.

Survey Infographic: Customers Relying on Virtualization Vendors for Security

BeyondTrust has released a survey, Virtual Insecurity, which reveals that organizations are relying heavily on virtualization vendors for security, if they address security at all. Key survey takeaways from the 346 respondents include:

  • 42 percent do not use security tools regularly as part of their virtual systems administration
  • 34 percent lean heavily on antivirus protection as a primary security tool
  • 57 percent often use existing image templates for new virtual images
  • Nearly 3 out of every 4 respondents say that up to a quarter of virtual guests are offline at any given time
  • 64 percent have no security controls in place that require a security sign off prior to releasing a new virtual image or template

Here’s an infographic based on these results:

[Infographic: Virtual Insecurity]

Parallels supports Cisco researcher's assessment: "website operators and administrators must keep systems up-to-date."

Recently, a Cisco security research analyst used an old Parallels Plesk Panel vulnerability as an example of why it is important to patch servers that may be running old software. His point is valid, and Parallels agrees fully that "the active exploit of this year-old vulnerability serves as an important reminder that website operators and administrators must keep systems up-to-date."

It turns out the exploit this researcher was referring to was (a) for Parallels Plesk Panel 9.3 and earlier – products from 2009 and earlier that are now at end-of-life – and (b) in the third-party Horde webmail component, not in the Parallels Plesk control panel itself. A patch was promptly issued by Parallels in February 2012.

This reported vulnerability – which is certainly not anything new, considering the patch has been out for over a year – was later confused in some subsequent blogs and comments with another vulnerability in Parallels Plesk 10.3 and earlier versions (products from summer 2011 and earlier), also discovered and fixed in February 2012. Though the then-current version of Parallels Plesk Panel, 10.4, did not have this vulnerability, Parallels immediately issued a security advisory and patches in February 2012 for all prior impacted versions and advised partners about actions to take. Additionally, Parallels created a comprehensive page on securing Parallels Plesk Panel and a Malware Removal Tool, responding quickly and thoroughly to these exploits.

For Parallels partners who install patches and reset passwords, Parallels Plesk Panel is not subject to this vulnerability. Customers running Parallels Plesk Panel 10.4 and 11 never had this vulnerability in the first place.

Parallels agrees that the point of the Cisco researcher is still very valid: "The active exploit of this year-old vulnerability serves as an important reminder that website operators and administrators must keep systems up-to-date. This is especially urgent with vulnerabilities that are remotely detectable. This means not just the operating system, but every program and add-on for those programs also needs to be kept up-to-date. A vulnerability left unpatched in any one of them can lead to total system compromise."

We strongly encourage our customers to subscribe to our support e-mails by clicking here, subscribe to our RSS feed here and add our KnowledgeBase browser plug-in here.

Parallels Plesk Panel 11 and the upcoming 11.5 are the most secure versions ever, and we strongly encourage our Partners and customers to upgrade to these versions. In Parallels Plesk Panel 11, all Security Updates are clearly reported in the panel. Partners can force Security Updates when they choose. The option to turn on auto-upgrades is also highly recommended for anyone on Parallels Plesk Panel 10 or above. It is the best way to stay fully secure.

– The Parallels Plesk Panel Team

Locking Down the Cloud

Guest Post by Pontus Noren, director and co-founder, Cloudreach.

The good news for cloud providers is that forward-thinking CIOs are rushing to embrace all things ‘cloud’, realising that it provides a flexible and cost-effective option for IT infrastructure, data storage and software applications. The bad news is that the most significant obstacle to implementation could be internal: coming from other parts of the organisation where enduring myths about legal implications, security and privacy issues remain. The reality is that today such fears are largely unfounded. CIOs need help in communicating this to their more reluctant colleagues if they want to make the move to the cloud a success.

Myth No 1: The Security Scare

In many cases, moving to the cloud can in fact represent a security upgrade for the organisation. Since the introduction of cloud-based computing and data storage around ten years ago, the issue of security has been so high profile that reputable cloud providers have made vast investments in their security set-ups – investments that an individual organisation would be unable to match cost-effectively, given the far smaller scale on which it operates.

For example, data stored in the cloud is backed up, encrypted and replicated across multiple geographically distributed data centres in order to protect it from the impact of natural disasters or physical breaches. All this takes place under the watchful eyes of dedicated data centre security experts. If you compare this to the traditional in-house approach – which all too frequently sees data stored on a single server located somewhere in the basement of an office – it is not difficult to see which is the more secure option. By working with an established and respected cloud provider, such as Google or Amazon Web Services, businesses can benefit from such comprehensive security measures without having to make the investment themselves.

Myth No 2: Data in Danger

Security and data privacy are closely related, but different issues. Security is mainly about physical measures taken to mitigate risks, while ‘privacy’ is more of a legal issue about who can access sensitive data, how it is processed, whether or not it is being moved and where it is at any moment in time.

Concerns around compliance with in-country data protection regulations are rife, especially when dealing with other countries. Across Europe, for example, data protection laws vary from country to country, with very strict guidelines about where data can be stored. A substantial amount of data cannot be moved across geographical boundaries, so the security practice of replicating data across the globe has far-reaching compliance implications for data protection. However, data protection legislation states that there is always a data processor and a data controller, and a customer never actually 'hands over' its data. This doesn't change when the cloud is involved – all large and reputable cloud services providers are only ever the data processor. In practice, the provider only ever processes data on behalf of its customer, and the customer always retains ownership of its data and the role of data controller.

However, much of data protection law predates the cloud and is taking a while to catch up. Change is most definitely on its way. Proposed European legislation aims to make data protection laws consistent across Europe, and with highly data-restricted industries such as financial services now starting to move beyond private clouds into public cloud adoption, further change is likely to follow as organisations start to feel reassured.

So what can CIOs do to change perceptions? It comes down to three simple steps:

  • Be Specific – Identify your organisation's top ten queries and concerns and address these clearly.
  • Be Bold – Cloud computing is a well-trodden path and should not be seen as the future, rather as the now. Having tackled company concerns head on, it is important to make the jump and not just dip a toe in the water.
  • Be Early – Engage reluctant individuals early on in the implementation process, making them part of the change. This way CIOs can fend off ill-informed efforts to derail cloud plans and ensure buy-in from the people who will be using the new systems and services.

The cloud has been around for a while now and is a trusted and secure option for businesses of all sizes and across all sectors. In fact, there are more than 50 million business users of Google Apps worldwide. It can hold its own in the face of security and privacy concerns. CIOs have an important role to play in reassuring and informing colleagues so that the firm can harness the many benefits of the cloud, future-proof the business and release IT expertise to add value across the organisation. Don't let fear leave your organisation on the sidelines.

Pontus Noren is director and co-founder, Cloudreach.

SmartRules® DLP Thwarts Email Distribution of Confidential Info

New Zealand-owned cloud email security and hosting company SMX has released SmartRules® DLP, designed to safeguard confidential information against unauthorized email distribution.

SmartRules DLP (Data Loss Prevention) is one of a number of new service improvements currently being rolled out by SMX, following research and development support from Callaghan Innovation.

SMX’s co-founder and chief technology officer, Thom Hooker, says the R&D funding has enabled SMX to accelerate software development in several key areas. He says SmartRules® DLP has been given urgent priority, following the recent security breaches experienced by Government organizations.

“SMX is the leading cloud email security solution used by Government organizations with around 60 Government sector customers,” Thom Hooker says. “SmartRules® DLP meets the most stringent compliance requirements with easy-to-use rule building and related compliance processes.

“Email makes it very easy for employees to accidentally – or intentionally – send sensitive documents to recipients outside the organization,” Hooker says. “By deploying SMX’s SmartRules® DLP, customers can define rules to block and report on employees attempting to send sensitive documents externally. SmartRules® DLP can be configured to detect visible data as well as scanning for hidden metadata. The use of hidden metadata tags inside documents makes it harder for users to subvert DLP rules looking for visible text – that is, by changing the document name.”

Hooker says SMX's SmartRules® DLP can also detect sensitive content embedded in archives – such as .zip, .rar, .tar and .gz – and can be configured to block emails containing archives that cannot be opened, for example password-protected archives or unknown document types.
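
As a generic illustration of one such rule (not SMX's actual implementation), a mail filter can flag .zip attachments it cannot inspect, such as password-protected archives:

```python
# Illustrative DLP check: treat a .zip attachment as blockable if it is
# corrupt or any member is encrypted (bit 0 of flag_bits per the PKZIP spec).
import zipfile

def zip_is_inspectable(path: str) -> bool:
    """Return False when the archive cannot be scanned for sensitive content."""
    try:
        with zipfile.ZipFile(path) as zf:
            return not any(info.flag_bits & 0x1 for info in zf.infolist())
    except zipfile.BadZipFile:
        return False

# A filter might quarantine the message when this returns False.
```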

Another significant new enhancement to the SMX Cloud Email Security Suite, Hooker says, will be beefing up the SMX email hosting platform with enterprise-grade security, reliability and new features. SMX will offer 100 percent availability, as well as enterprise-ready tools such as shared calendars, online data storage similar to Dropbox, global address books and support for ActiveSync to sync contacts, emails and calendars with mobile devices.

Oops! Startup DigitalOcean Forgets to Format Recycled Drives, Exposing Private Data

Wired has a cautionary tale for you to read as you consider the perils as well as the promise of cloud computing.

New York startup DigitalOcean says that its cloud server platform may be leaking data between its customers.

Kenneth White stumbled across several gigabytes of someone else’s data when he was noodling around on DigitalOcean’s service last week. White, who is chief of biomedical informatics with Social and Scientific Systems, found e-mail addresses, web links, website code and even strings that look like usernames and passwords — things like 1234qwe and 1234567passwd.

The problem started in mid-January, when DigitalOcean introduced a new solid state drive storage service. "The code that wipes the data — that securely deletes the data — was not being activated under the new SSD storage plans," according to DigitalOcean CEO Ben Uretsky.
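
For illustration, the missing scrub amounts to overwriting a recycled volume before it is handed to the next customer. A simplified sketch of that step (real platforms wipe at the block-device layer, and the path and size here are hypothetical):

```python
# Zero-fill a recycled volume so the next tenant cannot read the
# previous tenant's data. Purely illustrative of the missing wipe step.
CHUNK = 1024 * 1024  # write 1 MiB at a time

def zero_fill(device_path: str, size_bytes: int) -> None:
    with open(device_path, "r+b") as dev:
        written = 0
        while written < size_bytes:
            n = min(CHUNK, size_bytes - written)
            dev.write(b"\x00" * n)
            written += n
        dev.flush()

# zero_fill("/dev/recycled_volume", 20 * 1024**3)  # hypothetical 20 GB volume
```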

Read the details (and weep).

SaaSID Releases CAM 2.0, Adding Audit Dashboard for Security, Compliance

Web application security provider SaaSID has launched Cloud Application Manager 2.0 (CAM), the latest version of its browser-based authentication, management and auditing solution. CAM 2.0's comprehensive audit report is now displayed in CAM Analytics, an intuitive dashboard that provides clear visibility of Web application use throughout an organization. The new software simplifies administration of authentication, feature controls and password management to help CIOs comply with data security regulations, standards and internal policies by making it easier to govern, monitor and audit every user interaction with Web applications.

CAM 2.0's comprehensive suite of dashboards in CAM Analytics provides at-a-glance graphics, showing managers exactly how employees are interacting with Web applications and associated corporate data, regardless of whether employees are working on company workstations or personally-owned computing devices. Detailed analytics provide managers with a complete overview of Web application use and the ability to drill down into reports for additional information. Activities such as exporting customer lists, or attaching sensitive files to Webmail, are tracked and clearly displayed for compliance. A range of graphic elements shows social media activity and interactions with corporate applications, providing managers with complete visibility of departmental and individual use of Web applications.

CAM 2.0 users can now be authenticated and logged into Web applications from the SaaSID server. This server-side authentication improves security by ensuring that log-in credentials are protected from malware that might be present on an unsecured device. Users do not know their login details, so they cannot write them down, share them, or access managed applications from unprotected devices. Once CAM 2.0 has authenticated a user, the session is handed to the device and the user works with the application as normal.
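
The general pattern looks something like the following sketch (an assumption about the approach, not SaaSID's actual mechanism; the URL and form fields are invented): the gateway performs the login and forwards only the resulting session, never the password.

```python
# Server-side login sketch: credentials stay on the gateway, and only
# the authenticated session cookie is handed to the user's browser.
import requests

def server_side_login(login_url: str, username: str, password: str) -> dict:
    """Authenticate from the server; return session cookies, not credentials."""
    session = requests.Session()
    session.post(login_url, data={"user": username, "pass": password})
    return session.cookies.get_dict()

# Hypothetical endpoint; the browser receives cookies, never the password.
cookies = server_side_login("https://app.example.com/login", "jsmith", "s3cret!")
```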

Additional new features within CAM 2.0 include:

  • A new Restriction Learning feature, which allows in-house IT staff to apply their own restrictions to application features. A simple GUI allows administrators to test the effect of restrictions prior to implementation.
  • Support for more two factor authentication solutions, including offerings from RSA, Vasco and ActivIdentity.
  • The new Password Wizard, which learns the workflow for Web application authentication processes, enabling automated password resets. Organisations can use this new feature to change passwords at chosen intervals and to enforce strong password security for all Web applications managed by CAM 2.0, saving administration time and support costs without impeding productivity (a sketch of the rotation step follows this list).
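
A rough sketch of that rotation step, using Python's secrets module (the set_password_for_app callback stands in for the application workflow the wizard learns; it is not CAM's API):

```python
# Generate a strong random password and push it through the learned
# password-change workflow; the user never needs to see the new secret.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def strong_password(length: int = 20) -> str:
    """Cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate(set_password_for_app) -> str:
    """Run one scheduled reset via the app's (learned) change-password step."""
    new_secret = strong_password()
    set_password_for_app(new_secret)  # hypothetical callback
    return new_secret
```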

CAM is a browser extension that goes beyond single sign-on (SSO) by enabling IT staff to manage Web application features according to employee roles. CAM assists organisations in maintaining security and compliance when they adopt Web applications and implement bring your own device (BYOD) programmes, by creating a comprehensive audit trail of all employee interactions with these Web applications.

To request a free trial or a demo of SaaSID’s CAM 2.0, see www.saasid.com.

CloudBerry Adds SFTP to Explorer 3.8

CloudBerry Lab, a provider of backup and management solutions for public cloud storage services, has added secure FTP support to the newest release of CloudBerry Explorer, version 3.8, an application for accessing, moving and managing data in remote locations such as FTP servers and public cloud storage services, including Amazon S3, Amazon Glacier, Windows Azure, OpenStack and others.

In the new version of CloudBerry Explorer, an SFTP server is supported as one of the remote location options. Users can now perform file access, file transfer and file management operations across SFTP servers and local storage.

Secure File Transfer Protocol (SFTP), also known as SSH File Transfer Protocol, is an extension of the SSH-2 protocol that provides a secure file transfer capability. The protocol assumes that it runs over a secure channel such as SSH, that the server has already authenticated the client, and that the identity of the client user is available to the protocol.
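
For readers who script such transfers directly, the same operations look like this with the widely used paramiko library (an independent example, since CloudBerry Explorer itself is a GUI application; the host and credentials are placeholders):

```python
# Upload, download and list files over SFTP with paramiko.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
client.connect("sftp.example.com", username="user", password="secret")

sftp = client.open_sftp()
sftp.put("local_report.csv", "/remote/reports/report.csv")  # file transfer (upload)
sftp.get("/remote/reports/report.csv", "report_copy.csv")   # file access (download)
print(sftp.listdir("/remote/reports"))                      # file management
sftp.close()
client.close()
```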

Do You Know the Top Threats to Cloud Security?

Where computing goes, trouble follows — in the form of hackers, disgruntled employees, and plain old destructive bugs. And as computing moves to the Cloud (it says so right there in our logo!), that's where some of the newest threats are emerging.

The Cloud Security Alliance has identified The Notorious Nine (registration required), the top nine cloud computing threats for 2013:

  • Data breaches
  • Data loss
  • Account and traffic hijacking
  • Insecure interfaces and APIs
  • Denial of service attacks
  • Malicious insiders
  • Cloud "abuse" (using the power of the cloud to crack passwords)
  • Lack of due diligence
  • Shared technology platforms leading to shared vulnerabilities

Mandiant, Palo Alto Networks Partner for Malware Security

Mandiant has announced that it will team with Palo Alto Networks, a network security company, to integrate Palo Alto Networks' firewalls and its WildFire malware prevention subscription with Mandiant's recently announced product, Mandiant for Security Operations. Both companies will be presenting their solutions at the RSA Conference 2013 in San Francisco from February 25th to 28th.

The joint solution from Palo Alto Networks and Mandiant provides a holistic approach to thwarting advanced attackers by integrating malware detection and prevention capabilities on the network with the ability to resolve security incidents on endpoints. With this integration, Mandiant for Security Operations will automatically generate Indicators of Compromise (IOCs) based on malware alerts generated by the Palo Alto Networks platform and identify which endpoints have been compromised. The WildFire modern malware prevention service uses the inherent advantages of Palo Alto Networks next-generation firewalls to find new types of malware that have never been seen before, across all applications – not just Web and email. To date, WildFire has discovered more than 70,000 new malware files that had not been identified by existing anti-malware solutions.
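
As a generic illustration of how alert-derived indicators can be checked on endpoints (not Mandiant's implementation; the indicator value is a placeholder), an agent can hash files and compare them against known-bad values:

```python
# Hash files on an endpoint and flag matches against IOC hashes derived
# from network malware alerts. Illustrative only.
import hashlib
import os

IOC_SHA256 = {
    "0" * 64,  # placeholder; real indicators come from the malware alert feed
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str):
    """Yield paths whose contents match a known indicator of compromise."""
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if sha256_of(path) in IOC_SHA256:
                    yield path
            except OSError:
                continue
```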

“Our mutual customers view this joint solution as a significant advantage to creating actionable insights to assess risk, prevent threats, and improve security,” said Chad Kinzelberg, senior vice president of business and corporate development, Palo Alto Networks. “We are also confident that this strategic partnership will continue to lead our industry in security intelligence for enterprise organizations.”

Mandiant for Security Operations is an appliance-based solution that utilizes a lightweight agent deployed on endpoints to enable security teams to confidently detect, analyze and resolve security incidents in a fraction of the time it takes using conventional approaches.

Palo Alto Networks offers a subscription service for WildFire, the company’s cloud-based modern malware prevention service. The WildFire service gives subscribers one-hour response times for the delivery of modern malware signatures, and integrated, on-box logging and reporting. The enhanced response time ensures that the damage caused by attackers using “zero-day” malware is mitigated for Palo Alto Networks customers.

“The tactics of targeted attackers and well-funded adversaries are constantly evolving,” said Mandiant’s Chief Technology Officer, Dave Merkel. “With the integration of the WildFire subscription malware detection service and Mandiant for Security Operations, security professionals will now be able to respond to threats faster and automatically investigate alerts from WildFire so they can confirm and resolve targeted attacks as they are unfolding.”