Category Archives: Data security

5 Reasons Why Companies Are Using Macs

Mac devices have been gaining traction in business for many years. This is due, on one hand, to the fact that employees tend to choose Macs when given the option under a CYOD (choose your own device) scheme, and on the other, to companies such as Axel […]


Can Safe Harbour stay afloat?

When the European Court of Justice declared the US-EU Safe Harbour framework invalid in the case of Schrems v Data Protection Commissioner, some 4,500 companies began to panic. Many are still struggling to decide what to do: should they implement an alternative method of transferring personal data from the EEA to the US, or should they simply wait to see what happens next?

Waiting is a risky game, as the grace period granted by the European data protection authorities (DPAs) extends only until 31 January 2016, by which time companies must have their cross-Atlantic data transfers in order. After this date, enforcement action may be taken against those transferring personal data without a suitable mechanism in place to ensure adequate protection of that data. Although the slow churning of US and EU authorities negotiating a replacement for Safe Harbour can be heard in the distance, no timeline has yet been set for its implementation. There is also the added complication of the newly approved EU General Data Protection Regulation, which is likely to muddy the waters of an already murky negotiation.

Will Safe Harbour 2.0 come to the rescue?

According to the European Commissioner for Justice, Consumers and Gender Equality (the Commissioner), the negotiations on ‘Safe Harbour 2’ continue, undoubtedly under added pressure following the invalidation of the original Safe Harbour framework. Whilst both sides understand the sense of urgency, no proposal has yet met the needs of both the national security services and the European DPAs.

In Autumn 2013, the European Commission produced a report setting out 13 recommendations for improving Safe Harbour. Recommendation 13 required that the Safe Harbour national security exception be used only to the extent strictly necessary, and it remains a sticking point in negotiations. Human rights and privacy organisations have little hope that these hurdles will be effectively overcome: in November 2015, a letter was sent to the Commissioner from EU and US NGOs, urging politicians to commit to a comprehensive modernisation of data protection laws on both sides of the Atlantic.

Of course, the real bridge to cross is on US law reform, which the Commissioner sees as more about guaranteeing EU rules in the US than changing US law. It seems the ball is very much in the North American court.

Do not, however, be fooled by the House of Representatives passing the Judicial Redress Act, which allows foreign citizens to bring legal suits in the US for alleged violations of their privacy rights. Reform is not easy, and it is now for the Senate to decide whether to follow suit, or to find a way to water down the Act. The govtrack.us website, which tracks the progress of bills through Capitol Hill, gives the Act a 22% chance of success. With odds like these, maybe we shouldn't bet on cross-Atlantic privacy reform in the immediate future.

The future of global surveillance

Whilst there have been positive noises coming from the White House regarding the privacy rights of non-Americans, it is unlikely in a post-9/11 world that any government will allow itself to be prevented from accessing the data of either its own or foreign nationals.

In light of recent terror attacks all over the world, the Snowden debate is more relevant than ever. How far should government intelligence agencies go in monitoring communications? Snowden forced governments to think twice about their surveillance practices, but recent attacks may have the opposite effect. Although their so-called ‘snooping’ may breach citizens’ fundamental rights, it may be more a question of how many civil liberties citizens are willing to exchange for safety and security.

The British Government has suggested that fast-tracking aggressive surveillance proposals (dubbed the ‘Snoopers’ Charter’) is the way forward in helping prevent acts of terror. This new emphasis on drones and cyber-experts marks a big shift from 2010’s strategic defence review. This is a war fought online and across borders, and one cannot ignore the context of Safe Harbour here.

The implications for global e-commerce

Hindering cross-border data transfer impedes e-commerce and can potentially cause huge industries to collapse. By 2017, over 45 per cent of the world’s population is expected to be engaging in online commerce. A clear path across the Atlantic is essential.

The Information Technology and Innovation Foundation put it bluntly in stating that, aside from taking an axe to the undersea fibre-optic cables connecting Europe to the US, it is hard to imagine a more disruptive action to transatlantic digital commerce than a stalemate on data transfer. A global solution must be reached, and soon.

The future of global cross-border data transfer

Time is running out on the Safe Harbour negotiations, and creating frameworks such as this is not simple – especially when those negotiating are starting so far apart and one side (the EU) does not speak with a unified voice.

Most of the 28 European Member States have individual national DPAs, not all of whom agree on the overall approach to reform. If the DPAs could speak with one voice, there could be greater cooperation with the Federal Trade Commission, which could hasten agreement on suitable frameworks for cross-Atlantic data transfers. In the US, much will come down to the lawmakers and, with an election brewing, it is worth considering the different scenarios.

Even though the two main parties in the US stand at polar ends of the spectrum on many policies, they may not be so distant when it comes to global surveillance. In the wake of the Snowden revelations, Hillary Clinton defended US global surveillance practices. The Republican Party has also been seen to favour increased surveillance of certain target groups. The question remains: if either party, once elected, is happy to continue with the current surveillance programme, how will the US find common ground with the EU?

Conclusion

Europe seems prepared to act alone in protecting the interests of EU citizens, and the CJEU’s decision in Schrems was a bold and unexpected move on the court’s part. However, with the ever-increasing threat to EU citizens’ lives through organised terror, the pressure may be mounting on the EU to relax its stance on data privacy, which could mean that finding common ground with the US may not be so difficult after all. We shall have to wait and see how the US-EU negotiations on Safe Harbour 2 evolve, and whether the European Commission will stand firm and require the US to meet its ‘equivalent’ standard.

 

Written by Sarah Pearce, Partner & Jane Elphick, Associate at Cooley (UK) LLP.

Close to 60 per cent of confidential cloud data can’t have risk levels assessed – research

UK IT professionals claim to be struggling with accurately assessing the risk of storing their confidential data in the cloud


Data from a recent Ponemon Institute survey commissioned by Informatica suggests UK enterprises are struggling to assess the risk associated with placing confidential data in the cloud, with respondents claiming they can’t determine the risk to 58 per cent of the confidential data they store in the cloud.

The problem seems particularly acute for cloud-based data: by contrast, respondents said they faced the same challenge with just 28 per cent of the sensitive information held on-premise.

The survey results, which include responses from 118 UK IT and IT security professionals with responsibility for data protection, hint at differences in the level of data management tool deployment for on-premise and cloud-based systems, which may skew confidence in assessing data risk. About 46 per cent of respondents are using such tools for data on-premise and 34 per cent for data in the cloud.

Still, fewer than half of respondents claimed to have common processes in place for discovering and classifying the sensitive or confidential data held on-premise, and just a quarter said they have such a process for data stored in the cloud.

About 54 per cent of respondents said they are not confident in their ability to proactively respond to a new threat in the cloud, and respondents believe 30 per cent of the sensitive or confidential data located in the cloud is at risk.

“The survey highlights that whilst organisations continue to fear cyberattacks, what really keeps them up at night is the unknown. Namely not knowing where data is and the associated risk to it,” said Larry Ponemon, chairman and founder, Ponemon Institute.

“Whilst businesses are more confident about having data on premise, the shift towards cloud computing is continuing to accelerate and organisations can’t afford to be held back by data security concerns. Instead, security practitioners need to get a handle on the classification of data so that they can feel more confident about the information that they are moving to the cloud. Regardless of whether information is held on premise or in the cloud, data governance protocols should be the same,” Ponemon said.

Amit Walia, Informatica’s senior vice president and general manager of data integration and security, said the results demonstrate the majority of organisations do not have a handle on their sensitive data, regardless of whether it exists on-premise or in the cloud.

He explained that as data volumes grow, enterprises are leaning more on customised software and automated processes rather than manual processes to classify data risk and apply rules and policies, which is creating something of a false perception of risk.

“Because businesses have less confidence in their understanding of sensitive data, they perceive more risk. To reduce threat exposure and improve breach resiliency, organisations need to invest in data-centric security technologies, which enable businesses to enact the need-to-know data access policies that help limit the exposure of sensitive data,” Walia said.

Microsoft to improve transparency, control over cloud data

Microsoft wants to improve the security of its offerings


Microsoft has announced a series of measures to give customers more control over their cloud-based data, a move it claims will improve transparency around how data is treated as well as the security of that data.

The company announced enhanced activity logs of user, admin and policy-related actions, which customers and partners can tap into through a new Office 365 Management Activity API to use for compliance and security reporting.
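For a sense of what consuming that feed might look like, below is a minimal sketch in Python that polls the Management Activity API for SharePoint audit events. It is an illustration based on the API’s published endpoint shapes, not Microsoft’s reference code: the tenant ID and bearer token are placeholders, the Azure AD app registration and OAuth2 token acquisition are assumed to have already happened, and error handling is omitted.

```python
# Minimal sketch: pulling audit events from the Office 365 Management
# Activity API. TENANT_ID and TOKEN are placeholders; a real client would
# obtain the token via an Azure AD app registration and OAuth2.
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"
TOKEN = "<access-token>"

BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Enable a subscription for a content type (one-time setup;
#    re-enabling an already-active subscription returns an error).
requests.post(f"{BASE}/subscriptions/start",
              params={"contentType": "Audit.SharePoint"},
              headers=HEADERS)

# 2. List the content blobs currently available for that content type.
listing = requests.get(f"{BASE}/subscriptions/content",
                       params={"contentType": "Audit.SharePoint"},
                       headers=HEADERS)
listing.raise_for_status()

# 3. Fetch each blob and print who did what, and when.
for blob in listing.json():
    for event in requests.get(blob["contentUri"], headers=HEADERS).json():
        print(event.get("CreationTime"), event.get("UserId"),
              event.get("Operation"))
```

A compliance tool would typically run steps 2 and 3 on a schedule, persisting the events into its own reporting store.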

Microsoft said by the end of this year it plans to introduce a Customer Lockbox for Office 365, which will give Office users the ability to approve or reject a Microsoft engineer’s request to log into the Office 365 service.

“Over the past few years, we have seen the security environment change and evolve. Cyber threats are reaching new levels, involving the destruction of property, and governments now act both as protectors and exploiters of technology. In this changing environment, two themes have emerged when I talk with our customers – 1) they want more transparency from their providers and more control of their data, and 2) they are looking for companies to protect their data through leading edge security features,” explained Scott Charney, corporate vice president, trustworthy computing at Microsoft.

“In addition to greater control of their data, companies also need their technology to adhere to the compliance standards for the industries and geographic markets in which they operate.”

The company is also upping its game on security and encryption. Office 365 already encrypts data in transit, but in the coming months Charney said the company plans to introduce content-level encryption, and by 2016 it plans to let customers require Microsoft to use customer-generated, customer-controlled encryption keys to encrypt their content at rest.

It also plans to bolster network security through Azure-focused partnerships with the likes of Barracuda, Check Point, Fortinet, Websense, Palo Alto Networks, F5 and Alert Logic, and broaden the security capabilities of its enterprise mobility management suite.

Microsoft has over the past couple of years evolved into a strong proponent of, and active participant in, discussions around data security and data protection, including legislative change impacting these areas in the US. It is also among a number of US cloud providers convinced that lingering distrust of cloud security is hampering their ability to make inroads into the cloud market, which gives Microsoft an added incentive to double down on securing its own offerings.

Ovum: Security skills shortage remains most prevalent barrier in cloud

Security skills shortages are hampering IT's ability to adopt cloud services


Security and an IT security skills shortage remain the most prevalent barriers to cloud uptake, according to Ovum principal analyst Andrew Kellett.

Although Ovum’s research suggests the volume of sensitive corporate data stored in the cloud continues to grow, with enterprise cloud adoption rates exceeding 80 per cent, in many cases this data is not adequately protected.

“Security, or lack thereof, is a significant issue. If there is one problem area inhibiting further adoption of cloud-based services, it is enterprise concerns about shortfalls in the protection regimes of many cloud service providers,” Kellett said, adding that, with ever more sensitive data being stored in the cloud, the most basic security practices and controls aren’t necessarily enough.

“On too many occasions, security policies only come into place once a new technology has already gone mainstream, and this is certainly true of the cloud industry. Many cloud providers have been guilty of ‘bolting on’ security as an afterthought, something which has left previous generations of technology vulnerable to malware attacks, advanced persistent threats and other breach tactics.”

“Whether they like it or not, organisations are putting their trust in the hands of the service provider, often without being completely satisfied that such trust is justified or that service levels and protection can be maintained,” he concluded.

Other recently published research from Ovum suggests enterprises are quite concerned with how their cloud service providers implement security controls. The company recently surveyed 818 IT decision makers for their views on cloud security and found that in the US specifically, respondents seemed most concerned about lack of control over the location of data (82 per cent), increased vulnerability of shared infrastructure (79 per cent), and “privileged user” abuse at the cloud service provider (78 per cent).

Big Data Without Security = Big Risk

Guest Post by C.J. Radford, VP of Cloud for Vormetric

Big Data initiatives are heating up. From financial services and government to healthcare, retail and manufacturing, organizations across most verticals are investing in Big Data to improve the quality and speed of decision making as well as enable better planning, forecasting, marketing and customer service. It’s clear to virtually everyone that Big Data represents a tremendous opportunity for organizations to increase both their productivity and financial performance.

According to Wipro, the leading regions taking on Big Data implementations are North America, Europe and Asia. To date, organizations in North America have amassed over 3,500 petabytes (PBs) of Big Data, organizations in Europe over 2,000 PBs, and organizations in Asia over 800 PBs. And we are still in the early days of Big Data – last year was all about investigation and this year is about execution; given this, it’s widely expected that the global stockpile of data used for Big Data will continue to grow exponentially.

Despite all the goodness that can stem from Big Data, one has to consider the risks as well. Big Data confers enormous competitive advantage on organizations able to quickly analyze vast data sets and turn them into business value, yet it can also put sensitive data at risk of a breach or of violating privacy and compliance requirements. Big Data security is fast becoming a front-burner issue for organizations of all sizes. Why? Because Big Data without security = Big Risk.

The fact is, today’s cyber attacks are getting more sophisticated and attackers are changing their tactics in real time to get access to sensitive data in organizations around the globe. The barbarians have already breached your perimeter defenses and are inside the gates. For these advanced threat actors, Big Data represents an opportunity to steal an organization’s most sensitive business data, intellectual property and trade secrets for significant economic gain.

One approach used by these malicious actors to steal valuable data is the Advanced Persistent Threat (APT). APTs are network attacks in which an unauthorized actor gains access to information by slipping in “under the radar”. (Yes, legacy approaches like perimeter security are failing.) These attackers typically reside inside the firewall undetected for long periods of time (an average of 243 days, according to Mandiant’s most recent Threat Landscape Report), slowly gaining access to and stealing sensitive data.

Given that advanced attackers are already using APTs to target the most sensitive data within organizations, it’s only a matter of time before attackers start targeting Big Data implementations. Since data is the new currency, it simply makes sense for attackers to go after Big Data implementations, because that’s where the big value is.

So, what does all this mean for today’s business and security professionals? It means that when implementing Big Data, they need to take a holistic approach and ensure the organization can benefit from the results of Big Data in a manner that doesn’t negatively affect its risk posture.

The best way to mitigate the risk of a Big Data breach is to reduce the attack surface and take a data-centric approach to securing Big Data implementations. These are the key steps:

Lock down sensitive data no matter the location.

The concept is simple: ensure your data is locked down regardless of whether it’s in your own data center or hosted in the cloud. This means you should use advanced file-level encryption for structured and unstructured data, with integrated key management. If you’re relying upon a cloud service provider (CSP) and consuming Big Data as a service, it’s critical to ensure that your CSP is taking the necessary precautions to lock down sensitive data. If your cloud provider doesn’t have the capabilities in place, or feels data security is your responsibility, ensure your encryption and key management solution is architecturally flexible enough to protect data both on-premise and in the cloud.
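As a rough illustration of that pattern, the Python sketch below uses envelope encryption: each file is encrypted with its own data key, and the data key is itself wrapped by a master key. The helper names are hypothetical, and in production the master key would be generated and held by an HSM or key management service rather than created in code.

```python
# Minimal sketch of file-level envelope encryption using the open-source
# "cryptography" package. Illustrative only: the master key stands in for
# a key held by a KMS or HSM, never stored alongside the data.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()  # stand-in for a KMS-held master key
kms = Fernet(master_key)

def encrypt_file(path: str) -> None:
    data_key = Fernet.generate_key()              # per-file data key
    with open(path, "rb") as f:
        ciphertext = Fernet(data_key).encrypt(f.read())
    with open(path + ".enc", "wb") as f:
        f.write(ciphertext)
    with open(path + ".key", "wb") as f:
        f.write(kms.encrypt(data_key))            # store only the wrapped key

def decrypt_file(path: str) -> bytes:
    with open(path + ".key", "rb") as f:
        data_key = kms.decrypt(f.read())          # unwrap via the key manager
    with open(path + ".enc", "rb") as f:
        return Fernet(data_key).decrypt(f.read())
```

The appeal of this layout is that rotating the master key only requires re-wrapping the small per-file keys, not re-encrypting the data itself, which is what makes flexible key management practical at Big Data volumes.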

Manage access through strong policies.

Access to Big Data should only be granted to those authorized end users and business processes that absolutely need to view it. If the data is particularly sensitive, it is a business imperative to have strong policies in place to tightly govern access. Fine-grained access control is essential, including the ability to block access even by IT system administrators (they may need to do things like back up the data, but they don’t need full access to that data as part of their jobs). Blocking access to data by IT system administrators becomes even more crucial when the data is located in the cloud and is not under an organization’s direct control.
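One way to picture such a policy is a simple allow-list that separates operational duties from content access, so an administrator can back up or restore data without ever touching plaintext. The roles and operations below are hypothetical; in a real deployment the check would be enforced by the encryption or access-control layer itself, not by application code.

```python
# Hypothetical fine-grained access policy: sysadmins can perform
# operational tasks on ciphertext but can never read plaintext.
POLICY = {
    "analyst":  {"read_plaintext"},
    "etl_job":  {"read_plaintext", "write"},
    "sysadmin": {"backup", "restore"},  # no plaintext access
}

def is_allowed(role: str, operation: str) -> bool:
    return operation in POLICY.get(role, set())

assert is_allowed("sysadmin", "backup")
assert not is_allowed("sysadmin", "read_plaintext")
```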

Ensure ongoing visibility into user access to the data and IT processes.

Security intelligence is a “must have” when defending against APTs and other security threats. The intelligence gained can inform the actions needed to safeguard what matters most – an organization’s sensitive data. End-user and IT processes that access Big Data should be logged, and the logs reported to the organization on a regular basis. This level of visibility must be maintained whether your Big Data implementation is within your own infrastructure or in the cloud.
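As a sketch of what that logging might look like, the snippet below emits one structured JSON record per access attempt; records like these can be shipped to a SIEM to flag anomalies such as off-hours reads or unusually large volumes. The field names and the example resource are hypothetical.

```python
# Hypothetical structured access log: one JSON record per access attempt.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("data_access")

def log_access(user: str, process: str, resource: str,
               operation: str, allowed: bool) -> None:
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "process": process,
        "resource": resource,
        "operation": operation,
        "allowed": allowed,
    }))

log_access("jsmith", "report_builder", "hdfs://finance/q3", "read", True)
```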

To effectively manage that risk, the bottom line is that you need to lock down your sensitive data, manage access to it through policy, and ensure ongoing visibility into both user and IT processes that access your sensitive data. Big Data is a tremendous opportunity for organizations like yours to reap big benefits, as long as you proactively manage the business risks.


You can follow C.J. Radford on Twitter @CJRad.

How Tough are the Final HIPAA Privacy, Security Rules?

Online Tech is hosting an educational webinar on the new final HIPAA omnibus rule, “No More Excuses: HHS Releases Tough Final HIPAA Privacy and Security Rules”, on Thursday, January 31 at 2 p.m. ET. The webinar will discuss how the latest HIPAA modifications affect the healthcare industry and healthcare vendors.

Dickinson Wright’s Brian Balow will lead the No More Excuses webinar with April Sage, Director of Healthcare Vertical for Online Tech. On January 17, 2013, the Department of Health and Human Services released its long-anticipated modifications to the Privacy, Security, Enforcement, and Breach Notification Rules under HIPAA/HITECH.

These modifications leave no doubt that covered entities, business associates, and their subcontractors must understand the application of these Rules to their operations, and must take steps to ensure compliance with these Rules in order to avoid liability. To find out more about the webinar and register via GoToMeeting, click here.