Netskope gives another warning to businesses struggling with GDPR compliance

(c)iStock.com/maxkabakov

An overwhelming 94% of cloud apps in enterprises across EMEA are not enterprise-ready, while two thirds of overall cloud services are not up to scratch, according to the latest research from Netskope.

The findings, which appear in the company’s latest quarterly cloud report, show that 82.4% of services do not encrypt data at rest, 66.4% do not specify in their terms of service that the customer owns the data, and 42% do not allow admins to enforce password controls.

The report arrives against the backdrop of the European General Data Protection Regulation (GDPR), which comes into effect in May of next year. Netskope warns that businesses are potentially falling behind in their efforts to become au fait with the legislation; 40% of the services analysed back up data to a secondary location, the company says, and not all of those locations are GDPR-compliant.

“Until very recently, organisations had to take an all-or-nothing approach to allowing cloud services,” said Sanjay Beri, Netskope founder and CEO. “If they sanctioned a cloud storage service for corporate use, they also needed to accept any additional personal instances of that cloud storage service or block the service entirely.

“As our customers make cloud services a strategic advantage for their businesses, when it comes to governing and securing those services, they are realising granular policies can ensure that sensitive data does not leak from the sanctioned instance of a corporate cloud service to an unsanctioned one,” Beri added.

The report also examined wider cloud trends. More than 90% of Netskope customers use IaaS services, predominantly AWS, Microsoft Azure and Google Cloud Platform, with enterprises using four IaaS services on average. Office 365 remains the number one app, with Slack continuing to rise up the top 20.

In February last year, Netskope reported that more than 4% of enterprises have sanctioned cloud apps laced with malware.

Dev and Production | @DevOpsSummit #DevOps #SDN #ContinuousDelivery

We’re all aware that dev/test != production environments. While the software stacks upon which applications are deployed may be (and hopefully are) the same, there still remains a whole lot of “infrastructure” (that’s everything else) that isn’t the same. Routers, switches, security devices, load balancers, caches, and other devices dedicated to ensuring the secure delivery of applications to hungry consumer and corporate users simply don’t exist in the dev/test environment. That’s particularly true as organizations continue to view “the cloud” as their ideal dev/test environment while insisting that production remain firmly rooted on-premises.


Why Kubernetes promises much for those willing to embrace a cultural shift

(c)iStock.com/Sjoerd van der Wal

Opinion It’d be fantastic if developers could write one piece of code on their machine and have full confidence it could run on any server without a glitch.

This is far from the norm: the technical headaches of pushing code onto different servers and environments can have profound consequences for any organisation, undermining confidence and slowing the roll-out of new apps and updates to them.

Things are changing though, as the technology supporting cloud infrastructures matures. We’ve seen this with the near-universal acceptance of the open source container solution Docker, which is helping standardise code for cloud environments.

This has now been enhanced by the arrival of Kubernetes, a platform for automating the deployment, scaling and operation of application containers across clusters of hosts.

High level control in cloud environments

The layer of abstraction that Kubernetes provides means we now need to talk to only one technology to gain a higher level of control over everything at a lower level. It also means we can take a cloud infrastructure built in one cloud environment, such as AWS, and move it into another, such as Azure or Google Cloud.
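
To give a flavour of what talking to that single layer looks like, here is a minimal sketch using the official Kubernetes Python client – an illustration of the idea rather than anything from the original piece. The names and container image are purely illustrative, and the same declarative description works unchanged whether the cluster runs on AWS, Azure or Google Cloud.

```python
# Minimal sketch: describe a deployment once and hand it to the cluster,
# whichever cloud that cluster happens to run on. Assumes the official
# Kubernetes Python client (pip install kubernetes) and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # reuse local kubectl credentials

container = client.V1Container(
    name="web",                 # illustrative name and image
    image="nginx",
    ports=[client.V1ContainerPort(container_port=80)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# One API call; Kubernetes takes care of scheduling across the cluster.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```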

This really is the next generation of cloud, with lots of big name organisations already jumping on the bandwagon and embracing Kubernetes. However, this move will not be as simple for everyone. Many companies will need to undergo a major cultural shift before this is possible.

To appreciate the benefits of Kubernetes, of which there are many, organisations first need to embrace the DevOps philosophy, which places a clear focus on small-scale continuous change to an organisation’s infrastructure. This approach concentrates on moving towards a microservice architecture and away from a huge code base that becomes brittle as it scales.

Scaling up with confidence

When developers focus on one component of an app at a time, the risk of things being broken elsewhere is reduced. Developers can concentrate on making enhancements to one area of a cloud infrastructure without having to worry about the implications their actions are having elsewhere.

An eCommerce site, for example, will have a separate app for every component – be that user sign-up and login management or order processing. Kubernetes allows microservices performing different tasks to be deployed and scaled individually, while ensuring these separate services can still talk to each other.
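
As an illustration of that independence – again a sketch with hypothetical service names, not something from the original article – scaling one microservice is a single call with the same Python client, and a Service gives the other components a stable name to reach it by:

```python
# Sketch: scale a hypothetical "orders" microservice on its own, and
# expose it behind a stable Service name the other services can resolve.
# Assumes the Kubernetes Python client and an existing "orders" deployment.
from kubernetes import client, config

config.load_kube_config()

apps = client.AppsV1Api()
core = client.CoreV1Api()

# Scale only the orders deployment; sign-up, login, etc. are untouched.
apps.patch_namespaced_deployment_scale(
    name="orders", namespace="default", body={"spec": {"replicas": 5}}
)

# Other microservices can now reach it at orders.default.svc.cluster.local
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="orders"),
    spec=client.V1ServiceSpec(
        selector={"app": "orders"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```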

This approach is allowing organisations to scale their cloud infrastructure in a more sustainable way – allowing features to be added more quickly and with greater reliability. There is little doubt that these benefits, particularly given the current lack of alternatives, mean Kubernetes will become the de facto tool for scaling up infrastructures as cloud computing matures.

It’s important to recognise, however, that this is not about writing code in a new way; it’s about working in a new way and facilitating continuous change.

Read more: App container market to hit $2.7bn by 2020 – with consolidation signs already afoot

A #Blockchain Framework | @CloudExpo #FinTech #ML #DigitalTransformation

As good Venn diagrams always show, it’s the intersection areas that offer the sweet spot for trends and markets, and an especially potent one is the intersection of cloud computing with blockchain and digital identity, particularly when considered within contexts such as Digital Democracy. Given the diversity of each field alone, there are multiple perspectives on this; it is more the start of a conversation than a definitive design.


WSM International Partners with @LeaseWeb | @CloudExpo @WSMINTL #IaaS #Cloud

WSM International will provide migration services for organizations interested in easily, safely and quickly moving their IT infrastructure to LeaseWeb’s range of hosted and cloud services.
WSM is the leading provider of cloud migration services, with a 14-year history of migrating workloads from onsite to hosted cloud services. In working with the world’s leading hosting and cloud services companies, WSM has successfully completed tens of thousands of migrations, from the simplest websites to complex enterprise IT workloads.
LeaseWeb provides global Infrastructure-as-a-Service (IaaS) and on-demand hosting solutions including dedicated servers, cloud-based servers, colocation services and hybrid cloud offerings.


Citrix Focuses on Cloud

Citrix hosted its annual summit in Anaheim, California, from 9 to 11 January 2017, where it revealed its roadmap for the year ahead. The event offered a glimpse into what customers and investors can expect from the company in 2017, and the steps it is taking to increase its share of the global cloud market.

One of the defining aspects is a plan to reinvigorate its relationship with Microsoft. This is an interesting plan considering that Citrix’s President and CEO, Kirill Tatarinov, has deep experience with Microsoft: he was formerly Executive Vice-President of Microsoft’s Business Solutions Division, and before that was in charge of several key technology divisions at Microsoft. Since he took the helm in 2016, Citrix has taken many steps to move closer to Microsoft, and at this summit that strategy was made clear.

During a keynote address, PJ Hough, Senior Vice President of Products, announced that Citrix and Microsoft customers can deploy Windows 10 desktops on the Microsoft Azure platform directly, with the additional choice of deploying apps on Azure as well. With such an integration in place, both Microsoft and Citrix are reaching out to mutual customers to help them transition to the cloud more easily, focusing specifically on those customers who still have concerns about the cloud. A report titled Global Business Technographics Infrastructure Survey, released by Forrester, shows that 38 percent of the enterprises surveyed have not adopted any kind of cloud infrastructure, while another 23 percent have adopted just one cloud. This shows that cloud is not as ubiquitous as it may seem, and there is still plenty of room for further penetration.

That is exactly what the two companies may do together under the reinvigorated relationship. Customers who already own XenApp or XenDesktop licenses can choose to move to the Citrix Cloud service. To help them with this transition, Citrix will offer a substantial set of tools and expert knowledge drawing on both Citrix’s and Microsoft’s offerings. This way, existing customers can get the advantages of the cloud without having to pay extra, as the cost of their existing license will be adjusted towards the transition.

Besides its partnership with Microsoft, Citrix also announced a new pilot program for existing Citrix Service Providers (CSPs) who wish to move their deployments to Citrix Cloud. Though the service is free for now, Citrix plans to introduce a monthly licensing model in the future. With such a product, providers can offer a desktop-as-a-service option for hosting different Citrix technologies.

Citrix is also planning to introduce a new set of tools, Smart Check and Smart Scale, to help customers deploy and configure apps and mobile workspaces on Citrix Cloud. These two offerings will join the existing Smart Tools family and are intended to ease the process of cloud transition.

In all, Citrix is making a big push to reach out to those enterprises that have not moved to the cloud yet. To this end, it has renewed its partnership with Microsoft and has come up with a slew of tools to make the process easier for enterprises.


Building your data castle: Protecting from ransomware and restoring data after a breach

(c)iStock.com/Pobytov

The data centre is the castle. You can pull up the drawbridge, fill up the moat, or pull down the portcullis. But at some point, you have to let data in and out, and this opens up the opportunity for ransomware attacks.

Circumventing and exposing an organisation’s security is no longer just a matter of pride and peer recognition in the hacker community; with ransomware it has become a fully-fledged industry in its own right, one that cybersecurity company Herjavec Group estimates topped $1 billion in 2016. In the past, those under siege could flood the moats, pull up the drawbridges and drop the portcullis to protect themselves, but for the modern data centre an organisation’s lifeblood is the movement of data in and out.

The question now is not just how organisations can protect themselves from ransomware, but also what the best practices and policies are for recovery if an attack gets through. Data has to flow in and out, and that opens up a route for security breaches – the most profitable of which is ransomware. So can it be prevented from ever occurring, and how can that be achieved? Prevention is, as always, better than cure, and the first line of defence has to involve firewalls, email virus scanners and other such devices. The problem is that malware writers are always one step ahead of the data security companies that offer solutions to protect their customers, because the industry tends to be reactive to new threats rather than proactive.

With so many devices connecting to the corporate network, including bring your own device (BYOD) hardware, there will always be an attack that gets through, especially as many end users are not savvy about how viruses and other scams can be attached to emails while masquerading as normal everyday files. A certain amount of end user education will help, but there will always be the one that gets through. So organisations have to have back-up plans and policies to deal with the situation when it does happen, because we can’t keep the drawbridge up forever.

Is ransomware new?

So how long have ransomware attacks been around? Excluding the viruses written by governments for subversion, we have always had viruses that hackers write for fun, for notoriety, or to use as a bot in a denial-of-service attack or as an email relay. With the coming of Bitcoin, payments can now be received anonymously, and as the Herjavec Group’s estimate shows, ransomware can be very lucrative for attackers while being very costly to the organisations that are attacked. This is why companies should be creating their very own data castles, and they should only drop their drawbridges when it is absolutely safe or necessary to do so. Due diligence at all times is otherwise crucial.

One of the key weapons against ransomware is the creation of air gaps between data and any back-ups. A solid back-up system is the Achilles heel of any ransomware, as has been proven many times over, such as in the case of Papworth Hospital. However, with the ever-increasing sophistication of ransomware and the use of online back-up devices, it won’t be long before ransomware turns its attention to those devices as well. It is therefore important to have back-up devices and media with an air gap between themselves and the corporate storage network; this is going to be crucial in the future. When you think about it, there is a lot of money at stake on both sides if ransomware becomes back-up aware, so it’s important to think and plan ahead, and it’s perhaps a good idea to make back-ups less visible to any ransomware that might be programmed to attack them.

Disaster recovery

So what is the most effective way to recover from an attack? Any back-up strategy should be based around the organisation’s recovery strategy, and recovery can only begin once the offending programs and all their copies have been removed. Obviously, the key systems should be recovered first, but this will depend on the range and depth of the attack. One thing that is easily overlooked in a recovery plan is the ability to reload the recovery software itself using standard operating system tools – something that is often missed in recovery scenario tests.

The key is to have a back-up plan. In the future, ransomware may, rather than blasting its way through the file systems, work silently in the background, encrypting files over a period of time so that those files become part of the back-up data sets. It is therefore important to maintain generations of data sets, not only locally but also offsite in a secure location. Remember the old storage adage that your data is not secure until you have it in three places and in three copies.
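
To make generation control concrete, here is a minimal sketch of a retention policy – an illustration only, not taken from the article – that keeps a rolling set of daily, weekly and monthly generations so that a slow, silent encryption run cannot poison every remaining copy. The bucket sizes are hypothetical and would be tuned to the organisation’s recovery objectives.

```python
# Sketch of a grandfather-father-son style retention policy. The bucket
# sizes below are illustrative, not a recommendation from the article.
from datetime import datetime, timedelta

RETENTION = {"daily": 7, "weekly": 4, "monthly": 12}

def classify(backup_date):
    """Assign a backup to a generation bucket by its date."""
    if backup_date.day == 1:
        return "monthly"
    if backup_date.weekday() == 6:  # Sunday
        return "weekly"
    return "daily"

def prune(backups):
    """Return the backups to keep, newest first, within each bucket's quota."""
    kept, counts = [], {bucket: 0 for bucket in RETENTION}
    for backup in sorted(backups, reverse=True):
        bucket = classify(backup)
        if counts[bucket] < RETENTION[bucket]:
            kept.append(backup)
            counts[bucket] += 1
    return kept

if __name__ == "__main__":
    today = datetime(2017, 3, 1)
    history = [today - timedelta(days=i) for i in range(120)]
    print(f"{len(prune(history))} of {len(history)} backups retained")
```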

I’d also recommend the following top 5 tips for protecting your organisation against ransomware:

  • Educate your end-users to make them more aware of the implications of ransomware and how it is distributed
  • Ensure that you deploy an up-to-date firewall and email scanners
  • Air gap your backups and archives from the corporate network
  • Maintain good generation controls for backups
  • Remember that backup is all about recovery: plan ahead for disasters such as a ransomware attack so that business continuity can be maintained and the need to recover is minimised

These principles don’t change for enterprises that are based in the cloud. Whilst the cloud provides some resilience through economies of scale that many could not afford in their own data centre, one should not assume that data is any more secure in the cloud than in your own data centre. Back-up policies for offsite back-ups and archives should still be implemented.

Inflight defence

But how can you prevent an attack while data is in flight? Whilst we have not seen this type of attack yet, it is always a strong recommendation that data in flight is encrypted, preferably with your own keys, before it hits your firewall. However, as many companies use WAN optimisation to improve performance over the WAN, transporting encrypted files means little or no optimisation is possible. This can affect those all-important offsite DR, backup and archive transfers. Products such as PORTrockIT can, however, enable organisations to protect their data while mitigating the effects of data and network latency. Solutions like this can enable you to build and maintain your data castle.
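
As a simple illustration of encrypting with your own keys before data leaves the network – a sketch only, using the third-party Python cryptography package rather than any product mentioned above, with purely illustrative file names – the payload can be sealed locally so that the key never leaves the organisation:

```python
# Minimal sketch: seal a backup file with a locally held key before it is
# sent offsite. Assumes the third-party "cryptography" package
# (pip install cryptography); file names are illustrative.
from cryptography.fernet import Fernet

def encrypt_for_transfer(src_path, dst_path, key):
    """Encrypt src_path with a key held on-premises and write dst_path."""
    fernet = Fernet(key)
    with open(src_path, "rb") as src:
        token = fernet.encrypt(src.read())
    with open(dst_path, "wb") as dst:
        dst.write(token)

if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, manage and store this key yourself
    with open("backup.tar", "wb") as demo:   # stand-in for a real backup set
        demo.write(b"example backup payload")
    encrypt_for_transfer("backup.tar", "backup.tar.enc", key)
    print("Encrypted copy written; the key never leaves the organisation.")
```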

Three #DevOps Predictions | @DevOpsSummit @CAinc @Aruna13 #DevSecOps

A lot of time, resources and energy have been invested over the past few years in de-siloing development and operations. And with good reason. DevOps is enabling organizations to more aggressively increase their digital agility, while at the same time reducing digital costs and risks. But as 2017 approaches, the hottest trends in DevOps aren’t specifically about dev or ops. They’re about testing, security, and metrics.
