Enterprises given another warning over cloud app GDPR compliance

(c)iStock.com/DWalker44

More than 4% of enterprises have put their data at risk by sanctioning cloud apps laced with malware, according to research released by cloud security provider Netskope.

The study, which draws on anonymised data from millions of users of the global Netskope Active Platform, found that 88% of the apps in use were not enterprise-ready, while 43% of the apps analysed keep data for more than a week after a service has ended, contravening the upcoming EU General Data Protection Regulation (GDPR).

This is another warning sign for businesses to become compliant before the regulation takes hold, two years after the GDPR becomes law in spring of this year. Previous research from Netskope, as this publication has examined, found almost 80% of IT pros were not confident of making the 2018 deadline. Eduard Meelhuysen, VP EMEA at Netskope, wrote at the time: “The GDPR will have significant and wide-ranging consequences for both cloud-consuming organisations and cloud vendors, and security teams will need to make the most of the two-year grace period before penalties for non-compliance come into force.”

Of the 88% of apps analysed which are not enterprise-ready, the key failings were in auditing and certification, service level agreements, vulnerability remediation, and legal, privacy and financial viability. Perhaps not surprisingly, technology and IT services firms use the highest number of cloud apps per enterprise on average (794), ahead of healthcare and life sciences (773) and retail, restaurants and hospitality (734).

Netskope warns that employees can be unwittingly spreading malware throughout their company through using unsanctioned cloud storage apps from multiple devices. “More than ever, it’s imperative that organisations have complete visibility into and real-time actionable control over their cloud app usage to better monitor and understand trends and vulnerabilities,” said Sanjay Beri, co-founder and CEO of Netskope.

Elsewhere, Microsoft has eclipsed Google in cloud app usage for the first time in Netskope’s reports, with Outlook and Office 365 OneDrive overtaking Gmail and Google Drive respectively.

Read more: How to ensure enterprise app cloud usage complies with the GDPR

Terminal Server Setup Guide – Fast Start Environment

With Terminal Services, organizations can provide employees access to Windows applications from virtually any device, no matter the geographic location. Terminal Services (renamed Remote Desktop Services, or RDS, beginning with Windows Server 2008 R2) is a server role in Windows Server that enables the server to host multiple, simultaneous client sessions to Windows desktops and […]

The post Terminal Server Setup Guide – Fast Start Environment appeared first on Parallels Blog.

Data Platforms as a Service | @CloudExpo @Pythian #BigData #Microservices

With the proliferation of both SQL and NoSQL databases, organizations can now target specific fit-for-purpose database tools for their different application needs regarding scalability, ease of use, ACID support, etc. Platform as a Service offerings make this even easier now, enabling developers to roll out their own database infrastructure in minutes with minimal management overhead. However, this same amount of flexibility also comes with the challenges of picking the right tool, on the right provider and with the proper expectations.
In his session at 18th Cloud Expo, Warner Chaves, Principal Consultant at Pythian, will compare the NoSQL and SQL offerings from AWS, Microsoft Azure and Google Cloud, covering their similarities, differences and use cases for each one, based on Pythian's own client projects.

read more

Lifecycle of Microservices | @CloudExpo #BigData #IoT #API #Microservices

More and more companies are looking to microservices as an architectural pattern for breaking apart applications into more manageable pieces so that agile teams can deliver new features quicker and more effectively.
What this pattern has done more than anything to date is spark organizational transformations, setting the foundation for future application development. In practice, however, there are a number of considerations that go beyond simply “build, ship, and run,” and they change how developers and operators work together to deliver cohesive systems.

read more

Docker launches DDC to support ‘container as a service’ offering

Container company Docker has announced Docker Datacenter along with the new concept of ‘containers as a service’ in a bid to extend its cloud-based technology to customer sites.

The Docker Datacenter (DDC) resides on the customer’s premises and gives them a self-service system for building and running applications across multiple production systems, all under operational control.

It has also announced the general availability of Docker Universal Control Plane, a service that had been in beta testing since November 2015 and which underpins the containers-as-a-service (CaaS) offering.

The advantage of the DDC is that it creates a native environment for the lifecycle management of Dockerized applications. Docker claims that 12 Fortune 500 companies have been beta testing the DDC, along with smaller companies in a range of industries.

Since every company has different systems, tools and processes, the DDC was designed to work with whatever clients already have and to adjust to their infrastructure without making them recode their applications, explained Docker spokesman Banjot Chanana on the Docker website. Networking, for example, can be massively simplified if clients use Docker to define how app containers network together: they can choose from any number of plugin providers to supply the underlying network infrastructure, rather than having to tackle the problem themselves. Similarly, connecting to an internal storage infrastructure becomes much easier. Application programming interfaces provided by the on-site CaaS allow developers to move stats and logs in and out of logging and monitoring systems more easily.
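
To make that concrete, here is a minimal sketch of what “using Docker to define how app containers network together” can look like, written against Docker’s Python SDK. The images and the “bridge” driver are illustrative assumptions, and this is not the DDC API itself; a third-party network plugin would simply be named as the driver instead.

    # Minimal sketch using Docker's Python SDK (pip install docker).
    # The images and the "bridge" driver are illustrative; a third-party
    # network plugin would be passed as the driver name instead.
    import docker

    client = docker.from_env()

    # The application describes its network purely in Docker terms; the
    # driver named here supplies the underlying infrastructure.
    app_net = client.networks.create("app-tier", driver="bridge")

    # Containers joined to the same network resolve each other by name,
    # with no recoding of the applications themselves.
    db = client.containers.run("redis:alpine", name="db",
                               network="app-tier", detach=True)
    web = client.containers.run("nginx:alpine", name="web",
                                network="app-tier", detach=True)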

“This model enables a vibrant ecosystem to grow with hundreds of partners,” said Chanana, who promised that Docker users will have much better options for their networking, storage, monitoring and workflow automation challenges.

Docker says its DDC is integrated with Docker’s commercial Universal Control Plane and Trusted Registry software. It achieved this with open source Docker projects Swarm (orchestration), Engine (container runtime), Content Trust (security) and Networking. Docker and its partner IBM provide dedicated support, product engineering teams and service level agreements.

Salesforce quarterly figures prove cloud industry resistant to IT downturn

Salesforce’s latest quarterly figures have reversed the conventional logic of valuing cloud company stock, judging by the stock market’s reaction.

Before the cloud giant’s latest figures were released, many Wall Street analysts were looking for signs of a downturn in the cloud industry, according to Reuters, which reported that the cloud software leader is regarded as a barometer for conditions across the sector. After a poor sales outlook from Tableau earlier this month, analysts were watching for proof of that downturn. Conventional wisdom in the money markets was that poor cloud performance would follow a downturn in the IT industry, related to worries about the economy.

However, when Salesforce returned better-than-expected revenue in its quarterly review and raised its yearly revenue forecast, analysts began to speculate that cloud sales and IT investment may be inversely related. By the end of the first day’s trading after the figures were released, Salesforce stock had risen 7.2%, Reuters reported.

The company upped its revenue forecast for the year from $8.0 billion-$8.1 billion to $8.08 billion-$8.12 billion. Analysts on average were expecting a profit of 99 cents per share on revenue of $8.08 billion.

Salesforce’s Chief Financial Officer Mark Hawkins dismissed the pessimistic outlook the money markets have for the cloud industry in the current uncertain economy. “We aren’t seeing an economic impact,” said Hawkins.

The opposite of analysts’ expectations is taking place, he argued, since the cloud computing sector thrives when businesses make more careful buying decisions and choose cheaper, simpler-to-install services that can be costed more flexibly. Another point of departure between cloud and IT company stocks is that they are bought by different people: Salesforce is often installed over the head of the IT department, Hawkins said.

In January BCN reported how BT has effectively become a reseller channel for Salesforce, giving its corporate customers the option of a hotline to Salesforce’s cloud service through its BT Cloud Connect service.

Dell launches new backup and recovery services that straddle the cloud and the client

Dell has announced a programme of new data protection initiatives to protect systems, apps and data that straddle on-premise computers and the cloud.

There are four main strands to the new offerings: Dell Data Protection and Rapid Recovery, three new data deduplication appliance models, Dell Data Protection and Endpoint Recovery, and a new Dell Data Protection and NetVault Backup offering.

The Dell Data Protection and Rapid Recovery system integrates with previous Dell offerings such as AppAssure in order to help eliminate downtime for customer environments. Dell claims that users get ‘ZeroImpact’ recovery of systems, applications and data across physical, virtual and cloud environments. The Rapid Snap for Applications technology works by taking snapshots of entire physical or virtual environments every five minutes, so users can get immediate access to data in the event of an incident. Rapid Snap for Virtual technology also offers agentless protection of VMware virtual machines.
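
Dell has not published the mechanics behind Rapid Snap, but the behaviour described amounts to a fixed-interval snapshot scheduler. A minimal sketch of that pattern follows, with snapshot_environment as a hypothetical placeholder for whatever the appliance actually calls to capture a point-in-time image:

    # Sketch of a five-minute snapshot loop. snapshot_environment is a
    # hypothetical placeholder, not Dell's API.
    import time
    from datetime import datetime, timezone

    INTERVAL_SECONDS = 300  # every five minutes

    def snapshot_environment() -> str:
        """Placeholder: capture and label a point-in-time snapshot."""
        return datetime.now(timezone.utc).isoformat()

    def run_scheduler() -> None:
        while True:
            print(f"snapshot taken: {snapshot_environment()}")
            time.sleep(INTERVAL_SECONDS)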

The new Dell DR deduplication appliances are the Dell DR4300e, DR4300 and DR6300. The mid-market DR4300 offers up to 108TB of usable capacity while ingesting up to 23TB of data per hour. The entry-level DR4300e is a smaller-scale, low-cost appliance that can scale up to 27TB. The DR6300 is a larger mid-market and small enterprise model that delivers up to 360TB of usable capacity while ingesting up to 29TB of data per hour.

Dell Data Protection and Endpoint Recovery is a lightweight, easy-to-use software offering that gives customers an endpoint protection and recovery solution for Windows clients. It is aimed at single users and is initially free.

The Dell NetVault Backup is a cross-platform, enterprise backup and recovery solution that supports a broad spectrum of operating systems, applications and backup targets. A new option allows customers to break backups into smaller, simultaneously executed chunks to increase performance.
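
The chunking option is straightforward to picture: split one large backup job into pieces and execute them simultaneously. Here is a hedged sketch of the idea, where backup_chunk and the 64MB chunk size are illustrative stand-ins rather than NetVault's actual API:

    # Illustrative sketch of chunked, parallel backup. backup_chunk is a
    # hypothetical stand-in for the real transfer to a backup target.
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SIZE = 64 * 1024 * 1024  # 64MB per chunk (illustrative)

    def backup_chunk(path: str, offset: int, length: int) -> int:
        with open(path, "rb") as f:
            f.seek(offset)
            data = f.read(length)
        # ... write `data` to the backup target here ...
        return len(data)

    def parallel_backup(path: str, total_size: int, workers: int = 4) -> int:
        offsets = range(0, total_size, CHUNK_SIZE)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(backup_chunk, path, off,
                                   min(CHUNK_SIZE, total_size - off))
                       for off in offsets]
            return sum(f.result() for f in futures)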

Announcing @Tintri to Exhibit at @CloudExpo New York & Silicon Valley | #Cloud

SYS-CON Events announced today that Tintri Inc., a leading producer of VM-aware storage (VAS) for virtualization and cloud environments, will exhibit at the 18th International CloudExpo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, New York, and the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.

read more

IoT Disrupts the Cloud by @AllSeenAlliance | @ThingsExpo #IoT #M2M #InternetOfThings

In his session at @ThingsExpo, Noah Harlan, Founder of Two Bulls and President of AllSeen Alliance, will discuss the coming move from Cloud to Edge and what this means for business.
Noah Harlan is President of AllSeen Alliance and a Founder of Two Bulls, a leading mobile software development company with offices in New York, Berlin, and Melbourne. He is also Managing Director of Digital Strategy for Sullivan NYC, a brand engagement firm based in New York. He has served as an advisor for the White House Office of Science & Technology Policy on gaming and the outdoors, has appeared as a commentator on Bloomberg TV discussing mobile technology, and has an Emmy Award for Advanced Media Interactivity.

read more

Incident response on the AWS cloud and the case for outsourcing

(c)iStock.com/nzphotonz

By Jason Deck, VP strategic development, Logicworks

It is 4pm on a Friday before a holiday, right before your team leaves for a long weekend. An engineer on your team suddenly cannot connect to certain instances in your AWS environment. The error is affecting the largest projects and biggest customers — across hundreds of instances — including DR.

What happens now?

The answer to this question depends on your support model. In most companies, an incident like this means that no one is going home for the holiday weekend; they will spend 15+ hours diagnosing the problem, then 200+ hours fixing it manually, instance by instance. If they do not get it fixed in time, they will lose data and have to tell their customers — a potentially damaging situation.  

This is a true story, but what actually happened was very different from that scenario: the company called its managed service provider (MSP) – us – and we diagnosed the problem and fixed it over the holiday weekend.

Every system has weak points. Every system can fail. It is how you deal with catastrophe — and who you trust to help you during failure — that makes the difference. Every enterprise team needs an insurance policy against mistakes, large and small.

It turns out that one of the company’s internal engineers had caused the problem, inadvertently changing permissions for the entire environment. Logicworks was able to diagnose the problem in less than an hour, determine the blast radius, get our smartest engineers in a room to develop a complex remediation strategy, and implement that fix before business resumed after the holiday. This involved writing custom scripts (in Python, Bash, and Puppet) to investigate the scope of the failure and another, more complex script to partially automate the fix, so that each instance could be repaired in 3-5 minutes rather than 2-3 hours. Ultimately it took 170+ hours of engineering effort, but the company readily admitted that it would have taken them two weeks to fix on their own.
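
Logicworks has not published those scripts, but the general shape of such a fix is easy to sketch with boto3 and AWS Systems Manager: enumerate the affected instances, then push the same repair to all of them instead of logging in one by one. The tag filter and the remediation command below are placeholders; the real fix depended on exactly which permissions had been changed.

    # Hedged sketch of partially automating a per-instance fix with
    # boto3 and AWS Systems Manager (SSM). The tag filter and the shell
    # command are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    ssm = boto3.client("ssm")

    def affected_instance_ids(tag_key="Environment", tag_value="production"):
        """Enumerate running instances matching a tag: the blast radius."""
        pages = ec2.get_paginator("describe_instances").paginate(Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ])
        return [i["InstanceId"]
                for page in pages
                for r in page["Reservations"]
                for i in r["Instances"]]

    def remediate(instance_ids):
        """Push the same fix to every instance at once."""
        # send_command accepts at most 50 instance IDs per call,
        # so real use would batch the list.
        ssm.send_command(
            InstanceIds=instance_ids[:50],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["chown -R app:app /var/app"]},  # placeholder fix
        )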

Managed infrastructure service providers were born in an age when implementing a fix meant going to a data centre, swapping out hardware, and doing manual configurations. The value of an MSP to enterprises was not having to manage hardware and systems staff.

In the cloud, MSPs must do more. They must be programmers; instead of replacing hardware, they need to write custom scripts to repair virtual cloud instances. MSPs need to think and act like a software company: infrastructure problems are bugs, the solution is code, and speed is paramount.

Not all MSPs operate this way. Many MSPs would have looked at this company’s issue and applied the traditional incident response model: just reboot everything manually, one at a time. (Many also would have said “you caused the problem, you fix it.”) This is the traditional MSP line of thinking, and it would have meant that the company would have lost three to five days of data and customer trust.

Running on cloud infrastructure comes with unique risks. It is often easier for your engineers to make a career-limiting mistake when a single wrong click, or one line in an automated script, can change permissions across an entire system. These new challenges require new answers and a new line of defence.

Importantly, this means that MSPs no longer replace internal IT teams; they provide additional expertise that the enterprise currently lacks (or is in the process of building) in fields like cloud security and automation. They provide an additional layer of defence. In the example above, the internal and MSP teams collaborated to fix the problem, since control of the infrastructure is shared.

In the cloud, the conversation no longer has to be insourcing vs. outsourcing. In fact, you will get the most out of an MSP if you are also setting up internal DevOps teams or implementing software development best practices. Speaking as an MSP, we find companies with an existing or growing DevOps team the most exciting to work beside. As an example, an MSP cannot automate your entire deployment pipeline alone; most operate only below the application level and can only automate instance spin-up and testing. But if the two teams work together, they can balance application-specific needs with advanced scaling and network options and create a very mature pipeline very quickly.

In other words, an MSP can accelerate your DevOps team-building strategies, not substitute for them. This is an incredibly powerful model that we have watched transform and mature entire cloud projects in a matter of months. Plus, an MSP can take on all the crucial compliance work your DevOps team dreads, like setting up backups and logging, and even improve the quality of that compliance work by creating automated tests to ensure logs and backups are always kept.
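
To make that concrete, an automated compliance test of this kind might look something like the sketch below, using boto3. The trail name, volume, and 24-hour freshness window are illustrative assumptions, not a prescribed check:

    # Sketch of automated checks that logging and backups are in place.
    # Resource names and the freshness window are assumptions.
    from datetime import datetime, timedelta, timezone
    import boto3

    def cloudtrail_logging_enabled(trail_name: str = "main-trail") -> bool:
        """Verify the CloudTrail trail is still delivering logs."""
        status = boto3.client("cloudtrail").get_trail_status(Name=trail_name)
        return status.get("IsLogging", False)

    def recent_backup_exists(volume_id: str, max_age_hours: int = 24) -> bool:
        """Verify an EBS snapshot newer than the window exists."""
        snaps = boto3.client("ec2").describe_snapshots(
            Filters=[{"Name": "volume-id", "Values": [volume_id]}])["Snapshots"]
        cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
        return any(s["StartTime"] >= cutoff for s in snaps)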

It is true that internal IT teams sacrifice some control by using an MSP. The key is that you are sacrificing control to a group of people who are held responsible for making your environment secure, available, etc. You control how and when the MSP touches your environment.

Cloud projects are complex, and cloud problems can be equally so. Just make sure that when they happen, you have the right team on the bench.

The post Incident Response on AWS Cloud: The Case for Outsourcing appeared first on Gathering Clouds.