[session] Data as a Service By @LakshmiLJ | @CloudExpo #Cloud

Data-as-a-Service is the complete package for the transformation of raw data into meaningful data assets and the delivery of those data assets.
In her session at 18th Cloud Expo, Lakshmi Randall, an industry expert, analyst and strategist, will address:
What is DaaS (Data-as-a-Service)?
Challenges addressed by DaaS
Vendors that are enabling DaaS
Architecture options for DaaS

read more

Privacy Shield data agreement dismissed as ‘reheated Safe Harbour’

The new framework for transatlantic data flows proposed by the European Commission has had a mixed reaction from the cloud industry.

The EU-US Privacy Shield agreement over data transfer replaces the 15-year arrangement that was voided by the Court of Justice of the European Union in October. The new arrangement has to win official approval from all 28 member states of the European Union. If it does, both sides will finalise the details of the new pact in the next fortnight and the agreement could come into effect in April.

The foundation of the agreement is that American intelligence agencies will no longer have indiscriminate access to Europeans’ data when it is stored in the US. EC Commissioner Vera Jourová claimed that Europeans can now be sure their personal data is fully protected and that the EC will closely monitor the new arrangement to make sure it keeps delivering.

“For the first time ever, the United States has given the EU binding assurances that the access of public authorities for national security purposes will be subject to clear limitations, safeguards and oversight mechanisms,” said Jourová, who promised that EU citizens will benefit from redress if violations occur. “The US has assured that it does not conduct mass or indiscriminate surveillance of Europeans,” said Jourová.

Whether the decision really will build a Digital Single Market in the EU, a trusted environment and closer partnership with the US remains a moot point among cloud industry experts.

Approval of the arrangement cannot be taken for granted, according to a spokesperson for The Greens and the European Free Alliance. “This new framework amounts to little more than a reheated serving of the pre-existing Safe Harbour decision. The EU Commission’s proposal is an affront to the European Court of Justice, which deemed Safe Harbour illegal, as well as to citizens across Europe, whose rights are undermined by the decision,” said Green home affairs and data protection spokesperson Jan Philipp Albrecht. The proposal creates no legally binding improvements, Albrecht added, and the authorities must make clear that this ‘legally dubious declaration’ will not stand.

The EU/US data sharing deal won’t stop surveillance, according to former White House security advisor French Caldwell. As a Gartner research VP, Caldwell once advised on national and cyber security and led the first ever cyber wargame, Digital Pearl Harbor. Now chief evangelist at software vendor MetricStream, Caldwell said there were many flaws in the logic of the agreement.

“The legal definitions of personal data are so antiquated that, even if that data covered under privacy law is protected, there is still so much data around people’s movements and online activities that an entire behavioural profile can be built without accessing that which is considered legally protected,” said Caldwell.

Privacy protections have evolved significantly in the US, Caldwell said, and US authorities are much more aggressive than EU authorities in penalising companies that don’t follow privacy policies. “It is hard to discount nationalism and trade protectionism as underlying motivations [for European legislation],” said Caldwell.

It should alarm cloud customers to see how little has been done to give assurance of their privacy, said Richard Davies, CEO of UK-based ElasticHosts. “This gives little assurance to EU customers trusting a US provider with hosting their websites or sensitive data.” Customers whose servers are hosted with US companies in the EU are likely to move their data to non-US providers to minimise risk, Davies said.

Businesses will need to be much more involved with where their information exists and how it is stored. Until details emerge of the new Privacy Shield, many European companies won’t want to risk putting data on US servers, warned Ian Wood, Senior Director Global Solutions.

However, this could be a business opportunity for the cloud industry to come up with a solution, according to one commentator. The need for transparency and accountability calls for new data management skills, according to Richard Shaw, senior director of field technical operations at converged data platform provider MapR.

“Meeting EU data privacy standards is challenging at the best of times, let alone when the goal posts are constantly being moved,” said Shaw. The only way to give the US authorities the information they demand, while complying with regulations, is to automate governance processes around management, control and analysis of data, Shaw said.

Would the Privacy Shield and the attendant levels of new management affect performance?

Dave Allen, General Counsel at Internet performance specialist Dyn, said regional data centres are a start, but that the data residence perspective is incomplete at best and gives a false sense of confidence that the myriad of regulations is properly addressed.

“Businesses will now need to understand the precise paths that their data travels down, which will be a more complex problem given the amount of cross-border routing of data across several sovereign states. Having access to traffic patterns in real time, along with geo-location information, provides a much more complete solution to the challenges posed by the EU-US Privacy Shield framework,” said Allen.

Hitachi launches Hyper Scalable Platform with in-built Pentaho

Hitachi Data Systems (HDS) has launched a rapid-assembly datacentre infrastructure product that comes with a ready-mixed enterprise big data system built in.

The HDS Hyper Scalable Platform (HSP) is a building block for infrastructure that comes with computing, virtualisation and storage pre-configured, so that modules can be snapped together quickly without any need to integrate three different systems. HDS has taken integration a stage further by embedding the big data technology it acquired when it bought Pentaho in 2015. As a consequence, the new HSP 400 is a simple-to-install but sophisticated system for building enterprise big data platforms fast, HDS claims.

HDS claims that the HSP’s software-defined architecture centralises the processing and management of large datasets and supports a pay-as-you-grow model. The systems can be supplied pre-configured, which means installing and supporting production workloads can take hours, whereas comparable systems can take months. The order of the day, says HDS, is to make it simple for clients to create elastic data lakes by bringing all their data together and integrating it in preparation for advanced analytic techniques.

The system’s virtualised environments can work with open source big data frameworks, such as Apache Hadoop, Apache Spark and commercial open source stacks like the Hortonworks Data Platform (HDP).

Few enterprises have the internal expertise for analytics of complex big data sources in production environments, according to Nik Rouda, senior analyst at Enterprise Strategy Group (ESG). Most want to avoid experimenting with still-nascent technologies and want a clear direction without risk and complexity. “HSP addresses the primary adoption barriers to big data,” said Rouda.

Hitachi will offer HSP in two configurations: one with Serial Attached SCSI (SAS) disk drives, generally available now, and an all-flash model expected to ship in mid-2016. These will support all enterprise applications and performance eventualities, HDS claims.

“Our enterprise customers say data silos and complexity are major pain points,” said Sean Moser, senior VP at HDS. “We have solved these problems for them.”

Barracuda’s New Essentials for Office 365

Barracuda has recently released its new Essentials for Office 365 offering. In the past, I would get questions from customers who wanted to back up Office 365 so they could control it themselves and not rely on Microsoft. Unfortunately, I never had much to tell them: your only option was to go through Microsoft. Barracuda now offers single-email recovery without restoring the entire mailbox, along with recovery of associated attachments and conversations. Barracuda has heard customers and delivered on those requests in a great way. If you’d like to hear me discuss Office 365 in more detail, check out a webinar I recently did.

Essentials for Office 365


Would you like to hear more from David around Office 365? Download his webinar, “Microsoft Office 365: Expectations vs. Reality”.


By David Barter, Practice Manager, Microsoft Technologies

How cloud providers are fighting the data sovereignty fight for European customers


For many US-centric cloud providers, Europe is quickly becoming a fierce battleground for their business. Earlier this week, Oracle announced plans to recruit up to 1,400 cloud salespeople across EMEA, while data centres are popping up all over the continent, from Microsoft’s commitment to open UK data centres by the end of 2016 to IBM SoftLayer opening a data centre in Italy last year.

With two UK data centres, in London and Manchester respectively, infrastructure as a service (IaaS) provider iland is acutely aware of the issues European customers want to get solved in terms of latency and, more importantly, data sovereignty. Monica Brink, who has recently taken up the post of EMEA marketing director at the Houston-based firm, tells CloudTech of the issues underlying the data sovereignty scares.

“It’s obviously very important for European customers, that whole data sovereignty issue, and that’s where we noticed a real difference between our European and North American customer database,” she explains. “With all of the hacking attacks we’ve seen, and natural disasters over the last year, there is a very keen focus on advanced security, things like vulnerability scanning, encryption, intrusion detection, and the cloud provider being able to prove they are meeting all of those regulations for the customer.”

The problem is, however, that the customer is sometimes hard to please. Research conducted by iland back in June found that, in two in five cases, customers argued their cloud provider “doesn’t know [them] or [their] company.” Brink argues that many of the larger scale IaaS public cloud vendors are ‘high on functionality but low on support, transparency and visibility’, but with data sovereignty and compliance, the issue is far more complex.

“It’s making sure they’re protecting their own customers’ data, they’re following all those rules on opting in and opting out, they know exactly where their customers’ data is at any point in time, and that they have assurance from their cloud providers that it’s not across different data centre borders,” says Brink.

As a result, iland has invested significantly in a compliance professional services arm, headed by director of compliance Frank Krieger. Writing for this publication in January on changes to the Safe Harbour ruling, Krieger noted: “Data sovereignty is ever-changing and new rules are being implemented constantly. This is a disruptor but not a destroyer for business. If you make sure your business is staying on top of the regulations, you’ll not get caught out when new laws come into play in the near future.”

Brink argues the two key issues for companies are data privacy and data security, and in particular putting the intrusion detection and vulnerability scanning within the cloud infrastructure. “It is complex for customers to navigate this post-Safe Harbour world – they need help,” she says. Overall, it represents another potential pain point for customers – and as ever, due diligence is its own reward.

Hybrid Cloud Versus Hybrid IT: What’s the Hype? By @Kevin_Jackson | @CloudExpo #Cloud

Once again, the boardroom is in a bitter battle over what edict its members will now levy on their hapless IT organization. On one hand, hybrid cloud is all the rage. Adopting this option promises all the cost savings of public cloud with the security and comfort of private cloud. This environment would not only check the box for meeting the cloud computing mandate, but also position the organization as innovative and industry-leading. Why wouldn’t a forward-leaning management team go all in with cloud?

read more

How Are Cloud-Based Solutions Benefiting Procurement Organizations? | @CloudExpo #Cloud

I sat down with Michael Rösch, COO of POOL4TOOL, to chat about cloud computing. With a lot of buzz about the impact of the cloud on business, it was a chance to get a perspective, as well as a few hints and tips, from someone who has been at the coalface of procurement cloud services for the past 15 years. Michael has been at POOL4TOOL since 2000, becoming COO in 2012, and has worked on projects with German giants like Behr, Hansgrohe, Heidelberger Printing Presses, Carl Zeiss and ThyssenKrupp Presta in that time.

read more

Introducing Parallels Remote Application Server v15

A brilliant new mobile end-user experience, IT productivity features and security enhancements mark the first major release of Parallels Remote Application Server since Parallels acquired the product line a year ago. Today, we launched Parallels Remote Application Server version 15, giving IT organizations the easiest and most cost-effective solution to deliver Windows applications and desktops to […]

The post Introducing Parallels Remote Application Server v15 appeared first on Parallels Blog.

After the flood: Why IT service continuity is your best insurance policy


The severe floods that hit the north of England and parts of Scotland in December 2015 and January 2016 devastated both homes and businesses, and led to questions about whether the UK is sufficiently prepared to cope with such calamities.

On December 28, the Guardian newspaper went so far as to say that the failure to ensure that flood defences could withstand the unprecedented high water levels would cost at least £5bn. Lack of investment was cited as the cause of the flooding.

Even companies such as Vodafone were reported to have been affected. The IT press said that the floods had hit the company’s data centre. A spokesperson at Vodafone, for example, told Computer Business Review on January 4: “One of our key sites in the Kirkstall Road area of Leeds was affected by severe flooding over the Christmas weekend, which meant that Vodafone customers in the North East experienced intermittent issues with voice and data services, and we had an issue with power at one particular building in Leeds.”

Many reports said that the flooding restricted access to the building; that access was needed in order to install generators after the back-up batteries had run down. Once access became possible, engineers were able to deploy the generators and other disaster recovery equipment. However, a recent email from Jane Frapwell, corporate communications manager at Vodafone, claimed: “The effects on Vodafone of flooding were misreported recently because we had an isolated problem in Leeds, but this was a mobile exchange not a data centre and there were no problems with any of our data centres.”

While Vodafone claims that its data centres weren’t hit by the flooding, and that the media had misreported the incident, it is a fact that data centres around the world can be severely hit by flooding and other natural disasters. Floods are both disruptive and costly. Hurricane Sandy is a case in point.

Hurricane Sandy

In October 2012, Data Center Knowledge reported that at least two data centres located in New York were damaged by flooding. Rich Miller’s article, ‘Massive Flooding Damages Several NYC Data Centres’, said: “Flooding from Hurricane Sandy has hobbled two data centre buildings in Lower Manhattan, taking out diesel fuel pumps used to refuel generators, and a third building at 121 Varick is also reported to be without power…” Outages were also reported by many data centre tenants at a major data hub at 111 8th Avenue.

At this juncture it’s worth noting that a survey by Zenium Technology has found that half of the world’s data centres have been disrupted by natural disasters, and 45% of UK companies have – according to Computer Business Review’s article of June 17 – experienced downtime due to natural causes.

Claire Buchanan, chief commercial officer at Bridgeworks, points out that organisations should invest in at least two to three disaster recovery sites, but, as with most insurance policies, they often just look at the policy’s price rather than the total cost of not being insured. This complacency can lead to a disaster, costing organisations their livelihood, their customers and their hard-won reputations. “So I don’t care whether it’s Chennai, Texas or Leeds. Most companies make do with what they have or know, and they aren’t looking out of the box at technologies that can help them to do this,” says Buchanan.

Investment needed

Buchanan suggests that, rather than accepting that the floodgates will open and deprive their data centres of the ability to operate, organisations should invest in IT service continuity.

The problem is that, traditionally, the majority of data centres are placed within the same circle of disruption. This could lead to all of an organisation’s data centres being put out of service. The main reason they place their data centres in close proximity to each other is the limitations of most of the technologies available on the market: placing data centres and disaster recovery sites at a distance introduces latency issues. Buchanan explains: “Governed by an enterprise’s Recovery Time Objective (RTO), there has been a requirement for organisations to place their DR centre within fairly close proximity due to the inability to move data fast enough over distance.”

She adds: “Until recently, there hasn’t been the technology available that can address the effect of latency when transferring data over distance. The compromise has been: how far away can the DR centre be without too much of a compromise on performance?” With the right technology in place to mitigate the effects of latency, however, it should be possible to situate an organisation’s disaster recovery site as far away as it likes, for example in green data centres in Iceland or Scandinavia, or in other countries, to ensure that each data centre is not located within the same circle of disruption.

Green data centres have many points in their favour, most notably cost, as power and land are comparatively inexpensive. The drawback has always been the distance from European hubs and the ability to move the data, taking into account distance and bandwidth. With 10Gbps bandwidth starting to become the new normal, coupled with the ability to move data unhindered at link speed, there is no reason why enterprises cannot now take this option.
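To see why distance has historically been such a constraint, consider the bandwidth-delay product: a single conventional TCP stream can keep only one window of data in flight per round trip, so effective throughput collapses as latency rises, even on a fast link. The sketch below is purely illustrative; the 64KB window and the resulting figures are assumptions for the sake of the arithmetic, not numbers from Bridgeworks or this article.

```typescript
// Illustrative arithmetic only: single-stream TCP throughput limited by window size and round-trip time.
// Assumes a 10Gbps link and a conventional 64KB TCP window (no window scaling or WAN acceleration).

const LINK_BPS = 10e9;            // 10Gbps link speed
const WINDOW_BYTES = 64 * 1024;   // classic 64KB TCP receive window

function effectiveThroughputBps(rttMs: number): number {
  const rttSeconds = rttMs / 1000;
  // At most one window of data can be unacknowledged per round trip.
  const windowLimitedBps = (WINDOW_BYTES * 8) / rttSeconds;
  return Math.min(LINK_BPS, windowLimitedBps);
}

for (const rttMs of [5, 50, 150]) {
  const mbps = effectiveThroughputBps(rttMs) / 1e6;
  console.log(`RTT ${rttMs}ms -> roughly ${mbps.toFixed(1)} Mbps of a 10,000 Mbps link`);
}
// RTT 5ms   -> roughly 104.9 Mbps
// RTT 50ms  -> roughly 10.5 Mbps
// RTT 150ms -> roughly 3.5 Mbps
```

Larger windows, parallel streams and WAN-acceleration appliances are the usual ways of closing that gap, which is the problem space the latency-mitigation products discussed below claim to address.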

Traditional approach

Clive Longbottom, client services director at analyst firm Quocirca, explains the traditional approach. “The main way of providing business continuity has been through hot replication,” he says. “Therefore, you need a full mirror of the whole platform in another data centre, along with active mirroring of data. This is both costly and difficult to achieve.”

But as most companies already have the network infrastructure in place, they should be looking for solutions that won’t cost the earth. For this reason, organisations should look outside the box and consider smaller, more innovative companies to find solutions to the problems they face: solutions that can mitigate latency with the organisation’s existing infrastructure, making it unnecessary to buy new kit in order to have a dramatic impact.

“With products like WANrockIT and PORTrockIT you don’t need dark fibre networks or low-latency networks, because the technology provides the same level of performance whether the latency is 5, 50 or 150ms,” says David Trossell, CEO of Bridgeworks. He claims that the biggest cost is the network infrastructure, but “you can reduce the costs considerably with these solutions, and it widens the scope of being able to choose different network providers as well.”

“CVS Healthcare, for example, wanted electronic transfer DR from two of their sites, but the latency killed performance and so they still had to use a man in the van to meet their required recovery time objectives (RTO),” explains Trossell. He adds: “They had electronic transfer to improve the RTO, but this was still too slow, and yet with WANrockIT in the middle we got the RTO down to the same or better, and we reduced the RPO (recovery point objective) from 72 hours down to 4 hours.” Before this, CVS Healthcare was doubling up on its costs by using both the “man in the van” and electronic transfer.

Plan for continuity

While companies need to plan for business continuity first and foremost, they also need to have a good disaster recovery plan. Buchanan and Trossell have found that many organisations lack adequate planning. They don’t see a need for it until disaster strikes: ‘like everything else, organisations quite often don’t think it would happen to them.’ For example, what would happen if the Thames Barrier failed to prevent Canary Wharf from being flooded? It is, after all, located on a flood plain, and there are many disaster recovery sites in its vicinity.

Longbottom raises a key challenge. “If flooding such as we have just seen happens – what was meant to be a once-in-a-hundred-years event – then planning for that puts the costs to the data centre owner out of reach,” he says. “Having water levels two or more metres above normal means that attempting to stop the ingress of water becomes exceedingly difficult, and pumping it out is just as hard.”

He therefore advises organisations to have two plans: one for disaster recovery and one for business continuity. It’s also important to remember that IT service continuity is multi-tiered, and these two considerations are part of it.

To ensure that they work effectively as well as efficiently together, there is a need to understand the business-related risk profile. He says this will also help organisations to define how much the business is willing to spend on continuity, and it will allow for some forethought into the types of risks that will affect the business. Disaster recovery sites may need to be located in different countries to ensure that investment in IT service continuity is the best insurance policy.

[session] Offline-First Apps with PouchDB and Cloudant By @BradleyHolt | @CloudExpo #Cloud

It’s easy to assume that your app will run on a fast and reliable network. The reality for your app’s users, though, is often a slow, unreliable network with spotty coverage. What happens when the network doesn’t work, or when the device is in airplane mode? You get unhappy, frustrated users. An offline-first app is an app that works, without error, when there is no network connection.
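The session abstract stops short of code, but a minimal offline-first sketch with PouchDB and Cloudant typically looks something like the following: the app writes to a local in-browser database first, then replicates continuously with a remote endpoint whenever a connection is available. The database name, document shape and URL here are placeholders for illustration, not details from the session.

```typescript
import PouchDB from 'pouchdb';

// The local database lives in the browser (IndexedDB), so reads and writes work offline.
const localDB = new PouchDB('notes');

// Placeholder URL: a Cloudant or CouchDB endpoint you control.
const remoteDB = new PouchDB('https://example.cloudant.com/notes');

// Writes go to the local database first and succeed even with no network connection.
async function addNote(text: string): Promise<void> {
  await localDB.put({
    _id: new Date().toISOString(), // timestamp as a simple unique id
    text,
  });
}

// Continuous, bidirectional replication: changes flow both ways while the network is up,
// and retry: true means sync resumes automatically after connectivity returns.
localDB
  .sync(remoteDB, { live: true, retry: true })
  .on('paused', () => console.log('Sync caught up (or offline).'))
  .on('error', (err) => console.error('Sync error:', err));
```

Because the local database is the app’s primary datastore, reads and writes keep working in airplane mode, and replication quietly catches the remote copy up once the network returns.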

read more