The cloud in 2019 – our predictions

Cloud Pro

1 Jan, 2019

“If I were to put money on the biggest cloud trend for 2019, edge computing would be a sure bet. Companies have already started vying for early control of this lucrative market, said to be worth up to $21 billion by 2025, according to some reports. In fact, billions of dollars have already been thrown at developing the technology, which promises to be the next generation of data analysis, especially if you’re in the business of IoT.

“The idea of pushing some of the heavy lifting closer to the point of data capture is dismantling the traditional topology of a data centre operation. Instead of having all the compute power of a data centre at the core of a network, you can now have a setup that operates multiple mini-data centres at the edge. Resource demand is distributed and data analysis is done faster than is currently possible – at least, that’s the sales pitch.

“Those looking to take control early on have already unveiled their first products, and 2019 will likely see that arms race accelerate as companies seek to refine their technology and expand on capabilities.

“HPE, Google, AWS, Cisco, IBM, Microsoft… everyone is getting involved, and they believe every business can benefit in some way from edge computing. For 2019, the reality is likely to be far more conservative, with a select few industries becoming the initial flag bearers for the technology. Those already adopting IoT at scale are prime contenders, such as large manufacturers or shipping and supply chain companies. But make no mistake, in five years’ time, it’s likely edge will be the network setup everyone is turning to.”

Dale Walker, acting deputy and features editor

“GDPR has proved to be less apocalyptic than was predicted by some sources; neither the EU nor the ICO came straight out of the gate with major fines for tech companies like Amazon, Facebook or Google – or anyone else, for that matter. There have been scarcely a handful of fines issued under GDPR – but they are most certainly coming.

“If next year is anything like this one, the ICO will undoubtedly have its hands full investigating the slew of forthcoming data breaches without digging into companies’ everyday data handling practices, but that doesn’t mean that cloud companies can start slacking on compliance. The fines are coming, and I predict they’ll start to land in 2019.”

Adam Shepherd, reviews and community editor

“The public sector has engaged in pockets of constructive cloud migration in the last few months, but this has only happened in silos. Take the Met Police, which in September partnered with a firm to move its data to the cloud, or the work done by various local councils, such as Lambeth in London.

“None of this, however, is joined-up. It’s a far cry from the high ambitions set when the government outlined a cloud-first public sector strategy in 2013, but a host of bodies and public services are edging towards effective cloud adoption.

“Some six years later, we may finally see genuine fruits of these efforts, with some promising digital transformation work being done behind the scenes, albeit not in any way first envisaged in 2013.

“It won’t be a runaway success story, and will by no means take the form of a massive centralised push with a coherent strategy; for starters, the legacy systems and existing infrastructure of different bodies and departments vary wildly.

“But the groundwork laid by massive institutions such as NHS Digital in the last 12 months could see the public sector making huge strides in adopting cloud technology, as well as using its data in a meaningful way for the first time.”

Keumars Afifi-Sabet, staff writer

That was the year that was in cloud: A look back on 2018

Cloud Pro

24 Dec, 2018

“In many ways, 2018 was the year that major cloud companies started to deliver on the hybrid promises made in 2017. The likes of AWS and Google Cloud now seem to understand that the idea of a ‘cloud-first’ approach remains, for many companies, entirely unrealistic. Larger organisations or those heavily invested in legacy hardware are simply too entrenched to shift their operations to a public cloud.

“Product releases this year seem to reflect this understanding, with Google Cloud, in particular, going as far as to not only encourage customers towards hybrid, but also help them cut administration costs once they arrive. AWS, somewhat late to the party, used its re:Invent conference in November to make its own hybrid push. Outposts, its latest hybrid service, gives customers a chance to buy preconfigured server racks designed to run AWS services as if they were operating in Amazon’s own data centres.

“It’s announcements like these that make it clear that going all in on cloud is no longer a viable marketing position for cloud giants – it simply doesn’t align with the reality of their customers.”

Dale Walker, acting deputy and features editor

“As predicted last year, 2018 was the year of multi-cloud; rather than trying to lock customers into a walled ecosystem, cloud vendors embraced the concept of integration and interoperability, allowing customers to adopt best-of-breed cloud solutions by cherry-picking the providers which best fit their needs.

“And adopt they did. Cloud adoption is now in full swing across the majority of businesses, and a good chunk have finished their deployments and are moving on to other transformation projects.

“A popular one is AI, which has steadily become more of a focus for both cloud vendors and cloud consumers. As machine learning technology develops, companies have started to explore the possibilities, with several AI-based products and services already on the market.”

Adam Shepherd, reviews and community editor

“A small story we spotted back in September turned out to be one of the biggest takeaways from AWS re:Invent 2018. The company posted job adverts for satellite specialists and swiftly removed them – but not swiftly enough, as Amazon enthusiast website This Just In managed to capture screenshots of the postings.

“The positions were Space and Satellite System Software Development Engineer and Space and Satellite Product Manager. There was little detail provided beyond brief job descriptions, but we got answers, surprisingly, in November when AWS CEO Andy Jassy announced AWS Ground Station, a new operation to improve the transfer of satellite data to the cloud.

“The announcement was one of the biggest talking points at re:Invent 2018, as it means AWS can download and migrate geospatial data into the cloud for its customers. Now, it’s less about what AWS will do next and more about what customers and partners are going to do with it.”

Bobby Hellard, staff writer

“Cloud technology has become more crucial to businesses in the last 12 months, and this is a trend that’s sure to continue for some time to come.

“But, to some extent, 2018 betrayed a certain fallibility in service providers, and confirmed organisations will never be able to rely on them to offer an undisrupted service, or fully guarantee data security. It’s an issue that affected the two biggest players in this space, AWS and Microsoft’s Azure.

“Although critical changes to AWS towards the back end of 2017 and into 2018 – such as default encryption for S3 buckets – aimed to bolster security, a flurry of incidents suggested issues still prevailed. For instance, Buckhacker, a tool developed by white hat hackers, comprised a search engine that trawled AWS for unsecured servers.

“Microsoft, meanwhile, suffered a series of embarrassing and occasionally bizarre service outages, exemplified by a “severe” weather event knocking out its Texas-based data centre, leading to a global Azure and Office 365 outage.

“It demonstrates, if anything, that service providers can never guard against absolutely everything, including the forces of nature, and businesses that rely on the cloud to host their data and run their critical operations must take this into account.”

Keumars Afifi-Sabet, staff writer

How to backup to the cloud with a WAN data acceleration layer

Software-Defined WANs (SD-WANs) are, along with artificial intelligence, the talk of the town, but they have their limitations when it comes to fast cloud back-up and restore. However, before SD-WANs, organisations had to cope with conventional wide area networks – just plain old WANs – with all applications squeezed through one pipe, complete with bandwidth congestion and heavy quality of service (QoS) management, using multi-protocol label switching (MPLS) to connect each branch office to one or more clouds.

The advent of the SD-WAN was a step forward to a certain extent, allowing branch offices to be connected to wireless WANs, the internet, private MPLS, cloud services and to an enterprise data centre using a number of connections. In essence, SD-WANs are great for midsized WAN bandwidth applications, with their ability to pull disparate WAN connections together under a single software-managed WAN. Yet they don’t sufficiently resolve latency and packet loss issues, which means any performance gains are usually down to inbuilt deduplication techniques.

SD-WANs and SDNs

Some people may also think of SD-WANs as the little brother of their better-known sibling: software-defined networking (SDN). Although the two are related, in that both are software-defined, SDN is typically used inside the data centre at a branch or an organisation’s headquarters, and is perceived as an architecture rather than a product.

In contrast, SD-WANs are a technology you can buy to help manage a WAN. This is done by using a software-defined approach that allows branch office network configurations to be automated, where in the past they were handled manually. That traditional approach required an organisation to have an on-site technician present, so if, for example, an organisation decided to roll out teleconferencing to its branch offices, the pre-defined network bandwidth allocations would have to be manually re-architected at each and every branch location.

SD-WANs allow all of this to be managed from a central location using a graphical user interface (GUI). They can also allow organisations to buy cheaper bandwidth while maintaining a high level of uptime. Yet much of the SD-WAN technology isn’t new, and organisations have had the ability to manage WANs centrally in the past. So, SD-WANs are essentially an aggregation of technologies that creates the ability to dynamically share network bandwidth across several connection points. What’s new about them is how they package all of these technologies together into a whole new solution.

Bandwidth conundrum

However, buying cheaper bandwidth won’t often solve latency and packet loss issues. Nor will WAN optimisation sufficiently mitigate their effects, or improve an organisation’s ability to back up data to one or more clouds. So how can the problem be addressed? The answer is that a new approach is required: by adding a WAN data acceleration overlay, it becomes possible to tackle the inherent WAN performance issues head on. WAN data acceleration can also handle encrypted data, and it allows data to be moved at speed over distance across a WAN.

This is because WAN data acceleration takes a totally different approach to addressing latency and packet loss. The only hard limit is the speed of light, which is simply not fast enough – and it is the speed of light that governs latency. With traditional technologies, latency decimates WAN performance over distance; this will inevitably affect SD-WANs too, and adding more bandwidth won’t change the impact that latency has on WAN performance.
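
The speed-of-light point can be made concrete with a quick back-of-the-envelope calculation. This is only a rough sketch: the two inputs are the approximate speed of light in optical fibre (~200,000 km/s) and the 2,860-mile link distance quoted in the CVS case study below.

```python
# Rough floor on round-trip time imposed by the speed of light in fibre.
# Light travels at ~200,000 km/s in glass (about two-thirds of its speed
# in a vacuum); no amount of extra bandwidth can lower this floor.

MILES_TO_KM = 1.609344
FIBRE_SPEED_KM_S = 200_000  # approximate speed of light in optical fibre

def min_rtt_ms(distance_miles: float) -> float:
    """Best-case round-trip time in milliseconds over a fibre path."""
    one_way_s = (distance_miles * MILES_TO_KM) / FIBRE_SPEED_KM_S
    return 2 * one_way_s * 1000

print(f"{min_rtt_ms(2860):.1f} ms")  # ~46 ms before any routing or queuing delay
```

The 86ms actually measured on the CVS link is roughly double this physical floor, the rest coming from routing, queuing and equipment delays.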

TCP/IP parallelisation

By using TCP/IP parallelisation techniques and artificial intelligence to control the flow of data across the WAN, it’s possible to mitigate the effects of latency and packet loss – typically customers see a 95% WAN utilisation rate. The other upside of not using compression or dedupe techniques is that WAN data acceleration will accelerate any and all data in identical ways. There is no discrimination about what the data is.
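
As a rough illustration of why parallel streams raise utilisation, consider the classic window-limited TCP model, in which a single stream can have at most one window of unacknowledged data in flight per round trip. This is a simplified textbook model with illustrative figures, not a description of any vendor's proprietary flow control:

```python
# Simplified model of window-limited TCP: a single stream's throughput is
# capped at window / RTT, regardless of how fat the pipe is. Running many
# streams in parallel multiplies the aggregate, up to the link rate.
# All figures are illustrative assumptions, not vendor data.

def stream_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s / 1e6

LINK_MBPS = 600        # an OC-12-class pipe, as in the case study below
RTT = 0.086            # 86 ms round trip
WINDOW = 64 * 1024     # a common default 64KB TCP window

single = stream_throughput_mbps(WINDOW, RTT)  # ~6.1 Mbps per stream
for n in (1, 16, 64, 128):
    # aggregate utilisation with n parallel streams, capped at the link rate
    aggregate = min(n * single, LINK_MBPS)
    print(f"{n:3d} streams: {aggregate / LINK_MBPS:5.1%} of the link")
```

A single default-window stream uses about 1% of such a link; it takes on the order of a hundred parallel streams to approach the 95% utilisation figure quoted above.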

This permits it to reach Storage Area Networks (SANs), and by decoupling the data from the protocol, customers have been able to transfer data between SAN devices across thousands of miles. One such Bridgeworks customer, CVS Caremark, connected two virtual tape libraries over 2,860 miles at full WAN bandwidth, achieving a performance gain of 95 times the unaccelerated rate. So imagine the gains that could be achieved by overlaying SD-WANs with WAN data acceleration solutions such as PORTrockIT and WANrockIT.

Making a difference

These WAN performance gains could make the difference to cloud or data centre to data centre backup times, while also improving recovery time objectives (RTOs) and recovery point objectives (RPOs). So, rather than having to cope with disaster recovery, organisations could use SD-WANs with WAN data acceleration overlays to focus on service continuity. They would also be wise to back up their data to more than one location, including more than one cloud.

Furthermore, the voluminous amounts of data that keep growing daily can make backing up to a cloud, or simply to a data centre, a very slow process. Restoring the data could also take too long when a disaster occurs, whether caused by human error or by nature. Another tip is to ensure that more than one disaster recovery site is used to back up and restore data. These DR sites should be located outside each other’s circles of disruption, to increase the chance of maintaining uptime if, for example, a flood affects one of them. You might also like to keep certain types of sensitive data elsewhere by creating an air gap.

Cloud backups and security

Whenever the cloud is involved in backing up and storing data – or any network connectivity for that matter – there should also be some consideration about how to keep the data safe from hackers. Cloud security has improved over the years, but it’s not infallible – even the largest of corporations are fighting to prevent data breaches on a daily basis, and some including Facebook have been hacked.

Not only can this lead to lost data, but it can also create unhappy customers and lead to huge fines – particularly since the European Union’s General Data Protection Regulation (GDPR) came into force in May 2018. The other consequence of data breaches is lost reputation. So, it’s crucial not just to think about how to back up data to the cloud, but also to work on making sure its security is tight.

That aside, you may also wish to move data from one cloud to another, because latency and packet loss don’t only affect an organisation’s ability to back up and restore data from one or several clouds. They can also make it harder for people to share data simultaneously and to collaborate on data-heavy projects, such as those that use video data. Yet CVS Healthcare has found that WAN data acceleration can mitigate latency and packet loss while increasing its ability to back up, restore, transmit, receive and share data at a higher level of performance.

Case Study: CVS Healthcare

By accelerating data with the help of machine learning, it becomes possible to increase the efficiency and performance of the data centre, back up data to more than one cloud, and thereby improve efficiency and performance for clients. CVS Healthcare is but one organisation that has seen the benefits of WAN data acceleration. The company’s issues were as follows:

• Back-up RPO and RTO

• 86ms latency over the network (>2,000 miles)

• 1% packet loss

• 430GB daily backup never completed across the WAN

• 50GB incremental taking 12 hours to complete

• Outside RTO SLA – unacceptable commercial risk

• OC12 pipe (600Mb per second)

• Excess Iron Mountain costs
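
These figures alone explain why the full backup never completed. The well-known Mathis approximation estimates a single TCP stream's throughput as MSS / (RTT × √p), where p is the packet-loss rate. Applying it to the numbers above is a textbook estimate, not a figure from the case study itself:

```python
import math

# Mathis et al. approximation: a single TCP stream's throughput is roughly
# MSS / (RTT * sqrt(p)). Applied to the CVS figures above: 86 ms RTT,
# 1% packet loss, 430 GB daily backup on a 600 Mbps pipe.

MSS_BYTES = 1460        # typical Ethernet maximum segment size
RTT_S = 0.086
LOSS = 0.01

ceiling_bps = MSS_BYTES * 8 / (RTT_S * math.sqrt(LOSS))
print(f"per-stream ceiling: {ceiling_bps / 1e6:.1f} Mbps")  # ~1.4 Mbps

days = 430e9 * 8 / ceiling_bps / 86400
print(f"430 GB at that rate: ~{days:.0f} days")             # ~29 days
```

In other words, on this link a single loss-throttled TCP stream uses well under 1% of the 600Mbps pipe, and a "daily" 430GB backup would take roughly a month, which is why it never finished.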

To address these challenges, CVS turned to a data acceleration solution, which took only 15 minutes to install. As a result, the original 50GB incremental back-up fell from 12 hours to 45 minutes – a 94% reduction in backup time. This enabled the organisation to complete its daily 430GB back-up in less than four hours, and in the face of a calamity it could perform disaster recovery, recovering everything completely, in less than five hours.
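
As a quick sanity check, the quoted numbers hang together (assuming 1GB = 10⁹ bytes):

```python
# Sanity-check the quoted CVS backup figures (assuming 1 GB = 1e9 bytes).

def effective_mbps(gb: float, minutes: float) -> float:
    """Effective transfer rate implied by moving gb gigabytes in the given time."""
    return gb * 1e9 * 8 / (minutes * 60) / 1e6

reduction = (12 * 60 - 45) / (12 * 60)            # 12 hours down to 45 minutes
print(f"reduction: {reduction:.0%}")              # 94%

print(f"50 GB in 45 min -> {effective_mbps(50, 45):.0f} Mbps")    # ~148 Mbps
print(f"430 GB in 4 h   -> {effective_mbps(430, 240):.0f} Mbps")  # ~239 Mbps
# Both implied rates sit comfortably within the 600 Mbps OC-12 pipe
# once latency and packet loss have been mitigated.
```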

Amongst other things, the annual cost savings created by using data acceleration amounted to $350,000. Interestingly, CVS Healthcare is now looking to merge with Aetna, so it will most probably need to roll the solution out across both merging entities.

Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and backup performance.

Eight tips for cloud back-ups

To improve the data acceleration of SD-WANs, as well as the ability to perform a cloud backup and restore large amounts of data fast, consider the following eight best practice tips:

  • Defer to the acronym PPPPP – Proper Planning Prevents Poor Performance – which applies to any network upgrade, whether LAN or WAN.

  • Begin by defining the fall-back plan for cloud back-up, and at what stage(s) it should be invoked. Just pushing on in the hope that you will have fixed all the issues before it’s time to hand over is crazy, and no one will thank you for it.

  • Know when to invoke the fall-back plan: you can learn lessons for next time while keeping your users and operations running as your primary focus. This may involve backing data up to more than one cloud, and some types of sensitive data may require an air gap to keep security very tight.

  • Remember that SD-WANs have great potential to manage workflow across the WAN. You can still overlay data acceleration solutions, such as WANrockIT and PORTrockIT, to mitigate the effects of latency for faster cloud back-up and restore.

  • Consider whether you can implement the fall-back plan in stages, rather than as a big bang implementation. If it’s possible, can you run both in parallel? Implementing in stages gives you time to learn what works and what doesn’t, allowing you to refine your SD-WAN and data acceleration overlays and improve cloud back-up and restore efficiency.

  • Work with users and your operations team to define the data groups and hierarchy, and to get their sign-off for the plan. Different types of data may require different approaches, or a combination of potential solutions, to achieve data acceleration.

  • Create a test programme to ensure reliability and functionality as part of the implementation programme.

  • Monitor and feed back – is it performing as you expected? This has to be a constant process, rather than a one-off.

SD-WANs are a popular tool; WAN optimisation can deliver marginal performance gains, but it does not address the underlying causes of poor WAN performance: latency and packet loss. To properly address the latency that grows with distance, organisations should consider opting for an SD-WAN with a data acceleration overlay for cloud back-ups.

To achieve business and service continuity, they should also back up their data to more than one cloud. This may require your organisation to engage with more than one cloud service provider, each located in a different circle of disruption, so that when one fails for whatever reason, back-ups from the other disaster recovery sites and clouds can be restored to maintain business operations.

Interested in hearing industry leaders discuss subjects like this and share their experiences and use cases? Attend the Cyber Security & Cloud Expo World Series, with upcoming events in Silicon Valley, London and Amsterdam, to learn more.

Slack has started blocking users who visited US sanctioned countries

Bobby Hellard

21 Dec, 2018

Communication service Slack is reportedly blocking users with ties to countries that are under sanction by the US government, with immediate effect and no chance of appeal.

Slack said the ban is in response to its obligations under US regulations and is aimed at users who have visited countries under US sanctions, such as Iran, Cuba and North Korea.

However, some users claim bans have been made in error, as they haven’t visited the listed nations in recent years.

A number of users have taken to Twitter to question the company’s reasoning, and some have even posted screenshots of the messages Slack has sent them explaining why they’ve been blocked.

“In order to comply with export control and economic sanctions laws and regulations promulgated by the U.S. Department of Commerce and the U.S. Department of Treasury, Slack prohibits unauthorised use of its products and services in certain sanctioned countries and regions including Cuba, Iran, North Korea, Syria, and the Crimea region of Ukraine,” said Slack in a message to banned software developer Amir Omidi.

“We’ve identified your team/account as originating from one of those countries and are closing the account effective immediately.”

Underneath the screenshot, Omidi explained that the immediate ban could not be disputed because there was no way to appeal: “So @SlackHQ decided to send me this email. No way to appeal this decision. No way to prove that I’m not living in Iran and not working with Iranians on slack. Nope. Just hello we’re banning your account,” he tweeted.

How Slack determined who to ban is coming under scrutiny, with users questioning how it knows whether they’ve visited any of the sanctioned nations, or what their ethnicity is. A PhD student from Vancouver, Canada, said he received the ban despite having no Slack contacts in Iran.

“Slack closed my account today! I’m a PhD student in Canada with no teammates from Iran! Is Slack shutting down accounts of those ethnically associated with Iran?! And what’s their source of info on my ethnicity?” he tweeted. 

A company representative told The Verge that the deactivations were the result of an upgrade to Slack’s geolocation system.

“We updated our system for applying geolocation information, which relies on IP addresses, and that led to the deactivations for accounts tied to embargoed countries,” the representative said. “We only utilize IP addresses to take these actions. We do not possess information about nationality or the ethnicity of our users.

“If users think we’ve made a mistake in blocking their access, please reach out to and we’ll review as soon as possible.”

The team at SlackHQ did eventually get back to Omidi, but has yet to resolve his issue: “Still no response at all and its the end of the workday in eastern US. I am surprised how long it takes them to reverse a ban or to issue some sort of statement on this,” he tweeted.

Encouraging a DevOps culture: On the pathway to change

As the name suggests, DevOps describes a close collaboration between software development and operations teams in IT. The goal? To create a faster and more effective way of developing and managing software by delivering features, fixes and updates in a more efficient manner. But you’ve already heard about these advantages elsewhere.

DevOps strategies and their success have been highly publicised, citing faster lead times, more frequent code deployment and quicker incident recovery times for IT teams. However, transitioning to this method of management is often approached incorrectly.

The emergence of DevOps as a buzzword has created a surge in new technologies, all claiming to make the journey from separate development and operations silos, to a collaborative approach, much simpler. The recent innovations in virtual and cloud-based technologies have supported this surge. However, deploying new technology isn’t necessarily the best way to begin this shift – a cultural change is often required first.  

By nature, IT workers will favour technological tools as their preferred method of meeting business objectives. Tools are tangible, usually arrive with installation guidelines, and their purpose is well defined. Cultural changes, however, do not come with an instructional guidebook and can therefore be harder to implement during a DevOps journey.

A common challenge of DevOps is ensuring both teams — development and operations — respect each other’s thoughts and opinions. There have often been long baked-in divisions between those teams, both organisationally and culturally. Unlike technological tools, you cannot change attitudes by deploying a simple update. Colleagues from both departments will come with past experiences and motivations which may be at odds with the mindset required to make DevOps a success.

To get around this, organisations should make the decision-making process simple and free of hierarchy. Teams should be able to have open and respectful debate and, fundamentally, everyone should be working towards the same objective. Information-sharing tools can make the decision-making process easier, as actual production and build metrics will be visible to all.

Data is not subjective, so ensuring visibility of live running statistics will ensure that decisions are made based on actual performance information, rather than the personal opinions of workers in either team.

This reassurance through visibility can improve team dynamics, as it reduces the feeling of risk and uncertainty. In fact, when researchers at Google studied over 180 engineering teams, they found that the most important factor in predicting a high performing team is psychological safety — feeling safe taking considered risks within the team. In a relatively new area of collaboration, like DevOps, this sentiment could not be more critical.

Cultural hesitation can also be eased by publicising the success of projects that have been completed this way. Perhaps more importantly, organisations need to be clear about the benefits that DevOps is providing to specific teams.

Some workers can be short-sighted and will have a natural bias to strategies that clearly complement their own efforts. Successful communication of the benefits of DevOps will differ between workers from development and operations, so organisations should consider this in their internal communications strategies.  

The goal of DevOps is to help deliver software quickly, robustly and efficiently. However, it is often misinterpreted as simply a need to deploy new technological tools to meet this goal.  In practice, DevOps relies more on cultural acceptance than the integration of new tools.

Of course, the organisational change can be supported by a collection of improved software development practices, but organisations cannot rely only on these tools. Ultimately, it starts with a change to people’s mindsets.

Google Cloud creates works of art using big data

Bobby Hellard

19 Dec, 2018

Creative minds at Google Cloud have come up with a way to make data storage more interesting by visualising storage traffic data to create stunning works of art.

In collaboration with Staman Design, a data visualisation design studio, the company used the trajectory, velocity and density of data moving around the globe to create virtual maps.

“Looking at Cloud Storage requests over time showed us a distinct pattern, the pattern gave us a way to correlate countries, and each correlation gave us an insight into connections around the globe,” said Chris Talbott, Google Cloud’s head of cloud storage product marketing.

“So we put it all together in a video that gave every country a turn in the spotlight. It jumps from country to correlated country, showing unexpected connections and prompting conversation and discussion.”

Most of the art created from the request data has been on display at Google Cloud’s Next events in San Francisco, Tokyo and London. The idea was to create a global picture of its service, highlighting patterns that would help the company better serve its customers.

But, as Talbott put it “somewhat jokingly”, Google wondered if it could make boring old storage beautiful. The answer was yes, as it’s managed to paint wonderfully vivid pictures using the data.

The process began by looking at cloud storage data requested by customers. This data charted a request from its country of origin to the relevant cloud region, and vice versa. The team took one week’s worth of storage data and searched for patterns useful to customers. The information detailed the direction of the data, but not who it belonged to.

Visualised data migration from around the world – courtesy of Google Cloud

“The associated data also tells us the size of the request in GBs and a timestamp,” explained Talbott. “Since the data is anonymized, we don’t know which user is making the request, whose data is being requested or what the content is.”

“You can make storage beautiful when you look at it in different ways,” he said, “and in doing so you can really generate some thought-provoking insights for your customers.”

UK cloud adoption outpacing the EU average

Keumars Afifi-Sabet

17 Dec, 2018

The UK is the sixth largest cloud user among European Union (EU) countries, up one place on four years ago, according to research.

British enterprises boast a relatively high rate of cloud adoption, with 41.9% of organisations adopting some form of cloud service, against an EU average of 26.2%. Cloud services were mostly used for hosting email systems, and storing electronic files.

The UK is only beaten by a handful of Nordic nations: Denmark in third place with 55.6%, Sweden in second with 57.2%, and Finland leading the pack with 65.3%.

Statistics published by Eurostat show that UK organisations are outpacing the rest of Europe, on average, with UK cloud adoption up 17.9 percentage points on 2014, against a relatively modest EU-wide average increase of 7.2 points.
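
A quick back-of-the-envelope check makes the growth figure concrete. Note that the 2014 UK baseline below is not stated in the article; it is inferred from the 2018 figure and the quoted increase, so treat it as an assumption:

```python
# Figures quoted in the article (percent of enterprises using some
# form of cloud service).
uk_2018 = 41.9
uk_increase_since_2014 = 17.9  # percentage points, per the article

# Inferred 2014 baseline -- derived, not quoted, so an assumption.
uk_2014 = round(uk_2018 - uk_increase_since_2014, 1)
print(uk_2014)  # 24.0

# The distinction matters: 17.9 percentage points of growth on a 24%
# baseline is roughly 75% growth in relative terms.
relative_growth = round((uk_2018 - uk_2014) / uk_2014 * 100, 1)
print(relative_growth)  # 74.6
```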

Meanwhile, public cloud usage in the EU-28 generally overshadows private cloud usage: 40% public versus 31% private among large enterprises, and 17% versus 11% among SMBs. These statistics also show overall cloud usage among larger businesses dwarfing cloud adoption among SMBs.

“Cloud computing is one of the strategic digital technologies considered important enablers for productivity and better services,” said authors Magdalena Kaminska and Maria Smihily.

“Enterprises use cloud computing to optimise resource utilisation and build business models and market strategies that will enable them to grow, innovate and become more competitive.

“Growth remains a condition for businesses’ survival and innovation remains necessary for competitiveness. In fact, the European Commission in the wider context of modernisation of the EU industry, develops policies that help speed up the broad commercialisation of innovation.”

Surprisingly, the rate of cloud adoption in countries such as France and Germany was considerably below average, 19% and 22% respectively, and a far cry from the host of Scandinavian leaders.

The specific reasons businesses adopt cloud computing technology also varied to some extent, with nearly seven out of ten enterprises using the cloud for storing files in electronic form and for email (68% and 69% respectively).

Moreover, just 23% of European businesses use cloud computing power for enterprise software, and just 29% of firms use cloud-based customer relationship management (CRM) tools and apps.

UK-based organisations’ cloud usage, which also includes office software and financial or accounting apps, was generally higher than average across the board, with 77% of British organisations using the cloud for file storage, for example.

Unsurprisingly, the highest proportion of enterprises using cloud computing services was in the information and communication sector, at 64%.

This was followed by ‘professional, scientific and technical activities’ businesses at 44%, while the rate of adoption for firms in almost all other sectors languished between 20% and 33%.

Google Cloud says it won’t sell general facial recognition software

Connor Jones

17 Dec, 2018

Google Cloud has announced that it will not sell general-purpose AI-driven facial recognition technology until the technology is polished and concerns over data protection and privacy have been addressed in law.

“Google has long been committed to the responsible development of AI. These principles guide our decisions on what types of features to build and research to pursue,” said Kent Walker, SVP of global affairs at Google. “Facial recognition merits careful consideration to ensure its use is aligned with our principles and values and avoids abuse and harmful outcomes.

“Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions,” he added.

It’s unclear exactly what these questions are or what needs reworking in the technology, but Walker believes AI can benefit good causes such as “new assistive technologies and tools to help find missing persons”. Despite that, a growing movement argues that facial recognition tech needs regulating.

The announcement follows news that the tech community, specifically AI researchers, lawmakers and technology companies, is forming a rare consensus on the regulation of facial recognition technology.

The Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center unveiled the Safe Face Pledge earlier this month, which aims to get big AI developers to commit to limiting the sale of their tech, including to law enforcement, until specific laws have been debated and implemented.

The call to action was initiated because of the rising concern around the bias and mass surveillance risks associated with facial recognition technology deployed on a commercial scale.

Notable signatures on the pledge have so far come from leading researchers and esteemed figures in the tech community but none of the big developers, such as Microsoft, Amazon or Google, have committed as of yet.

This could be because multi-billion dollar contracts are at stake for vendors that develop the first marketable tech in emerging fields such as AI-driven video analysis, according to market researcher IHS Markit. Video surveillance technology is already a market worth $18.5 billion and with AI making the analysis more efficient, it would be unwise for any of the big developers to walk away.

“There are going to be some large vendors who refuse to sign or are reluctant to sign because they want these government contracts,” Laura Moy, executive director of the Center on Privacy & Technology, told Bloomberg.

Sundar Pichai, CEO of Google, announced a set of AI principles back in June following mass backlash from Google’s staff over the use of the company’s AI tech in the Pentagon’s Project Maven drone programme.

The seven principles were drafted to ensure Google develops AI tech in an ethical way, and following their publication, Google announced that it would not renew the Pentagon’s drone contract.

The same principles have influenced its decision not to market general-purpose facial recognition APIs. One of the principles is to avoid creating or reinforcing unfair bias, something current tech has been shown to struggle with, specifically errors in detecting skin colours other than white.

It’s unclear whether the laws needed for the technology’s implementation will arrive any time soon. Brad Smith, Microsoft’s president and chief legal officer, put the chances of federal legislation in 2019 at 50-50 in a televised Bloomberg interview.

He predicts that if a law comes, it will most likely come as part of a broader privacy bill, adding that there is a much better chance of a state or city law being drafted first. If that were drafted in a more influential state, such as California, it could spur major vendors to change the way they develop AI to tackle these key issues.

Despite the current flaws in facial recognition tech, it can be used for good. In his blog post, Kent Walker detailed how Google’s AI is being used to detect diabetic retinopathy, a condition that affects one in three diabetics and can cause blindness.

The new technology, which has been in development for years, can detect early signs of diabetic retinopathy before it damages the patient’s sight, with the same accuracy as an ophthalmologist.

The technology specifically targets underserved regions such as Thailand, where there are only 1,400 eye doctors for 5 million diabetics, helping to screen for early signs of the condition in a country where screening rarely takes place.

AWS re:Invent 2018 roundup: Product reviews, analysis, and the DevOps wildcard

Opinion AWS re:Invent is always rich in product launches. Some products are entirely new, while others are updates or enhancements to existing tools. The 2018 event was no exception. In case you missed it, here’s a drive-thru of some of the best products – and one important statement – from this year’s show.

AWS Marketplace for Containers

Announced at the Global Partner Summit keynote, the AWS Marketplace for Containers is the next logical step in the Marketplace ecosystem. Vendors will now be able to offer container solutions for their products, just as they do with AWS EC2 AMIs.

The big takeaway here is just how important containerisation has become, and how much growth we’re seeing in containerised products and serverless architectures in general. Along with the big announcements around AWS Lambda, this solidifies the industry’s push to adopt serverless models for applications.
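
To make the serverless model concrete, here’s a minimal sketch of a Lambda-style function in Python. The `(event, context)` signature is Lambda’s documented Python interface; the function name, event shape, and response format (an API Gateway-style proxy response) are illustrative assumptions:

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: returns a greeting for a simple event.

    `event` carries the request payload; `context` (unused here) carries
    runtime metadata such as remaining execution time.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing -- no AWS account required.
print(handler({"name": "re:Invent"}, None))
```

The appeal is exactly what the article describes: the unit of deployment is a single function, and the platform handles provisioning, scaling, and billing per invocation.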

AWS Marketplace – Private Marketplace

The AWS Marketplace has added the Private Marketplace to its feature set. You can now have your own marketplace that’s shared across your AWS Organization. This is neat and all, but I think what’s even more interesting is what it hints at in the background.

It seems to me that to have a well-established marketplace at all, your organisation is going to need to begin its journey on the DevOps trail. This news shows that a good deployment pipeline is really the best way to handle a project, whether for external or internal customers.

Firecracker
This looks really cool. Firecracker is a virtualisation tool built specifically for micro VMs and function-based services like Lambda or Fargate. It runs on bare metal… wait, what? I thought we were trying to move away from our own hosted servers?! That’s true. However, consider all the new IoT products and features announced at the conference and you’ll see there’s still a lot of bare metal, both in use and in development. I don’t think Firecracker is meant solely for large server-farm setups, but quite possibly for devices in the IoT space.

The serverless / microservice architecture is a strong one, and this allows that to happen in the IoT space. In fact, I’m currently working on installing it onto my kids’ Minecraft microcomputer.
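
For a sense of how lightweight Firecracker’s control plane is: a microVM is configured through PUT requests to a REST API served over a unix socket, then started with an `InstanceStart` action. The sketch below only assembles that documented call sequence as data rather than talking to a running Firecracker process, and the kernel/rootfs paths are illustrative assumptions:

```python
import json

def microvm_boot_sequence(kernel_path, rootfs_path, vcpus=1, mem_mib=128):
    """Build the ordered (method, path, body) calls to boot one microVM."""
    return [
        # Size the VM first.
        ("PUT", "/machine-config", {"vcpu_count": vcpus, "mem_size_mib": mem_mib}),
        # Point it at a kernel image.
        ("PUT", "/boot-source", {
            "kernel_image_path": kernel_path,
            "boot_args": "console=ttyS0 reboot=k panic=1",
        }),
        # Attach a root filesystem.
        ("PUT", "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": rootfs_path,
            "is_root_device": True,
            "is_read_only": False,
        }),
        # Only once configured do we ask Firecracker to start the VM.
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]

for method, path, body in microvm_boot_sequence("/img/vmlinux", "/img/rootfs.ext4"):
    print(method, path, json.dumps(body))
```

That tiny surface area is the point: a handful of API calls and a few megabytes of memory per VM is what makes function-scale and IoT-scale workloads plausible.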

Andy Jassy says what?

In the fireside chat with Andy Jassy in the partner keynote, there were several things I found interesting. But one comment stood out from the rest:

“I hear enterprises, all the time, wanting help thinking about how they can innovate at a faster clip. And, you know, it’s funny, a lot of the enterprise EBC’s I get to be involved in… I’d say roughly half the content of those are enterprises asking me about our offering and how we think about our business and what we have planned in the future, but a good chunk of every one of those conversations are enterprises trying to learn how we move quickly and how we invent quickly, and I think that enterprises realise that in this day and age if you are not reinventing fast and iterating quickly on behalf of your customers, it’s really difficult to be competitive.

“And so I think they want help from you in how to invent faster. Now, part of that is being able to operate on top of the cloud and operate on top of a platform like AWS that has so many services that you can stitch together however you see fit. Some of it also is, how do people think about DevOps? How do people think about organising their teams? You know… what are the right constraints that you have but that still allow people to move quickly.”

He said DevOps! Apparently, larger companies that are looking to change don’t just want fancy tools and new technology. They also need help getting better at effecting change.

That’s absolutely outside the wheelhouse of AWS, and clearly a call-to-action for the partner community.

Read more: AWS looks to redefine hybrid cloud at re:Invent 2018 – plus make big moves in blockchain