Can Google Stadia finally bring success to cloud gaming?


Connor Jones

18 Nov, 2019

It’s not usually part of our remit, but despite it being a gaming-geared announcement, there’s something about Google’s new Stadia service that interests us – specifically its infrastructure, which raises questions we can’t quite answer yet.

Codenamed Project Stream before its launch, Google’s game streaming service is far from the first to grace the market, but it might be the most well-timed attempt of them all. It aims to bring console-quality games directly to virtually any screen with no need for a physical console.

Games will be controlled by a new Stadia controller which connects directly to the platform via Wi-Fi. Not the screen, not the Chromecast – directly to the game client residing in the cloud.

It’s important to note this isn’t the first time cloud streaming of this kind has been attempted. OnLive and PlayStation Now, to name but two, promised so much but delivered very little.

OnLive ran into money issues, with the cost of running its infrastructure far exceeding the income it made. The service reportedly cost millions of dollars a month to run, yet at launch, and for a few weeks after, the company took in only single-digit daily income because of its “try before you buy” policy on games.

PlayStation Now is actually still alive and kicking. The premium service hasn’t been adopted nearly as widely as first thought, probably due to a combination of its reported heavy input lag and poor variety of supported games.

So with that, Stadia must overcome some significant challenges to breathe new life into the concept. With edge computing, it theoretically has an advantage over OnLive, and, being Google, it has already secured a decent selection of launch titles to help ensure day-one success.

What caught our eye is the cloud and edge computing aspects of the service’s infrastructure and streaming strategy. OnLive attempted cloud gaming in the past, but the support infrastructure of the day, which was unfit for purpose, was the main obstacle to building a system that actually worked. The streaming speed and quality were passable as a proof of concept and just about playable with some games, but launching Stadia eight years after OnLive’s failure, Google must do much better.

So many questions

Google has said it will be using a combination of its highly advanced datacentres and edge infrastructure to deliver gaming at low latencies, something that, especially in the online multiplayer space, is of vital importance.

Phil Harrison, the Google VP and GM leading the Stadia project, said the measurable latency issues seen in Project Stream have been “solved and mitigated”.

“There are some investments in the datacentre that will create a much higher experience for more people, and there are some fundamental advances in compression algorithms,” he told Eurogamer.

“One important thing to keep in mind is that we are building our infrastructure at the edge. It’s not just in our central, massive datacentres. We’re building infrastructure as close to the end user, the gamer, as possible – so that helps mitigate some of the historical challenges”.

The interaction between the datacentre and the edge is unclear, specifically the extent to which each will affect the overall processing and transmission of game data. What has been said so far seems somewhat confusing and at times contradictory. For example, Harrison spoke about microsecond ping speeds for gamers but 20ms edge-to-datacentre speeds.
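As a rough illustration of why those figures are hard to reconcile, consider a simple latency budget. The sketch below sums the stages of one streamed frame; all figures apart from the 20ms edge-to-datacentre hop are our own assumptions for the sake of the arithmetic, not numbers Google has published.

```python
# Illustrative latency budget for one frame of cloud-streamed gameplay.
# Every figure except the 20ms edge-to-datacentre hop is an assumption
# made for the sake of the arithmetic; the point is that end-to-end lag
# is the sum of every stage, so a "microsecond ping" to the nearest edge
# node still leaves the slower stages dominating.
budget_ms = {
    "controller to edge (Wi-Fi + access network)": 10.0,
    "edge to datacentre (figure quoted by Harrison)": 20.0,
    "render + encode in the datacentre (one 60fps frame)": 16.7,
    "return path + decode + display": 25.0,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:55s} {ms:6.1f} ms")
print(f"{'total (one round trip)':55s} {total:6.1f} ms")
```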

Google’s massive number of datacentres, according to Harrison, will be pivotal in delivering the Stadia experience the tech giant has imagined. Harrison said Google’s datacentres offer the theoretically unlimited compute capacity needed for a cloud-based streaming service to thrive. Game development has historically been constrained by hardware and by players’ reluctance to upgrade for a few years, or until the next console life cycle starts. In the datacentre, CPU and GPU capacity is as powerful as the developer needs it to be to run its game.

Chris Gardner, senior analyst at Forrester, is optimistic about the capability of Google’s infrastructure. “The network configuration is somewhat of a mystery, but clearly Google nailed this because benchmarks have shown perceived input latency to be extremely fast,” he tells Cloud Pro. “Google has experience with network optimisation (all the way down to designing its own protocols) so the performance is not a stretch.”

Take the specified hardware announced by Google and put it into one of Google’s many datacentres and you arguably have a recipe for success, he adds.

Trouble in the network

However, the promises around network speeds proved to be a point of contention for us. Firstly, Harrison told the BBC that 4K gaming can be achieved on download speeds of 25Mbps; for reference, the average UK household gets just 18.5Mbps from its internet connection, and far less in rural areas. The VP said Google expects networks to improve, but that was far from a promise.
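For a back-of-the-envelope sense of what those figures mean in practice, the sketch below compares the quoted 25Mbps requirement with the 18.5Mbps UK average and estimates hourly data usage; the calculation is ours, not Google’s.

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
# 25Mbps is the download speed Harrison cited for 4K streaming;
# 18.5Mbps is the average UK household speed given for reference.
required_mbps = 25.0
average_uk_mbps = 18.5

shortfall_mbps = required_mbps - average_uk_mbps
gb_per_hour = required_mbps / 8 * 3600 / 1000  # megabits/s -> gigabytes/hour

print(f"Shortfall against the UK average: {shortfall_mbps:.1f} Mbps")
print(f"Data consumed by an hour of 4K play: ~{gb_per_hour:.1f} GB")
```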

Although Google seems confident that its back-end equipment is up to the task, it’s likely to face the problem of internet service provider (ISP) throttling – world-class servers or not. Harrison has confirmed that Google already has relationships within the wider industry, but it’s possible the company could run into the same problems Netflix faced during its expansion, when ISPs throttled its traffic until the company agreed to pay them for faster delivery to users.

It’s a very real possibility that ISPs would throttle bandwidth as popularity grows and network demands are greater. “[Netflix] had to negotiate with the major players to ensure the customer experience wasn’t dreadful,” says Gardner. “I expect the same experience for game streaming providers, although much more so because now it’s not just a bandwidth negotiation, it’s latency as well”.

On the topic of latency, Gardner cited this as his biggest concern of the whole project. “What I expect to see is streaming to be initially successful with casual games, platformers and roleplaying games,” he said. “However, multiplayer games demand low latency and low input lag to stay competitive and enjoyable. This is my biggest concern,” he added. “Shooters, MOBAs and other types of super competitive games – I honestly don’t expect gamers to tolerate the latency.”

Competition is just around the corner

There are only three companies in the world right now positioned well enough to feasibly deliver a cloud-based product like this: Google is one of them; AWS and Microsoft are the others. We just wouldn’t expect any of these to pump so much time and money into something the world isn’t ready for yet.

Google’s main competitor in this area is Microsoft, which is working on Project xCloud, its own game streaming service currently in beta. The company behind the Xbox is certainly lagging behind Google, as its product is still in development, but it arguably presents the best case for making game streaming work. Reports from those selected to test the beta version of xCloud also seem to be unanimously positive.

Couple Microsoft’s prowess in the cloud market with its strong presence in the console sector, spanning nearly two decades, and you have impressive backing for what could be a better product than Google’s. It’s possible Microsoft could let Google launch Stadia, learn from its inevitable mistakes, and then blow it out of the water with a far superior service.

Regardless of how all of this plays out, it’s difficult to get excited about something that has failed in so many previous attempts. With so little information about the project disclosed – the kind of information we need to make educated guesses about its viability as a service – we can’t help but look on with scepticism.

Main image credit: Marco Verch

Microsoft overhauls its privacy policy amid EU concerns

18 Nov, 2019

Microsoft has said it will be updating its privacy provisions for commercial cloud contracts after a report from EU regulators last month questioned the company’s ability to comply with data laws.

The European Data Protection Supervisor (EDPS), an independent authority that oversees the application of GDPR, launched an investigation in April to assess whether the company’s contracts with EU institutions violated the rules.

The results of that investigation, released in October, raised “serious concerns” about Microsoft’s ability to provide appropriate safeguards for the processing of data done on behalf of the EU bodies it services.

In a statement on its website on Monday, Microsoft said: “We are announcing today we will increase our data protection responsibilities for a subset of processing that Microsoft engages in when we provide enterprise services”.

Last year the company worked alongside the Dutch Ministry of Justice and Security to amend contractual terms of a services agreement after authorities raised similar concerns about the lack of technical safeguards for the processing of data.

Monday’s privacy policy update is designed to extend those amendments across all commercial cloud contracts globally for both the private and public sector, the company explained.

“We will clarify that Microsoft assumes the role of data controller when we process data for specified administrative and operational purposes incident to providing the cloud services covered by this contractual framework, such as Azure, Office 365, Dynamics and Intune,” the company said.

“This subset of data processing serves administrative or operational purposes such as account management; financial reporting; combatting cyberattacks on any Microsoft product or service; and complying with our legal obligations.

“The change to assert Microsoft as the controller for this specific set of data uses will serve our customers by providing further clarity about how we use data, and about our commitment to be accountable under GDPR to ensure that the data is handled in a compliant way.”

Microsoft will remain the data processor when providing its services, fixing bugs, operating security services, and providing software updates, the statement added.

The policy overhaul comes just days after the company committed to applying the California Consumer Privacy Act to all US states once it comes into force in January 2020, although it has no legal obligation to do so.

The company expects the new policy terms to be applied to all commercial cloud contracts by the beginning of 2020.

Enterprises risking data disaster by not fully exploring cloud backup timeframes, research says

Shared responsibility in cloud security is an issue that refuses to go away. Yet according to a new report from backup and disaster recovery managed services provider (MSP) 4sl, organisations are risking a data disaster by misunderstanding cloud providers’ backup processes.

The study, which polled 200 UK enterprises, found a majority of respondents believe the backup retention periods for their various cloud products are longer than what the providers actually offer as standard.

The hyperscale clouds are a primary example. The report notes Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform do not offer backup as standard on their own. Securing such data has long been a booming channel industry for independent MSPs and others – at least until AWS, for instance, launched AWS Backup at the start of this year to take a cut.

Yet the vast majority of those polled believed backup did exist as standard. More than four in five believed this for AWS (81%) and Azure (84%), while an overwhelming 92% of respondents said the same for Google.

Even for products with standard backup included, respondents believed they were getting more than they actually had, although the size of the gap varied. For Office 365 SharePoint Online and Teams files, where the standard retention is 93 and 90 days respectively, around half (55% and 50%) knew where they stood. For products with much shorter retention, such as the 14 days for Teams messages and Office 365 Exchange Online, this drops to 22% and 27% respectively.
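As a minimal sketch of how an organisation might sanity-check its own position, the snippet below encodes the standard retention periods quoted in the report and flags any product that falls short of a required retention policy. The 30-day policy is an arbitrary assumption for illustration, and the figures should be checked against current provider documentation.

```python
# Standard retention periods cited in the 4sl research. A simple check
# like this can flag where an organisation's required retention exceeds
# what the service provides out of the box.
standard_retention_days = {
    "Office 365 SharePoint Online": 93,
    "Office 365 Teams files": 90,
    "Office 365 Teams messages": 14,
    "Office 365 Exchange Online": 14,
    "Salesforce": 90,
}

def at_risk(product: str, required_days: int) -> bool:
    """True if the standard retention falls short of the requirement."""
    return standard_retention_days.get(product, 0) < required_days

# Example: check every product against a hypothetical 30-day policy.
for product in standard_retention_days:
    print(f"{product:32s} at risk under a 30-day policy: {at_risk(product, 30)}")
```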

“With cloud infrastructure services and applications firmly entrenched in 21st century IT strategy, enterprises need to be certain that their cloud and backup strategies are operating in concert – with any change to cloud strategy accompanied by changes in backup policy,” the report notes. “However, this is not consistently the case.”

The one product which came out of the rankings relatively unscathed was Salesforce. The CRM giant promises 90 days of backup retention as standard, with more than half of respondents (55%) knowing this and almost four in five therefore having backups that are not at risk.

Yet the findings – perhaps not entirely surprising given 4sl’s line of business – should come as a warning to organisations. “The desire to pass on responsibility for backup to service providers is understandable – backup environments are becoming extremely complex, and the peace of mind that a responsible partner is managing backup can be invaluable,” said Barnaby Mote, 4sl CEO and founder. “However, enterprises need to understand that in the main the standard level of backup provided for infrastructure or software as a service won’t meet their needs.”

Organisations back up data as a matter of course, not least for privacy and compliance but also to garner insights and analysis. Speaking to this publication in August, David Friend, CEO of cloud storage provider Wasabi Technologies, noted his view that storage would become a ‘commodity’, and that questions of cost around what to back up, and where, would simply no longer exist.

“We [shouldn’t] think of data as sort of a scarcity… more a mindset of data abundance,” said Friend. “The idea that data storage gets to be so cheap that it’s not worth deleting anything. We have to think about data as something which has probably got future value in excess of what we think it might have today; we need to think of cloud storage the same way we think of electricity or bandwidth.”

You can read the full 4sl report here (pdf, no opt-in required).


US Supreme Court agrees to end Google and Oracle’s ten-year copyright battle


Connor Jones

18 Nov, 2019

The US Supreme Court has agreed to hear a copyright lawsuit that has spanned nearly a decade between tech giants Google and Oracle.

The case was originally brought against Google following Oracle’s 2010 acquisition of Sun Microsystems, the company responsible for developing the Java language. Oracle alleged that Google stole code from Java to build its Android platform, a claim Google has repeatedly denied.

What followed was a series of court hearings and appeals; although a number of lower courts sided with Google, Oracle has successfully challenged these rulings in higher courts.

The case’s most recent ruling came in March 2018, when the Federal Circuit sided with Oracle’s copyright claim, leaving Google facing a damages bill of up to $9 billion (£7 billion).

The bill hasn’t yet been paid, as Google petitioned the US Supreme Court in January 2019, asking it to overturn what it called “a devastating one-two punch at the software industry”.

No date has been set by the Supreme Court, but a one-hour window has been allotted to hear the companies’ arguments. As the US’ highest court, its ruling is likely to be the last word on the lengthy case, and it could have lasting effects on the software development industry by settling whether application programming interface (API) packages can be copyrighted.

Permitting these vital components of software interoperation to be copyrighted could potentially stifle the software industry, making it difficult for new apps to work with other apps and software platforms.

The Supreme Court previously refused to hear the case following a 2014 Federal Circuit ruling but agreed this time around following support from the likes of Microsoft and Mozilla. The Electronic Frontier Foundation (EFF) has also sided with Google, calling the case a “mess”.

“We welcome the Supreme Court’s decision to review the case and we hope that the Court reaffirms the importance of software interoperability in American competitiveness,” said Kent Walker, Google’s SVP of Global Affairs, speaking to Cloud Pro. “Developers should be able to create applications across platforms and not be locked into one company’s software.”

Cloud Pro contacted Oracle for a statement but had not received a reply by the time of publication.

AWS files paperwork to challenge Microsoft JEDI deal – reports

Amazon Web Services (AWS) has filed with the US Court of Federal Claims to protest the decision to award the $10 billion-rated JEDI government cloud computing contract to Microsoft, according to reports.

As first reported by the Federal Times, and later confirmed by AWS, chief executive Andy Jassy told employees of plans at an all-hands meeting on November 14, citing potential presidential interference making the contract process ‘very difficult’ for government agencies.

Per the report, Jassy also claimed in the meeting that customers claim AWS is ‘about 24 months ahead of Microsoft’ when it comes to functionality and maturity.

AWS already holds one key card with its continued running of the CIA’s cloud operations, having been at full operational capability since the start of 2015. According to Nextgov, the agency earmarked in April plans for more commercial cloud contracts at a cumulative value approaching $10bn.

Following the decision to award the contract to Microsoft last month, many industry pundits continued to question supposed executive interference, as well as the setup of the Department of Defense (DoD) with regard to single tenant or multi-cloud operations.

At the time, AWS said it was ‘surprised’ about the conclusion and that it ‘remain[ed] deeply committed to continuing to innovate for the new digital battlefield’, but stopped short of confirming whether an appeal would be put in place.

“AWS is uniquely experienced and qualified to provide the critical technology the US military needs, and remains committed to supporting the DoD’s modernisation efforts,” an AWS spokesperson said in a statement. “We also believe it’s critical for our country that the government and its elected leaders administer procurements objectively and in a manner that is free from political influence.

“Numerous aspects of the JEDI evaluation process contained clear deficiencies, errors, and unmistakable bias – and it’s important that these matters be examined and rectified,” the spokesperson added.


Salesforce chooses Microsoft Azure for marketing cloud migration

Salesforce is moving its Marketing Cloud suite onto Microsoft Azure in an expansion of the companies’ partnership – and a big win for Microsoft.

The move will also see the two companies integrate Sales Cloud and Service Cloud with productivity and collaboration suite Microsoft Teams.

“By bringing together the power of Azure and Microsoft Teams with Salesforce, our aim is to help businesses harness the power of Microsoft Cloud to better serve customers,” said Microsoft CEO Satya Nadella in a statement, while a statement attributed to Salesforce co-CEOs Marc Benioff and Keith Block noted that the company was ‘excited to expand our partnership with Microsoft and bring together the leading CRMs with Azure and Teams to deliver incredible customer experiences.’

Details are thin on the migration plan itself, aside from Salesforce moving Marketing Cloud from its own data centres to Azure in the coming months. The press materials hint at the intentions, with words such as ‘preferred’ indicating a multi-cloud setup, yet the release simply notes that Salesforce ‘names Microsoft Azure as its public cloud provider for Marketing Cloud.’

Microsoft is by no means the only major cloud provider Salesforce works with. The company has had a longstanding relationship with Google Cloud on the software side, last year receiving a partner award from Google. As for Amazon Web Services (AWS), the long-time cloud infrastructure leader, only last week AWS and Salesforce, alongside Genesys and The Linux Foundation, launched the open data-focused Cloud Information Model. The companies also align on integration, with Salesforce an advanced member of the AWS Partner Network (APN).

Last month Microsoft noted there was ‘material growth’ in Azure contracts of $10 million or more in what were strong results compared with stuttering figures for AWS and Google. Among the company’s more recent customer wins, alongside the controversial $10bn JEDI government cloud contract last month, are Walt Disney Studios and subsidiary LinkedIn.


AWS to appeal Pentagon’s ‘biased’ JEDI contract awarded to Microsoft


Bobby Hellard

15 Nov, 2019

AWS has suggested the evaluation process for the Pentagon’s $10 billion cloud computing contract contained “unmistakable bias”.

The cloud giant has said it intends to appeal the Department of Defence’s decision to award the contract to Microsoft.

The Joint Enterprise Defence Infrastructure (JEDI) contract is a $10 billion project to modernise the Pentagon’s IT systems. Major cloud computing companies such as IBM, Oracle, Google and AWS were involved in a controversial bidding process, with Microsoft announced as the eventual winner in October.

This didn’t go down well with Amazon’s cloud computing arm, which initially said it was “surprised” by the decision and is now challenging it.

“AWS is uniquely experienced and qualified to provide the critical technology the US military needs and remains committed to supporting the DoD’s modernisation efforts,” an AWS spokesperson said.

“We also believe it’s critical for our country that the government and its elected leaders administer procurements objectively and in a manner that is free from political influence. Numerous aspects of the JEDI evaluation process contained clear deficiencies, errors, and unmistakable bias and it’s important that these matters be examined and rectified.”

Donald Trump called his then Defence Secretary James Mattis and directed him to “screw Amazon” out of a chance to bid on the JEDI contract, according to ‘Holding The Line: Inside Trump’s Pentagon with Secretary Mattis’, a book written by Guy Snodgrass, who served as a speechwriter for Mattis. Reports of the quote surfaced around the time Microsoft was awarded the JEDI contract.

Trump and Amazon CEO Jeff Bezos have a well-documented dislike for one another, and there was already a suggestion that this had influenced the DoD’s final decision. In July the president became “concerned” with how the bidding was going after complaints that other cloud providers were being unfairly excluded – AWS was the favourite at the time.

Oracle is also taking legal action against the DoD’s final decision; however, its argument is actually aimed at AWS. It claims that two DoD officials were offered jobs at Amazon while they worked on the JEDI contract and that another was a former AWS consultant.

US Defence Secretary Mark Esper rejected any suggestion of bias. According to Reuters, he told a news conference in Seoul: “I am confident it was conducted freely and fairly, without any type of outside influence.”

Esper removed himself from reviewing the deal in October as his son was employed by IBM.

The importance of securing multi-cloud manufacturing systems in a Zero Trust world

Private equity firms are snapping up manufacturing companies at a quick pace, setting off a merger and acquisition gold rush, while leaving multi-cloud manufacturing systems unprotected in a Zero Trust world.

Securing the manufacturing gold rush of 2019

The intensity with which private equity (PE) firms are acquiring and aggregating manufacturing businesses is creating an abundance of opportunities for cybercriminals to breach the resulting businesses. For example, merging formerly independent infrastructures often leads to manufacturers maintaining – at least initially – multiple identity repositories such as Active Directory (AD), which contain privileged access credentials, usernames, roles, groups, entitlements, and more. Identity repository sprawl ultimately contributes to maintenance headaches but, more importantly, to security blind spots that threat actors regularly exploit.

A contributing factor is the fact that private equity firms rarely have advanced cybersecurity expertise or skills, and therefore don’t account for these details in their business integration plans. As a result, they often rely on an outdated “trust but verify” approach, with trusted versus untrusted domains and legacy approaches to identity and access management.

The speed at which PE firms are driving the manufacturing gold rush is creating a sense of urgency to stand up new businesses fast – leaving cybersecurity as an afterthought, if it is a consideration at all. Here are several insights from PwC’s Global Industrial Manufacturing Deals Insights, Q2 2019 and Private Equity Trend Report, 2019, Powering Through Uncertainty:

  • 39% of all PE investors rate the industrial manufacturing sector as the most attractive for acquiring and rolling up companies into new businesses
  • The manufacturing industry saw a 31% increase in deal value from Q1 2019 to Q2 2019 with industrial manufacturing megadeals driving deal value to $27.4B in Q2, 2019, on 562 deals
  • Year-to-date North American manufacturing has generated 184 deals worth $15.2B in 2019
  • Worldwide and North American cross-sector manufacturing deal volumes increased by 32% and 30% respectively in Q2 2019 alone

PE firms are also capitalising on the fact that many family-run manufacturers are in the midst of a generational change in ownership. Company founders are retiring, and their children, nearly all of whom were raised working on the shop floor, are ready to sell. PE firms need to provide more cybersecurity guidance during these transactions to secure companies in transition. Here’s how:

How to secure multi-cloud manufacturing systems in a Zero Trust world

To stop the cybercriminals’ gold rush, merged manufacturing businesses need to take the first step of adopting an approach to secure each acquired company’s identity repositories, whether on-premises or in the cloud. For example, instead of having to reproduce or continue to manage the defined rights and roles for users in each AD, manufacturing conglomerates can better secure their combined businesses using a multi-directory brokering approach.

Multi-directory brokering, such as the solution offered by privileged access management provider Centrify, empowers an organisation to use its existing or preferred identity directory as a single source of truth across the organisation, brokering access based on a single identity rather than having to manage user identities across multiple directories. For example, if an organisation using AD acquires an organisation using a different identity repository or has multiple cloud platforms, it can broker access across the environment no matter where the “master” identity for an individual exists. This is particularly important when it comes to privileged access to critical systems and data, as “identity sprawl” can leave gaping holes to be exploited by bad actors.
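The sketch below illustrates the brokering idea in miniature: one authoritative directory remains the source of truth, and access requests originating from acquired environments are resolved against it rather than duplicating privileged accounts into every repository. It is purely illustrative and does not reflect Centrify’s actual product or API; all names and structures are assumptions.

```python
# Illustrative-only model of multi-directory brokering: a single
# authoritative directory grants privileged access, while directories
# inherited through acquisitions are consulted only to recognise users.
from dataclasses import dataclass, field

@dataclass
class Directory:
    name: str
    users: dict = field(default_factory=dict)  # username -> set of roles

@dataclass
class Broker:
    source_of_truth: Directory
    federated: list = field(default_factory=list)

    def authorise(self, username: str, required_role: str) -> bool:
        """Grant access based on the single authoritative identity."""
        roles = self.source_of_truth.users.get(username)
        if roles is None:
            # Fall back to federated directories only to recognise the
            # user, never to grant extra privileges.
            for directory in self.federated:
                if username in directory.users:
                    roles = set()  # known user, but no privileged roles yet
                    break
        return bool(roles) and required_role in roles

corp_ad = Directory("corp-ad", {"alice": {"plant-admin"}})
acquired_ldap = Directory("acquired-ldap", {"bob": {"legacy-admin"}})
broker = Broker(corp_ad, [acquired_ldap])

print(broker.authorise("alice", "plant-admin"))  # True
print(broker.authorise("bob", "plant-admin"))    # False until migrated
```

The design point being illustrated is simply that privileged access decisions hinge on one identity record per person, which is exactly the gap left when each acquired AD keeps its own copies.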

Multi-directory brokering is public cloud-agnostic, making it possible to support Windows and Linux instances in one or multiple infrastructure as a service (IaaS) platforms to secure multi-cloud manufacturing systems. The following diagram illustrates how multi-directory brokering scales to support multi-cloud manufacturing systems that often rely on hybrid multi-cloud configurations.

[Diagram: Securing multi-cloud manufacturing systems in a Zero Trust world]

Manufacturers most negatively impacted by the trade wars are redesigning and re-routing their supply chains to eliminate tariffs, so they don’t have to raise their prices. Multi-cloud manufacturing systems are what they’re relying on to accomplish that. The future of their business will be heavily reliant upon how well they can secure the multi-cloud configurations of their systems. That’s why multi-directory brokering makes so much sense for manufacturers today, especially those looking for an exit strategy with a PE firm.

The PE firms driving the merger and acquisition (M&A) frenzy in specific sectors of manufacturing need to take a closer look at how identity and access management (IAM) is being implemented in the manufacturing conglomerates they are creating. With manufacturing emerging as a hot industry for PE, M&A, and data breaches, it’s time to move beyond replicating Active Directories and legacy approaches to IAM. One of the most important aspects of a successful acquisition is enabling administrators, developers, and operations teams to access systems securely, without massive incremental cost, effort, and complexity.

Conclusion

The manufacturing gold rush for PE firms doesn’t have to be one for cybercriminals as well. PE firms and the manufacturing companies they are snapping up need to pay more attention to cybersecurity during the initial integration phases of combining operations, including how they manage identities and access. Cybercriminals and bad actors both within and outside the merged companies are lying in wait, looking for easily exploitable gaps to exfiltrate sensitive data for monetary gain, or in an attempt to thwart the new company’s success.


Mirantis snaps up Docker’s enterprise platform


Bobby Hellard

14 Nov, 2019

Mirantis has acquired Docker’s enterprise platform business to accelerate its Kubernetes-as-a-service deployment.

The terms of the deal are confidential but Mirantis will absorb all Docker enterprise customers and contracts, along with its strategic technology alliance and partner programs.

Docker was once the leader in containers but lost some ground after Google open-sourced Kubernetes. Its enterprise business was still healthy, however, with a fifth of global 500 companies on its roster, according to TechCrunch.

But with this section of its business now gone, Docker said it will continue to focus on tools for developers.

Mirantis said joining its Kubernetes technology with the Docker Enterprise Container Platform brings simplicity and choice to enterprise migrations. CEO Adrian Ionel said it’s the easiest and fastest path to the cloud for new and existing applications.

“The Docker Enterprise employees are among the most talented cloud-native experts in the world and can be immensely proud of what they achieved,” he said. “We’re very grateful for the opportunity to create an exciting future together and welcome the Docker Enterprise team, customers, partners and community.”

Mirantis will acquire the Docker Enterprise Technology Platform and all associated intellectual property. This includes the Docker Enterprise Engine, Docker Trusted Registry, Docker Universal Control Plane and Docker Command Line.

Neither firm has disclosed the fee for the deal, but it signals a new direction for Docker. Shortly after the announcement, the company revealed it had secured a $35 million investment from Benchmark and Insight. There has also been a change at the top, with former CPO Scott Johnston assuming the role of CEO from Rob Bearden, who replaced Steve Singh in May.

“Going forward, in partnership with the community and ecosystem, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps,” said Johnston.

“Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Cloud hyperscaler benchmark report shows China connectivity as a vital issue for all

No cloud is created equal – and according to a benchmark analysis of the biggest providers from network intelligence software provider ThousandEyes, performance varies between the hyperscalers with some potentially surprising findings.

The report, ThousandEyes’ 2019-2020 Cloud Performance Benchmark, assessed more than 320 million data points collected from almost 100 global metro locations over the course of a month. The study focused on Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP), as well as Alibaba Cloud and IBM Cloud.

The research not only assessed the speed of traffic being delivered by the biggest clouds, but also how it was getting there. ThousandEyes argued GCP and Azure rely heavily on private backbone networks, while AWS and Alibaba rely more heavily on the public internet; fighting for room amid traffic jams means inevitable performance downturns. Last year’s report argued similarly, exploring how AWS traffic only enters the provider’s backbone close to the target region.
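Measurements like these boil down to repeatedly probing endpoints from many vantage points and recording both the round-trip time and the path taken. A minimal sketch of the latency half of that is below; the hostname is a placeholder assumption, not an endpoint ThousandEyes actually tested, and real benchmarks would also trace the route to see where traffic enters a provider’s private backbone.

```python
# Minimal sketch: measure TCP connect latency to an endpoint. The host
# below is a placeholder for illustration only; path analysis (e.g.
# traceroute) would be needed on top of this to see where traffic joins
# a cloud provider's backbone.
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    """Return the median TCP connect time to host:port in milliseconds."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

if __name__ == "__main__":
    print(f"median connect time: {tcp_connect_ms('example.com'):.1f} ms")
```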

Connectivity through China was seen as a crucial area of analysis – and the research found that even Alibaba suffered packet loss when crossing the Great Firewall.

In some areas, Alibaba would naturally be considered the best of the bunch. Analysing the Singapore regions, customers in China using Alibaba would get a service three times quicker than with IBM. Perhaps unsurprisingly, the research also found Alibaba outperformed the rest when it came to China-Hong Kong network speed.

As a result, for enterprises looking at – and potentially avoiding – China, the research concluded there were viable options. Regular readers of this publication will be aware of the pull Singapore and Hong Kong can exert; the most recent analysis from the Asia Cloud Computing Association (ACCA) last year found the former had overtaken the latter as the strongest Asia-Pacific cloud nation. China, by contrast, was ranked second from last among 14 nations.

Compared with last year’s report, there are similarities. As might be expected, many of the headline-grabbing elements of reports such as this aim to show that the long-term market leader – in this instance, of course, AWS – is more fallible than might be thought.

The report explored AWS Global Accelerator – Amazon’s paid-for service, introduced this time last year, which lets customers use the AWS private backbone – and found that while performance gains were appreciable, it was not a one-size-fits-all solution.

Ultimately, as cloud workloads continue to become more complex, the conversation around network and performance becomes more nuanced.

“It is imperative for enterprise IT leaders to understand that cloud architectures are complex and not to rely on network performance and connectivity assumptions or instincts while designing them,” the report concludes. “Enterprises relying heavily on the public cloud or considering a move to the cloud must arm themselves with the right data on an ongoing basis to guide the planning and operational stages.

“Every organisation is different, cloud architectures are highly customised and hence these results must be reviewed through the lens of one’s own business in choosing providers, regions and connectivity approaches.”

You can read the full report here (email required).
