All posts by James

How the AWS and Verizon re:Invent partnership shows the way forward for telcos and cloud providers

At the main AWS re:Invent keynote, the biggest headline is usually saved until the end. Last year, it was the announcement of AWS Outposts, with VMware CEO Pat Gelsinger taking to the stage to join AWS chief Andy Jassy.

This time around it was Verizon, whose CEO Hans Vestberg joined Jassy to announce a partnership to deliver cloud and edge computing souped up with 5G connectivity. The move is also a showcase for AWS Wavelength, which is a major edge play: embedding compute and storage services on the edge of operators’ 5G networks, enabling the delivery of ultra-low latency applications.

Vestberg pointed to the ‘eight currencies’ Verizon believes in for 5G – a message first put out at CES at the start of this year, and one which goes far beyond speed and throughput, the primary capabilities of 4G. “The most important [aspect] is when you can slice this and give them to individuals and applications; you have a transformative technology that’s going to transform consumer behaviour, transform businesses, transform society,” he said.

For the ‘builders’ – developers who form such a key part of the re:Invent audience base – this promise of 5G, encapsulating lower latency, mobility and connectivity, is vital for the applications they are creating. Yet the journey for the data being transmitted is arduous; going from the device to the cell tower, to the aggregation site, to the Internet, and to the cloud provider, before going back.
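The arithmetic behind that journey can be sketched in a few lines. The per-hop figures below are entirely hypothetical, chosen purely for illustration rather than taken from any measurement; the point is only that the traditional path accumulates delay at every hop, while an edge deployment short-circuits most of the journey:

```python
# Toy comparison of round-trip latency for a 5G application: the
# traditional path to a distant cloud region versus a 5G edge
# (Wavelength-style) deployment. All figures are hypothetical.

CLOUD_PATH_MS = {
    "device_to_cell_tower": 10,
    "tower_to_aggregation_site": 15,
    "aggregation_to_internet": 20,
    "internet_to_cloud_region": 30,
}

EDGE_PATH_MS = {
    "device_to_cell_tower": 10,
    "tower_to_edge_zone": 5,  # compute embedded at the network edge
}

def round_trip_ms(hops: dict[str, int]) -> int:
    """Sum the one-way hop latencies and double them for the return leg."""
    return 2 * sum(hops.values())

print(round_trip_ms(CLOUD_PATH_MS))  # 150
print(round_trip_ms(EDGE_PATH_MS))   # 30
```

Even with generous assumptions, the edge path wins simply by removing hops – which is the entire premise of embedding compute in the operator’s network.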

As Jassy noted, the most exciting applications to be ushered in, such as autonomous industrial equipment, or applications for smart cities, can’t wait that long. “If you want to have the types of applications that have that last mile connectivity, but actually do something meaningful, those applications need a certain amount of compute and a certain amount of storage,” he said. “What [developers] really want is AWS to be embedded somehow in these 5G edge locations.”

Hence this AWS and Verizon collaboration – which Jassy noted had been in the works for around 18 months. “In placing AWS compute and storage services at the edge of Verizon’s 5G Ultra Wideband network with AWS Wavelength, AWS and Verizon bring processing power and storage physically closer to 5G mobile users and wireless devices, and enable developers to build applications that can deliver enhanced user experiences like near real-time analytics for instant decision making, immersive game streaming, and automated robotic systems in manufacturing facilities,” the companies noted in the press materials.

The move also evidently strengthens the relationship between Verizon and AWS, for whom the lines of business are now clearly demarcated.

As industry watchers will recall, in 2011, when cloud infrastructure was still nascent, Verizon acquired early pioneer Terremark. The company said at the time the move would ‘clear the way for Verizon to lead the rapidly evolving global managed IT infrastructure and cloud services market.’ The telco’s efforts to become a cloud provider in its own right fell flat, however, with the business eventually sold to IBM. As Synergy Research’s John Dinsdale put it to this reporter back in 2016, ‘the speed of cloud market development and the aggressiveness of the leading cloud providers largely left [telcos] behind.’

The thinking has since changed. 18 months ago – around the time the two companies began work on the edge and 5G partnership – Verizon named AWS its preferred public cloud provider, migrating more than 1,000 of its business-critical applications and backend systems.

Today, the much-derided ‘telco cloud’ is now about partnerships and making the most out of both sides’ assets. AT&T announced deals with IBM and Microsoft on successive days in July in a move which raised eyebrows in the industry – and according to Nick McQuire, VP enterprise at CCS Insight, this is an idea finally beginning to bear fruit.

“The announcements, above all, are about developers,” said McQuire. “For 5G to meet the enormous hype and expectation surrounding it this year, operators are now desperate to woo developers to the platform to create 5G applications which at the moment are very thin on the ground.

“AWS has the cloud, edge computing and IoT assets – some of the best in the market – and it also has developers, so it’s no surprise it’s pushing into this area and partnering with leading telcos.”

Read more: AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

Picture credit: Amazon Web Services/Screenshot

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

“If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates,” wrote The Atlantic’s Derek Thompson in October, “you’ve interacted with seven companies that will collectively lose nearly $14 billion this year.”

It is a well-worn line, and as WeWork’s collapse showed, there is plenty of pushback when it comes to the gig economy champions. Yet at the start of his re:Invent keynote today, Amazon Web Services (AWS) CEO Andy Jassy cited Uber, Lyft and Postmates, as well as Airbnb, as examples of the overall keynote theme around transformation. “These startups have disrupted longstanding industries that have been around for a long time from a standing start,” said Jassy.

An eyebrow-raising opening, perhaps. Yet, backed by the re:Invent band once more with half a dozen songs ranging from Van Halen to Queen – AWS has heard of the former even if Billie Eilish hasn’t – the rationale was straightforward. If you’re making a major transformation, then you need to get your ducks in a row; senior leadership needs to be on board, with top-down aggressive goals and sufficient training.

“Once you decide as a company that you’re going to make this transition to the cloud, your developers want to move as fast as possible,” said Jassy. This begat the now-standard discussion around the sheer breadth of services available to AWS customers – more than 175 at the most recent count – with Jassy noting that certain unnamed competitors were ‘good at being checkbox heroes’ but little else.

This was not the only jibe the AWS chief exec landed on the opposition. Alongside transformation, another key element for discussion was modernisation. This was illustrated by a ‘moving house’ slide (below) which was self-explanatory in its message. Jassy took extra time to point out the mainframe and audit notices. While IBM and particularly Oracle have been long-term targets, the Microsoft box is an interesting addition. Jassy again noted AWS’ supremacy with regard to Gartner’s IaaS Magic Quadrant – adding the gap between AWS and Microsoft was getting bigger.

Last year, the two big headlines were around blockchain and hybrid cloud. Amazon Managed Blockchain did what it said on the tin, but AWS Outposts aimed to deliver a ‘truly consistent experience’ by bringing AWS services, infrastructure and operating models to ‘virtually any’ on-prem facility. Google Cloud’s launch – or relaunch – of Anthos was seen as a move in the same vein, while Azure Arc was seen by industry watchers as Microsoft’s response.

This is pertinent, as plenty of this year’s product updates could be seen as an evolution of 2018’s re:Invent announcements. Instead of storage, Jassy this time focused on compute: instances and containers.

One piece of news did leak out last week around AWS building a second-generation custom server chip – and this was the first announcement Jassy confirmed. The M6g, R6g, and C6g Instances for EC2 were launched, based on the AWS Graviton 2 processors. “These are pretty exciting, and they provide a significant improvement over the first instance of the Graviton chips,” said Jassy. Another instance launch represented a similar upgrade: while AWS Inferentia was launched last year as a high-performance machine learning inference chip, this year saw Inf1 Instances for EC2, powered by Inferentia chips.

On the container side, AWS expanded its offering with Amazon Fargate for Amazon EKS. Again, the breadth of options to customers was emphasised; Elastic Container Services (ECS) and EKS, or Fargate, or a mix of both. “Your developers don’t want to be held back,” said Jassy. “If you look across the platform, this is the bar for what people want. If you look at compute, [users] want the most number of instances, the most powerful machine learning inference instances, GPU… biggest in-memory… access to all the different processor options. They want multiple containers at the managed level as well as the serverless level.

“That is the bar for what people want with compute – and the only ones who can give you that is AWS.”
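To make the ECS/EKS/Fargate mix concrete: with Fargate for EKS, a ‘Fargate profile’ tells the cluster which pods should run on serverless capacity instead of on managed worker nodes. A minimal sketch in eksctl’s config format follows – the cluster name, region and namespaces are illustrative, not taken from the announcement:

```yaml
# Hypothetical eksctl cluster definition with a Fargate profile.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster        # illustrative name
  region: us-east-1         # illustrative region

fargateProfiles:
  - name: fp-default
    selectors:
      # Pods in these namespaces are scheduled onto Fargate, so there
      # are no EC2 worker nodes to patch or scale on their behalf.
      - namespace: default
      - namespace: kube-system
```

Applied with `eksctl create cluster -f cluster.yaml`, pods in the selected namespaces land on Fargate while anything else runs on the cluster’s regular node groups – the ‘mix of both’ Jassy described.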

Jassy then moved to storage and database, but did not stray too far from his original topic. Amazon Redshift RA3 Instances with Managed Storage enable customers to separate storage from compute, while AQUA (Advanced Query Accelerator) for Amazon Redshift flips the equation entirely. Instead of moving the storage to the compute, users can now move compute to the storage. “What we’ve built with AQUA is a big high-speed cache architecture on top of S3,” said Jassy, noting it ran on a souped-up Nitro chip and custom-designed FPGAs to speed up aggregations and filtering. “You can actually do the compute on the raw data without having to move it,” he added.

Summing up the database side, the message was not simply one of breadth, but one that noted how a Swiss Army knife approach would not work. “If you want the right tool for the right job, that gives you different productivity and experience, you want the right purpose-built database for that job,” explained Jassy. “We have a very strong belief inside AWS that there is not one tool to rule the world. You should have the right tool for the right job to help you spend less money, be more productive, and improve the customer experience.”

While various emerging technologies were announced and mentioned in the second half of last year’s keynote, the big surprise arrived the day before. Amazon Braket, in preview today, is a fully managed AWS service which enables developers to begin experimenting with quantum computers from multiple hardware providers in one place, while a partnership has been put in place between Amazon and the California Institute of Technology (Caltech) to collaborate on the research and development of new quantum technologies.

On the machine learning front, AWS noted that 85% of TensorFlow running in the cloud runs on its platform. Again, the theme remained: not just every tool for the job, but the right tool. AWS research noted that 90% of data scientists use multiple frameworks, including PyTorch and MXNet. AWS subsequently has distinct teams working on each framework.

For the pre-keynote products, as sister publication AI News reported, health was a key area. Transcribe Medical is set to be utilised to move doctors’ notes from the barely legible script to the cloud, and is aware of medical speech as well as standard conversation. Brent Shafer, the CEO of Cerner, took to the stage to elaborate on ML’s applications for healthcare.

With regard to SageMaker, SageMaker Operators for Kubernetes was previously launched to let data scientists using Kubernetes train, tune, and deploy AI models. In the keynote, Jassy also introduced SageMaker Notebooks and SageMaker Experiments as part of a wider Studio suite. The former offered one-click notebooks with elastic compute, while the latter allowed users to capture, organise and search every step of building, training, and tuning their models automatically. Jassy said the company’s view of ML ‘continued to evolve’, while CCS Insight VP enterprise Nick McQuire said from the event that these were ‘big improvements’ to AWS’ main machine learning product.

With the Formula 1 season coming to a close at the weekend, the timing was good to put forth the latest in the sporting brand’s relationship with AWS. Last year, Ross Brawn took to the stage to expand on the partnership announced a few months before. This time, the two companies confirmed they had worked on a computational fluid dynamics project; according to the duo, more than 12,000 hours of compute time were utilised to help car design for the 2021 season.

Indeed, AWS’ strategy has been to soften industry watchers up with a few nice customer wins in the preceding weeks before hitting them with a barrage at the event itself. This time round, November saw Western Union come on board with AWS as its ‘long-term strategic cloud provider’, while the Seattle Seahawks became the latest sporting brand to move to Amazon’s cloud with machine learning expertise, after NASCAR, Formula 1 and the LA Clippers among others.

At the event itself, the largest customer win was Best Western Hotels, which is going all-in on AWS’ infrastructure. This is not an idle statement, either: the hotel chain is going across the board, from analytics to machine learning, the standard database, compute and storage, as well as consultancy.

This story may be updated as more news breaks.

Alibaba Cloud releases Alink machine learning algorithm to GitHub

Alibaba Cloud has announced it has made the ‘core codes’ of its machine learning algorithm Alink available on GitHub.

The company notes it is one of the top 10 contributors to the GitHub ecosystem, with approximately 20,000 developers contributing to its projects. Alink was built as a self-developed platform to aid batch and stream processing, with applications for machine learning tasks such as online product recommendation and intelligent customer services.

Not surprisingly, Alibaba is targeting data analysts and software developers to build their own software focusing on statistical analysis, real-time prediction, and personalised recommendation.

“As a platform that consists of various algorithms combining learning in various data processing patterns, Alink can be a valuable option for developers looking for robust big data and advanced machine learning tools,” said Jia Yangqing, Alibaba Cloud president and senior fellow of its data platform. “As one of the top 10 contributors to GitHub, we are committed to connecting with the open source community as early as possible in our software development cycles.

“Sharing Alink on GitHub underlines this long-held commitment,” Jia added.

With the US enjoying a well-earned holiday rest, and the majority of the world hunting out Black Friday deals, Alibaba had a chance to rush the opposition with Singles Day earlier this month. The numbers put out by the company did not disappoint: zero downtime was claimed, with $1 billion of gross merchandise volume achieved within 68 seconds of launch.

A recent report from ThousandEyes aimed to explore benchmark performance of the hyperscalers, noting that Alibaba, alongside Amazon Web Services (AWS), relied more heavily on the public internet than Microsoft and Google, which generally prefer private backbone networks. The report also noted that, contrary to opinion, Alibaba suffered packet loss when it came to China’s Great Firewall.

You can take a look at the Alibaba Cloud Alink GitHub by visiting here.

McAfee notes the gap between cloud-first and cloud-only – yet optimism reigns on success

Two in five large UK organisations expect their operations to be cloud-only by 2021 according to a new report – but the gap between the haves and the have-nots is evident.

The findings appear in a new report from McAfee. The security vendor polled more than 2000 respondents – 1310 senior IT staff and 750 employees – across large businesses in the UK, France, and Germany to assess cloud readiness.

40% of large UK businesses expect to be cloud-only by 2021, yet only 5% surveyed already consider themselves to be at this stage, the research found. 86% of UK-based senior IT staff saw their business as cloud-first today, comparing similarly to France (90%) and Germany (92%), while optimism reigned over becoming cloud-only when given an indeterminate future date. 70% of UK respondents agreed this would occur, albeit lower than their French (75%) and German (86%) counterparts.

The benefits are clear among respondents. 88% of senior IT staff polled in the UK said moving to the cloud had increased productivity among end users. 84% said the move had improved security, while supplying more varied services (85%) and increased innovation (84%) were also cited.

The question of responsibility is an interesting one, and shows where the waters begin to muddy. Never mind the issue of vendor versus customer: consensus does not particularly exist within senior leadership either. The majority believe responsibility ultimately lies with the head of IT (34%), compared with the CIO (19%), CEO (14%), or CISO (5%). One in five (19%) employees surveyed admitted to using apps which had not been approved by IT.

“The key to security in a cloud-first environment is knowing where and how data is being used, shared and stored by employees, contractors and other third parties,” said Nigel Hawthorn, director of McAfee’s EMEA cloud security business. “When sensitive corporate data is under the IT team’s control – whether in collaboration tools or SaaS and IaaS applications – organisations can ensure the right policies and safeguards are in place to protect data from device to cloud, detect malicious activity and correct any threats quickly as soon as they arise.”

Those wondering ‘whither McAfee?’ with regards to cloud security research will notice the company’s long-standing pivot to this arena. The abovementioned ‘device to cloud’ reference is taken directly from McAfee’s branding as the company looks to gather expertise as a cloud access security broker (CASB).

This is not without success: McAfee was named, for a second year, alongside Bitglass, Netskope and Symantec as a leader in Gartner’s CASB Magic Quadrant last month. Last year, Gartner noted the expertise in raising awareness of shadow IT that McAfee had gained through its acquisition of Skyhigh Networks. 2019’s Quadrant sees one new face in the winners’ enclosure in the shape of Microsoft.

In April, McAfee released a special edition of its Cloud and Risk Adoption Report. According to the 1,000 enterprise organisations polled, more than half (52%) said they found security better in the cloud than on-premise, with organisations who adopt a CASB more than 35% likelier to launch new products and gain quicker time to market.

Microsoft and AT&T expand upon partnership to deliver Azure services on 5G core

Microsoft and AT&T have beefed up their strategic partnership, announcing a new offering where AT&T’s growing 5G network will be able to run Azure services.

The companies will be opening select preview availability for network edge compute (NEC) technology. The technology ‘weaves Microsoft Azure cloud services into AT&T network edge locations closer to customers,’ as the companies put it.

Microsoft and AT&T first came together earlier this year, with the former somewhat stealing the thunder of IBM, who had announced a similar agreement with AT&T the day before.

While the operator will be using Microsoft’s technology to a certain extent – the press materials noted it was ‘preferred’ for ‘non-network applications’ – the collaborative roadmap for edge computing and 5G, among other technologies, was the more interesting part of the story. The duo noted various opportunities that would be presented through 5G and edge. Mobile gaming is on the priority list, as is utilising drones for augmented and virtual reality.

Regarding AT&T’s own cloud journey, the commitment to migrating most non-network workloads to the public cloud by 2024 was noted, while the operator’s commitment to becoming ‘public-cloud first’ was reaffirmed.

“We are helping AT&T light up a wide range of unique solutions powered by Microsoft’s cloud, both for its business and our mutual customers in a secure and trusted way,” said Corey Sanders, Microsoft corporate vice president in a statement. “The collaboration reaches across AT&T, bringing the hyperscale of Microsoft Azure together with AT&T’s network to innovate with 5G and edge computing across every industry.”

After many false starts – remember Verizon’s ill-fated public cloud product offering? – telco is finding a much surer footing in the cloud ecosystem. As VMware CEO Pat Gelsinger put it in August: “Telcos will play a bigger role in the cloud universe than ever before. The shift from hardware to software is a great opportunity for US industry to step in and play a great role in the development of 5G.”

You can read the full Microsoft and AT&T update here.

Study shows continued cloud maturation in Nordics – with manufacturing a standout

A new report from Nordic IT services provider Tieto has found the region’s cloud landscape has matured significantly since 2015 from both a strategic and operational perspective – with Sweden and Finland fighting for supremacy.

The study, the latest Cloud Maturity Index, was based on responses from almost 300 decision-makers across the public and private sectors in the Nordics. It placed almost one in five (18%) organisations as ‘mature’, while more than a quarter (27%) were seen as ‘proficient’, 42% at a basic level, and 13% ‘immature’.

In other words, it’s a broad church, with just a slight emphasis on the have-nots rather than the haves. Those described as mature use cloud services to a larger extent – virtually everything (97%) being cloud-based – and are much likelier to exploit the technology’s advantages compared with their immature cousins. Being classified as a mature cloud business means approximately 20% lower IT operating costs and, on average, 15% greater efficiency in increasing business competitiveness.

When it came to specific industries, finance came out on top for Nordic organisations, maintaining its lead previously forged in the 2015 and 2017 surveys. The public sector continues to report the lowest strategic and operational maturity. Yet the gap is closing when it comes to traditionally ‘slower’ verticals, with manufacturing proving particularly effective. Whereas finance scored 6.0 in 2015 and 6.3 this time around, the manufacturing industry has leapt to 6.0 from 4.4.

The report also noted the importance of environmental factors in organisations’ initiatives. This is not entirely surprising given the temperate climate has enabled many data centre providers to set up shop in the Nordics. Approximately half of companies polled said they were already considering issues such as energy consumption or CO2 emission as part of their cloud strategy. Again less than surprisingly, mature cloud organisations were considerably further ahead on environmental initiatives than their immature brethren.

Despite the report’s figures – again ranked out of 10 – which showed Sweden and Finland comfortably ahead of Norway, according to Tieto’s head of cloud migration and automation Timo Ahomaki it is the latter who should be celebrating. Data sovereignty, Ahomaki argues, is an area which is ‘quite polarised’ in Sweden, with Finland’s more advanced cloud security meaning it is ‘at the forefront’ of the Nordic public sector.

Regular readers of this publication will be aware of the various initiatives which have taken place regarding the emerging data centre industry in the Nordics. As far back as 2015, CloudTech reported on a study from the Swedish government – which was later put into legislation – to give tax breaks for data centre providers. Last year, DigiPlex announced a project whereby wasted heat from its data centres would be used to warm up residential homes in Oslo.

You can read the full report here (email required).

CircleCI aims to further break down the ‘hornet’s nest’ of continuous delivery with EMEA expansion

Continuous integration and delivery (CI/CD) software provider CircleCI has been acting on its expansion plans following the $56 million (£44.8m) secured in series D funding in July. Now, the company is ready for business in London – and has hired a new head of EMEA to push things along.

Sharp observers looking at the almost 250 faces which comprise the CircleCI team would have noticed a recent addition at the foot of the list. Nick Mills joined the company in September having previously held leading sales roles at Stripe and Facebook, among others, invariably concerned with international expansion.

At CircleCI, Mills will be responsible for EMEA – which the company says represents almost a quarter of its overall business – in everything which is classified as non-engineering. “There’s a huge amount of expansion opportunity,” Mills tells CloudTech. “I’ve already had some interesting conversations in the first few weeks here with companies in fintech and mobility, on-demand services. They really see CircleCI and CI/CD as a fundamental critical enabler that can help their teams increase productivity.”

The company certainly appears to be seeing gains from this bet. Big tech names on the customer roster include Facebook, Spotify and Docker, while investor Scale Venture Partners described the company earlier this year as the ‘DevOps standard for companies looking to accelerate their delivery pipeline while increasing quality.’

For CEO Jim Rose, who has been in London this week for the launch, it is the expansion of a journey which began for him in 2014, first as COO before moving up to the chief executive role a year later.

“When I first got to the company, there were about 30 individual logos in the CI/CD market, and that’s been whittled way down,” Rose tells CloudTech. “Now there is, really, ourselves, a couple of smaller, standalone, very focused CI/CD players, and then you’ve got some of the larger platforms that are trying to go end-to-end.”

Rose cites the ‘peanut butter manifesto’, the now-infamous document from Yahoo which used the foodstuff as a metaphor for businesses spreading themselves too thinly across multiple offerings, as evidence for why the larger platforms will struggle.

“We have really gone for the opposite of that strategy,” he explains. “For the vast majority of large customers, you can only move certain systems one at a time. Customers ask us all the time… how do we build that CI/CD system but also the most flexible system so that regardless of what you have in place inside of your overall enterprise or team, it’s really easy and seamless?”

There are various aspects which pull together the company’s strategy. Back in the mid-2000s, if a company built a new application it would hire a bunch of developers, flesh out the spec, write custom code across every line and then package and ship the resultant product. As Rose puts it, any custom code written today takes on the mantle of orchestrating all the pieces together, from the plethora of open source libraries and third-party services.

Continuous delivery is a hornet’s nest – it’s very easy to get to version one, but then the complexity comes as your developers start pushing a lot faster and harder

“What we’re helping customers do is, across all of these hundreds and thousands and millions of projects, start to take a heartbeat of all those different common components and use that to help people build better software,” says Rose. “If you have a version that’s bad or insecure, if you’re trying to pull a library from a certain registry that has stability problems, if you have certain services that are just unavailable… these are all new challenges to software development teams.

“Using the wisdom of the crowd and the wisdom of the platform overall, we’re starting to harness that and use that on behalf of our customers so they can make their build process more stable, more secure, and higher performing.

“Honestly, continuous delivery is a hornet’s nest,” adds Rose. “It’s really complicated to run into one of these systems at scale. It’s very easy to get to version one, but then the complexity comes as you bring it out to more teams, as you add more projects, as your developers start pushing a lot faster and a lot harder.”
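The ‘version one’ Rose describes is often no more than a single build job. A minimal CircleCI config along these lines would suffice to get started – the Docker image and commands here are placeholders for whatever a given project actually uses:

```yaml
# .circleci/config.yml – a deliberately minimal, illustrative pipeline.
version: 2.1

jobs:
  build-and-test:
    docker:
      - image: cimg/node:lts   # placeholder image; pick one matching your stack
    steps:
      - checkout               # pull the source for the triggering commit
      - run: npm ci            # placeholder dependency install
      - run: npm test          # placeholder test step

workflows:
  main:
    jobs:
      - build-and-test
```

The complexity Rose warns about arrives later, as this single job grows into fan-out/fan-in workflows, per-team contexts, caching, and deployment gates across many projects.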

For a large part of the company’s history, the individual developer or team of developers was the route in for sales; almost in an infiltrative ‘shadow IT’ context, whether it was the CTO of a small startup or a team lead at a larger company. While this can still be the case at enterprise-level organisations, CircleCI realised it needed more of a top-down, hybrid sales approach.

“One of the biggest changes in our space – not just CI/CD, but the developer space more generally – is developers historically have not been conditioned to pay for things,” says Rose. “If you needed a new tool, a new script, the developers would either go out and create it on their own or they use an open source service.

“What’s changed over the last two or three years is now developers, because their time is so valuable, have the budget and the expectation that they have the opportunity to pay for services that help you move faster. A lot of what we do from a sales perspective is help development teams understand how to procure technology. What’s necessary? What do you think about what you look at? How do we help you through that commercial process?”

Mills will be looking to guide EMEA customers through this process, with the stakes high and the motivation to assist leading tech companies strong. “A lot of companies are successful in and of themselves and can build their businesses, but the space we’re in really has the potential to enable the most successful tech companies today and of the future,” Mills explains.

“Ultimately, the creation they can generate as companies can obviously help them move quickly, increase the scale and pace of product delivery,” he adds. “To me, that feels like incredibly high-level work to be doing and high value.”

Google Cloud plots a stronger course for European customers with London Next event

Google Cloud has taken its Next event to London – and the company stressed its commitment and capabilities to European businesses in the process.

The company took to the ExCeL to announce a variety of product updates and customer news, with Anthos, its hybrid cloud services platform, top of the bill.

The general availability (GA) launch of Migrate for Anthos, announced today, aims to provide a more straightforward path to convert physical servers or VMs from multiple clouds – multi-cloud support being the reason Anthos was rebranded at Next in San Francisco back in April – directly into containers in Anthos GKE. The updated service will be available at no additional cost and does not need an Anthos subscription to activate.

“Migrate for Anthos makes it easy to modernise your applications without a lot of manual effort or specialised training,” a blog post from director of product management Jennifer Lin and VP product and design Pali Bhat noted. “After upgrading your on-prem systems to containers with Migrate for Anthos, you’ll benefit from a reduction in OS-level management and maintenance, more efficient resource utilisation, and easy integration with Google Cloud services for data analytics, AI and ML, and more.”

Another new product moving to GA was Cloud Code, which enables developers to write, debug, and deploy code to Google Cloud, or any Kubernetes cluster, through extensions to popular integrated developer environments (IDEs). This was by no means the only developer-centric product launched, with a hybrid version of API management tool Apigee also announced.

In terms of customers, the biggest announcement was John Lewis, which, as a blog attributed to Google Cloud CEO Thomas Kurian notes, is utilising Google Cloud for greater eCommerce benefits, as well as Google’s artificial intelligence and machine learning expertise. The retailer has worked with Google for half a decade, firstly around productivity through G Suite and then to create a centralised data platform with Google Cloud.

Retail has become a major area for the biggest cloud providers over the past year, with particular focus on Microsoft and Google’s clouds given Amazon’s strength. One of Google Cloud’s biggest customer acquisitions this quarter has been UK supermarket Sainsbury’s, with machine learning again cited. Yet at the start of this year, following Albertsons’ move to Microsoft, 451 Research told this publication to beware the narrative of retailers ‘fleeing AWS.’

Other customers referenced at the event included Vodafone, using Google Cloud to develop a data analytics platform called Neuron, as well as ride hailing app Kapten.

The timing of the event could be seen as portentous, as some of Europe’s most powerful countries are looking to fight back against what they see as the dominance of US cloud providers.

At the end of last month, the German federal ministry for economic affairs and energy, alongside counterparts at the French ministry of economy and finance, issued a press release announcing plans to build a Euro-centric ‘secure and trustworthy data infrastructure.’ Amazon Web Services (AWS) told Bloomberg in a statement at the time that while the idea of a national cloud was ‘interesting’ it ‘in reality… removes many of the fundamental benefits of cloud computing.’

Google Cloud’s data centre map has seven European sites listed; alongside the Netherlands, Finland and Belgium, there are regions located in London, Frankfurt, Zurich and Warsaw. Warsaw was the most recent to launch, in September. For comparison, Microsoft Azure also has a presence in seven countries, but swapping France, Ireland and Norway for Belgium, Finland and Poland.

Writing in a blog, Google Cloud EMEA president Chris Ciauri – a high-profile signing from Salesforce just a few months before – noted today’s developments were part of an ongoing commitment to ‘make Google Cloud the best place for digital transformation for European organisations.’

“Europe’s ambition for a successful digital transition is something we have always strived to support and enable,” Ciauri wrote. “Our cloud is designed to fully empower European organisations’ strict data security and privacy requirements and preferences. Where data resides, who has access to customers’ data, and protections for the privacy and security of customers’ data is central to our offering.”

This need for privacy and security above all else has been emphasised in research conducted by analyst CCS Insight. The company’s 2019 CIO survey found that trust in Google Cloud among senior IT executives was rising, with EMEA remaining a primary growth area.

It is almost a year to the day since Kurian took over Google Cloud after the departure of Diane Greene. At the time, as this publication reported, Kurian’s in-tray consisted of two primary goals: bolster sales, and improve industry education and trust around the company’s value proposition. The first has been achieved, or at least vigorously pursued, while the second has also seen plenty of endeavour.

“Google will need to continue to push its communications hard over the next 12 months as many still don’t understand the separation of Google Cloud from the consumer business,” said Nick McQuire, VP enterprise at CCS Insight. “Messaging its corporate strategy is imperative especially as ethics, governance and responsibility have now become crucial indicators for investment in not only cloud computing, but its crown jewel machine learning as well.”

Google Cloud acquires VMware workload specialist CloudSimple

Google Cloud has announced the acquisition of CloudSimple, a California-based provider of software to help organisations run VMware workloads in the cloud.

The move expands on the companies’ existing partnership and will enable Google to bring forward a fully integrated VMware migration path, with the promise of improved support.

“Apps [from VMware] can run exactly the same as they have been on-premises, but with all the benefits of the cloud, like performance, elasticity, and integration with key cloud services and technologies,” wrote Rich Sanzi, VP engineering at Google Cloud, in a blog post. “Best of all, customers can do all this without having to rearchitect existing VMware-based applications and workloads, which helps them operate more efficiently and reduce costs, while also allowing IT staff to maintain consistency and use their existing VMware tools, workflows and support.

“To that end, we believe in a multi-cloud world and will continue to provide choice for our customers to use the best technology in their journey to the cloud,” added Sanzi.

This may not go down as the most surprising acquisition of 2019, given the announcement in August of an extended entente cordiale between VMware and Google Cloud. The companies’ solution, which enabled customers to run VMware workloads on-prem, in the cloud or as part of a hybrid architecture, was based on CloudSimple’s technology.

CloudSimple had a similar offering in place with Microsoft, allowing organisations to run native VMware environments at scale on the Azure cloud. Guru Pangal, CloudSimple CEO, noted that while partnerships with the two hyperscalers taught the company how to ‘dance amongst the elephants’, Google Cloud’s ‘innovation prowess’ and ‘clear leadership’ in analytics sealed the deal.

“We saw the incredible potential to transform enterprise workloads to the cloud by partnering more strategically with a cloud provider who could help us with larger investments and tighter integration with the cloud to realise the massive potential of our offering,” Pangal wrote. “Google Cloud’s amazing innovation prowess, modern infrastructure and clear leadership in areas like smart analytics convinced us that joining this incredible team will accelerate our joint vision.”

The move, for which financial terms were not disclosed, represents the fourth Google Cloud acquisition of 2019. CloudSimple follows on from enterprise data pipeline provider Alooma in February, business intelligence platform Looker in June and storage vendor Elastifile in July.

“The acquisition of CloudSimple continues to demonstrate Google Cloud’s commitment to providing enterprise customers a broad suite of solutions to modernise their IT infrastructure,” said Sanzi.

Enterprises risking data disaster by not fully exploring cloud backup timeframes, research says

The question of shared responsibility in cloud security refuses to go away. According to a new report from backup and disaster recovery managed services provider (MSP) 4sl, organisations are risking a data disaster by misunderstanding cloud providers’ backup processes.

The study, which polled 200 UK enterprises, found that a majority of respondents believe the backup retention periods for their various cloud products are longer than they actually are.

The hyperscale clouds are a primary example. The report notes Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform do not offer backup as standard on their own. Securing such data has long been a booming channel industry for independent MSPs and others – at least until AWS, for instance, launched AWS Backup at the start of this year to take a cut.
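For readers unfamiliar with AWS Backup, centralising retention policy revolves around a backup plan. The sketch below uses the real AWS Backup plan vocabulary, but the vault name, schedule and retention figure are placeholder assumptions, not recommendations; the boto3 call is shown in a comment since it requires live AWS credentials.

```python
import json

# A minimal AWS Backup plan definition: one daily rule, backups
# retained for 35 days. Vault name and schedule are placeholders.
backup_plan = {
    "BackupPlanName": "daily-35-day-retention",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC every day
        "Lifecycle": {"DeleteAfterDays": 35},
    }],
}

# With credentials configured, this dict would be passed to
# boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
print(json.dumps(backup_plan["Rules"][0]["Lifecycle"]))
```

The point of the exercise: unlike the implicit retention windows discussed below, retention here is something the customer must state explicitly.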

Yet the vast majority of those polled believed backup did exist as standard. More than four in five believed this was the case for AWS (81%) and Azure (84%), while an overwhelming 92% of respondents said so for Google.

Even for products which do include standard backup, respondents believed they were getting more than they actually had – although the degree varied. For Office 365 SharePoint Online and Teams files, where retention is 93 and 90 days respectively, around half (55% and 50%) knew where they stood. For products with a much shorter window – 14 days for both Teams messages and Office 365 Exchange Online – this drops to 22% and 27% respectively.
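The exposure created by those mistaken assumptions is easy to quantify. The retention figures below come from the 4sl report; the helper function is a hypothetical sketch of the calculation, not 4sl’s methodology.

```python
# Actual retention periods (days) for Office 365 products, per the
# 4sl report cited above.
ACTUAL_RETENTION = {
    "SharePoint Online files": 93,
    "Teams files": 90,
    "Teams messages": 14,
    "Exchange Online": 14,
}

def retention_gap(product: str, assumed_days: int) -> int:
    """Days of history at risk when a team assumes a longer
    retention window than the provider actually offers
    (0 if the assumption is safe)."""
    return max(0, assumed_days - ACTUAL_RETENTION[product])

# A team assuming Teams messages are kept for 90 days is in fact
# exposed for 76 days' worth of history.
print(retention_gap("Teams messages", 90))  # 76
print(retention_gap("Teams files", 90))     # 0
```

In other words, the products with short retention windows are precisely those where a wrong assumption costs the most data.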

“With cloud infrastructure services and applications firmly entrenched in 21st century IT strategy, enterprises need to be certain that their cloud and backup strategies are operating in concert – with any change to cloud strategy accompanied by changes in backup policy,” the report notes. “However, this is not consistently the case.”

The one product which came out of the rankings relatively unscathed was Salesforce. The CRM giant promises 90 days of backup retention as standard; more than half of respondents (55%) knew this, and almost four in five have backups which are not at risk as a result.

Yet the findings – perhaps not entirely surprising given 4sl’s line of business – should come as a warning to organisations. “The desire to pass on responsibility for backup to service providers is understandable – backup environments are becoming extremely complex, and the peace of mind that a responsible partner is managing backup can be invaluable,” said Barnaby Mote, 4sl CEO and founder. “However, enterprises need to understand that in the main the standard level of backup provided for infrastructure or software as a service won’t meet their needs.”

Organisations back up data as a matter of course, not least for privacy and compliance but also to garner insights and analysis. Speaking to this publication in August, David Friend, CEO of cloud storage provider Wasabi Technologies, noted his view that storage would become a ‘commodity’, and that issues of cost around what to back up, and where, would simply no longer exist.

“We [shouldn’t] think of data as sort of a scarcity… more a mindset of data abundance,” said Friend. “The idea that data storage gets to be so cheap that it’s not worth deleting anything. We have to think about data as something which has probably got future value in excess of what we think it might have today; we need to think of cloud storage the same way we think of electricity or bandwidth.”

You can read the full 4sl report here (pdf, no opt-in required).