Microsoft rakes in $30.1bn thanks to strong cloud growth yet again


Roland Moore-Colyer

20 Jul, 2018

Microsoft has reported strong revenue in its fiscal 2018 fourth quarter results, driven, unsurprisingly, by growing cloud sales.

The Redmond company raked in $30.1 billion in total revenue for the quarter, a 17% increase over the same quarter 12 months earlier, with the company netting more than $8 billion in profit.

A large part of that revenue hike was driven by the 23% growth Microsoft’s cloud business enjoyed, bringing in $9.6 billion for the company.

“Our early investments in the intelligent cloud and intelligent edge are paying off, and we will continue to expand our reach in large and growing markets with differentiated innovation,” said Microsoft chief executive Satya Nadella.

Redmond pointed to customers such as Marks & Spencer, General Electric, Starbucks and Telefonica as users of its cloud services, helping to fuel that growth.

And that growth is likely set to continue as Microsoft also revealed a strategic partnership with Walmart, which will see the US retail giant make use of Microsoft Azure and the Microsoft 365 suite across its entire enterprise.

With a plethora of large businesses adopting digital transformation strategies, shifting from legacy IT systems to cloud-based services and making deeper use of digital systems and data, and with Microsoft offering the second-largest cloud platform in the world, it is no surprise that cloud is driving Redmond's business success.


That being said, other areas of Microsoft are also enjoying growth, notably its More Personal Computing arm, which includes Surface and Xbox hardware and services and grew by 17% to haul in $10.8 billion.

Refreshed models of the Surface Pro line-up and the release of the Surface Go are likely to help keep the More Personal Computing arm ticking along in hardware revenue, especially now that PC sales seem to be growing again for the first time in six years.

Microsoft closed out its entire fiscal year with a record-breaking $110 billion in total revenue, a 14% rise on the year before and an indicator that Microsoft is in rude health. 

Image credit: Microsoft 

Google’s Loon project delivers internet to Kenya – via balloon


Clare Hopping

20 Jul, 2018

Google’s Loon internet service, which aims to deliver high-speed internet to rural areas, has signed its first commercial agreement, partnering with Kenya’s Telkom network.

The connection will be delivered by high-altitude balloons that float 20km above sea level. They’re designed to deliver internet connectivity to low-density populations, where it’s just not financially viable to install traditional underground cabling and other permanent lines to properties.

The balloons are essentially floating cell towers, relaying a provider’s 4G/LTE service to a user’s existing device. They’re powered by solar panels, so they can float continuously and will rarely need to be taken out of service.

“We are extremely excited to partner with Telkom for our first engagement in Africa,” said Loon CEO Alastair Westgarth. “Their innovative approach to serving their customers makes this collaboration an excellent fit. Loon’s mission is to connect people everywhere by inventing and integrating audacious technologies. We couldn’t be more pleased to start in Kenya.”

However, some critics have suggested the partnership could create a monopoly that dominates Kenya’s internet market, warning that consumers will be the ones most affected.

“Once these networks are in place, and dependency has reached a critical level, users are at the mercy of changes in business strategy, pricing, terms and conditions and so on,” Ken Banks, an expert in African connectivity and head of social impact at Yoti, told the BBC.

“This would perhaps be less of a problem if there’s more than one provider – you can simply switch network – but if Loon and Telkom have monopolies in these areas, that could be a ticking time bomb.”

Loon and Telkom plan to launch the internet service next year (although this is subject to regulatory approval), and Telkom’s boss Aldo Mareuse said the telecoms business is committed to rolling out the service as quickly as possible.

“Telkom is focused on bringing innovative products and solutions to the Kenyan market,” he said. “With this association with Loon, we will be partnering with a pioneer in the use of high altitude balloons to provide LTE coverage across larger areas in Kenya. We will work very hard with Loon, to deliver the first commercial mobile service, as quickly as possible, using Loon’s balloon-powered Internet in Africa.”

IBM and SAP’s cloud financials continue to impress – but bigger hitters still to come

IBM has delivered its third consecutive quarter of growth – with cloud revenue up 20% and now representing almost a quarter of the company’s total revenue.

The company posted total revenues of $20 billion (£15.4bn) for the most recent quarter, up from $19.3bn this time last year, with six month revenues of $39.1bn, compared with $37.4bn from the year before.

Alongside cloud – which has hit $18.5bn in revenue over the past 12 months – IBM cited AI, analytics, blockchain and security as key strengths of its ecosystem. On the earnings call, Jim Kavanaugh, SVP and chief financial officer, told analysts that IBM was exiting the quarter with an ‘as a service’ annual run rate of more than $11bn.

“This reflects our success in helping enterprise clients with their journey to the cloud and we’re becoming the destination for mission-critical workloads in hybrid environments,” said Kavanaugh. “We’re capturing this high-value growth with our unique differentiation of the innovative technology combined with deep industry expertise underpinned with trust and security, all through our integrated model.”

IBM’s cloud highlights in the past quarter included a partnership with CA Technologies on the mainframe side, as well as European expansion. The latter was a momentum announcement, with IBM having secured several Europe-based customers, including those in healthcare, logistics, and energy.

Meanwhile, SAP’s results saw cloud and software revenue rise to €4.94bn (£4.1bn) in Q218, up from €4.76bn this time last year – and the company has raised its ambitions for 2020 as a result.

At the start of this year, the company praised ‘stellar cloud bookings’ in Q417, prompting it to reiterate its 2020 vision. By 2020, the company is aiming for non-IFRS cloud subscriptions and support revenue at a top point of €8.5bn for the full year, and for ‘more predictable revenue’ – cloud support and software support revenue – to make up between 70% and 75% of total revenue.

Now, the company expects a top point of €8.7bn, with CEO Bill McDermott saying the company is presenting a ‘clear strategy’ and that raised guidance shows a ‘new wave of growth has been unleashed.’

“The fourth generation of enterprise applications has taken another major step forward with [customer experience suite] C/4 HANA. Together with S/4 HANA, SAP customers are finally able to focus their entire business on delivering a personalised experience to their customers,” said McDermott. “The intelligent enterprise is the elixir to bridge silos inside fractured businesses and beyond so CEOs get a single view of the customer.”

The company’s highlights in the previous quarter included the launch of SAP’s Digital Manufacturing Cloud, which helps manufacturers deploy Industry 4.0 technologies in the cloud.

While these figures are impressive in isolation, it is worth noting that Alphabet, Amazon, and Microsoft all report earnings in the next week. According to Synergy Research, Amazon Web Services (AWS) leads across all geographies, with Microsoft second and Google third. The only exception is in APAC, where Alibaba secured the silver medal position.

You can read the IBM report here and the SAP report here.

The data centre of tomorrow: How the cloud impacts on data centre architectures

As the enterprise world continues speeding towards complete digitization, technologies like cloud and multi-cloud are leading the charge. Yes, cloud offerings like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud are changing the way enterprises consume IT resources. Having cloud-grade infrastructure at an enterprise’s fingertips opens up opportunities that simply did not exist before.

But are the effects of cloud limited to a collection of somewhat ephemeral infrastructure residing in someone else’s data centre? Or does cloud carry with it the power to change owned infrastructure as well?

The cloud’s impact on data centre architectures

Perhaps the most basic impact of the rise of cloud and multi-cloud is the effect on data centre architectures. In years gone by, enterprise data centres were sprawling collections of sometimes eclectic equipment deployed in support of point applications or use cases. With each new turn that the business took, the data centre was forced to bob and weave.

It’s understandable then that devices with robust sets of capabilities dominated. When IT cannot predict the next requirement, there are only two possible paths forward: deploy devices that support as much as possible, and when that fails, deploy snowflakes purpose-built for narrow use. 

But the cloud doesn’t work this way. Amazon, Microsoft, Google, and the others cannot build bespoke infrastructure for the varying application needs of their users. Doing so would utterly destroy the economies of scale that come from shared infrastructure. Rather, they must design the data centres that power their cloud offerings in such a way that they are robust in capability but uniform in design. Without resource fungibility, there simply is no cloud.

And so data centre designs have changed, favoring commonality over uniqueness. Modern data centres are not a mix of different shapes and sizes. They are a uniform fabric of fixed-form-factor devices, deployed explicitly because they are interchangeable. Servers and storage have long been in this mode. 

More recently, even the network devices that provide connectivity have moved this direction. Built on merchant silicon, these “pizza boxes” (so named because they are thin) are deployed in non-blocking architectures. When something fails, traffic is routed around it, and the device is replaced with an identical copy. 

Within the data centre, this means that racks and rows ought to begin to look identical. Where diversity served the legacy data centre well, it is the enemy of efficiency in the cloud era. This simplifies things like deployment and management, allowing for finer-grained grow-as-you-go strategies. It also makes space, power, and cooling a much more straightforward activity. When devices are the same, planning is reduced to understanding capacity requirements and physical constraints. 

Moving from device-led to operations-led

Ultimately, the cloud is probably more about operations than devices. Historically, data centres have been architected from the devices up. That is to say that things like capacity requirements drive what boxes are required, which then determine what operators must do.

The currency of cloud and multicloud, though, is not capacity so much as it is agility. And this means that physical devices must assume a supporting role while operations steps to the front.

As enterprises look to learn from the cloud movement, they should conclude that operations are the starting point. Enterprises that are not efficient in how they manage their infrastructure will be at a perpetual competitive disadvantage to those companies that have adopted cloud practices to drive their business. 

Operations certainly involve technology movements like automation, telemetry, and DevOps. But enterprises looking to become more efficient need to start with their physical infrastructure.  The enemy of fast is complexity, which means that enterprises need to be taking every opportunity to reduce complexity in their operating environments. One of the easiest ways to make progress here? Eliminate infrastructure sprawl. 

Because most data centres evolve organically over time, they are a collection of different devices. The more different they are, the more diverse the operational model must be. Every unique platform running every unique version of software configured for every unique feature is ultimately making the data centre more diverse. That diversity is an efficiency killer. 

Maintaining economic leverage

While one conclusion to draw here is that a single supplier can help drive data centre evolution, the reality is that enterprises will ultimately want to maintain economic leverage. Indeed, there are no benevolent rulers in IT, and a single-vendor approach to the data centre is likely to wreak long-term economic havoc. 

Instead, enterprises should be architecting their data centres for a common set of functionality that can be offered over two or more supplier solutions. By maintaining interchangeability across vendors, enterprises will find that their procurement teams can reap rewards even as their operation centres rejoice. 

This, too, has implications on the physical data centre. Understanding the underlying merchant silicon that drives solutions will allow architects to steer their designs towards common building blocks available across the industry. Adopting white box servers, for instance, enables things like common sparing, which helps improve repair times and maintain consistency of deployment. In the network realm, standardising on connectivity (25GbE to the server, as an example) allows enterprises to settle on common optics and cabling as well. Anything that drives uniformity will ultimately help the bottom line.

Process dominates

It is certainly true that the data centres of the future will converge on a fairly narrow set of architectural principles. But enterprises that really want to ride the wave of cloud and multicloud will need to evolve their overarching processes as well. 

Where most enterprises today are skilled at deploying new equipment, they struggle at decommissioning aging gear. For example, most enterprises have network refresh cycles of seven years or more. This means that a data centre will have seven years’ worth of equipment in it, built with varying components and supporting varying capabilities. 

Compare that with cloud companies that refresh their hardware every two to three years. It is tempting to argue that the cloud properties have more available spend, making this practice more economically palatable. But the driver behind this practice is actually the same efficiencies that enterprises want within their IT environments. 

By reducing operational divergence, cloud companies make themselves dramatically more efficient operationally, allowing them to grow their capacity exponentially while keeping their IT teams at or near current staffing levels. This allows them to divert operational spend back into their capital expenditures, helping maintain this aggressive refresh cycle. And as they deploy newer equipment, they can take advantage of the increased scale and performance of newer platforms, frequently adding more capacity at lower per-unit prices.

Perhaps more importantly, these operational efficiencies allow teams to spend less time doing break-fix activities and more time driving value to the business. How much is it worth for an enterprise to be more automated? Or to have better documentation? Or to have robust automated testing? None of these happen when teams are maxed out merely maintaining existing infrastructure.

The bottom line

Data centres are at a point where they simply must evolve. The rise of common building blocks built on standard components has changed the way enterprises plan, build, and operate. By combining these building blocks with important shifts in both operations and refresh cycles, enterprises can apply the principles of cloud to their owned infrastructure, allowing for dramatic improvements in both utility and efficiency.

IT in Education: Challenges from All Sides

ICILS, short for the “International Computer and Information Literacy Study,” is an international survey recording the computer and information literacy of secondary school students. It was first held in 2013. In Germany in particular, the results of the first study caused quite a stir; among all the developed nations, Germany turned out to be the country […]


UKFast CEO: ClearCloud venture offers public cloud without the unknown costs


Bobby Hellard

18 Jul, 2018

UKFast’s CEO has compared its new ClearCloud business, which sells AWS and Microsoft Azure support, to a sports car that customers never need to refuel.

Lawrence Jones said the business’s new arm will also support UKFast’s eCloud Hybrid and eCloud Private services, and that it was launched with the purpose of broadening the firm’s multi-cloud offering to its 5,000 clients.

What sets it apart from the competition, according to Jones, is quality of service matched with fixed prices.

“Smaller businesses and the medium-sized customers don’t want to give people a blank cheque and [buying] Amazon is like giving someone a blank cheque because you don’t really know how much it is going to cost,” he told Cloud Pro.

“My customers are used to having a fixed fee and as much bandwidth as they want, as much CPU usage as they want, as much storage as they want and all within the agreement that was set out at the beginning of the contract.”

Indeed, outlining the cost from the start is a big selling point for ClearCloud, Jones said.

“It’s like having a sports car and not having to pay for the petrol. I know how much I’m going to pay and I won’t be spending any extra,” he said.

The new venture was born out of conversations with customers. Originally, UKFast targeted small businesses paying between £700 and £800 per month, but over the last few years, it has started attracting larger clients that could pay hundreds of thousands of pounds a month.

Its roster now boasts the likes of Laterooms as well as huge public sector organisations like the Ministry of Defence and the Cabinet Office. These large customers are adopting a multi-cloud strategy where they host with ClearCloud, but they also have workloads in AWS and workloads in Azure.

A key element to ClearCloud’s future success is the appointment of former AWS global architect Matt Bibby as MD. Jones believes that his insight into AWS and understanding of the cloud market gives UKFast a competitive advantage.

“With Matt joining us, it has supported UKFast in another way that is quite unusual because we’ve had a few customers contemplating AWS and they were able to talk to Matt and spin up some clouds and then they realised this was definitely not for them and wanted to go back to UKFast,” he said.

“So we decided, yeah, we will take a couple of these bigger workloads on for some of our larger customers, and it turned out very positive,” Lawrence added.

Picture of Matt Bibby, MD of ClearCloud (left), with UKFast CEO Lawrence Jones/Credit: UKFast 

How the cloud cooled my phone’s meltdown


Barry Collins

24 Jul, 2018

Technology is a pain in the posterior. It waits until you’re at the very precipice of breaking point and then breaks. Hence, last week, in the midst of a deadline cataclysm, my phone decided to have a meltdown. Almost literally.

I first realised something was up when I felt a warm sensation in the trouser region. Given that I’m not quite yet of the age when ‘little accidents’ occur, I concluded it must be the phone in my pocket. And given that phone is a Samsung Galaxy, I got it out pretty sharpish.

I tried all the usual overheating remedies: killed all the open apps, restarted the phone, scoured the settings for battery hogs, but nothing was working. A deep dive into the settings revealed that ‘Google Services’ was thrashing my phone’s processor, but with literally no more information to go on, and a phone that was chomping through battery at a rate of 10% every 30 minutes, I had no option but to wipe it and start afresh.

This gave me flashbacks to the days of Windows XP. Remember when you used to have to reinstall the operating system every year or two because your computer accumulated so much cruft it took 10 minutes to do anything? Well, smartphones have now reached that stage. Once in a while, you need to manually chuck out the rubbish they’re incapable of clearing out for themselves.

I wasn’t too concerned about factory-resetting my phone because I had two backups of all my data. Google keeps a backup of all Android handsets by default and Samsung practically insists on taking a backup of its own for good measure. The last time I moved handsets, the Google backup reinstalled all my old apps on the new phone within minutes. It was like moving home and finding the removal men had put all your furniture back in the right place and made you a cup of tea to boot.

Sadly, things didn’t go quite so smoothly this time. Google didn’t even offer to restore my data during the phone’s setup. And although Samsung stepped into the breach, offering to restore all my apps, photos, contacts and the like, attempts to restore from its backup were plagued with ‘server errors’. I could only restore parts of my data.

At first, the language in Chez Collins was a tad fruity. I was preparing to rip a branch off a nearby tree, go the full Basil Fawlty and give my obstinate Galaxy S7 a ‘damned good thrashing’. But after I’d calmed down and started reinstalling apps manually, I realised this wasn’t such a disaster after all.

Unlike the days of Windows XP, when all our data was stored on the device and a faulty backup was very bad news indeed, these days everything is stored in the cloud. Email, photos, social-media accounts, documents – all you need do is reinstall the app and enter your login details, and everything is basically back to how it was. We don’t look after our own data these days. We get Dropbox or Google or OneDrive or Facebook or whoever to take care of it for us.

My phone’s now running like new with battery life back to almost two days. That mini-meltdown might be the best thing that ever happened to it.

Image: Shutterstock

The perils of not having disaster recovery – or, why we love a good reserve parachute

One of the most important but often missed steps in building a reliable infrastructure is disaster recovery (DR). Surprisingly, most companies decide either not to implement DR or to implement it halfway. Here, I intend to explore common terms and concepts in disaster recovery: how to leverage the cloud, the different types of DR, the DR plan and its important considerations, as well as the economic impact.

Regional vs. zone/domain DR

DR can be implemented at regional or zone/domain level, depending on needs. I advocate and adopt having high availability (HA) at zone/domain level and DR at regional level; the cloud presents itself as a good alternative in terms of cost value for HA and DR – even more so with the plethora of providers that exist nowadays.

Levelling the field

First, some widely used terms:

RTO – recovery time objective. Essentially how long it will take to have the DR site operational and ready to accept traffic.

RPO – recovery point objective. Essentially, the point in time in the primary site’s past to which the secondary site will return. It is also an indicator of data loss: if data is synced every hour and site A crashes at 11:59am, site B only has data up to 11am, so in the worst case roughly an hour of data is lost and the secondary site comes up in the state the primary was in at 11am. That is an RPO of one hour (see the sketch after this list). The smaller the RPO the better – alas, the more costly the implementation will be.

Regional – how far is too far, and how close is too close? With a primary in the London region and a secondary in the Dublin region, an asteroid the size of Kent falling on Wales could make the solution unviable, but the likelihood of that happening is negligible.

Cost – it is always a factor, and in this case it can make a difference, since regions such as Ashburn (USA) are usually (significantly) cheaper than regions in Europe. Beyond these main considerations, having a secondary site close to the primary is valuable. Now, can it be too far? It depends. If the nature of the business depends on millisecond transactions, then analysts and customers in Bangalore cannot use a site in Phoenix. If it does not, the savings of having a secondary site (temporarily) in a different region are worth it. Also, it is not something permanent – while failed over, the system is in a degraded state.
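To make the RPO arithmetic in the list above concrete, here is a minimal, purely illustrative Python sketch (the one referenced in the RPO item). The timestamps and the one-hour target are made-up values, not figures from any particular SLA.

```python
from datetime import datetime, timedelta

def data_loss_window(last_sync: datetime, failure_time: datetime) -> timedelta:
    """Anything written after the last successful sync is lost on failover."""
    return failure_time - last_sync

# The example from the text: data is synced every hour, site A crashes at
# 11:59am, and site B only holds data up to 11:00am.
last_sync = datetime(2018, 7, 20, 11, 0)
failure_time = datetime(2018, 7, 20, 11, 59)
rpo_target = timedelta(hours=1)   # hypothetical RPO agreed in the SLA

loss = data_loss_window(last_sync, failure_time)
print(f"Worst-case data loss: {loss}")                          # 0:59:00
print(f"Within the one-hour RPO target: {loss <= rpo_target}")  # True
```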

An alternative approach is having three sites in total – the primary, a DR site with a given RTO/RPO in case it is needed, and a further DR site in the form of a pilot light only.

Hot, cold, warm standby

In some circles, DR is discussed in terms of hot, warm and cold sites. I usually prefer these terms for high availability architectures, although I have seen DR sites referred to as warm. A hot site is usually one that is up and running and to which I can fail over immediately; to me that is closer to HA, as mentioned. A warm site, by contrast, has the resources provisioned, with only the critical parts running or ready to run; it may take a few minutes until things are in order and traffic can fail over to that DR site.

A cold standby does receive updates, but not necessarily frequent ones, meaning that failing over may leave the RPO much larger than desired. Of course, the RTO and RPO are usually numbers bound to the SLA, so they need to be well thought through and carefully managed.

Domain/zone DR – worth it or not?

DR at zone/domain level is a difficult decision for different reasons. Availability zones consist of sites within a region with independent network, power, cooling and so on, i.e. isolated from each other. One or more data centres comprise a zone, and one or more zones (usually three) comprise a region. Zones are used frequently for high availability. Network connectivity between zones is usually very low latency – in the order of a few hundred microseconds – and transfer rates are of such orders of magnitude that the RPO becomes almost a non-issue, since data is replicated everywhere in an instant.

As HA within zones is sometimes a luxury, a DR solution can be necessary within the zones. In this case it is usually an active/passive configuration, meaning the secondary site is stopped.

Economic impact

It is a given that the economic impact is a big factor when it comes to RTO, RPO, compliance, security and GDPR. It is not necessarily true that the more responsive the secondary site is, the more expensive it is; that will depend on the architecture, how it is implemented, and how it is operated when needed. Basically, the economic impact is driven by the amount of information kept at the different sites, not so much by the size of the infrastructure or by the replication of that information, which can be automated and nowadays run frequently enough to have almost the same data in two or more regions at any given moment.

Also, as long as the infrastructure is kept stopped, it is possible to resume operations in minutes without a large impact. Of course, this will depend on the cloud provider: some providers will charge even for stopped VMs or stopped bare-metal machines, depending on the shape/family – for instance, Oracle Cloud will continue billing a stopped instance that uses NVMe SSDs, meaning any Dense/HighIO machine – so beware of these details.

Automation is also part of the economic impact. Failover can be fully automated, semi-automated, or not automated at all. For the most part, in DR cases I prefer semi-automation in a two-man rule fashion. What this means is that even when everything indicates there is a massive outage that requires DR, it takes more than one person to say ‘go’ on the failover, and more than one person to actually activate the processes involved. The reason being that once the DR process is started, turning back before completion can become a nightmare.
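As a purely illustrative sketch of that two-man rule, the Python below gates a failover on approval from two different operators before anything runs. The operator names and the start_dr_runbook callback are hypothetical stand-ins for whatever runbook automation a team actually uses.

```python
class FailoverGate:
    """Semi-automated DR failover requiring sign-off from two distinct operators."""

    REQUIRED_APPROVALS = 2

    def __init__(self, trigger_failover):
        self._trigger_failover = trigger_failover  # callable that starts the DR runbook
        self._approvers = set()

    def approve(self, operator: str) -> bool:
        """Record an approval; failover starts only once two different people say 'go'."""
        self._approvers.add(operator)
        if len(self._approvers) >= self.REQUIRED_APPROVALS:
            print(f"Failover approved by: {', '.join(sorted(self._approvers))}")
            self._trigger_failover()
            return True
        print(f"{operator} approved; waiting for a second operator")
        return False


def start_dr_runbook():
    # Placeholder for the real processes: promoting the secondary, repointing DNS, etc.
    print("Activating DR site...")


gate = FailoverGate(start_dr_runbook)
gate.approve("alice")  # first 'go': nothing happens yet
gate.approve("bob")    # second, distinct 'go': failover actually starts
```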

Pilot light

Although it may seem strange to place pilot light under economic impact, there is a reason for it: pilot light allows a DR site with the bare minimum of infrastructure. A data replica must exist, but the DR site needs only one or two VMs, and those VMs, when needed, will take care of spawning the necessary resources. As an engineer, I sometimes steer towards pilot light with an orchestration tool such as Terraform.

Having a virtual machine online that contains all the IaC (infrastructure as code) files necessary to spin up an entire infrastructure is convenient, and usually it is a matter of a few minutes until the latest version of the infrastructure is back up and running, connected to all the necessary block devices. Remember, nowadays it is possible to handle even load balancers with IaC, so there are no real boundaries to this.
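As a minimal sketch of what that pilot-light VM might run, assuming Terraform is installed and the IaC files live in a local directory, the Python below simply shells out to the standard Terraform CLI. The directory path is hypothetical, and in practice this would be triggered from the DR runbook rather than by hand.

```python
import subprocess

def spin_up_dr_site(iac_dir: str = "/opt/dr/terraform") -> None:
    """Bring up the full DR environment from the IaC held on the pilot-light VM."""
    # 'terraform init' and 'terraform apply' are standard Terraform CLI commands;
    # -auto-approve skips the interactive prompt, which is appropriate once the
    # (human) decision to fail over has already been made.
    subprocess.run(["terraform", "init", "-input=false"], cwd=iac_dir, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve", "-input=false"],
                   cwd=iac_dir, check=True)

if __name__ == "__main__":
    spin_up_dr_site()
```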

The DR plan

This is a critical part, not only because it describes the processes that will kick in when failover becomes a reality, but also because all the stakeholders have a part in the plan and all of them must know what to do when it is time to execute it. The plan must, of course, not simply be written and forgotten; it must be tested, not just once but in a continuous improvement manner.

Use anything and everything necessary to measure its efficacy and efficiency. It is worth testing the plan both with and without the stakeholders being aware, in order to see how they would behave in a real situation, and it is also advisable to repeat the exercise every six months, since infrastructure and processes can change.

Leaving the degraded state

Sometimes the plan does not cover going back to the primary site, and this is important: while failed over, the infrastructure is in a degraded state, and it is necessary to bring the systems back to normal in order to have DR again. Since returning to the normal state of things takes time as well, and all the data needs to be replicated back, this is something that needs to be done under a maintenance window. Most customers will understand the need to do so, but just in case, bear in mind when setting up SLOs and SLAs that this maintenance window may be necessary. It is possible to add it as fine print, of which I am not a fan, or to consider it within the calculations.

Conclusion

There are some considerations with regard to DR across different regions, specifically but not only for Europe, and these come in the form of data, security, compliance and GDPR. The new GDPR requires companies to keep personal data available in the event of any technical or physical incident, so DR is no longer a wish-list item – it is required. What this basically means is that under GDPR legislation, data held on a person must be available for deletion, or freed up for transfer, upon request. For those legally inclined, more information can be found in Article 32 of the GDPR. And if DR still seems daunting, there are nowadays multiple vendors that offer DRaaS as well.

Google links US and Europe clouds with transatlantic subsea cable


Keumars Afifi-Sabet

18 Jul, 2018

Google is about to embark on building a massive subsea cable spanning the length of the Atlantic Ocean – from the French coast to Virginia Beach in the United States.

Claimed to be the first private transatlantic subsea cable, and named ‘Dunant’ after the Nobel Peace Prize winner Henri Dunant, the latest addition to Google’s infrastructure network will aim to increase high-bandwidth capacity and create highly secure cloud connections between the US and Europe.

Google claims the new connection – which will support the growth of Google Cloud – will also serve its business customers by guaranteeing a degree of connectivity that will help them plan for the future.

Explaining the project in a blog post, Google’s strategic negotiator, Jayne Stowell, said the decision to build the cable privately, as opposed to purchasing capacity from an existing cable provider or building it through a consortium of partners, took several factors into account, including latency, capacity and guaranteed bandwidth for the lifetime of the cable.

Dunant follows Google’s plans to build another massive private cable spanning 10,000km between Los Angeles, California and Chile, dubbed Curie, one of three cables comprising a $30 billion push to expand its cloud network across the Nordics, Asia and the US.

Both Curie and Dunant originated in the success of relatively short pilot cables, dubbed Alpha and Beta as a nod to their software development process.

“Our investments in both private and consortium cables meet the same objectives: helping people and businesses take advantage of all the cloud has to offer,” Stowell said.

“We’ll continue to look for more ways to improve and expand our network, and will share more on this work in the coming months.”

Google’s efforts to build a transatlantic cable follow the completion of a joint project by fellow tech giants Microsoft and Facebook in September last year, named Marea, which connected Spain with the east coast of the US.

The cable stretches approximately 6,600km and weighs 4.65 million kg or, as Microsoft put it at the time, the equivalent of the weight of 34 blue whales.

Picture: Virginia Beach, US/Credit: Shutterstock

Major League Baseball expands AWS partnership for AI and machine learning capabilities

Twas the week before earnings, and all in the cloud, vendors announced new customers, and took off the shroud.

That's certainly the case with Amazon Web Services (AWS), with Major League Baseball (MLB) extending its partnership with the Seattle cloud giant for its machine learning, artificial intelligence, and deep learning expertise.

MLB already runs various workloads on AWS, including its statistics database, Statcast. The new initiative aims to improve the experience for armchair fans as well as those in the stadia – the Amazon ML Solutions Lab is being utilised to beef up in-game statistics within broadcasts, including on MLB Network.

The system's success is such that MLB will utilise Amazon SageMaker, the company's product to build, train and deploy machine learning models, to accurately predict the direction of the next pitch, crunching statistics on the pitcher, batter and catcher, as well as the game situation.

On a more eyebrow-raising level, MLB also says it will also utilise SageMaker, as well as Amazon Comprehend, the natural language processing service, to "build a language model that would create analysis for live games in the tone and style of iconic announcers to capture that distinct broadcast essence baseball fans know and revere."

"Incorporating machine learning into our systems and practices is a great way to take understanding of the game to a whole new level for our fans and the 30 clubs," said Jason Gaedtke, MLB chief technology officer in a statement. "We chose AWS because of their strength, depth, and proven expertise in delivering machine learning services and are looking forward to working with the Amazon ML Solutions Lab on a number of exciting projects, including detecting and automating key events, as well as creating new opportunities to share never-before-seen metrics."

The baseball league is not the only new or expanded customer AWS has announced in recent weeks. 21st Century Fox has expanded its relationship with the company – again with machine learning and data analytics services at the forefront – for the 'vast majority' of its platforms and workloads. The media giant said it had reduced its data centre needs by half and moved more than 30 million assets – or 10 petabytes of data – to Amazon storage.

Earlier this month, Formula 1 selected AWS as its official cloud and machine learning provider, moving the majority of its infrastructure from on-prem data centres to Amazon, while earlier this week Walmart and Microsoft announced a major five-year tie-up, collaborating on moving hundreds of existing applications to cloud-native architectures.

This time next week all the major players will have reported their latest quarterly earnings. Watch this space for more – but for the time being the position is still one of dominance for AWS. With high levels of capex shoring them up, the growth of the hyperscalers continues, with Synergy Research describing the growth of the last two quarters as 'quite exceptional'.

With a steady stream of high value customers continuing to filter through, the next week's reports should be fascinating to explore.