All posts by James

Melissa Di Donato, CEO, SUSE: On cloud journeys, hyperscaler complexity, and daring to be different

When Melissa Di Donato joined SAP in 2017, having counted Salesforce, IBM and Oracle among her previous employers, she told this publication it was like ‘coming home.’ Her latest move, as chief executive of enterprise Linux software provider SUSE, is more a step into the unknown.

Yet it is not a complete step. A career at proprietary software companies means experience primarily in selling software, implementing it and aligning it to others’ business needs – which means that, at SUSE, Di Donato knows all the more acutely what customers want.

“Though I don’t have a whole load of open source experience per se, I’ve got all the detail and all of the understanding of what it is to be a SUSE customer,” Di Donato tells CloudTech. “That’s really important because, as we go to market and look at who we want to be, who we are for, who we are now and who we want to become, it’s really about embracing this collaborative nature of creating software in a community, but pivoting around customer needs.”

Having that portability, regardless of what platform you choose, is becoming very important – what’s good on Azure today may be better on Google tomorrow

Since being named SUSE CEO in July, Di Donato has been visiting customers and crafting the company’s message around two themes. For the enterprise market, while the buzzwords around containers, software-defined storage, multi-cloud and hybrid cloud remain, solid progress is harder to come by. Add in the fact that every customer has different needs, and the strategy needed to be boiled down somewhat.

Not unlike other organisations, SUSE’s customer base is split into various buckets. There are traditionalists, who comprise about 80% of customers; hybrid beginners; cloud adopters; and cloud natives – the latter three moving in ever-decreasing circles. Regardless of where you are in your cloud journey, SUSE argues, the journey itself is the same: you have to simplify before you modernise, and then accelerate.

Di Donato argues that cloud and containers are ‘very, very overused words’, and that getting to grips with the technology which holds the containers is key – but all journey paths are valid. “Whether cloud means modernising, or container means modernising, VMs, open source… [customers’] version of modernising is really important, and they want to simplify and modernise to then get to a point where they can accelerate,” she says. “Regardless of what persona you are, what customer type you are, everyone wants to accelerate.”

These days, pretty much everyone is on one of the hyperscale cloud providers as well. SUSE has healthy relationships with all the major clouds – including AWS, a shot in the arm given the latter’s occasionally criticised stance on open source – aiming to offer partnerships and value-adds aplenty.

The problem is that the hyperscalers do little to assist the simplification process. At re:Invent in December, AWS said that at last count it offered more than 175 services. “We’re overwhelmed by the number of services and tools being offered to our customer base,” says Di Donato. “There’s a whole different conversation [before] about how, as a complex enterprise, [you] get to public cloud. If you have one small application on public cloud, [you can] choose from hundreds more.”

So what can SUSE do? Ironically enough, offer as many options as possible to its customers. “How do I get our natives and traditionalists, at two opposite ends of the spectrum, into an environment where they can move and continually modernise?” says Di Donato. “It’s going to need to be flexible, it’s going to need to be agnostic, it’s going to need no lock-in and be able to simplify the complexity around the various components that hyperscalers have.

“Having that portability, regardless of what platform [customers] choose, is becoming very important,” Di Donato adds. “What’s good on Azure today may be better on Google tomorrow – and we have to have the flexibility and simplicity to be able to move our customers over, the easiest way forward.”

The other message – and arguably an even more important one – is around ‘daring to be different.’ It was the title of an article Di Donato wrote on LinkedIn when she joined SUSE, focusing on the wider community and message. “I’m fortunate to have a platform from which I can be an activist and an advocate for openness, diversity and inclusion,” Di Donato wrote at the time, adding that she particularly advocated opportunities for girls to move into STEM. “I believe we can all give back more than we take.”

The ‘openness’, given SUSE’s heritage, should be a given – but it isn’t across every area. “The company is inherently open and collaborative – in an open source environment you can contribute literally from anywhere – however you don’t see a load of diversity in open source as an industry right now,” says Di Donato. “The need to always have diversity and inclusion on the agenda is really important.”

More than anything else, this was priority number one when Di Donato took over the reins at SUSE. Among other initiatives, an employee network around women in tech was launched. It is now more than 150-strong, for men and women “to ensure we evangelise the brand and get out there and show the world just how diverse we are and can be in open source at SUSE,” as Di Donato puts it.

Di Donato has been discussing greater representation for women in STEM for almost as long as she has been a senior executive. In her first role, as an SAP R/3 developer, she was the only female in her cohort. “10 years ago we stopped talking about women in tech because we were getting bored of it in the UK, right?” she says.

The headlines keep on coming, however. As CloudTech reported in October, the Forbes Cloud 100, an influential list of top privately-held cloud companies, featured a grand total of three firms led by women. The month before, another Forbes list, of America’s 100 most innovative leaders, featured just one woman, drawing opprobrium.

Even in her current role, Di Donato has gone to certain geographies to find herself the only female in the room. “We haven’t come very far,” she says, laughing ironically. “I tend to think that for any network of people that talk this passionately about a particular topic, over decades, you would think the dial would move. Yet we still struggle.”

As a mother of three and newlywed – having previously been widowed when her youngest child was 18 months old – Di Donato is especially concerned with getting more focus on parents, such as not penalising them if a family crisis means they cannot make the office.

Women need to be role models to show young executives the importance of being capable of juggling more than just one or two things

Rachel Keane, co-founder of the Women in Data series of events, previously told this publication of another danger for women: in an industry moving as fast as cloud computing, a year away means a huge knowledge gap. Nothing is insurmountable, however. “For most people, they can’t imagine naturally what they’re capable of,” says Di Donato. “You can only understand what you’re capable of at a turning point in your life.

“We need to be role models to show other executives, particularly young talent, the importance of being capable of juggling more than just one or two things.”

More than anything else, however, wherever you are today, it is about being true to one’s self. Di Donato wants to exemplify this with her platform as CEO. “I’m faced with maybe exacerbating the fact that I’m quite loud, I’m quite vocal, I speak to a lot of employees, a lot of customers, and I’m a woman, right?” she says. “That’s a learning experience for me too – try and communicate in a way that people understand and are used to, but at the same time don’t lose myself either.

“I am enthusiastic and I am passionate and focused, and I am opinionated and I am driven,” adds Di Donato. “I don’t want to lose that purely because I’m different than everybody else right now.”

Picture credit: SUSE

Read more: Building confidence and power: Exploring greater female leadership and participation in cloud and data analytics


Goodbye 2019, hello 2020: The year in cloud reviewed – and what is on the horizon

To say 2019 was a busy year for cloud would surprise no one. Yet the past 12 months have seen innovation, expansion, and drama which represent a poke in the eye for those who dismiss the industry as consolidated and saturated.

Without any further ado, here is CloudTech’s traditional look back on the year in focus – and what the following 12 months will have in store for industry players and watchers alike.

The new hybrid cloud: Outposts leads and others follow

It was the hottest topic as 2018 drew to a close: the launch of AWS Outposts, with VMware as partner in crime, promising to deliver a ‘truly consistent hybrid experience’ to ‘virtually any’ on-premises facility.

Naturally, as Microsoft and Google’s big events rolled around in 2019, attention turned to this area above all others. Google Cloud Next in April saw the launch of cloud services platform Anthos. Or, rather, it was a relaunch: to ‘build and manage modern hybrid applications across environments’, as well as accommodate AWS and Azure. In November, Microsoft launched Azure Arc, and outlined its theory of ‘hybrid 2.0’. Hybrid capabilities ‘must enable apps to run seamlessly across on-premises, multi-cloud and edge devices’, as Azure CVP Julia White noted at the time.

This means that the three largest cloud vendors are in a state of ‘collaborative détente’ right now, in the words of Pivot3 CMO Bruce Milne. Speaking to CloudTech at VMworld in August, Milne noted the ‘obvious strategic tension’, advising: “watch this space because that friction is sure to generate sparks eventually.”

Kurian’s busy year as Google’s cloud chief

Another area industry watchers pencilled in for 2019 was how Thomas Kurian would take over the mantle left behind by Diane Greene as Google Cloud’s CEO. Among the in-tray items for the new boss were expansion and enterprise sales – or in the case of the latter, at least talking a good game.

This was exemplified in Kurian’s debut speaking slot as chief executive. At a Goldman Sachs conference in February, the former Oracle executive told delegates that old-school sales tactics were key to pushing Google’s message of differentiation. How has that gone? It’s hard to say; while Google remains shy when it comes to revealing specific cloud figures, the overall soundbites have been solid around customer momentum and partnerships, while Anthos was well received.

As far as 2019 went, Google was the busiest hyperscaler in terms of acquisitions. While AWS picked up CloudEndure at the start of the year and Microsoft moved for Movere in September, Google’s shopping list had four items in total. Alongside Looker, the biggest deal, were moves for enterprise data pipeline provider Alooma, storage firm Elastifile, and VMware workload runner CloudSimple. New expansion areas included Poland, Switzerland, and Japan.

Open source providers open a can of worms

Maintaining an open ethos while turning a profit is frequently a source of tension, particularly for companies that deal in cloud and big data software. Indeed, 2019 was set to be a vital year for open source development in this context; the IBM-Red Hat acquisition hinted at it.

So it proved. In February, Redis Labs modified its licensing terms with this in mind, before having to change them again to appease open source and developer communities. The change meant developers remained free to use the software, modify the source code and so on – which they were of course always allowed to do – while stipulating that the end result could not be a database, caching, search, indexing or stream processing engine, or anything to do with machine learning, deep learning, or artificial intelligence.

The month before, Confluent secured a $2.5 billion valuation having undergone a license change of its own. At the time, co-founder Jay Kreps maintained that the way forward for building fundamental infrastructure layers was with open code. “As workloads move to the cloud we need a mechanism for preserving that freedom while also enabling a cycle of investment, and this is our motivation for the licensing change,” he wrote.

Speaking to CloudTech at the time of the change, Redis CEO Ofer Bengal said that, aside from AWS – frequently cited as the primary culprit for this behaviour – ‘the mood [was] trying to change’ between the big clouds and open source software providers. The theory was borne out in April, when Google Cloud announced at Next what it described as the ‘first integrated open source ecosystem’, with seven vendors. Confluent and Redis Labs were both part of this group, with Bengal joining Kurian on stage to announce the deal.

Trump’s turn: Microsoft pips AWS to JEDI contract in shock decision

For the vast majority of 2019, the consensus was that Amazon Web Services would secure the $10 billion JEDI (Joint Enterprise Defense Infrastructure) cloud contract. Yet many were not banking on the intervention of the US President. In July, around the time Oracle’s legal challenge was running out of gas, President Trump announced he was looking at the procurement process, citing complaints from multiple rivals.

In October, the Department of Defense announced Microsoft had won the contract. Many industry pundits linked the stormy relationship between the president and Jeff Bezos to the decision; a book by a former aide to then-defence secretary James Mattis alleged that President Trump told Mattis to ‘screw Amazon’ out of the contract.

AWS has, not surprisingly, sought to appeal the ruling. Aside from the procurement process itself, this publication explored the definition of multi-cloud in a federal context. The award was, and always had been, for a single provider – a bone of contention in itself. Yet given AWS has been running the CIA’s cloud for years, what does this mean? “The DoD argued [it needed] one supplier simply because [they] need the level of tight integration and security,” cloud pundit Bill Mew told CloudTech in October. “I totally buy that if that’s their argument – but then why are they not going to the same supplier the CIA has?”

2019’s major cloud mergers and acquisitions

October: Digital Realty acquires Interxion for $8.4bn in biggest data centre deal of them all
August: VMware moves in for Pivotal and Carbon Black at combined $4.8bn
June: Google Cloud looks to Looker in $2.6bn all-cash deal for greater multi-cloud analytics
June: Salesforce to acquire Tableau for $15.7bn to combine AI with BI

2020 outlook

Henrik Nilsson, vice president EMEA, Apptio

As seen in other software industries, overly aggressive price wars would likely upset the cloud market. As a result, AWS, Azure and Google Cloud Platform will all continue to enhance their specialities in 2020 – for instance focusing on scale, or a specific sector, or AI capabilities – to provide differentiation.

This will have a knock-on effect on costs. Apples-to-apples comparison of pricing is already difficult, but moving forward businesses will have to do a much better job of tying value to cloud to make the right decisions for their business needs. In 2020 companies will need to establish a cloud centre of excellence and a ‘FinOps’ mindset, whereby all areas of the business have greater understanding of, and accountability for, cloud spend.

Dave Chapman, head of customer transformations, Cloudreach

The new era of enterprise cloud adoption will be among ‘cloud-native businesses’; organisations built directly with the cloud’s scalability and efficiency in mind. We’ve already seen how valuable it can be to have this agility built into a company’s DNA – look no further than the behemoth that is Netflix – so in 2020, expect to see more organisations migrating workloads to the cloud in an effort to meet the growing demands of today’s digital businesses.

Sanjay Castelino, chief product officer, Snow Software

Today’s cloud infrastructure is relatively easy to understand compared to what it will look like in 2020 and beyond. For example, when provisioning a cloud instance today, a user only needs a basic idea of how it operates and what it will cost. But when new cloud approaches like serverless become more popular, cloud usage will be managed by the people writing code.

In serverless computing, the code drives the cost to deliver the service, and businesses are not yet prepared to deal with these new consumption models. In the next year, companies need to prioritise understanding consumption models, because those models will have a significant impact on their business.
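
To make that point concrete, below is a back-of-the-envelope model of how code, rather than provisioned capacity, drives a serverless bill. It is a minimal sketch in Python; the per-request and per-GB-second rates are illustrative, in the vein of AWS Lambda’s published pay-per-use pricing, not authoritative figures.

```python
# Back-of-the-envelope serverless cost model. Rates are illustrative
# (roughly in line with AWS Lambda's published pay-per-use pricing);
# the point is that code efficiency drives the bill directly.

PRICE_PER_MILLION_REQUESTS = 0.20   # USD, illustrative
PRICE_PER_GB_SECOND = 0.0000166667  # USD, illustrative

def monthly_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate the monthly bill for a single serverless function."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# The same 10M requests cost roughly 2.4x more if the code runs 3x longer:
print(f"${monthly_cost(10_000_000, avg_duration_ms=120, memory_mb=256):.2f}")
print(f"${monthly_cost(10_000_000, avg_duration_ms=360, memory_mb=256):.2f}")
```

In this model a function that runs three times longer costs roughly three times as much in compute, whatever the infrastructure underneath – exactly the consumption dynamic Castelino describes.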


One third of data centre spend goes into hyperscalers’ pockets through Q3, finds Synergy

While good technology analysis revolves around exploring new markets, conducting research and publishing authoritative market share scores, sometimes insights can be gleaned simply by freshening up current figures. Long-time cloud infrastructure analyst Synergy Research has done just that in its latest note, which focuses on continued hyperscaler dominance.

The latest data from Synergy shows that data centre hardware and software spending from hyperscale operators gradually rose over the first three quarters of the year, and this year represents a third of total spending.

While data centre spending from enterprises and service providers is now at 67% of total outlay – compared with 85% in 2014 – overall spend from this sector has risen just 6% in five years, against overall market expansion of 34%.

As continues to be the case, moving enterprise workloads to the cloud means the squeeze continues to be put on enterprise spending. Finding recent tales of large organisations moving their infrastructure to a major cloud vendor is like shooting fish in a barrel; to pick just a few, Best Western Hotels is in the process of going all-in on Amazon Web Services (AWS), as evidenced at re:Invent earlier this month, while Salesforce and Sainsbury’s were recent client wins for Microsoft Azure and Google Cloud Platform respectively.

Synergy also noted ‘continued growth in social networking’ as a primary indicator of increased hyperscaler spend. Total data centre infrastructure equipment revenues – including cloud and on-prem, hardware and software – were at $38 billion for Q3 2019.

John Dinsdale, a chief analyst at Synergy, argued the trend around flat enterprise spend is not going away any time soon. “We are seeing very different scenarios play out in terms of data centre spending by hyperscale operators and enterprises,” said Dinsdale. “On the one hand revenues at the hyperscale operators continue to grow strongly, driving increased demand for data centres and data centre hardware. On the other hand, we see a continued decline in the volume of servers being bought by enterprises.

“The impact of those declines is balanced by steady increases in server average selling prices, as IT operations demand ever-more sophisticated server configurations, but overall spending by enterprises remains almost flat,” added Dinsdale. “These trends will continue into the future.”


How the AWS and Verizon re:Invent partnership shows the way forward for telcos and cloud providers

At the main AWS re:Invent keynote, the biggest headline is usually saved until the end. Last year, it was the announcement of AWS Outposts, with VMware CEO Pat Gelsinger taking to the stage to join AWS chief Andy Jassy.

This time around it was Verizon, whose CEO Hans Vestberg joined Jassy to announce a partnership to deliver cloud and edge computing souped up with 5G connectivity. The move is also a showcase for AWS Wavelength, a major edge play: embedding compute and storage services at the edge of operators’ 5G networks, enabling the delivery of ultra-low latency applications.

Vestberg cited the ‘eight currencies’ Verizon believes in for 5G – a message first put out at CES at the start of this year, and one which goes far beyond speed and throughput, the primary capabilities of 4G. “The most important [aspect] is when you can slice this and give them to individuals and applications; you have a transformative technology that’s going to transform consumer behaviour, transform businesses, transform society,” he said.

For the ‘builders’ – developers who form such a key part of the re:Invent audience base – this promise of 5G, encapsulating lower latency, mobility and connectivity, is vital for the applications they are creating. Yet the journey for the data being transmitted is arduous; going from the device to the cell tower, to the aggregation site, to the Internet, and to the cloud provider, before going back.

As Jassy noted, the most exciting applications to be ushered in, such as autonomous industrial equipment, or applications for smart cities, can’t wait that long. “If you want to have the types of applications that have that last mile connectivity, but actually do something meaningful, those applications need a certain amount of compute and a certain amount of storage,” he said. “What [developers] really want is AWS to be embedded somehow in these 5G edge locations.”

Hence this AWS and Verizon collaboration – which Jassy noted had been in the works for around 18 months. “In placing AWS compute and storage services at the edge of Verizon’s 5G Ultra Wideband network with AWS Wavelength, AWS and Verizon bring processing power and storage physically closer to 5G mobile users and wireless devices, and enable developers to build applications that can deliver enhanced user experiences like near real-time analytics for instant decision making, immersive game streaming, and automated robotic systems in manufacturing facilities,” the companies noted in the press materials.

The move also evidently strengthens the relationship between Verizon and AWS, for whom the lines of business are now clearly demarcated.

As industry watchers will recall, in 2011, when cloud infrastructure was still nascent, Verizon acquired early pioneer Terremark. The company said at the time the move would ‘clear the way for Verizon to lead the rapidly evolving global managed IT infrastructure and cloud services market.’ The telco’s efforts to become a cloud provider in its own right fell flat, the business eventually being sold off to IBM. As Synergy Research’s John Dinsdale put it to this reporter back in 2016, ‘the speed of cloud market development and the aggressiveness of the leading cloud providers largely left [telcos] behind.’

The thinking has since changed. Eighteen months ago – around the time the two companies began working on the edge and 5G partnership – Verizon moved to AWS as its preferred public cloud provider, migrating more than 1,000 of its business-critical applications and backend systems.

Today, the much-derided ‘telco cloud’ is about partnerships and making the most of both sides’ assets. AT&T announced deals with IBM and Microsoft on successive days in July in a move which raised eyebrows in the industry – and according to Nick McQuire, VP enterprise at CCS Insight, this is an idea finally beginning to bear fruit.

“The announcements, above all, are about developers,” said McQuire. “For 5G to meet the enormous hype and expectation surrounding it this year, operators are now desperate to woo developers to the platform to create 5G applications which at the moment are very thin on the ground.

“AWS has the cloud, edge computing and IoT assets – some of the best in the market – and it also has developers, so it’s no surprise it’s pushing into this area and partnering with leading telcos.”

Read more: AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

Picture credit: Amazon Web Services/Screenshot


AWS re:Invent 2019 keynote: ML and quantum moves amid modernisation and transformation message

“If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates,” wrote The Atlantic’s Derek Thompson in October, “you’ve interacted with seven companies that will collectively lose nearly $14 billion this year.”

It is a well-worn line, and as WeWork’s collapse showed, there is plenty of pushback when it comes to the gig economy champions. Yet at the start of his re:Invent keynote today, Amazon Web Services (AWS) CEO Andy Jassy cited Uber, Lyft and Postmates, as well as Airbnb, as examples of the overall keynote theme around transformation. “These startups have disrupted longstanding industries that have been around for a long time from a standing start,” said Jassy.

An eyebrow-raising opening, perhaps. Yet, backed by the re:Invent band once more with half a dozen songs ranging from Van Halen to Queen – AWS has heard of the former even if Billie Eilish hasn’t – the rationale was straightforward. If you’re making a major transformation, then you need to get your ducks in a row; senior leadership needs to be on board, with top-down aggressive goals and sufficient training.

“Once you decide as a company that you’re going to make this transition to the cloud, your developers want to move as fast as possible,” said Jassy. This begat the now-standard discussion around the sheer breadth of services available to AWS customers – more than 175 at the most recent count – with Jassy noting that certain unnamed competitors were ‘good at being checkbox heroes’ but little else.

This was not the only jibe the AWS chief exec landed on the opposition. From transformation, another key element for discussion was modernisation. This was illustrated by a ‘moving house’ slide which was self-explanatory in its message, with Jassy taking extra time to point out the mainframe and audit notices. While IBM and particularly Oracle have been long-term targets, the Microsoft box is an interesting addition. Jassy again noted AWS’ supremacy with regard to Gartner’s IaaS Magic Quadrant – adding that the gap between AWS and Microsoft was getting bigger.

Last year, the two big headlines were around blockchain and hybrid cloud. Amazon Managed Blockchain did what it said on the tin, but AWS Outposts aimed to deliver a ‘truly consistent experience’ by bringing AWS services, infrastructure and operating models to ‘virtually any’ on-prem facility. Google Cloud’s launch – or relaunch – of Anthos was seen as a move in the same vein, while Azure Arc was seen by industry watchers as Microsoft’s response.

This is pertinent, as plenty of the product updates could be seen as an evolution of 2018’s re:Invent announcements. Instead of storage, Jassy this time focused on compute: instances and containers.

One piece of news did leak out last week around AWS building a second-generation custom server chip – and this was the first announcement Jassy confirmed. M6g, R6g, and C6g instances for EC2 were launched, based on AWS’ Graviton2 processors. “These are pretty exciting, and they provide a significant improvement over the first instance of the Graviton chips,” said Jassy. Another instance launch was similarly an upgrade: while AWS Inferentia was launched last year as a high-performance machine learning inference chip, this year saw Inf1 instances for EC2, powered by Inferentia chips.
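
For developers, the new Arm-based instances launch like any other EC2 instance type. Below is a minimal boto3 sketch; it assumes configured AWS credentials, and the AMI ID is a placeholder – Graviton2 instances require an arm64 image.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Graviton2 is an Arm chip, so the image must be an arm64 build;
# the AMI ID below is a placeholder, not a real image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="m6g.large",         # general-purpose Graviton2 instance
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```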

On the container side, AWS expanded its offering with Amazon Fargate for Amazon EKS. Again, the breadth of options for customers was emphasised: Elastic Container Service (ECS) and EKS, or Fargate, or a mix of both. “Your developers don’t want to be held back,” said Jassy. “If you look across the platform, this is the bar for what people want. If you look at compute, [users] want the most number of instances, the most powerful machine learning inference instances, GPU… biggest in-memory… access to all the different processor options. They want multiple containers at the managed level as well as the serverless level.

“That is the bar for what people want with compute – and the only ones who can give you that is AWS.”
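
In practice, running EKS pods on Fargate revolves around a Fargate profile, which tells the cluster which pods should be scheduled serverlessly rather than on worker nodes. A minimal boto3 sketch follows; the cluster, role and namespace names are hypothetical.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Pods in namespaces matching a selector run on Fargate instead of
# EC2 worker nodes. Cluster, role and namespace names are hypothetical.
eks.create_fargate_profile(
    fargateProfileName="serverless-apps",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pods",
    selectors=[{"namespace": "serverless"}],
)
```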

Jassy then moved to storage and database, but did not stray too far from his original topic. Amazon Redshift RA3 Instances with Managed Storage enable customers to separate storage from compute, while AQUA (Advanced Query Accelerator) for Amazon Redshift flips the equation entirely. Instead of moving the storage to the compute, users can now move compute to the storage. “What we’ve built with AQUA is a big high-speed cache architecture on top of S3,” said Jassy, noting it ran on a souped-up Nitro chip and custom-designed FPGAs to speed up aggregations and filtering. “You can actually do the compute on the raw data without having to move it,” he added.
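
From the API’s perspective, the managed-storage architecture boils down to a node-type choice at cluster creation. A minimal boto3 sketch, with placeholder identifiers and credentials:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# RA3 nodes keep hot data in a local cache and the full dataset in
# managed storage on S3, so compute can be resized independently of
# storage. All identifiers and credentials below are placeholders.
redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    NodeType="ra3.16xlarge",       # the managed-storage node type
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="Replace-Me-123",
    DBName="analytics",
)
```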

Summing up the database side, the message was not simply one of breadth, but one that noted how a Swiss Army knife approach would not work. “If you want the right tool for the right job, that gives you different productivity and experience, you want the right purpose-built database for that job,” explained Jassy. “We have a very strong belief inside AWS that there is not one tool to rule the world. You should have the right tool for the right job to help you spend less money, be more productive, and improve the customer experience.”

While various emerging technologies were announced and mentioned in the second half of last year’s keynote, the big reveal this year arrived the day before. Amazon Braket, in preview today, is a fully managed AWS service which enables developers to begin experimenting with quantum computers from hardware providers in one place, while a partnership has been put in place between Amazon and the California Institute of Technology (Caltech) to collaborate on the research and development of new quantum technologies.
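
As a flavour of what that experimentation looks like, the Braket SDK expresses quantum circuits in ordinary Python. The following is a minimal sketch assuming the amazon-braket-sdk package, using its free local simulator rather than managed quantum hardware:

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a two-qubit Bell pair: Hadamard on qubit 0, then CNOT 0 -> 1.
bell = Circuit().h(0).cnot(0, 1)

# Run on the local simulator; targeting a managed quantum device is a
# device-object swap in the same SDK.
device = LocalSimulator()
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)  # expect roughly half '00', half '11'
```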

On the machine learning front, AWS noted that 85% of TensorFlow running in the cloud runs on its platform. Again, the theme remained: not just every tool for the job, but the right tool. AWS research noted that 90% of data scientists use multiple frameworks, including PyTorch and MXNet. AWS accordingly has distinct teams working on each framework.

For the pre-keynote products, as sister publication AI News reported, health was a key area. Transcribe Medical is set to move doctors’ notes from barely legible script to the cloud, and understands medical speech as well as standard conversation. Brent Shafer, the CEO of Cerner, took to the stage to elaborate on ML’s applications for healthcare.

With regard to SageMaker, SageMaker Operators for Kubernetes was previously launched to let data scientists using Kubernetes train, tune, and deploy AI models. In the keynote, Jassy also introduced SageMaker Notebooks and SageMaker Experiments as part of a wider Studio suite. The former offered one-click notebooks with elastic compute, while the latter allowed users to capture, organise and search every step of building, training, and tuning their models automatically. Jassy said the company’s view of ML ‘continued to evolve’, while CCS Insight VP enterprise Nick McQuire said from the event that these were ‘big improvements’ to AWS’ main machine learning product.
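
For the curious, tracking runs with SageMaker Experiments looks roughly like the following. This is a minimal sketch assuming the sagemaker-experiments companion package; the experiment and trial names are hypothetical.

```python
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial

# Group related training runs under one experiment; names are hypothetical.
experiment = Experiment.create(
    experiment_name="demand-forecast-search",
    description="Compare model variants for a forecasting job",
)
trial = Trial.create(
    trial_name="xgboost-max-depth-5",
    experiment_name=experiment.experiment_name,
)

# A SageMaker training job is then linked to the trial by passing
# experiment_config={"TrialName": trial.trial_name} to estimator.fit(),
# after which every build/train/tune step is searchable from Studio.
```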

With the Formula 1 season coming to a close at the weekend, the timing was good to put forth the latest in the sporting brand’s relationship with AWS. Last year, Ross Brawn took to the stage to expand on the partnership announced a few months before. This time, the two companies confirmed they had worked on a computational fluid dynamics project; according to the duo, more than 12,000 hours of compute time were utilised to help car design for the 2021 season.

Indeed, AWS’ strategy has been to soften industry watchers up with a few nice customer wins in the preceding weeks before hitting them with a barrage at the event itself. This time round, November saw Western Union come on board with AWS as its ‘long-term strategic cloud provider’, while the Seattle Seahawks became the latest sporting brand to move to Amazon’s cloud with machine learning expertise, after NASCAR, Formula 1 and the LA Clippers among others.

At the event itself, the largest customer win was Best Western Hotels, which is going all-in on AWS’ infrastructure. This is not an idle statement, either: the hotel chain is going across the board, from analytics and machine learning to standard database, compute and storage, as well as consultancy.

This story may be updated as more news breaks.


Alibaba Cloud releases Alink machine learning algorithm to GitHub

Alibaba Cloud has announced it has made the ‘core codes’ of its machine learning algorithm Alink available on GitHub.

The company notes it is one of the top 10 contributors to the GitHub ecosystem, with approximately 20,000 contributors. Alink is a self-developed platform for batch and stream processing, with applications for machine learning tasks such as online product recommendation and intelligent customer service.

Not surprisingly, Alibaba is targeting data analysts and software developers who want to build their own software for statistical analysis, real-time prediction, and personalised recommendation.

“As a platform that consists of various algorithms combining learning in various data processing patterns, Alink can be a valuable option for developers looking for robust big data and advanced machine learning tools,” said Jia Yangqing, Alibaba Cloud president and senior fellow of its data platform. “As one of the top 10 contributors to GitHub, we are committed to connecting with the open source community as early as possible in our software development cycles.

“Sharing Alink on GitHub underlines our long-held commitment,” Jia added.

With the US enjoying a well-earned holiday rest, and the majority of the world hunting out Black Friday deals, Alibaba had already stolen a march on the opposition with Singles Day earlier this month. The numbers put out by the company did not disappoint: zero downtime was claimed, with $1 billion of gross merchandise volume achieved within 68 seconds of launch.

A recent report from ThousandEyes aimed to benchmark the performance of the hyperscalers, noting that Alibaba, alongside Amazon Web Services (AWS), relies more heavily on the public internet than Microsoft and Google, which generally prefer private backbone networks. The report also noted that, contrary to popular opinion, Alibaba suffered packet loss when it came to China’s Great Firewall.

You can take a look at the Alibaba Cloud Alink repository on GitHub here.


McAfee notes the gap between cloud-first and cloud-only – yet optimism reigns on success

Two in five large UK organisations expect their operations to be cloud-only by 2021 according to a new report – but the gap between the haves and the have-nots is evident.

The findings appear in a new report from McAfee. The security vendor polled more than 2,000 respondents – 1,310 senior IT staff and 750 employees – across large businesses in the UK, France, and Germany to assess cloud readiness.

40% of large UK businesses expect to be cloud-only by 2021, yet only 5% surveyed already consider themselves to be at this stage, the research found. 86% of UK-based senior IT staff saw their business as cloud-first today, comparing similarly to France (90%) and Germany (92%), while optimism reigned over becoming cloud-only when given an indeterminate future date. 70% of UK respondents agreed this would occur, albeit lower than their French (75%) and German (86%) counterparts.

The benefits are clear among respondents. 88% of senior IT staff polled in the UK said moving to the cloud had increased productivity among end users. 84% said the move had improved security, while supplying more varied services (85%) and increased innovation (84%) were also cited.

The question of responsibility is an interesting one, and shows where the waters begin to muddy. Never mind the issue of vendor versus customer; consensus does not particularly exist within senior leadership either. The largest share believe responsibility lies with the head of IT (34%), compared with the CIO (19%), CEO (14%), or CISO (5%). One in five (19%) employees surveyed admitted to using apps which had not been approved by IT.

“The key to security in a cloud-first environment is knowing where and how data is being used, shared and stored by employees, contractors and other third parties,” said Nigel Hawthorn, director of McAfee’s EMEA cloud security business. “When sensitive corporate data is under the IT team’s control – whether in collaboration tools or SaaS and IaaS applications – organisations can ensure the right policies and safeguards are in place to protect data from device to cloud, detect malicious activity and correct any threats quickly as soon as they arise.”

Those wondering ‘whither McAfee?’ with regard to cloud security research will notice the company’s long-standing pivot to this arena. The abovementioned ‘device to cloud’ reference is taken directly from McAfee’s branding as the company looks to gather expertise as a cloud access security broker (CASB).

This is not without success: McAfee was named for a second year, alongside Bitglass, Netskope and Symantec, as a leader in Gartner’s CASB Magic Quadrant last month. Last year Gartner noted McAfee’s expertise – gained with the acquisition of Skyhigh Networks – in raising awareness of shadow IT. 2019’s Quadrant sees one new face in the winners’ enclosure in the shape of Microsoft.

In April, McAfee released a special edition of its Cloud and Risk Adoption Report. According to the 1,000 enterprise organisations polled, more than half (52%) said they found security better in the cloud than on-premise, with organisations that adopt a CASB more than 35% likelier to launch new products and gain quicker time to market.


Microsoft and AT&T expand upon partnership to deliver Azure services on 5G core

Microsoft and AT&T have beefed up their strategic partnership, announcing a new offering where AT&T’s growing 5G network will be able to run Azure services.

The companies will be opening select preview availability for network edge compute (NEC) technology. The technology ‘weaves Microsoft Azure cloud services into AT&T network edge locations closer to customers,’ as the companies put it.

Microsoft and AT&T first came together earlier this year, with the former somewhat stealing the thunder of IBM, which had announced a similar agreement with AT&T the day before.

While the operator will be using Microsoft’s technology to a certain extent – the press materials noted it was ‘preferred’ for ‘non-network applications’ – the collaborative roadmap, for edge computing and 5G among other technologies, was the more interesting part of the story. The duo noted various opportunities that would be presented through 5G and edge: mobile gaming is on the priority list, as is utilising drones for augmented and virtual reality.

Regarding AT&T’s own cloud journey, the commitment to migrate most non-network workloads to the public cloud by 2024 was noted, while the operator’s pledge to become ‘public-cloud first’ was reaffirmed.

“We are helping AT&T light up a wide range of unique solutions powered by Microsoft’s cloud, both for its business and our mutual customers in a secure and trusted way,” said Corey Sanders, Microsoft corporate vice president, in a statement. “The collaboration reaches across AT&T, bringing the hyperscale of Microsoft Azure together with AT&T’s network to innovate with 5G and edge computing across every industry.”

After many false starts – remember Verizon’s ill-fated public cloud product offering? – telco is finding a much surer footing in the cloud ecosystem. As VMware CEO Pat Gelsinger put it in August: “Telcos will play a bigger role in the cloud universe than ever before. The shift from hardware to software is a great opportunity for US industry to step in and play a great role in the development of 5G.”

You can read the full Microsoft and AT&T update here.


Study shows continued cloud maturation in Nordics – with manufacturing a standout

A new report from Nordic IT services provider Tieto has found the region’s cloud landscape has matured significantly since 2015 from both a strategic and operational perspective – with Sweden and Finland fighting for supremacy.

The study, the latest Cloud Maturity Index, was based on responses from almost 300 decision-makers across the public and private sectors in the Nordics. It placed almost one in five (18%) organisations as ‘mature’, a quarter (27%) as ‘proficient’, 42% at a basic level, and 13% as ‘immature’.

In other words, it’s a broad church, with just a slight emphasis on the have-nots rather than the haves. Those described as mature use cloud services to a larger extent – virtually everything (97%) being cloud-based – and are much likelier to exploit the technology’s advantages compared with their immature cousins. Being classified as a mature cloud business means approximately 20% lower IT operating costs and, on average, 15% greater efficiency in increasing business competitiveness.

When it came to specific industries, finance came out on top for Nordic organisations, maintaining its lead previously forged in the 2015 and 2017 surveys. The public sector continues to report the lowest strategic and operational maturity. Yet the gap is closing when it comes to traditionally ‘slower’ verticals, with manufacturing proving particularly effective. Whereas finance scored 6.0 in 2015 and 6.3 this time around, the manufacturing industry has leapt to 6.0 from 4.4.

The report also noted the importance of environmental factors in organisations’ initiatives. This is not entirely surprising given the temperate climate has enabled many data centre providers to set up shop in the Nordics. Approximately half of companies polled said they were already considering issues such as energy consumption or CO2 emission as part of their cloud strategy. Again less than surprisingly, mature cloud organisations were considerably further ahead on environmental initiatives than their immature brethren.

Despite the report’s figures – again ranked out of 10 – showing Sweden and Finland comfortably ahead of Norway, according to Tieto’s head of cloud migration and automation Timo Ahomaki it is Finland who should be celebrating. Data sovereignty, Ahomaki argues, is an area which is ‘quite polarised’ in Sweden, with Finland’s more advanced cloud security putting it ‘at the forefront’ of the Nordic public sector.

Regular readers of this publication will be aware of the various initiatives which have taken place regarding the emerging data centre industry in the Nordics. As far back as 2015, CloudTech reported on a study from the Swedish government – which was later put into legislation – to give tax breaks for data centre providers. Last year, DigiPlex announced a project whereby wasted heat from its data centres would be used to warm up residential homes in Oslo.

You can read the full report here (email required).


CircleCI aims to further break down the ‘hornet’s nest’ of continuous delivery with EMEA expansion

Continuous integration and delivery (CI/CD) software provider CircleCI has been acting on its expansion plans following the $56 million (£44.8m) secured in series D funding in July. Now, the company is ready for business in London – and has hired a new head of EMEA to push things along.

Sharp observers looking at the almost 250 faces which comprise the CircleCI team would have noticed a recent addition at the foot of the list. Nick Mills joined the company in September having previously held leading sales roles at Stripe and Facebook, among others, invariably concerned with international expansion.

At CircleCI, Mills will be responsible for EMEA – which the company says represents almost a quarter of its overall business – in everything which is classified as non-engineering. “There’s a huge amount of expansion opportunity,” Mills tells CloudTech. “I’ve already had some interesting conversations in the first few weeks here with companies in fintech and mobility, on-demand services. They really see CircleCI and CI/CD as a fundamental critical enabler that can help their teams increase productivity.”

The company certainly appears to be seeing gains from this bet. Big tech names on the customer roster include Facebook, Spotify and Docker, while investor Scale Venture Partners described the company earlier this year as the ‘DevOps standard for companies looking to accelerate their delivery pipeline while increasing quality.’

For CEO Jim Rose, who has been in London this week for the launch, it is the expansion of a journey which began for him in 2014, first as COO before moving up to the chief executive role a year later.

“When I first got to the company, there were about 30 individual logos in the CI/CD market, and that’s been whittled way down,” Rose tells CloudTech. “Now there is, really, ourselves, a couple of smaller, standalone, very focused CI/CD players, and then you’ve got some of the larger platforms that are trying to go end-to-end.”

Rose cites the ‘peanut butter manifesto’, the now-infamous document from Yahoo which used the foodstuff as a metaphor for businesses spreading themselves too thinly across multiple offerings, as evidence for why the larger platforms will struggle.

“We have really gone for the opposite of that strategy,” he explains. “For the vast majority of large customers, you can only move certain systems one at a time. Customers ask us all the time… how do we build that CI/CD system but also the most flexible system so that regardless of what you have in place inside of your overall enterprise or team, it’s really easy and seamless?”

There are various aspects which pull together the company’s strategy. Back in the mid-2000s, if a company built a new application it would hire a bunch of developers, flesh out the spec, write custom code across every line and then package and ship the resultant product. As Rose puts it, any custom code written today takes on the mantle of orchestrating all the pieces together, from the plethora of open source libraries and third-party services.

Continuous delivery is a hornet’s nest – it’s very easy to get to version one, but then the complexity comes as your developers start pushing a lot faster and harder

“What we’re helping customers do is, across all of these hundreds and thousands and millions of projects, start to take a heartbeat of all those different common components and use that to help people build better software,” says Rose. “If you have a version that’s bad or insecure, if you’re trying to pull a library from a certain registry that has stability problems, if you have certain services that are just unavailable… these are all new challenges to software development teams.

“Using the wisdom of the crowd and the wisdom of the platform overall, we’re starting to harness that and use that on behalf of our customers so they can make their build process more stable, more secure, and higher performing.

“Honestly, continuous delivery is a hornet’s nest,” adds Rose. “It’s really complicated to run one of these systems at scale. It’s very easy to get to version one, but then the complexity comes as you bring it out to more teams, as you add more projects, as your developers start pushing a lot faster and a lot harder.”

For a large part of the company’s history, the individual developer or team of developers was the route in for sales; almost in an infiltrative ‘shadow IT’ context, whether it was the CTO of a small startup or a team lead at a larger company. While this can still be the case at enterprise-level organisations, CircleCI realised it needed more of a top-down, hybrid sales approach.

“One of the biggest changes in our space – not just CI/CD, but the developer space more generally – is developers historically have not been conditioned to pay for things,” says Rose. “If you needed a new tool, a new script, the developers would either go out and create it on their own or they use an open source service.

“What’s changed over the last two or three years is now developers, because their time is so valuable, have the budget and the expectation that they have the opportunity to pay for services that help you move faster. A lot of what we do from a sales perspective is help development teams understand how to procure technology. What’s necessary? What do you think about what you look at? How do we help you through that commercial process?”

Mills will be looking to guide EMEA customers through this process, with the stakes high and the motivation to assist leading tech companies strong. “A lot of companies are successful in and of themselves and can build their businesses, but the space we’re in really has the potential to enable the most successful tech companies today and of the future,” Mills explains.

“Ultimately, the creation they can generate as companies can obviously help them move quickly, increase the scale and pace of product delivery,” he adds. “To me, that feels like incredibly high-level work to be doing and high value.”
