All posts by James

Flexera acquiring RightScale points to need for cost and complexity optimisation across the IT stack

Cloud cost optimisation and management continues to be a hot area – and Flexera’s acquisition of RightScale, announced last week, plays into that theme even further.

RightScale may be best known for its authoritative yearly State of the Cloud reports, but its bread and butter is reducing the cost headaches and complexity of cloud deployments. This can be through dashboards which collate performance data and identify wasted cloud spend, or through a single portal from which multiple clouds can be accessed.
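To picture the kind of analysis such dashboards perform, here is a minimal sketch – not RightScale's actual logic, and with invented instance names, utilisation figures and threshold – of how wasted spend might be flagged from basic metrics:

```python
# Illustrative sketch: flag likely-wasted cloud spend from utilisation data.
# All instance data and the threshold below are invented for the example.

IDLE_CPU_THRESHOLD = 5.0   # average CPU % below which an instance looks idle

instances = [
    {"name": "web-prod-1",  "avg_cpu": 47.2, "monthly_cost": 310.00},
    {"name": "batch-old-3", "avg_cpu": 1.4,  "monthly_cost": 620.00},
    {"name": "dev-sandbox", "avg_cpu": 0.2,  "monthly_cost": 95.00},
]

idle = [i for i in instances if i["avg_cpu"] < IDLE_CPU_THRESHOLD]
wasted = sum(i["monthly_cost"] for i in idle)

for i in idle:
    print(f"{i['name']}: {i['avg_cpu']}% avg CPU, ${i['monthly_cost']:.2f}/month")
print(f"Potential monthly saving: ${wasted:.2f}")
```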

Flexera’s focus is not dissimilar; the Illinois-headquartered firm offers products around IT and software asset management (SAM). Together, the two companies will aim to provide an end-to-end set of tools to manage the entire IT stack, from hardware, to software, to SaaS.

“In today’s IT environment, a strong technology asset management strategy is not a nice-to-have – it’s required,” wrote Michael Crandell, CEO of RightScale in a blog post. “Enterprises spend approximately 60% of their IT budgets on software, hardware, SaaS, and cloud technology. With the RightScale solutions under the Flexera product umbrella, you’ll have the most comprehensive set of tools to help you manage your IT spend.”

That is the rationale – but what backs it up? As regular readers of this publication will testify, multi-cloud is becoming an essential part of organisations’ IT operations in 2018. Indeed, according to this year’s State of the Cloud, more than four in five enterprises have a multi-cloud strategy in place. Not only does it guard against the dreaded vendor lock-in, but companies are seeing the benefits of different clouds for different workloads; take Netflix, for instance, and the furore around the company – a long-time AWS house – saying it used Google for certain disaster recovery workloads.

This approach has in some cases metamorphosed into something even more specific. Take the partnership announced last week by Microsoft and Volkswagen to put together what the companies are calling an ‘automotive cloud.’ From 2020, the duo claim, more than five million new Volkswagen brand vehicles per year will be fully connected, aiming for ‘a future fleet of cars which will behave as mobile ‘Internet of Things’ hubs linked by Microsoft Azure.’

Cloud solutions which are specific to certain industries are becoming more commonplace, such as the SAP Digital Manufacturing Cloud, announced back in April. The product is tailored for manufacturers of all sizes, with features such as integration between business process systems and the shop floor, and connecting manufacturers to suppliers. At the time, the company said the move would help customers “take advantage of the Industrial Internet of Things by connecting equipment, people and operations across the extended digital supply chain and tightly integrating manufacturing with business operations.”

This may be the streamlined future for organisations – but for the time being the vast majority of companies will have a mix of cloud-based and legacy systems, and a pressing need to de-clutter their infrastructure. “As the migration to cloud continues, our clients are telling us that cloud costs are escalating at a rate much faster than they envisioned or planned for, and that multi-cloud management complexities are becoming the norm – not the exception,” said Michael Adams, KPMG managing director.

“They want to be able to control and reduce spend across all of their cloud environments with one solution.”

You can find out more about the Flexera/RightScale acquisition here.


Mark van Rijmenam: On the ‘gestalt shift’ of big data, blockchain and AI convergence

When emerging technologies such as blockchain, artificial intelligence (AI) and the Internet of Things converge, a ‘gestalt shift’ will occur, according to a new book. “The character of the experience will drastically change,” write Mark van Rijmenam and Dr. Philippa Ryan in Blockchain: Transforming Your Business and Our World. “All of a sudden, we can see the world through a different, more technologically advanced, lens and this opens up a completely new perspective.

“The convergence of multiple disruptive technologies will offer us new possibilities and solutions to improve our lives and create better organisations and societies, as well as build a better world all together.”

Organisations are increasingly taking the approach of exploring these technologies in tandem rather than in silos. Put simply, they all feed into each other. Pat Gelsinger, CEO of VMware, had it nailed down at the recent VMworld event. “Each [technology] is a superpower in [its] own right, but they’re making each other more powerful,” he told attendees. “Cloud enables mobile connectivity; mobile creates more data; more data makes the AI better; AI enables more edge use cases; and more edge requires more cloud to store the data and do the computing.”

For van Rijmenam, already a well-established big data thought leader, it was a natural trend. “The convergence of emerging technologies is the true paradigm shift organisations have to face,” he tells CloudTech. “Big data and blockchain have a lot in common and it will actually make data governance more important – after all, blockchain makes data immutable, verifiable and traceable, but it does not magically turn low-quality data into high-quality data.”

This feeds into the central problem, that of data – what to do with it and how to utilise it best. But ‘twas ever thus. “When initiating your business intelligence project, you’re likely to be surprised at how bad your raw material – data – really is,” wrote Dan Pratte in TechRepublic. “You’ll discover that if you’re going to be serious about business intelligence, you’re going to have to get very serious (their emphasis) about data quality as well.” The article publication date? May 30 2001.

Today, artificial intelligence is redefining business intelligence at a rapid rate. Take the recent analysis from Work-Bench around the future of enterprise technologies. “Expect all modern BI vendors to release an [automated machine learning] product or buy a startup by [the] end of next year,” the report explained.

This will move down to rank-and-file organisations which, ultimately, have to see themselves as data-centric companies going forward. “Organisations need to completely rethink the customer touchpoints and processes to be ready for the convergence of emerging technologies,” says van Rijmenam. “Only those organisations who are capable of seeing themselves as a data company will stand a chance to survive.”

Blockchain: Transforming Your Business and Our World devotes only its last chapter – 14 pages – to convergence. The remaining 180-odd pages explore blockchain’s potential in a variety of scenarios, from poverty, to voting, to climate change. The book describes these throughout as ‘wicked problems.’ Yet the third chapter, on identity, is the ultimate banker.

“We believe that we first and foremost need to solve the identity problem [with blockchain],” says van Rijmenam. “Once we have a self-sovereign identity, it will help make it easier to solve the other issues. That is why we first discussed that problem in our book before discussing the other wicked problems – thus a self-sovereign identity will be the biggest long-term change as it will empower individuals, but also organisations and even connected devices.”

Identity is not the only problem the industry needs to solve before blockchain can make its way truly into the mainstream. While a recent study from Juniper Research found that business leaders’ understanding of the technology is improving steadily, van Rijmenam categorises the outstanding issues into three buckets: technological, people, and culture. “Consumers will need to get used to a society where they have to control their own private keys,” he says. “That might be the biggest challenge of them all as it requires a culture shift.”
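To make the key-control point concrete, here is a deliberately simplified sketch – real blockchains use elliptic-curve cryptography, not this toy hash scheme – of why a lost private key is unrecoverable:

```python
# Toy illustration of self-sovereign key control (NOT a real blockchain scheme:
# production systems use elliptic-curve keys, e.g. secp256k1).
import hashlib
import secrets

private_key = secrets.token_bytes(32)                    # known only to the holder
address = hashlib.sha256(private_key).hexdigest()[:40]   # derived, shareable identifier

print(f"Address anyone can see: {address}")
print(f"Key only you must keep: {private_key.hex()}")

# There is no 'forgot password' flow: the address cannot be inverted back into
# the key, so losing private_key means losing control of everything tied to it.
```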

With this intersection in mind, van Rijmenam is currently working on a new book, focused on ‘the organisation of tomorrow’ and exploring how big data analytics, blockchain and AI will be transformative. “Organisations need to ‘datafy’ their processes, make data available using the cloud, collaborate with industry partners to optimise the supply chain, analyse their data for insights, and automate their business processes using AI,” says van Rijmenam.

Blockchain: Transforming Your Business and Our World is published by Routledge and is available for purchase here.

Main picture credit: https://vanrijmenam.nl/

Ignore multi-cloud today and risk becoming irrelevant in five years, report warns

Multi-cloud initiatives continue to be of great importance to European organisations – and those who aren’t heeding the warning signs today will feel the pinch in five years’ time.

That’s according to a new study from research firm Foresight Factory, alongside application network technology provider F5 Networks. The study, titled ‘The Future of Multi-Cloud’, drew on contributions from Deloitte, CloudSpectator, Ovum and more, having been based on a discussion guide combining publicly available research and Foresight’s proprietary bank of more than 100 trends.

In short – delay multi-cloud adoption and your organisation will become increasingly irrelevant. Yet many organisations will surely be aware of this. Take the study from Virtustream in July, which found the vast majority of organisations (86%) confirming their cloud strategy was a multi-cloud one. Or take how many of the leading cloud vendors are pushing their acquisition and product strategies towards the trend: Cisco acquiring Duo Security, Nutanix buying Netsil, and Juniper Networks offering new data centre, campus and branch network offerings.

Everyone is at it. One of the primary drivers for multi-cloud, as the report notes, is fear surrounding vendor lock-in. But the report makes an interesting point: there is a sense of constant change underpinning these initiatives, with the hyperscale vendors more than willing to outspend rivals to keep their market share.

Take machine learning as an example. According to the RightScale 2018 State of the Cloud report, machine learning is the public cloud service attracting the most future interest. AWS, Microsoft and Google are all taking big strides in this area, from Google’s pre-packaged AI services to the various AWS clients citing the technology as key to their success – Major League Baseball, Formula 1, and more. On Microsoft’s side, the report notes that its machine learning focus has led it to invest in new server technologies, with workloads at the edge also contributing.

Yet there are various issues which still need to be overcome. The report cites the well-known skills gap organisations are facing. With multiple cloud services, containers, APIs and more, visibility and management are vital. Plenty of companies have sprung up to help organisations with this – CloudCheckr, CloudHealth Technologies and so on – but ultimately it all comes down to service delivery. Consumers may not be interested in the technical intricacies of the multiverse, but they will care if their service becomes inflexible or goes down.

So what can companies do? Their technological landscape is continually changing, driven from the top by initiatives from the largest cloud vendors, and they have more plates spinning than ever. There are a couple of things which can be done, according to the report. Firstly, organisations should focus more on security. Consumers will eventually only be interested in those who have the most watertight systems built in. What’s more, there needs to be an increased focus on nurturing young IT talent – or ‘tapping into the kaleidoscopic potential of youth and promoting industry diversity’, as the report puts it.

In other words, organisations need multi-cloud. With developments in edge computing and artificial intelligence starting to drive greater insights and quicker decision making, they need to get on that train as soon as possible. But the skills gap won’t be overcome overnight.

“The multi-cloud ramp-up is one of the ultimate wake-up calls in internal IT to get their act together,” said Eric Marks, VP of cloud consulting at CloudSpectator. “One of the biggest transformative changes is the realisation of what a high performing IT organisation is and how it compares to what they have. Most are finding their IT organisations are sadly underperforming.”

Microsoft makes Azure Data Box generally available for heavy duty data migration

More and more enterprise data is being transferred to the cloud, but sometimes the journey can break the network's back – which calls for less virtual and more physical solutions.

Microsoft has announced the general availability of Azure Data Box, a physical box which organisations can order, fill up, and then return to Redmond for it to be uploaded to an Azure environment. 

Companies and users can store up to 100 TB per standard box, with options at either end of the scale. The newly announced Data Box Heavy can handle up to 1 PB of data, while Data Box Disks go up to 40 TB.

For those who may consider this a decidedly low-tech method of cloudy data transfer, it is worth noting Amazon Web Services (AWS) has long offered Snowball, a petabyte-scale data migration appliance of similar bulk to Azure Data Box. AWS also has the Snowmobile, a 45-foot shipping container, for data loads of up to 100 PB.

The customers who really benefit from these types of tools are organisations either with reams of offline data from legacy systems, or those collecting data in hard-to-access places. For perspective, moving an exabyte of data across a 10 gigabit per second line would take the better part of two and a half decades to complete.
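The back-of-the-envelope maths behind that claim is straightforward, as this sketch shows (assuming decimal units and a fully saturated line with no protocol overhead):

```python
# Sanity-check the transfer-time claim: 1 EB over a 10 Gbps line.
EXABYTE_BITS = 1e18 * 8          # 1 EB = 10^18 bytes = 8 * 10^18 bits
LINK_BPS = 10e9                  # 10 gigabits per second

seconds = EXABYTE_BITS / LINK_BPS
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1f} years")      # ~25.4 years, i.e. roughly two and a half decades
```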

Oceaneering International was one of the first customers of Azure Data Box last year. Its underwater vehicles generate 2 TB of data per day, with the vessel itself generating up to 10 TB per day. "We're trying to get the data to the decision maker quicker," explained Mark Stevens, director of global data solutions, adding that the company is aiming for a seven-day turnaround from the field anywhere in the world.

The other addition to the product family is Azure Data Box Edge, which combines an on-premises appliance with AI-enabled edge compute capabilities. With increasing amounts of data being created at the edge, the hardware enables data analysis and filtering at the edge of the network, as well as acting as a storage gateway.
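As a rough illustration of edge-side filtering – a generic sketch with invented sensor data and threshold, not the Data Box Edge API – the idea is to analyse locally and forward only what matters:

```python
# Generic edge-filtering sketch: analyse readings locally, upload only anomalies.
# Sensor names, values and the threshold are invented for the example.

THRESHOLD = 80.0  # readings above this are worth sending to the cloud

def filter_at_edge(readings):
    """Keep only anomalous readings, cutting upstream bandwidth needs."""
    return [r for r in readings if r["value"] > THRESHOLD]

readings = [
    {"sensor": "pump-1", "value": 42.0},
    {"sensor": "pump-2", "value": 97.5},   # anomaly: forwarded
    {"sensor": "pump-3", "value": 55.1},
]

to_upload = filter_at_edge(readings)
print(f"Uploading {len(to_upload)} of {len(readings)} readings")
```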

You can find out more about the Azure Data Box family here.

Picture credit: Microsoft

AWS launches into Accenture and Capgemini partnerships

A couple of partnerships involving Amazon Web Services (AWS) with Accenture and Capgemini have been unveiled, around healthcare and enterprise migrations respectively.

The first partnership, announced with Accenture and Merck, aims to launch a cloud-based informatics research platform which is designed to help life sciences organisations in the early stages of drug development.

The platform will enable healthcare professionals to analyse and aggregate data from multiple applications through a single set of interfaces. It is being developed by Accenture and AWS, with Merck the first pharmaceutical company to use it.

Elsewhere, Capgemini and AWS are coming together to build a platform focusing on value-added cloud services for their customers, ranging from SAP migrations, to data centre modernisation and artificial intelligence (AI).

With Capgemini's partnerships with both SAP and AWS in mind, the company is able to migrate the former to the latter as part of the first focus of the initiative. The data centre modernisation will come through leveraging VMware Cloud on AWS – which regular readers of this publication will know all about – to deliver end-to-end hybrid cloud.

"Our clients look for global scale and excellence in digital transformation, enabled by cloud technologies," said Aiman Ezzat, chief operating officer at Capgemini. "With a commitment to scaling our AWS capabilities we can bring to our clients' digital jourenys both operational efficiencies and the power of new technologies, such as artificial intelligence and machine learning."

Intel partners with Alibaba for edge computing platform

Intel has identified a $200 billion addressable opportunity in the 'data-centric' economy combining cloud, AI and edge – and the edge element has come into focus following a new partnership with Alibaba.

The collaboration, revealed at Alibaba's Computing Conference in Hangzhou, will see the two companies launch a joint edge computing platform.

The primary use case is for industrial manufacturing and smart buildings, integrating Intel's hardware, software and AI with Alibaba Cloud's IoT products. "The platform utilises computer vision and AI to convert data at the edge into business insights," the companies note.

The companies are partnering in other ways – deploying the latest Intel technology in Alibaba to prepare for the 11/11 shopping festival, as well as helping provide content for the Tokyo Olympic Games in 2020.

"Alibaba's highly innovative data-centric computing infrastructure supported by Intel technology enables real-time insight for customers from the cloud to the edge," said Navin Shenoy, Intel EVP and data centre group general manager in a statement. "Our close collaboration with Alibaba from silicon to software to market adoption enables customers to benefit from a broad set of workload-optimised solutions."

Last month, Shenoy told attendees at the Data-Centric Innovation Summit in Santa Clara of Intel's plans to address the 'biggest opportunity' in the company's history. In a subsequent company editorial, he further outlined his vision. "The proliferation of the cloud beyond hyperscale and into the network and out to the edge, the impending transition to 5G, and the growth of AI and analytics have driven a profound shift in the market, creating massive amounts of largely untapped data," he wrote.

This is not the only cloudy partnership Alibaba has tapped into in recent weeks – the company also struck a deal with SAP to launch in China, with the two companies jointly offering ERP suite S/4HANA Cloud in the country.

Google Cloud launches container security tool and more at Tokyo jamboree

Google has rolled out a series of cloudy updates in time for its Cloud Next Tokyo event – around container security, in-memory data, and artificial intelligence (AI).

Container Registry vulnerability scanning, launched in beta, looks to prevent the deployment of vulnerable images by automatically detecting known security vulnerabilities during the continuous integration and delivery (CI/CD) processes.

Regular readers of this publication will certainly be aware of the importance of security in containerisation and DevOps. Indeed, back in June this publication wrote about the various pieces of research around unsecured consoles and dashboards, with companies including Tesla and Weight Watchers affected.

This is where Google wants to shore things up. All container images built using its fully managed CI/CD platform, Cloud Build, will now be automatically scanned for OS package vulnerabilities. What's more, vulnerability scanning will also be integrated with Binary Authorization, which ensures only trusted container images can be deployed without the need for manual intervention.

"When we set out to build vulnerability scanning for container images, we started from the premise that security needs to be built into CI/CD from the very beginning, to cut down on time spent remediating downstream security issues, and to reduce risk exposure," Google wrote in a blog announcing the launch. "Furthermore, security controls need to happen atuomatically, not as part of some manual, ad-hoc process.

"The system must be able to automatically block vulnerable images based on policies set by the DevSecOps team," the blog adds. "In other words, CI/CD security needs to be comprehensive, from scanning images, to enforcing validation, as part of every CI/CD pipeline."

Cloud Memorystore for Redis, made generally available alongside these updates, is based on the open source Redis database and automates tasks such as provisioning, scaling, failover and monitoring. New regions supporting the service are Tokyo – as one would expect – Singapore and the Netherlands, taking the total number of supported regions to eight.
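Because Memorystore is protocol-compatible with open source Redis, existing clients should connect unchanged – a sketch using the standard redis-py package, with a placeholder host IP standing in for an instance's private address:

```python
# Connecting to a Redis-compatible endpoint such as Cloud Memorystore.
# The host IP below is a placeholder for your instance's private address.
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # speaks the standard Redis protocol
r.set("greeting", "hello from Memorystore")
print(r.get("greeting"))  # b'hello from Memorystore'
```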

The AI-focused announcement was specific to Japan; Google said that it was offering two courses, the Machine Learning with TensorFlow on Google Cloud Platform specialisation, and the Associate Cloud Engineer certification, in Japanese. A new Advanced Solutions Lab (ASL) is also being launched in Tokyo. "In the coming months, the ASL will offer an immersive training experience so that Japanese businesses can learn directly from Google Cloud ML engineers in a classroom setting," the company wrote. "With this training, businesses can build the skills they need to create and deploy machine learning at scale, using the full power of Google Cloud."

Another new feature concerns more effective code search. Cloud Source Repositories, revamped and now available in beta, is aimed at privately hosting, tracking, and managing changes to large codebases on Google Cloud Platform. Its code search capabilities are based on the document indexing and retrieval techniques used in Google Search.
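Document indexing of that kind can be sketched in a few lines – a toy inverted index over invented files, far simpler than what Google Search actually runs, but showing why indexed lookup beats scanning every file:

```python
# Toy inverted index: the core idea behind indexed code/document search.
from collections import defaultdict

docs = {
    "main.py": "def main(): run_server()",
    "server.py": "def run_server(): listen_on_port(8080)",
}

# Map each token to the set of documents containing it.
index = defaultdict(set)
for name, text in docs.items():
    for token in text.replace("(", " ").replace(")", " ").replace(":", " ").split():
        index[token].add(name)

print(index["run_server"])  # {'main.py', 'server.py'} -- found without scanning files
```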

The company is in the midst of its Next world tour – with London on the agenda in October. 

NetApp acquires StackPointCloud for multi-cloud Kubernetes service offering

Another piece of cloud M&A, this time with a Kubernetes feel: hybrid cloud provider NetApp has announced the acquisition of StackPointCloud, claiming it will provide the industry's first complete Kubernetes platform for multi-cloud deployments.

As the company's hosting page put it, the proposed hook-up is 'the simplest way to deploy a Kubernetes cluster to the clouds'. The NetApp Kubernetes Service is compatible with Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, with more than 7,500 clusters deployed thus far – 5,785 on AWS, 1,286 on Google and 596 on Azure.

If multi-cloud makes sense for organisations seeking greater efficiency across different workloads, and Kubernetes and containers make sense for application development and deployment, why not both?

StackPointCloud's technology promises 'zero to Kubernetes in three clicks', with easy upgrades as a key feature. Others include Istio support, volume support and dashboard capability.

Kubernetes has clearly been the leader in container orchestration – but as Ronald Sens, director EMEA marketing at A10 Networks noted in this publication earlier this week, there is more that can be done. "The key point here is that enterprise organisations are starting to take note and there are signs that the market for Kubernetes is growing very rapidly," wrote Sens.

"This acquisition will benefit customers looking to simplify the delivery of data and applications in clouds, across clouds and hybrid clouds," said Anthony Lye, SVP and general manager of NetApp's cloud data services business unit. "The StackPointCloud Kubernetes as a service platform combined with NetApp's Cloud Data Services creates a complete DevOps solution, so customers can focus on innovation, not administration."

Financial terms of the deal were not disclosed.

More than a quarter of key enterprise IT spending to be cloud based by 2022, says Gartner

Cisco may have said that cloud traffic would represent 95% of total data centre traffic by 2021 – but how much of that will be driven by the enterprise? New figures from Gartner give an intriguing picture.

The analyst firm has forecast that by 2022 more than a quarter (28%) of spending within key enterprise IT markets will be cloud-based, up from 19% this year. The findings, announced in the run-up to Gartner’s Symposium/ITxpo in Australia, show an interesting shift, with enterprise IT cloud spending now growing more quickly than traditional non-cloud markets.

Today, application software – such as customer relationship management (CRM) – drives the majority of enterprise IT spending. It will still be the largest market by 2022 but, given the saturation of the market, it will grow considerably more slowly than system infrastructure.

By 2022, Gartner argues, almost half of addressable revenue will be in system infrastructure and infrastructure hardware. This is down to the legacy stack – data centre hardware, operating systems and IT services – being especially inflexible and difficult to switch over. The coming few years will therefore be critical for traditional infrastructure providers.

“The shift of enterprise IT spending to new, cloud-based alternatives is relentless, although it’s occurring over the course of many years due to the nature of traditional enterprise IT,” said Michael Warrilow, Gartner research vice president. “Cloud shift highlights the appeal of greater flexibility and agility, which is perceived as a benefit of on-demand capacity and pay as you go pricing in cloud.

“As cloud becomes increasingly mainstream, it will influence even greater portions of enterprise IT decisions, particularly in system infrastructure as increasing tension becomes apparent between on- and off-premises solutions,” added Warrilow.

The figures come after a previous forecast from the company argued the global public cloud services market would grow 17.3% in 2019 to break the $200 billion mark. Cloud system infrastructure services, or infrastructure as a service, represented the fastest growing segment.

The key to ‘elite’ DevOps success in 2018: Culture, cloud infrastructure, and abandoning caution

If you think your organisation has gotten serious about DevOps, the bad news is that a small group of companies are raising the bar higher than ever. But things can change depending on how you implement cloud infrastructure.

That is the primary finding from DORA (DevOps Research and Assessment Team) in its most recent State of DevOps report, which polled almost 1,900 global professionals and was put together primarily alongside Google Cloud, with a stellar cast list of secondary sponsors including Microsoft Azure, Amazon Web Services (AWS) and Deloitte.

There are various comparisons to be made between this and Puppet’s State of DevOps report, released at a similar time. Both reports are framed around comparing high and low performers; in this case, almost half (48%) are considered high performers, compared with 37% medium and 15% low respectively.

Yet the DORA report for the first time introduces a new group – ‘elite’ performers (7%). These companies naturally deploy multiple times per day, but they also take less than an hour to go from code commit to code running in production – and less than an hour to restore service in the event of failure. For comparison, those in the previous high performer category could take up to a week for changes, and up to a day for service restoration.
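Lead time in this sense is simple to measure – a sketch with invented timestamps, classified against the elite/high thresholds described above:

```python
# Sketch: measuring DORA-style lead time (commit -> running in production).
# Timestamps are invented; bands follow the elite/high split described above.
from datetime import datetime

commit_time = datetime(2018, 9, 3, 14, 2)
deploy_time = datetime(2018, 9, 3, 14, 49)

hours = (deploy_time - commit_time).total_seconds() / 3600

if hours < 1:
    band = "elite (under an hour)"
elif hours < 24 * 7:
    band = "high (under a week)"
else:
    band = "lower performer"
print(f"Lead time {hours:.2f}h -> {band}")
```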

That gulping sound you just heard? IT engineers wondering how to push their companies to these super-high performance levels.

But fear not – there are steps organisations can take. First of all, try to remove the shackles and become less cautious. For instance, Capital One says it deploys 50 times per day, while Google and Netflix – albeit across their hundreds of services in production environments – go into the thousands.

The report dives into a typically ‘conservative’ organisation’s mindset. “Releasing code infrequently can be an effective strategy as they use the extra time between deployments for testing and quality checks to minimise the likelihood of failure,” the report notes. “[Yet] developing software in increasingly complex systems is difficult and failure is inevitable.

“When failures occur, it can be difficult to understand what caused the problem and then restore service. Worse, deployments can cause cascading failures throughout the system. Those failures take a remarkably long time to fully recover from.

“While many organisations insist this common failure scenario won’t happen to them, when we look at the data, we see 5% of teams doing exactly this – and suffering the consequences.”

Examining further drivers of improvement, the report assessed cloud and multi-cloud usage. While AWS (52%) was the most popular provider, ahead of Azure (34%) and Google Cloud Platform (18%), two in five said they were using multiple cloud providers.

The key here is that companies who exhibit all signs of cloud readiness – on-demand self-service, broad network access, resource pooling, elasticity and measured service – are significantly more likely to be in the elite group instead of the lowest performers.

Offering platform as a service to developers, as well as adopting infrastructure as code and open source policies, also helps. Take Capital One again as an example: it is not just the embracing of open source that is vital, but the culture which goes with it. This was an important part of the Puppet analysis: while building it appears to be a slower process, one in five of the highest performers said they had a strong DevOps culture across multiple departments.

The report assesses this against Ron Westrum’s model of organisational cultures – power-oriented, rule-oriented, and performance-oriented. “When teams have a good dynamic, their work benefits at the technology and organisational level,” the report notes. “Our research has confirmed this for several years and we caution organisations not to ignore the importance of their people and their culture in technology transformations.”

You can read the full report here (email required).

Read more: Puppet State of DevOps 2018: DevOps continues to evolve – but resist temptation to skip steps