All posts by James

Forbes Cloud 100 2018: Stripe holds off Slack to retain top private cloud title

Payments provider Stripe remains the number one privately owned cloud company ahead of social messaging firm Slack, according to the latest rankings from Forbes.

The media firm’s latest Cloud 100 list, which celebrates the best private cloud firms – as in, cloud companies that remain privately held – saw Stripe retain the top spot with what Forbes calls ‘the online tool kit for digital payments, helping billions in transactions flow back into the economy.’

After Stripe and Slack, however – the latter was third last year – there are significant changes at the top table. Dropbox, DocuSign and Adyen, which all made the top five in 2017’s list, have all since gone public. This publication noted in March, when Dropbox had filed for its IPO, that the company had moved away from Amazon Web Services to its own infrastructure – a particularly long process.

The Cloud 100 was put together alongside Salesforce Ventures and Bessemer Venture Partners. The latter, perhaps not coincidentally, also produces a yearly report focusing on cloud and enterprise M&A trends. 2017’s most prominent IPO-ers were Cloudera, MongoDB, and Okta – an improvement on the previous year but still below historical averages.

It is too early, however, to see the VC firm’s primary prediction for this year bear fruit. The keynote of Bessemer’s State of the Cloud report was that serverless, APIs, and blockchain would shape the cloud landscape in 2018 and beyond. Yet it will take a while for those technologies to infiltrate the wider landscape – the Forbes Cloud 100 list continues to be dominated by SaaS firms.

Yet the rise of artificial intelligence (AI) is notable. Among the more interesting companies in this year’s crop are UiPath, Darktrace and Checkr. UiPath, a new entry at #14, is a robotic process automation vendor based in New York, with Forbes admitting the company, which works with 1,350 companies, “absolutely came out of left field.” San Francisco-based Checkr (#47), meanwhile, aims to provide an AI-based solution for background checks, classifying records more accurately without threatening compliance.

Cybersecurity provider Darktrace (#36), whose team is led by former US and UK researchers and government agents, is one of only two companies holding Britain’s end up in the list. The launch of cyber-AI tool Antigena last year was met with reasonable fanfare; as sister publication IoT News put it at the time, using AI for threat monitoring offers “tangible benefits”, picking up on threats and reacting to them without the need for manual action.

It is worth noting the influence that being named on the Cloud 100 holds. Over the past three years, from MuleSoft to Cloudera and many more in between, every major cloud IPO or acquisition has involved a company departing the 100 list. The publishing of this year’s list will lead some in the industry to ponder future trajectories. Slack, which topped the list in 2016 before being usurped by Stripe for the past two editions, recently announced $427 million in series H funding – to put this in perspective, the ‘record’ is the series J round big data firm Palantir Technologies took in 2014 – taking the company’s valuation to more than $7 billion.

“The 2018 Cloud 100 represents well over $135 billion in private shareholder value – an astonishing figure that reminds us yet again of the power of the cloud,” said Byron Deeter, partner at Bessemer Venture Partners. “The way we do business will be dramatically different as a result of these companies and I am honoured to celebrate the remarkable accomplishments of the founders and teams behind each company on the 2018 Cloud 100.”

The top 10 companies, in descending order, are Stripe, Slack, Zoom Video Communications, Tanium, Procore Technologies, CrowdStrike, Qualtrics, Squarespace, Elastic, and Eventbrite. Take a look at the full Forbes Cloud 100 list here and compare with 2016 and 2017’s verdicts.

Datrium secures $60m series D funding to go beyond hyperconverged infrastructure

Datrium, a California-based hybrid cloud infrastructure provider, has raised $60 million (£45.8m) in series D funding, with the aim of helping enterprises ‘overcome major obstacles in data analysis and storage.’

The round was led by Samsung Catalyst Fund, as well as featuring new participation from Icon Ventures. NEA and Lightspeed Venture Partners – who regular readers of this publication would recognise as investors in Netskope, CloudBees and Zscaler among others over the years – also participated in the oversubscribed round.

The company’s primary offerings are based around its DVX product for cloud and on-premises environments, which promises 10 times the speed and scale of legacy hyperconverged infrastructure, as well as cloud backup and cloud disaster recovery orchestration.

Datrium claims it is pioneering the area of 2-layer infrastructure, which represents a step up from traditional hyperconverged infrastructure. As CEO Tim Page recently put it to The Silicon Review, the company provides ‘a single management interface across enterprise data centres and public cloud so IT can administer the hybrid cloud at the virtual machine level supported by real-time analytics and without all the detailed configuration time of traditional data centre infrastructure.’

Customers include Fortune 100 companies across industries such as financial services, healthcare, manufacturing and entertainment.

“We are thrilled to partner with Samsung and Icon Ventures to expand our technical and geographical momentum,” Page said in a statement. “Enterprises globally have the same problems in simplifying compute and data management across on-prem and cloud. Where SANs don’t even have a path to cloud, traditional HCI has too many trade-offs for core data centres – backup requires separate purchasing and administration, and cloud DR automation is seldom guaranteed. Larger enterprises are realising that Datrium software offers them a simpler path.”

The data centre landscape continues to change. Hyperscalers are ruling the roost, with capex continuing to rise. Cloud leads the way – Cisco said in February that cloud traffic will represent 95% of total data centre traffic by 2021 – so it’s a race against time for organisations trying to work through their legacy stacks with one hand while driving towards cloud with the other.

Total funding for Datrium now stands at $170 million.

Puppet State of DevOps 2018: DevOps continues to evolve – but resist temptation to skip steps

There are many paths to success in DevOps, but many more which lead to failure – so it’s important to get the evolution right.

That’s the key finding from Puppet’s recently released 2018 State of DevOps report. The study, which quizzed more than 3,000 global technology professionals, argues there are five key stages to good DevOps practice: having built the foundation, normalise the technology stack; standardise and reduce variability; expand DevOps practices; automate infrastructure delivery; and provide self-service capabilities.

Sounds simple, doesn’t it? Yet comparatively few of the companies surveyed were hitting the heights of DevOps-friendliness. The report’s results were based on organisations’ responses to various practices, scored between one and five. These were then grouped into low, medium and highly evolved. Four in five (79%) respondents were categorised as medium, with low and high (10% and 11% respectively) on similar levels.

Despite the desire to reach a higher level of DevOps zen, it is a slow evolutionary process. Among the majority of companies polled – those in the medium bracket – 14% said they had a strong DevOps culture across multiple departments or within a single department. For the more highly evolved players, those numbers change to 19% and 9% respectively.

It’s a similar process with automation – indeed, the same number of low-level and high-level companies surveyed (8%) said most of their services were available via self-service. Yet while only 15% of low players said their teams collaborated to automate services for broad use, this number rises for higher players to 37%. “Past experience has shown us that the path from a low degree of IT automation to a high degree isn’t neat or linear,” the report notes.

The report argues that automation is a reasonable yardstick on the CAMS – culture, automation, measurement and sharing – DevOps framework model as it is easily understood by the technical side and has a relatively predictable path. Culture, meanwhile, is more difficult to pin down.

Assuming the foundations have been built around setting company culture, automation et al, step one for teams looking to drive DevOps forward is to reduce the complexity of their tech stack. This means, for new projects, building on set standards, as well as making source code available to other teams. Standardisation follows, which again advocates building on a standard set of technology, as well as a standard operating system, while expansion explores reusing deployment patterns for building apps and services.

The report advises against skipping a few of the earlier steps. “Anecdotally speaking, we have seen organisations start with stage four automation, without having been through normalisation, standardisation and expansion,” it explains. “These organisations do not achieve success – and we believe it’s because they lack a foundation of collaboration and sharing across team boundaries.

“That sharing is critical to defining the problems an organisation faces and coming up with solutions that work for all teams.”

Ultimately, for many organisations reading the report, it’s about working at one’s own pace and getting the building blocks firmly in place.

“While DevOps practices have become far more well known across our industry, organisations continue to struggle to scale pockets of DevOps success more broadly across multiple teams and departments,” said Nigel Kersten, Puppet VP of ecosystem engineering. “This year’s report explores the foundational practices that need to be in place in order to scale DevOps success, and proves that success can only scale when teams are enabled to work across functional boundaries.”

You can read the full report here (email required).

Data centre infrastructure figures continue to rise – driven by public cloud and enterprise servers

As cloud usage continues to skyrocket, getting prime data centre real estate is a bigger priority than ever. According to the latest figures from analyst firm Synergy Research, over the past two years quarterly spend on data centre hardware and software has grown by 28%.

Total data centre infrastructure equipment revenues, taking into account cloud, non-cloud, hardware and software, hit $38 billion in the second quarter of 2018. Over that two-year period, public cloud spend has gone up 54% and private cloud 45%, while the traditional non-cloud base has declined 3%.

Original design manufacturers (ODMs) lead the way in the public cloud space, which may not come as much of a surprise. As this publication – and indeed, Synergy – has frequently reported, capital expenditure of the hyperscalers in public cloud continues to rise, building out their data centre empires and speculating to keep accumulating. Aside from the ODMs, Dell EMC leads Cisco and HPE in the public cloud market.

For private cloud, Dell EMC is again on top – the company leads in both server and storage revenues – ahead of Microsoft, HPE and Cisco, while Microsoft leads the declining non-cloud market, ahead of Dell EMC, HPE and Cisco in that order.

“We are seeing cloud service revenues continuing to grow by 50% per year, enterprise SaaS revenues growing by over 30%, search [and] social networking revenues growing by over 25%, and eCommerce revenues growing by over 40%, all of which are driving big increases in spending on public cloud infrastructure,” said John Dinsdale, a chief analyst at Synergy. “That is not a new phenomenon.

“But what has been different over the last three quarters is that enterprise spending on data centre infrastructure has really jumped, driven primarily by hybrid cloud requirements, increased server functionality and higher component costs.”

Microsoft digs down on Azure outage, explores data loss and failover question

Microsoft has put together a post-mortem on what it described as an 'unprecedented' Azure outage – exploring an interesting question of data loss and failover capability.

The outage, which affected customers of the VSTS – or Azure DevOps – service in the South Central US region, took more than 21 hours to fully recover from, while an additional incident involving a database that went offline took another two hours to resolve.

As the status page – which originally went down with the rest of the service – noted at the time, the cause was put down to severe storms in the Texas area. Amid the resulting power swells, the data centres were initially able to maintain safe temperatures thanks to a thermal buffer – but once that buffer was depleted and temperatures exceeded safe levels, an automated shutdown took place.

At the time, users queried Microsoft's claims that South Central US was the only region affected – but as the company explained, customers globally were affected due to cross-service dependencies.

Writing in a blog post, Buck Hodges, director of engineering for Azure DevOps, apologised to customers and said the company was exploring the feasibility of asynchronous replication. With asynchronous replication, any data which has not had time to be copied across the network to the second server is lost if the first server fails. As Hodges explained: "If the asynchronous copy is fast, then under normal conditions, the effect is essentially the same as synchronous replication." Synchronous replication, where data loss is less of an issue, has its own problems, particularly across regions, Hodges added, as the latency the extra copy introduces hurts performance – notably for mission-critical applications.
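To make the trade-off concrete, here is a minimal, hypothetical sketch – not Microsoft's implementation, just two in-memory 'replicas' – showing why an asynchronous copy can lose the most recent writes when the primary fails, while a synchronous copy cannot:

```python
# Minimal, hypothetical illustration of the replication trade-off described above.
# Not Microsoft's implementation: just two in-memory "replicas" and a toggle
# between synchronous and asynchronous copying.

class Replica:
    def __init__(self):
        self.records = []

def write(primary, secondary, record, synchronous):
    """Write a record to the primary and (maybe) copy it to the secondary.

    Synchronous: the write is only acknowledged once the secondary has it,
    so a primary failure loses nothing, at the cost of waiting for the copy.
    Asynchronous: acknowledge immediately and copy in the background, so any
    record not yet copied is lost if the primary fails first.
    """
    primary.records.append(record)
    if synchronous:
        secondary.records.append(record)          # caller waits for this copy
        return "acknowledged after replication"
    return "acknowledged before replication"      # background copy may never happen

primary, secondary = Replica(), Replica()
write(primary, secondary, "commit #1", synchronous=True)
write(primary, secondary, "commit #2", synchronous=False)

# Simulate the primary region going offline before the async copy completes:
lost = [r for r in primary.records if r not in secondary.records]
print(lost)  # ['commit #2'] - the data loss customers would have to accept
```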

For the customers themselves, it's not an either-or question. Hodges said that some customers would be happy to take a certain loss of data if it meant getting a large team up and running again, while others would prefer to wait for a full recovery however long it took.

"The only way to satisfy both is to provide customers the ability to choose to fail over their organisations in the event of a region being unavailable," Hodges wrote. "We've started to explore how we might be able to give customers that choice, including an indication of whether the secondary is up to date and possibly provide manual reconciliation once the primary data centre recovers.

"This is really the key to whether or not we should implement asynchronous cross-region fail over," Hodges added. "Since it's something we've only begun to look into, it's too early to know if it will be feasible."

Regardless of the problems outages create and the frustration they cause users, whether down to natural causes or otherwise, it is interesting to see such an introspective exploration from Microsoft here.

ParkMyCloud and CloudHealth team up for greater multi-cloud optimisation tools

It’s certainly a sign that the cloud industry is seriously mature – when we’re not just talking about multiple clouds, but multiple cloud management providers.

ParkMyCloud and CloudHealth Technologies, two companies in the cloud optimisation and management space, have announced an extension of their partnership with multi-cloud in mind.

The integrated product aims to offer the best of both companies’ offerings. SmartParking™, the ParkMyCloud feature which offers recommendations to optimise the ‘on’ and ‘off’ times of resources, is now manageable through the CloudHealth platform, alongside the latter’s own recommendations for optimising public and private cloud resources.

The partnership was first announced at the start of this year, with automation the name of the game in terms of ParkMyCloud’s contribution. One early customer using both successfully was Connotate, an AI startup that automates web data collection and monitoring, which was able to cut costs by up to 65% automatically and set up automated AWS, Azure, and Google Cloud Platform scheduling within 15 minutes.

Writing exclusively for this publication in July, Jay Chapel, co-founder and CEO of ParkMyCloud, cited on-demand instances and VMs, relational databases, load balancers, and containers as the four cloud resources most likely to squeeze budgets without due care and attention.

“Most non-production resources can be parked about 65% of the time – that is, parked 12 hours per day and all day on weekends,” wrote Chapel. “Many of the companies I talk to are paying their cloud providers an average list price of $220 per month for their instances. If you’re currently paying $220 per month for an instance and leaving it running all the time, that means you’re wasting $143 per instance per month.

“Maybe that doesn’t sound like much – but if that’s the case for 10 instances, you’re wasting $1,430 per month,” added Chapel. “One hundred instances? You’re up to a bill of $14,300 for time you’re not using.

“That’s just a simple micro example – at a macro level, that’s literally billions of dollars in wasted cloud spend.”
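Chapel’s arithmetic is straightforward to sanity-check; the short sketch below simply reproduces the figures quoted above (the $220 list price and the roughly 65% parked share are his numbers, not new data) and is illustrative only:

```python
# Back-of-the-envelope check of Chapel's example above. All inputs come from
# the quoted figures, not from new data.

HOURS_PER_WEEK = 24 * 7                       # 168
parked_hours = 12 * 5 + 24 * 2                # 12h each weekday plus full weekends
parked_fraction = parked_hours / HOURS_PER_WEEK
print(f"parked share of the week: {parked_fraction:.0%}")          # ~64%, i.e. "about 65%"

list_price = 220                              # dollars per instance per month, left running 24/7
waste_per_instance = list_price * 0.65        # spend covering hours the instance could be parked
print(f"waste per instance per month: ${waste_per_instance:.0f}")  # ~$143

for instances in (10, 100):
    monthly_waste = instances * waste_per_instance
    print(f"{instances} instances: ${monthly_waste:,.0f} wasted per month")
    # 10 instances -> ~$1,430; 100 instances -> ~$14,300
```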

The move also marks the first business CloudHealth has announced since it was acquired by VMware at the end of last month.

Alibaba Cloud looks to launch London data centre – furthering European push

Alibaba Cloud has confirmed it is setting up a data centre in the UK – after setting up a landing page with ‘London is calling’ as its headline.

The data centre, whose details can be found here, is set to have high availability of 99.99%, a cooling system configured with N+1 redundancy, as well as dual availability zones to provide stronger disaster recovery capabilities.

Early adopters can take 5% off the price of space: at the lower end of the scale, an instance with 1 core CPU, 512 MB of memory and a 20 GB disk costs $3.96 per month with the discount, while at the higher end an instance with 8 core CPU, 16 GB of memory and a 40 GB disk will set you back $153.86 per month. Instances are also available with MySQL 5.6/5.7.

Among the 15 products available to London customers are ECS Bare Metal Instances, first announced on the European market at this year’s Mobile World Congress. The rest are a mix of the usual suspects, alongside an Elastic GPU Service, and two container services – one focusing on Docker and the other on Kubernetes.

The company’s most recent momentum announcements had been around the Asia Pacific (APAC) market, with no fewer than nine products launched for the region last month, alongside a second infrastructure zone in Malaysia. As is to be expected, Alibaba’s presence in the region is strong, albeit with a predominant focus on China. According to figures from Synergy Research in June, Alibaba ranks second, behind AWS, in APAC – breaking up the AWS-Microsoft-Google oligopoly seen worldwide.

Yet moves to take European market share have been similarly important for the company. Speaking to CloudTech in May, Yeming Wang, general manager of Alibaba Cloud Europe, said ‘going global’ was a strategy which mirrored the whole of the Alibaba Group. Wang added that for many customers, Alibaba was being seen as a second or third cloud option, with the rise of multi-cloud strategies gaining prominence.

The launch of the London data centre comes amidst a report in The Information which alleged that Alibaba Cloud was scaling back its plans for US expansion. According to MarketWatch, the company has since rebuffed those claims. “Alibaba Cloud’s US strategy has always been primarily focused on working with US companies who need cloud services in China and Asia and helping Chinese companies with cloud services in the US, not competing head to head with local players,” the company said in a statement. “Our commitment to this market remains unchanged.”

Find out more about Alibaba’s London expansion here.

Tresorit raises €11.5 million in series B funding to help promote secure cloud collaboration

Tresorit, a European provider of cloud security and collaboration software, has announced it has raised €11.5 million (£10.4m) in series B funding to help accelerate growth and scale marketing and sales operations.

The company, which sits in the enterprise file sync and share space, offers products focused on encrypted storage and secure file sharing for legal, healthcare and HR departments, as well as GDPR-compliant solutions. Tresorit already has more than 17,000 customers, with recurring revenue growing on average by three times each year for the past three years.

Funding for the series B, which takes the company’s total funding to €15m, included contributions from 3TS Capital Partners, which led the round, and PortfoLion.

Like others in the space, such as Egnyte, Tresorit has been well served by its particular focus on the enterprise side of the market – with one eye on the continually rising number of data breaches.

The company said it saw growing interest in its service particularly in the months leading up to GDPR. “More and more businesses realise that the cloud is a convenient way to store and share files, but are afraid to make the switch due to security and compliance concerns,” Tresorit spokesperson Katalin Jakucs told CloudTech. “With security guaranteed by Tresorit’s end-to-end encryption and various data control features, businesses don’t have to worry about achieving compliance in the cloud.”

Writing in a blog post following the announcement, Tresorit CEO Istvan Lam said future plans included product enhancements, such as control features and password recovery, as well as the launch of Tresorit Send, a standalone file sharing offering. “With the help of the new investment, we aim to enable many more organisations to keep control over their data online,” wrote Lam.

“Tresorit’s service is critically important for customers in light of the growing number of data breaches reported on a daily basis,” said Jozsef Kover, partner at 3TS in a statement. “The management team has a clear vision on how the company will further expand its reach, especially among enterprise and SMB clients.

“The company has already established itself as a leader in its market and is experiencing strong, consistent growth,” added Kover. “We look forward to support the management on their journey to further expansion and global scale.”

Kover will join the board of Tresorit as part of the move.

Michael Yamnitsky, Work-Bench: On enterprise machine learning and why ‘it’s a good time to be a mega cloud’

The future of enterprise software will be in some part automated, with machine learning (ML) and artificial intelligence (AI) technologies really starting to come to the fore. For all the actors cast in this fascinating drama – from the largest cloud vendors to startups, and from business analysts to data scientists – it’s time to either start learning their lines or, in some cases, rip up the script altogether.

The script in question? The Empire Strikes Back.

Work-Bench, a New York-based venture capital firm focusing on enterprise technologies, released its 2018 Enterprise Almanac report last month with that very title. The reason relates to the culmination of a long-standing trend. 10 years ago, it was a clear fight between the on-prem empire and the ‘cloud rebel alliance’, as the report puts it. Today’s rebel alliances have to fight not just the on-prem overlords, but the cloud hypervendors – Amazon, Microsoft, Google et al.

This is a trend that is not going away any time soon. Michael Yamnitsky, venture partner at Work-Bench and author of the report, jokes that next year’s report will most likely be titled Return of the Jedi. Yet as the report asserts, large technology companies are ‘#winning’ – the report’s hashtag – at AI. Not only are the largest cloud vendors releasing various toolkits – Amazon with SageMaker and Lex, Microsoft with Azure Machine Learning Studio – they’re also hoovering up the best AI talent.

Work-Bench’s vision is ‘hoping that new talent gets excited about the enterprise’ – and as this publication put it when covering the original report, the promises of AI and ML will give plenty of reason to get excited in the coming years.

In an email conversation with CloudTech, Yamnitsky gives his verdict on what has changed in the industry over the past 12 months, the rise of Salesforce as an AI force, and what the biggest cloud players and BI vendors will do from here.

CloudTech: How much has changed in the enterprise software industry between this year’s and last year’s reports?

Michael Yamnitsky: A lot! The industry is constantly evolving. That’s what makes early stage venture so much fun. Building a new company in a highly dynamic, competitive market means you always need to play mental chess to figure out the right moats and pockets of value you can monetise.

CT: The report touches on the shift of moving natural language processing to business reports. There are companies looking to do this, but is this ‘democratisation of data’ really going to change things at the executive level?

MY: It will – but it will take time. The promise of products like Salesforce Einstein is to allow anyone to find insights in data without prior knowledge of the underlying data structures. Executives are certainly not precluded from this shift.

CT: Is it wanted from all sides – and what does this mean for data scientists? Is it similar to citizen developer initiatives from a few years ago, or will this take food off their table?

MY: That’s an interesting question and it comes down to culture. Some data scientists embrace democratisation, while others want to keep the lid shut so their work – and position of power – in the company remains stable.

CT: You focus on Salesforce as someone to keep an eye on for AI with its Einstein suite – could you elaborate a bit more on why that is, compared with other companies?

MY: Salesforce Einstein is based on a product built by BeyondCore, a startup Salesforce bought a few years ago. The product is very impressive. Salesforce just doesn’t have mindshare in the BI space. People do not know much about it. Salesforce has a good eye for marketing and I’m sure [it] will have no problem catching up.

There are some stealth early-stage companies trying to emulate Salesforce Einstein functionality with standalone products and Tableau seems eager to compete in this area – but otherwise Salesforce has a highly differentiated product in the market.

CT: If you are a more traditional BI vendor reading this report, what do you have to do?

MY: Traditional BI vendors certainly understand this shift and seem to know what to do about it given the recent developments and M&A we see in the market.

CT: What do the next 18 months or so hold for the ‘mega clouds’, as you call them in the report? Market share remains stellar and capex continues to climb – and they seem to be leaning on their huge growing shares in infrastructure to particularly explore ML tools. Will that last?

MY: The mega clouds continue to surprise us. We assumed last year they would stick to building developer tools. That’s certainly the case for Microsoft and Amazon, but Google seems eager to build vertical AI applications starting with customer service.

I would not disqualify the other two from pursuing a similar strategy, or from pursuing any other product-market extension for that matter. It’s a good time to be a mega cloud.

Cloud Native Computing Foundation to fully operate Kubernetes – with help of Google Cloud grant

Google Cloud is cutting the umbilical cord further when it comes to Kubernetes. The company is helping fund the move to transfer ownership and management of the technology’s resources to the Cloud Native Computing Foundation (CNCF) with the help of a $9 million grant.

The move will see the CNCF, as well as Kubernetes community members, taking responsibility for all day-to-day project operations. This will include testing and builds, as well as maintenance and operations for Kubernetes’ distribution.

Kubernetes was first released by Google in 2014, and was moved over to the CNCF, a neutral arbiter of cloud development technologies, shortly after its inception in 2015. The technology officially became the first to ‘graduate’ from the foundation in March, a sign that it had reached mature levels of governance and adoption. Last month Prometheus, an open source systems monitoring technology, became the second to graduate.

“With the rapid growth of Kubernetes, and broad participation from organisations, cloud providers and users alike, we’re thrilled to see Google Cloud hand over ownership of Kubernetes CI/CD to the community that helped build it into one of the highest velocity projects of all times,” said Dan Kohn, CNCF executive director.

“Google Cloud’s generous contribution is an important step in empowering the Kubernetes community to take ownership of its management and sustainability – all for the benefit of the project’s ever-growing user base,” Kohn added.

“Developing Kubernetes in the open with a community of contributors has resulted in a much stronger and more feature-rich project,” wrote William Denniss, product manager for Google Kubernetes Engine in a blog post. “By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project’s operations.”

At the Open Source Summit in Vancouver last week, the CNCF announced 38 new members had joined the foundation. Among the companies readers of this publication will recognise are hosting firm OVH, SQL database provider Cockroach Labs, and consulting firm InfraCloud Technologies.