All posts by James

CloudBees acquires Electric Cloud to further CI/CD mission

CloudBees has announced the acquisition of San Jose-based software veteran Electric Cloud, with the aim of becoming the ‘first provider of end-to-end continuous integration, continuous delivery, continuous deployment and ARA (application release automation)’.

The acquisition will marry two leaders in their respective fields. Electric Cloud was named a leader in reports from both Gartner and Forrester, covering application release orchestration and continuous delivery and release automation respectively.

CloudBees’ most famous contribution to the world of software delivery is of course Jenkins, the open source automation server used for continuous delivery. The original architects of Jenkins are housed at CloudBees, including CTO Kohsuke Kawaguchi.
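For readers less familiar with Jenkins, most day-to-day use revolves around pipelines triggered by code changes, but the server also exposes a remote access API. Here is a minimal sketch in Python – the server URL, job name and credentials are placeholders for illustration, not anything published by CloudBees or Electric Cloud – showing how a build could be kicked off programmatically:

```python
import requests

# Hypothetical values -- replace with a real Jenkins server, job and API token.
JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "example-pipeline"
AUTH = ("alice", "api-token")  # username / API token


def trigger_build(params=None):
    """Trigger a Jenkins job via its remote access API and return the queue URL."""
    # Fetch a CSRF crumb first; Jenkins rejects POSTs without one when
    # CSRF protection is enabled.
    crumb = requests.get(f"{JENKINS_URL}/crumbIssuer/api/json", auth=AUTH).json()
    headers = {crumb["crumbRequestField"]: crumb["crumb"]}

    endpoint = "buildWithParameters" if params else "build"
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/{endpoint}",
        auth=AUTH,
        headers=headers,
        params=params or {},
    )
    resp.raise_for_status()
    return resp.headers.get("Location")  # URL of the queued build item


if __name__ == "__main__":
    print(trigger_build({"GIT_BRANCH": "main"}))
```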

Last month, CloudBees announced the launch of the Continuous Delivery Foundation (CDF), leading the initiative alongside the Jenkins Community, Google, and the Linux Foundation. At the time, Kawaguchi said: “The time has come for a robust, vendor-neutral organisation dedicated to advancing continuous delivery. The CDF represents an opportunity to raise the awareness of CD beyond the technology people.”

From Electric Cloud’s side, the company hails CloudBees as the ‘dominant CI vendor, CD thought leader and innovator’. “CloudBees recognised the enormous value of adding release automation and orchestration to its portfolio,” the company wrote on its acquisition page. “With Electric Cloud, CloudBees integrates the market’s highest-powered release management, orchestration and automation tools into the CloudBees suite, giving organisations the ability to accelerate CD adoption.”

“As of today, we provide customers with best-of-breed CI/CD software from a single vendor, establishing CloudBees as a continuous delivery powerhouse,” said Sacha Labourey, CloudBees CEO and co-founder in a statement. “By combining the strength of CloudBees, Electric Cloud, Jenkins and Jenkins X, CloudBees offers the best CI/CD solution for any application, from classic to Kubernetes, on-premise to cloud, self-managed to self-service.”

Financial terms of the acquisition were not disclosed.


VMware’s blockchain now integrates with DAML smart contract language

VMware’s move into the blockchain space makes it the latest cloud vendor to get involved with the technology – and its offering has now been enhanced with the announcement of an integration with Digital Asset.

Digital Asset, which maintains DAML, an open source language for constructing smart contracts, is integrating the language with the VMware Blockchain platform. DAML was only recently open sourced, and the company noted the significance of pairing it with an enterprise-flavoured blockchain offering.

“DAML has been proven to be one of the few smart contract languages capable of modelling truly complex workflows at scale. VMware is delighted to be working together on customer deployments to layer VMware Blockchain alongside DAML,” said Michael DiPetrillo, senior director of blockchain at VMware. “Customers demand choice of language execution environments from their blockchain and DAML adds a truly robust and enterprise-focused language set to a blockchain platform with multi-language support.”

The timeline of the biggest cloud players and their interest in blockchain technologies is an interesting one. Microsoft’s initiatives have been long-standing, as have IBM’s, while Amazon Web Services (AWS) reversed its earlier reluctance and launched a blockchain service last year. VMware launched its own project, Project Concord, at VMworld in Las Vegas last year and followed this up with VMware Blockchain in beta in November.

Despite the interest around blockchain as a whole, energy consumption has been a target for VMware CEO Pat Gelsinger, who at a press conference in November described the technology’s computational complexity as ‘almost criminal.’

VMware was named by Forbes earlier this week in its inaugural Blockchain 50. The report – which carries similarities to the publication’s annual Cloud 100 rankings – aimed to analyse the companies with the most exciting blockchain initiatives, based in the US and with a minimum valuation or sales of $1 billion.


Qualcomm sees a key part of the market up for grabs with Cloud AI 100 chip launch

The race for artificial intelligence (AI) and the cloud continues to be well and truly joined. AI and the cloud can be seen as two of the technologies that, in tandem, will power the next cycle of business. As VMware CEO Pat Gelsinger put it last year: cloud enables mobile connectivity, mobile creates more data, more data makes AI better, AI enables more edge use cases, and more edge means more cloud is needed to store the data and do the computing.

As this publication has frequently argued, for those at the sharp end of the cloud infrastructure market, AI – along with blockchain, quantum and edge, to name three more – represents the next wave of cloud services, and it is where the new battle lines are being drawn. Yet there is a new paradigm afoot.

Qualcomm entered the fray last week with the launch of the Qualcomm Cloud AI 100. “Built from the ground up to meet the explosive demand for AI inference processing in the cloud, the Qualcomm Cloud AI 100 utilises the company’s heritage in advanced signal processing and power efficiency,” the press materials blazed. “With this introduction, Qualcomm Technologies facilitates distributed intelligence from the cloud to the client edge and all points in between.”

While the last dozen or so words of that statement may have seemed like the key takeaway, it is the power efficiency angle that makes most sense. Power efficiency is Qualcomm’s heritage – its technology powers millions of smartphones – but the company has not had the same impact in the data centre. In December, it announced it would lay off almost 270 staff, confirming it was ‘reducing investments’ in the data centre business.

Its competition in this field, chiefly Intel but also NVIDIA, is particularly strong. Yet Kevin Krewell, principal analyst at Tirias Research, told Light Reading last week that “to fit more easily into existing rack servers, new inference cards need to be low power and compact in size.” This, therefore, is where Qualcomm sees its opportunity.

With Cloud AI 100, Qualcomm promises more than 10 times greater performance per watt than the industry’s most advanced AI inference solutions deployed today, and a chip ‘specifically designed for processing AI inference workloads.’
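To put a performance-per-watt claim like that in context, here is a rough back-of-envelope calculation using entirely hypothetical throughput and power figures – they are not numbers Qualcomm has published:

```python
# Hypothetical figures purely for illustration -- not published benchmarks.
baseline_inferences_per_sec = 10_000   # an existing accelerator card
baseline_power_watts = 300

baseline_perf_per_watt = baseline_inferences_per_sec / baseline_power_watts

# A part claiming >10x performance per watt could spend that advantage two ways:
# the same throughput within roughly a tenth of the power budget...
same_throughput_power = baseline_inferences_per_sec / (baseline_perf_per_watt * 10)
# ...or roughly ten times the throughput inside the same power envelope.
same_power_throughput = baseline_perf_per_watt * 10 * baseline_power_watts

print(f"Baseline: {baseline_perf_per_watt:.1f} inferences/sec per watt")
print(f"10x part at equal throughput: ~{same_throughput_power:.0f} W")
print(f"10x part at equal power: ~{same_power_throughput:,.0f} inferences/sec")
```

The point is that such an advantage can be spent either on density or on power budget – which is exactly why low-power, compact inference cards matter for existing rack servers, as Krewell notes above.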

“Our all-new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today’s data centres,” said Keith Kressin, Qualcomm SVP product management. “Furthermore, Qualcomm Technologies is now well positioned to support complete cloud-to-edge AI solutions all connected with high speed and low-latency 5G connectivity.”

Crucially, this is an area where cooperation, rather than competition, with the big cloud infrastructure providers may be key. Microsoft was unveiled as a partner, with the two companies’ visions similar and collaboration continuing ‘in many areas.’

Writing for this publication in November, Dr. Wanli Min, chief machine intelligence scientist at Alibaba Cloud, noted how this rise was evolutionary rather than revolutionary. “For many organisations it has been a seamless integration from existing systems, with AI investment gathering pace quickly,” he wrote. “Over the next few years we can expect to see the industry continue to boom, with AI driving cloud computing to new heights, while the cloud industry helps bring the benefits of AI to the mainstream.”


HPE secures Nutanix and Google Cloud hybrid cloud partnerships

Hewlett Packard Enterprise (HPE) has been busy on the partnership front of late. The company has announced deals with Nutanix, to deliver an integrated ‘hybrid cloud as a service’ to market, as well as with Google Cloud around simplifying hybrid cloud adoption.

Both partnerships will utilise HPE GreenLake, the company’s consumption-based IT model. Nutanix’s Enterprise Cloud OS software will be delivered through GreenLake ‘to provide customers with a fully HPE-managed hybrid cloud that dramatically lowers total cost of ownership and accelerates time to value’, as the companies put it.

The deal with Google Cloud, the next step in the companies’ collaboration, will also offer a migration path for Anthos, Google’s newly-rebadged cloud services platform. HPE customers can use Anthos to manage public cloud and on-premises resources, with the company’s overall aim of providing customers with a consistent experience across all environments.

“By partnering with Google Cloud and leveraging a container-based approach, HPE can offer a seamless hybrid cloud experience with the unique option to do it all as-a-service,” said Phil Davis, president of hybrid IT and chief sales officer at HPE. “This approach, powered by Anthos and HPE GreenLake, gives our customers the freedom to modernise at their own pace with the HPE infrastructure of their choice.”

The Nutanix deal is of interest primarily because the two companies were previously vehement opponents. In 2017 Nutanix announced a string of partner agreements, notably with IBM around aligning Nutanix’s enterprise cloud with IBM’s Power Systems server line. Alongside this, the company separately said the same software would be available on HPE ProLiant systems, as well as Cisco UCS B-Series blade servers, as reported by ZDNet.

This came as news to HPE, which two days later put out a since-deleted blog post (Wayback link here, screenshot here), attributed to VP of marketing Paul Miller, titled ‘Don’t be misled… HPE and Nutanix are not partners.’

“HPE values support with unambiguous accountability. Something you won’t get from Nutanix software on third party infrastructure,” wrote Miller. “They’ve set up a three-vendor decision tree – hardware, software and hypervisor – with a third party agency to deliver support SLAs. This model requires formal agreements from all parties. HPE has not entered into this agreement and we do not support their software on our hardware.”

All change now, however – although one could potentially infer the balance of power based on the canned quotes in the press materials. HPE CEO Antonio Neri said the company was “expanding its leadership in [the as-a-service consumption market] by providing an additional choice to customers seeking a hybrid cloud alternative that promises greater agility at lower costs.” Nutanix chief executive Dheeraj Pandey said: “We are delighted to partner with HPE for the benefit of enterprises looking for the right hybrid cloud solution for their business.”

As far as Google is concerned, this is one of many partnerships coming out during the company’s Next event in San Francisco this week. Of most interest during the keynote yesterday, as this publication explored, were proposed deals with seven leading open source software vendors.


Google Cloud stresses hybrid and multi-cloud at Next – as well as sealing a major open source deal

Analysis: Google Cloud means open, hybrid and multi-cloud. The company used its keynote at Next in San Francisco to offer more flexibility around using other vendors such as Amazon Web Services (AWS) – but that’s where the familiarity ended.

The biggest product news to come out of the session was the move of its cloud services platform, rebranded as Anthos, to accommodate AWS and Microsoft Azure. Anthos lets users “build and manage modern hybrid applications across environments” through Kubernetes, as the official page puts it. With a cast list as long as one’s arm – more than 35 partners were cited on a slide, with Cisco and VMware, more on whom shortly, among the highlights – the goal was for Anthos to be ‘simple, flexible and secure.’
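Anthos itself is not shown here, but the underlying premise – one Kubernetes-shaped control surface wherever a cluster happens to run – can be sketched with the standard Kubernetes Python client. The kubeconfig context names below are hypothetical; the point is that the same calls work whether a cluster lives in GKE, on AWS or on-premises:

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names -- one cluster per environment.
CONTEXTS = ["gke-prod", "aws-eks-prod", "onprem-lab"]


def list_deployments(context: str):
    """List deployments in a named cluster; the API is identical wherever it runs."""
    # Load credentials for this specific context from the local kubeconfig.
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    for dep in apps.list_deployment_for_all_namespaces().items:
        print(f"[{context}] {dep.metadata.namespace}/{dep.metadata.name} "
              f"ready_replicas={dep.status.ready_replicas}")


if __name__ == "__main__":
    for ctx in CONTEXTS:
        list_deployments(ctx)
```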

Google Cloud chief exec Thomas Kurian – who has only been in the role for 10 weeks but whose productivity, as boss Sundar Pichai put it, was stretching Calendar and G Suite – noted the multi-cloud move came through listening to customers. Customers wanted three things: firstly hybrid, secondly multi-cloud, and finally a platform that “allows them to operate this infrastructure without complexity and to secure and manage across multiple clouds in a consistent way”, he added. The live demonstration came with a twist: the workload was being run on AWS.

In terms of partner news, the best was saved until last. Google Cloud announced partnerships with seven open source vendors in what Kurian described as the ‘first integrated open source ecosystem.’ “What this allows you to do, as a developer or customer, is to use the best of breed open source technology, procure them using standard Google Cloud credits, and get a single console [and] bill from Google,” said Kurian. “We support these along with partners and, as you grow, as you use these technologies, you share this success with our partners.”

The CEOs of six of these companies – Confluent, DataStax, Elastic, InfluxData, MongoDB and Neo4j – appeared in a video extolling Google’s approach to open source. The seventh, Redis Labs chief executive Ofer Bengal, appeared on stage with Kurian. “This is great for us because, as you know, monetising open source was always a very big challenge for open source vendors and more so in the cloud era,” he said, adding that Google had taken a ‘different approach’ from other cloud vendors.

This matters because, as regular readers of this publication will know, there has been a long-running grumble between open source companies and cloud vendors. Late last year, Confluent announced it was changing certain aspects of its license. Users could still download, modify and redistribute the code, but not – with one eye on the big cloud vendors – use it to build software as a service.

In February, Redis announced a further modification to its license. Speaking to CloudTech at the time, Bengal noted that, AWS aside, ‘the mood [was] trying to change’ and, as this publication put it, ‘inferred that partnerships between some cloud providers and companies behind open source projects may not be too far away.’

With that question now solved, it was interesting to note the way Google approached discussing its customer base – and it is here that another potential flashpoint could be analysed.

Google frequently cited three industries as key to its customer base: retail, healthcare, and financial services. More than once the company noted it worked with seven of the top 10 retailers. This is noteworthy because, as many industry watchers will recall, retail organisations have made noises about moving away from AWS over fears about Amazon’s retail arm. This has ranged from a full-throated roar in the case of Walmart, to more of a mid-range mew from Albertson’s after the latter signed a deal with Microsoft in January.

Kurian cited this ‘industry cloud’ capability as one of the three bulwarks of Google’s strategy. Building out its global infrastructure was seen as another, with Google CEO Sundar Pichai announcing two new data centre locations in Seoul and Salt Lake City. Pichai added, to illustrate the scale of Google’s expansion, that in 2013 the company’s planned footprint amounted to two Eiffel Towers in terms of steel; today, that has been expanded to at least 20. The third aspect was offering a digital transformation path augmented by Google’s AI and machine learning expertise.

From the partner side, David Goeckeler, EVP and GM of Cisco’s networking and security business, noted how the two companies had a similarly forward-looking view of cloud deployments. Cloud had traditionally been very application-centric, which was a fair strategy, he noted. But things have moved on from there to having apps in the data centre, at the edge and beyond – and connecting all these users means enterprises have had to rearchitect for the demands of cloud.

“We start with the premise of hybrid and multi-cloud – the realities of the environment where all of our customers are living today,” he said. Sanjay Poonen, chief operating officer at VMware, noted VMware and Google had ‘embraced Kubernetes big time’, particularly through the acquisition of Heptio, and that alongside the deal for VeloCloud, there was a rosy future for the two companies in networking. Poonen added that many of the benefits of Anthos will extend to hyperconverged infrastructure – an area on which he had recently been grandstanding in typically ebullient style.

Various new customers were also announced: retail in the shape of Kohl’s, healthcare in the form of Philips, and Chase and ANZ Bank from finance. Philips group CIO Alpna Doshi took to the stage to say the company had put 2,000 apps on Google’s cloud.

Kurian made his speaking debut as Google Cloud boss in February at a Goldman Sachs conference in San Francisco. The talk focused predominantly on Google’s enterprise push, with Kurian singling out larger, more traditional companies – a continual weakness for the company’s cloud arm – as well as exploring deals with systems integrators.

In November, when it was announced that Diane Greene would step down and Kurian would replace her, consensus at the time predominantly revolved around Google’s lack of penetration compared with the top two in cloud infrastructure – namely Azure and AWS. However, this wasn’t an exclusive view. Speaking to this publication at the time, Nick McQuire, VP enterprise at CCS Insight, argued that Greene had “laid some pretty good foundations for Kurian to come in…we’ll see where they go from there.”

It would seem from today’s keynote that a much clearer path has been set. “Thomas Kurian’s message from day one is loud and clear: Google Cloud is taking hybrid and now multi-cloud very seriously,” said McQuire. “Enterprises continue to question whether to fully embrace a single public cloud – which workloads are best to ‘lift and shift’ from a cost, security and compliance perspective – or how to avoid supplier lock-in, one of their biggest concerns at the moment.

“With the arrival of Anthos and in particular its support of open source, particularly Kubernetes, Google is now taking a much more realistic path in meeting customers where they are on their cloud journeys and is aiming to become the standard in hybrid, multi-cloud services in this next phase of the cloud market,” McQuire added.

You can find out more about Google Next 19 here.

Picture credit: Google Next/Screenshot


Bitglass secures $70m series D funding to consolidate its CASB leadership

Cloud access security broker (CASB) Bitglass has announced a $70 million (£53.6m) funding round aimed at consolidating its leadership of the CASB and cloud security market.

The round, a series D, included a new investor in the shape of Quadrille Capital, as well as existing investors Future Fund, New Enterprise Associates (NEA), Norwest, and Singtel Innov8. NEA, as regular readers of this publication will be aware, is a regular investor in the cloud space, with previous bets including Cloudflare, Databricks and Datrium among others.

The role of a CASB is essentially to sit between an organisation’s on-premises infrastructure and a cloud provider’s infrastructure, taking the strain of cloud security away from the client. As TechTarget puts it, it ‘acts as a gatekeeper, allowing the organisation to extend the reach of their security policies beyond their own infrastructure.’
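In very rough terms – and this is a conceptual sketch of the gatekeeper idea, not how Bitglass or any particular CASB product is implemented – the pattern is an intermediary that evaluates every cloud-bound request against the organisation’s policy before letting it through:

```python
import re

# Illustrative policy only: block uploads containing card-number-like strings
# and restrict unmanaged devices to read-only access.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def casb_gatekeeper(request: dict):
    """Decide whether a cloud-bound request is allowed under the organisation's policy."""
    if request["action"] == "upload" and CARD_PATTERN.search(request.get("body", "")):
        return False, "blocked: possible payment card data leaving the organisation"
    if not request.get("managed_device") and request["action"] != "download":
        return False, "blocked: unmanaged devices are read-only"
    return True, "allowed"


# Example: a user on a personal laptop tries to upload a spreadsheet.
print(casb_gatekeeper({
    "user": "alice",
    "action": "upload",
    "managed_device": False,
    "body": "4111 1111 1111 1111, expiry 04/22",
}))
```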

The need for a CASB has significantly increased as organisations continue to fall short on their end of the ‘shared responsibility’ bargain for cloud security. The oft-repeated – yet not oft-heeded – mantra is that cloud vendors are responsible for security of the cloud, while the customer is responsible for security in the cloud, covering data, applications, and identity and access management. Only last week, the disclosure by UpGuard of Facebook user data being exposed to the public internet led to more questions.

As a result, last November saw analyst firm Gartner issue its first Magic Quadrant for the area. Bitglass was placed as a leader alongside McAfee, Netskope and Symantec – the latter three all worth noting as much wider-purpose security providers. This was a point Gartner alluded to in its analysis: while Bitglass’ technical expertise was widely praised, the company did not come up as often as the other leaders in clients’ enquiries.

Writing for this publication in August, Hatem Naguib, SVP security at Barracuda Networks, noted his belief that many organisations continued to misunderstand the shared responsibility model. “The organisations benefiting the most from public cloud are those that understand their public cloud provider is not responsible for securing data or applications, and are augmenting security with support from third party vendors,” Naguib wrote.

“Cloud adoption is disruptive of incumbents securing networks, servers and other infrastructure,” said Nat Kausik, CEO of Bitglass in a statement. “Our next-gen CASB uniquely secures against data leakage and threats without installing more hardware and software.”

Total funding for the company now stands at just over $150 million.


Why Africa’s cloud and data centre ecosystem will – eventually – be a land of serious opportunity

Take a look at the data centre footprint of the three largest cloud infrastructure vendors – Amazon Web Services (AWS), Microsoft Azure and Google Cloud – and you are met with a breathless marketing message.

Azure promises 54 regions worldwide – the terminology differs for each provider – and claims ‘more global regions than any other cloud provider.’ AWS’ cloud ‘spans 61 availability zones within 20 geographic regions around the world’, while Google promises 58 zones, 134 network edge locations and ‘availability in 200+ countries and territories’.

If you look at the maps, however, two continents stand out. South America, population approximately 420 million, has only Sao Paulo – albeit the largest city on the continent, representing 3.4% of the entire South American populace – as a designated data centre base for all three providers. Africa, meanwhile – population approximately 1.22 billion – is even barer.

It does appear to be something of an oversight. Yet things are changing.

A recent report from Xalam Analytics, a US-based research and analysis firm, explores the ‘rise of the African cloud.’ The study went live as Microsoft’s Johannesburg and Cape Town data centre sites were switched on, and examines Africa’s ‘cloud readiness’, industry expectations, and the associated services.

Ultimately, the fact remains that, thus far, Africa has not been seen as worth investing in. According to Xalam’s estimates, less than 1% of global public cloud services revenue is generated on the continent. This figure, the company adds, is lower than mobile operators’ revenue from SMS.

The report argued that only five of the 25 African countries analysed could be considered ‘cloud-ready’: South Africa, well out in front, followed by Mauritius, Kenya, Tunisia and Morocco. Eleven nations, including Egypt, Nigeria and Ivory Coast, were considered on the cusp of cloud-readiness.

This makes for an interesting comparison when put against the biennial cloud studies from the Asia Cloud Computing Association (ACCA). The overall landscape in Asia is one of contrasts. Those at the top, such as Singapore and Hong Kong, are considered global leaders. Those at the bottom, such as India and China, arguably share many of the failings of African countries: large land masses and poor connectivity. South Africa, ACCA estimated, would have placed just under halfway in the 14-nation Asian ranking, between Malaysia and the Philippines.

These issues are widely noted in the Xalam analysis. “Africa is a tricky place for cloud services,” the company wrote in a blog post. “Many countries don’t have broadband speeds adequate and affordable enough to support reliable cloud service usage. Where cloud services are built upon a reliance on third party providers, provider distrust is deeply ingrained in many African enterprises, having been nurtured by decades of failing underlying infrastructure and promises not kept.

“Where the public thrives on an open, decentralised Internet, many African governments profess a preference for a more centralised, monitored model – and some are prone to shutting down the Internet altogether,” the company added.

Guy Zibi, founder and principal of Xalam Analytics, said there were two other concerns uncovered in the research. “Data sovereignty concerns are prevalent; in most African markets, the sectors that typically drive the uptake of cloud services – financial, healthcare, and even the public sector – are not allowed to store user data out of the country,” he told CloudTech. “Given that there was no public cloud data centre in the region and local options were not entirely trusted, this naturally impacted uptake.

“There is [also] still a fair amount of distrust in the ability of third-party providers to manage mission-critical enterprise workloads,” added Zibi. “In many countries, the underlying supporting infrastructure – power primarily – has historically been shaky, making it difficult for providers to provide quality services. That lack of trust has long held up the expansion of third-party managed services, including cloud services.”

Yet the winds of change – “trust levels are improving but there’s some way to go, and from a low base”, as Zibi put it – are blowing. Alongside Microsoft’s newly-opened data centre complexes in South Africa, AWS is planning a region in Cape Town. Zibi said Microsoft’s launch was a ‘game-changer’ and a ‘seminal event for African cloud services.’

“It did four things in our view,” he said. “It offered validation from one of the world’s largest cloud providers that there was deep enough cloud demand in the region and a credible economic case to support this type of investment. It suggested that the underlying supporting infrastructure – South Africa’s at least – was solid enough to support a hyperscale data centre, and it probably accelerated local service deployment timelines by competitors like AWS and Huawei.

“It [also] gave more confidence to local corporate customers that they could start moving to the cloud more aggressively,” Zibi added.

Naturally, when such launches are announced, customers are asked to tell their stories. Microsoft gave three examples of companies utilising its cloud services. Nedbank, Microsoft said, had ‘adopted a hybrid and multi-vendor cloud strategy’ with Microsoft as an ‘integral’ partner; eThekwini Water was using Azure for critical application monitoring and failover among other use cases; while The Peace Parks Foundation was utilising rapid deployment of infrastructure as well as offering radio over Internet protocol – ‘a high-tech solution to a low-tech problem’ – to improve radio communication across remote and isolated areas.

Zibi noted that the facets binding the cloud-ready African nations together were strong adoption of high speed broadband, adequate power supply, good data infrastructure and solid regulations fostering adoption of cloud services. While Azure and AWS are the leaders, Google is seen by Xalam only as a challenger. In a move that is perhaps indicative of the state of play in African IT more generally, Oracle and IBM – alongside VMware – are the next strongest providers.

The report predicted that top-line annual cloud services revenue in Africa will double between now and 2023, with public cloud services revenue tripling in that time. The barriers understandably remain in such a vast area, but the upside is considerable. “Few other segments in the African ICT space are as likely to generate an incremental $2bn in top line revenue over the next five years, and at least as much in adjacent enabling ecosystem revenue,” the company noted.

“But the broader upside is unmistakable, and the battle for the African cloud is only beginning.”

You can find out more about the report here (customer access only).


Financial services moving to hybrid cloud – but rearchitecting legacy systems remains a challenge

The move to hybrid cloud is one which virtually every industry is undertaking – but the financial services industry is getting there ahead of most.

According to the latest data issued by Nutanix for its Enterprise Cloud Index Report, more than one in five financial organisations polled (21%) are deploying a hybrid cloud model today, compared with the global average of 18.5%. Some 91% of those polled said hybrid cloud was their ‘ideal’ IT model.

Yet while the push to hybrid is still an important one, there are plenty of areas where financial firms are struggling. Like many other industries – insurance being another one, as sister publication AI News found out when speaking to LV= earlier this week – a serious concern remains over rearchitecting and organising legacy systems.

Some 88% of respondents to the Nutanix survey said that while they expected hybrid cloud to positively impact their businesses, hybrid cloud skills themselves were scarce. According to the data, financial services firms run more traditional data centres than other industries, with 46% penetration, as well as a lower average usage of private clouds: 29% compared to 33% overall.

Naturally, this forms part of a wider report which covers many more industries. Yet Nutanix wanted to shed light on the financial side because of its apparent highs and lows.

“Increased competitive pressure, combined with higher security risks and new regulations, will require all of the industry to look at modernising their IT infrastructure,” said Chris Kozup, Nutanix SVP of global marketing in a statement. “The current relatively high adoption of hybrid cloud in the financial services industry shows that financial firms recognise the benefits of a hybrid cloud infrastructure for increased agility, security and performance.

“However, the reality is that financial services firms still struggle to enable IT transformation, even though it is critical for their future,” added Kozup.

Writing for this publication last month, Rob Tribe, regional SE director for Western Europe at Nutanix, noted how organisations across industries were waking up to the need for hybrid, but stressed the need for up-to-date tools to help expedite the process. Expert analytical tools, cloud-based disaster recovery (DR) and cross-cloud networking tools, he argued, were key for performance, availability and security.

“Delivering these and other hybrid cloud management tools will be far from easy, and will require a lot more cooperation between cloud vendors and service providers than we’re seeing at present,” wrote Tribe. “However, with growing numbers of enterprise customers moving to hybrid, it’s very much in everyone’s best interests to work together.

“It’s time to join up the dots between clouds and deliver the visibility, technologies and tools needed to make it easier to exploit this exciting – and soon to be de facto – way of provisioning and managing enterprise IT,” Tribe added.

You can analyse the full Nutanix survey data here.


Facebook records exposed on AWS cloud server lead to more navel-gazing over shared responsibility

Researchers at security firm UpGuard have disclosed two separate instances of Facebook user data being exposed to the public internet – and the episode again raises questions over cloud security strategy and shared responsibility.

The story, initially broken by Bloomberg, noted how one dataset, originating from Mexico-based Cultura Colectiva, contained more than 540 million records detailing comments, likes, reactions, and account names among other data. The second, from a now-defunct Facebook-integrated app called ‘At the Pool’, contained plaintext passwords for 22,000 users.

UpGuard said that the Cultura Colectiva data was of greater concern in terms of disclosure and response. UpGuard sent its first notification email to Cultura Colectiva on January 10 this year, with a follow-up email four days later – to no response. Amazon Web Services (AWS), on which the data was stored, was contacted on January 28, with a reply arriving on February 1 informing UpGuard that the bucket’s owner had been made aware of the exposure.

Three weeks later, however, the data was still not secured. A further email from UpGuard to AWS was immediately responded to. Yet according to the security researchers, it was ‘not until the morning of April 3, after Facebook was contacted by Bloomberg for comment, that the database backup, inside an AWS S3 storage bucket titled ‘cc-datalake’, was finally secured.’

So whither both parties? For Facebook, this can be seen as another blow, as UpGuard explained. “As Facebook faces scrutiny over its data stewardship practices, they have made efforts to reduce third party access. But as these exposures show, the data genie cannot be put back in the bottle,” the company said.

“Data about Facebook users has been spread far beyond the bounds of what Facebook can control today,” UpGuard added. “Combine that plenitude of personal data with storage technologies that are often misconfigured for public access, and the result is a long tail of data about Facebook users that continues to leak.”

As far as AWS is concerned, this is again not its first rodeo in this department. But the question of responsibility, as this publication has covered on various occasions, remains a particularly thorny one.

Stefan Dyckerhoff, CEO at Lacework, a provider of automated end-to-end security across the biggest cloud providers, noted that organisations needed to be more vigilant. “Storing user data in S3 buckets is commonplace for every organisation operating workloads and accounts in AWS,” said Dyckerhoff. “But as the Facebook issue highlights, they can inadvertently be accessible, and without visibility and context around the behaviour in those storage repositories, security teams simply won’t know when there’s a potential vulnerability.”

This admittedly may be easier said than done given the sheer number of partners either building apps on the biggest companies’ platforms or using their APIs – many of whom may no longer exist. Yet it could be argued that, when it comes to shared responsibility, both parties may be missing the mark. “At issue is not [the] S3 bucket, but how it’s configured, and the awareness around configuration changes – some of which could end up being disastrous,” added Dyckerhoff.

In February, Check Point Software found that three in 10 respondents to its most recent security report still affirmed that security was the primary responsibility of the cloud service provider. This concerning issue is one that the providers have tried to remediate. In November AWS announced extra steps to ensure customers’ S3 buckets didn’t become misconfigured, having previously revamped its design to give bright orange warning indicators as to which buckets were public.
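For teams worried about the same failure mode, those bucket-level settings can be audited programmatically. Below is a minimal sketch using boto3 – the bucket name is a placeholder, and a real audit would also inspect bucket policies and individual object ACLs:

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "example-bucket"  # placeholder name

s3 = boto3.client("s3")


def public_access_block(bucket: str) -> dict:
    """Return the bucket's Block Public Access settings, or an empty dict if none are set."""
    try:
        resp = s3.get_public_access_block(Bucket=bucket)
        return resp["PublicAccessBlockConfiguration"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return {}  # nothing configured -- the bucket relies on ACLs/policies alone
        raise


def looks_public(bucket: str) -> bool:
    """Flag a bucket whose ACL grants access to 'AllUsers' (i.e. anyone on the internet)."""
    acl = s3.get_bucket_acl(Bucket=bucket)
    return any(
        grant["Grantee"].get("URI", "").endswith("/global/AllUsers")
        for grant in acl["Grants"]
    )


if __name__ == "__main__":
    print("Block Public Access:", public_access_block(BUCKET) or "not configured")
    print("Publicly readable via ACL:", looks_public(BUCKET))
```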

Writing for this publication in August, Hatem Naguib, senior vice president for security at Barracuda Networks, outlined the rationale. “Public cloud customers need to become clearer on what their responsibility is for securing their data and applications hosted by public cloud providers,” Naguib wrote. “Their platforms are definitely secure and migrating workloads into the cloud can be much more secure than on-premise data centres – however organisations do have a responsibility in securing their workloads, applications, and operating systems.”

You can read the full UpGuard post here.


Chef aims to cook on gas with newly unveiled ‘100% open always’ mentality

Software and DevOps provider Chef has announced a commitment to move to open source for all of its software going forward, saying it “welcomes any use of its open source projects for any purpose.”

The move will see Chef use the Apache 2.0 license and allow open source collaboration under the four essential freedoms of FOSS (free and open source software): the freedom to run the program as wished, for any purpose; the freedom to study how the program works and change it so it does computing as wished; the freedom to redistribute copies to help others; and the freedom to distribute copies of modified versions to others.

CEO Barry Crist wrote in a blog post explaining the move that the company was ‘not making this change lightly.’ “Over the years we have experimented with and learned from a variety of different open source, community and commercial models, in search of the right balance,” Crist wrote.

“We believe that this change, and the way we have made it, best aligns the objectives of our communities with our own business objectives,” he added. “Now we can focus all of our investment and energy on building the best possible products in the best possible way for our community without having to choose between what is ‘proprietary’ and what is ‘in the commons.’

“Most importantly, we can do that, with each of you, completely in the open.”

In other words, future software produced will be created in public repos, while the company also promised greater visibility to the public in terms of its product development process and roadmap.

This is being extended to Chef’s enterprise customers as well with the launch of the Chef Enterprise Automation Stack, which looks at ‘expressing infrastructure, security policies and application lifecycle as code’. Previously distinct products – Chef Automate, Infra, InSpec, Habitat and Workstation – will be unified ‘to enable a seamless transition from establishing compliance through application automation.’

“Enterprises demand a more curated and streamlined way to deploy and update our software and content,” added Crist. “They want a relationship with us as the leading experts in DevOps, automation, and Chef products…and beyond just technical innovations, these companies require assurance in the form of warranties, indemnifications, and support. We will make our distributions freely available for non-commercial use, experimentation, and individuals so anyone can get started with ease.”

The strategy seems to make sense from a user perspective, with many IT organisations eschewing proprietary software or adopting an ‘open-first’ mentality. As Holger Mueller, VP at Constellation Research, put it, drawing clear lines between open source and commercial while ensuring they were the same ‘serves both vendors’ and their users’ most pressing needs.’

It may be interesting to explore this in the context of other moves made in the market. As many open source players have found in recent months, their previous licensing models have felt not quite restrictive enough. Redis Labs was especially vocal, explaining in February that it was clarifying its conditions – Apache 2.0 modified with the Commons Clause – because developers were left unsure whether they were in the right or wrong. Other companies looking at licensing changes include MongoDB and Confluent.

You can read the full Chef blog post here.
