All posts by James

Joyent bids farewell to the public cloud in ‘difficult’ decision

It was one of the most innovative early-stage cloud vendors – but Joyent’s public cloud offering will be no more.

The company announced its departure from the public cloud space in a blog post today, scaling back to serve only customers of its single-tenant cloud offering.

Affected customers have five months to find a new home; a documentation page confirmed the Joyent Triton public cloud will reach end of life on November 9, while the company has separately put together a list of available partners, including Microsoft Azure and OVH.

Steve Tuck, Joyent president and chief operating officer (COO), cited strained resources in developing both its public cloud and single-tenant cloud as the reason behind a ‘difficult’ decision.

“To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home,” Tuck wrote. “We are truly grateful for your business and the commitment that you have shown us over the years; thank you.”

Joyent had been acquired by Samsung in 2016 after the Korean giant had explored Manta, the company’s object storage system, for implementation. Samsung liked the product so much that it bought the company outright; as Bryan Cantrill, CTO of Joyent, explained at the time, Samsung offered hardware to Joyent after the scale of its proposed deployment proved too much for the startup to cope with.

Prior to the days of public cloud and infrastructure as a service (IaaS) domination from Amazon Web Services (AWS), Microsoft, Google, and other hyperscalers with frighteningly deep pockets, Joyent enjoyed a stellar reputation. The company was praised by Gartner, in its 2014 IaaS Magic Quadrant, for having a “unique vision”, as well as previously being the corporate steward of Node.js, growing it into a key standard for web, mobile, and Internet of Things (IoT) architectures.

“By providing [an] easy on-ramp to on-demand cloud infrastructure, we have had the good fortune to work with an amazing array of individuals and companies, big and small,” added Tuck.


Organisations need to ‘acknowledge challenges’ in not keeping 100% uptime, argues Veeam

Downtime is dragging organisations down: according to a new study from Veeam, three in four organisations admit they are not able to meet users’ demands for uninterrupted access to applications and data.

The findings appear in the company’s latest Cloud Data Management Report, which surveyed more than 1,500 senior business and IT leaders across 13 countries. Ultimately, more sophisticated data management is an area in which Veeam positions itself as an expert – the company describes itself as the leader in ‘cloud data management’ – yet the stats are interesting.

In particular, the research found that lost data from mission-critical application downtime costs organisations more than $100,000 per hour on average, while app downtime translates to a cost of $20.1 million globally in lost revenue and productivity.

Evidently, organisations are struggling with their current data management methods: 44% of those polled said more sophisticated data management would be critical to their organisation’s success in the coming two years, four in five respondents said better data management strategies led to greater productivity, and two thirds reported greater stability.

Perhaps surprisingly, software as a service (SaaS) adoption was not completely saturated among those polled; just over three quarters (77%) said they were already using it, a figure set to rise to 93% by the end of 2019. The golden nugget comes when organisations see the dividend of adopting new technologies: financial benefits arrive after nine months on average, with operational benefits arriving after approximately seven months.

“We are living in a data-driven age, and organisations need to wake up and take action to protect their data,” said Ratmir Timashev, Veeam co-founder and EVP sales and marketing. “Businesses must manage their data in a way that always delivers availability and leverage its value to drive performance. This is no longer a luxury, but a business necessity.

“There is a significant opportunity and competitive advantage for those who effectively manage their data,” Timashev added. “Ask yourself – are you confident that your business data will always be available? If you are unsure it’s time to act – and our study shows that many are not acting fast enough.”

You can find out more about the Veeam report here (email required).


Microsoft and Oracle partner up to interconnect clouds – with retail customers cited

Here’s proof that cloudy collaboration can happen even at the highest levels: Microsoft and Oracle have announced an ‘interoperability partnership’ aimed at helping customers migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud.

Organisations that are customers of both vendors will be able to connect Azure and Oracle Cloud seamlessly. The Oracle Ashburn data centre and the Azure US East facilities are the only ones available for connection at this stage, though both companies plan to expand to additional regions.

The two companies will also offer unified identity and access management to manage resources across Azure and Oracle Cloud, while Oracle’s enterprise applications, such as JD Edwards EnterpriseOne and Hyperion, can be deployed on Azure with Oracle databases running in Oracle’s cloud.

“As the cloud of choice for the enterprise, with over 95% of the Fortune 500 using Azure, we have always been first and foremost focused on helping our customers thrive on their digital transformation journeys,” said Scott Guthrie, executive vice president for Microsoft’s cloud and AI division in a statement. “With Oracle’s enterprise expertise, this alliance is a natural choice for us as we help our joint customers accelerate the migration of enterprise applications and databases to the public cloud.”

This move may come as a surprise to those who see Microsoft and Oracle as competitors in public cloud, but it is by no means the most surprising cloud partnership; that honour still goes to Oracle and Salesforce’s doomed romance in 2013.

Indeed, the rationale is a potentially interesting one. The press materials mentioned three customers. Aside from oilfield services giant Halliburton, the other two – Albertsons and Gap Inc – are worth considering. Albertsons, as regular readers of this publication will know, moved over to Microsoft earlier this year. At the time, CIO Anuj Dhanda told CNBC the company went with Azure because of its ‘experience with big companies, history with large retailers and strong technical capabilities, and because it [wasn’t] a competitor.’

Gap was announced as a Microsoft customer in a five-year deal back in November. Again speaking with CNBC – and as reported by CIO Dive – Shelley Branston, Microsoft corporate VP for global retail and consumer goods, said retailers shied away from Amazon Web Services (AWS) because they want ‘a partner that is not going to be a competitor of theirs in any other parts of their businesses.’

Albertsons said in a statement that the Microsoft/Oracle alliance would allow the company ‘to create cross-cloud solutions that optimise many current investments while maximising the agility, scalability and efficiency of the public cloud’, while Gap noted the move would help ‘bring [its] omnichannel experience closer together and transform the technology platform that powers the Gap Inc. brands’.

Yet it’s worth noting that the retail cloud ‘war’ may be a little overplayed. Following the Albertsons move, Jean Atelsek, digital economics unit analyst at 451 Research, told CloudTech: “It’s easy to get the impression that retailers are fleeing AWS. Microsoft’s big partnership with Walmart seems to be the example that everyone wants to universalise across the entire cloud space. However, since a lot of retailers also sell through/on AWS, they’re less likely than Walmart to see Amazon (and by extension AWS) as the devil.”


NASCAR moves onto AWS to uncover and analyse its racing archive

As sporting teams and franchises continue to realise the value of their archives – and balk at how much data they contain – many are in the process of migrating their operations to the cloud. NASCAR is the latest, announcing it will utilise Amazon Web Services (AWS) for archiving purposes.

The motor racing governing body is set to launch new content from its archive, titled ‘This Moment in NASCAR History’, on its website, with the service powered by AWS. NASCAR is also using image and video analysis tool Amazon Rekognition – otherwise known for its facial recognition capabilities – to automatically tag specific video frames with metadata for easier search.
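For readers curious what that tagging step might look like, here is a minimal sketch using the AWS SDK for Python (boto3) to run Rekognition label detection on a single extracted frame stored in S3. The bucket name, object key and confidence threshold are illustrative assumptions, not details of NASCAR’s actual pipeline.

```python
# Hypothetical sketch: label a single archive video frame with Rekognition.
# Assumes frames have already been extracted to S3; names are illustrative.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {
        "Bucket": "example-archive-frames",          # hypothetical bucket
        "Name": "1998/daytona-500/frame_04512.jpg",  # hypothetical key
    }},
    MaxLabels=10,
    MinConfidence=80.0,
)

# Each returned label (e.g. 'Car', 'Crowd', 'Flag') becomes searchable metadata.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```

In a production pipeline the labels would be written to a search index alongside frame timestamps rather than printed, but the call above is the core of the automatic tagging idea.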

“We are pleased to welcome AWS to the NASCAR family,” said Jon Tuck, NASCAR chief revenue officer, in a statement. “This relationship underscores our commitment to accelerate innovation and the adoption of cutting-edge technology across our sport.

“NASCAR continues to be a powerful marketing vehicle and will position AWS’s cutting-edge cloud technology in front of industry stakeholders, corporate sponsors, broadcast partners, and ultimately our fans,” Tuck added.

The move marks another key sporting client in AWS’ roster. In July, Formula 1 was unveiled as an Amazon customer, with the company moving the majority of its infrastructure from on-premises data centres to AWS. Formula 1 is also using various AWS products, from Amazon SageMaker to apply machine learning models to more than 65 years of race data, to AWS Lambda for serverless computing.

Ross Brawn, Formula 1 managing director of motor sports, took to the stage at AWS re:Invent in November to tell attendees more about the company’s initiatives. The resultant product, ‘F1 Insights Powered By AWS’, was soft-launched last season, giving fans deeper race insights, and Brawn noted plans to integrate further telemetry data, as well as to use high performance computing (HPC) to simulate racing conditions that lead to closer racing.

Two weeks after Formula 1 was unveiled, Major League Baseball (MLB) extended its partnership with AWS, citing machine learning (ML), artificial intelligence, and deep learning as key parts of its strategy. The league already used Amazon for various workloads, including Statcast, its tracking and statistics platform, but added SageMaker for ML use cases. Among the most interesting was its plan to use SageMaker, alongside Amazon Comprehend, to “build a language model that would create analysis for live games in the tone and style of iconic announcers.”

NASCAR is also keen to utilise these aspects of Amazon’s cloud. The company said AWS was its preferred ‘cloud computing, cloud machine learning and cloud artificial intelligence’ provider.

It’s worth noting however that AWS is not the only game in town. The Football Association (FA) announced it was partnering with Google as its official cloud and data analytics partner last week, while the Golden State Warriors are another confirmed customer of Google’s cloud.

You can read more about the NASCAR move here.


Enterprises not seeing total fulfilment with cloud strategies – but hybrid remains the way to go

For enterprises looking to migrate to the cloud, with sprawling workloads and data, it can be a long, arduous journey. According to a new survey, more than two thirds of large enterprises are not getting the full benefits of their cloud migration journeys.

The study from Accenture, titled ‘Perspectives on Cloud Outcomes: Expectation vs. Reality’, polled 200 senior IT professionals from large global businesses and identified security and complexity of business and operational change as key barriers to cloud success.

This doesn’t mean enterprises struggle to see any benefits of the cloud – overall satisfaction averaged above 90% – but when it came to cost, speed, business enablement and service levels, only one in three companies said they were fully satisfied on those metrics.

This breaks down further when looking at specific rollouts. Overall, enterprises are seeing greater benefits the more chips they put in; satisfaction levels climb to almost 50% among those with heavy investments, compared with less than 30% for those just starting their journeys.

When it came to public and hybrid cloud, the results showed an evident cost versus speed trade-off. More than half of those with public cloud workloads said they had fully achieved their cost objectives, while for speed it dropped below 30%. Hybrid cloud initiatives, the research noted, saw much more consistent results across the board, if not quite the same cost savings.

This makes for interesting reading when compared with similar research. According to a study from Turbonomic in March, the vast majority of companies ‘expect workloads to move freely across clouds’, with multi-cloud becoming the de facto deployment model for organisations of all sizes.

Yet the Accenture study argued this would not be plain sailing: 42% of those polled said a lack of skills within their organisation hampered their initiatives. Securing cloud skills is of course a continuing concern for the industry – but according to Accenture, managed service providers (MSPs) may provide the answer, with 87% of those polled saying they would be interested in pursuing that route.

“Like most new technologies, capturing the intended benefits of cloud takes time; there is a learning curve influenced by many variables and barriers,” said Kishore Durg, senior managing director of Accenture Cloud for Technology Services. “Taking your cloud program to the next level isn’t something anyone can do overnight – clients need to approach it strategically with a trusted partner to access deep expertise, show measurable business value, and expedite digital transformation.

“If IT departments fail to showcase direct business outcomes from their cloud journeys, they risk becoming less relevant and losing out to emerging business functions, like the office of the chief data officer, that are better able to use cloud technologies to enable rapid innovation,” added Durg.

You can read the full report here (pdf, no opt-in required).


Google confirms network congestion as contributor to four-hour cloud outage

Google has confirmed a ‘network congestion’ issue which affected various services for more than four hours on Sunday has since been resolved.

A status update at 1225 PT noted the company was investigating an issue with Google Compute Engine, later diagnosed as high levels of network congestion across eastern USA sites. A further update arrived at 1458 to confirm engineering teams were working on the issue before the all-clear was sounded at 1709.

“We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimise future recurrence,” the company wrote in a statement. “We will provide a detailed report of this incident once we have completed our internal investigation.”

The outage predominantly affected users in the US, with some European users also seeing issues. While various Google services, including Google Cloud, YouTube, and G Suite, were affected, many companies that run on Google’s cloud also experienced problems. Snapchat – a long-serving Google Cloud customer, considered a flagship client before the company’s major enterprise push – saw downtime, as did gaming messaging service Discord.

According to network intelligence firm ThousandEyes, network congestion is a ‘likely root cause’ of the outage. The company spotted services behaving out of sync as early as 1200 PT at sites including Ashburn, Atlanta and Chicago, with recovery only beginning at approximately 1530. “For the majority of the duration of the 4+ hour outage, ThousandEyes detected 100% packet loss for certain Google services from 249 of our global vantage points in 170 cities around the world,” said Angelique Medina, product marketing director at ThousandEyes.
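As a rough illustration of what a single vantage point measures – a deliberate simplification, not ThousandEyes’ actual methodology – a reachability probe can estimate the failure rate of connections to a service endpoint. The Python sketch below uses plain TCP connection attempts; the hostname is illustrative.

```python
# Minimal sketch of a single-vantage-point reachability probe.
# Real monitoring platforms use dedicated agents and path-level telemetry;
# this simply estimates the share of failed TCP connection attempts.
import socket

def failure_rate(host: str, port: int = 443, attempts: int = 20,
                 timeout: float = 2.0) -> float:
    failures = 0
    for _ in range(attempts):
        try:
            socket.create_connection((host, port), timeout=timeout).close()
        except OSError:
            failures += 1
    return 100.0 * failures / attempts

# A result of 100% would mirror the total loss reported during the outage.
print(f"storage.googleapis.com: {failure_rate('storage.googleapis.com'):.0f}% failed attempts")
```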

Previous Google cloud snafus have shown the company can learn lessons. In November 2015 Google Compute Engine went down for approximately 70 minutes, with the result being the removal of manual link activation for safety checks. The following April, services went down for 18 minutes following a bug in Google Cloud’s network configuration management software.  

According to research from Gartner and Krystallize Technologies published last month, Microsoft is the poor relation among the biggest three cloud providers when it comes to reliability. As reported by GeekWire, 2018 saw Amazon and Google achieve almost identical uptime statistics, at 99.9987% and 99.9982% respectively. Microsoft, meanwhile, trailed with 99.9792% – a ‘small but significant’ amount.
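To put those figures in perspective, a quick back-of-the-envelope conversion (our own arithmetic, not part of the original research) turns each uptime percentage into downtime over a year:

```python
# Convert the reported 2018 uptime percentages into downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for provider, uptime_pct in [("Amazon", 99.9987),
                             ("Google", 99.9982),
                             ("Microsoft", 99.9792)]:
    downtime = MINUTES_PER_YEAR * (100.0 - uptime_pct) / 100.0
    print(f"{provider}: ~{downtime:.0f} minutes of downtime per year")

# Amazon: ~7 minutes, Google: ~9 minutes, Microsoft: ~109 minutes --
# 'small but significant' works out to roughly an order of magnitude.
```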


Hyperscaler cloud capex declines – but ‘enormous barriers’ remain to reach the top table

The spending of the cloud hyperscalers has come to a comparative halt, according to the latest note from Synergy Research.

The analyst firm found that total capex across the largest cloud vendors in the first quarter of 2019 came to just over $26 billion, a 2% downturn year on year. That comparison excludes Google’s $2.4bn outlay on Manhattan real estate, which inflated the Q1 2018 figure even further; excluding such exceptional items, this represents the first quarterly downturn since the beginning of 2017.

In terms of data centre launches in 2019, Google has been the most active vendor, opening its doors in Zurich in March and Osaka earlier this month. At the very beginning of this year, Equinix and Alibaba Cloud focused on Asia Pacific data centre launches, in Singapore and Indonesia respectively.

Last month Synergy argued that global spend on data centre hardware and software had grown by 17% compared with the previous year, driven by continued demand for public cloud; more extensive server configurations pushed enterprise selling prices higher.

In order, the top five hyperscale spenders in the most recent quarter were Amazon, Google, Facebook, Microsoft and Apple.

“After racing to new capex highs in 2018 the hyperscale operators did take a little breather in the first quarter. However, though Q1 capex was down a little from 2018, to put it into context it was still up 56% from Q1 of 2017 and up 81% from 2016; and nine of the 20 hyperscale operators did grow their Q1 capex by double-digit growth rates year on year,” said John Dinsdale, a chief analyst at Synergy Research. “We do expect to see overall capex levels bounce back over the remainder of 2019.

“This remains a game of massive scale with enormous barriers for those companies wishing to meaningfully compete with the hyperscale firms,” Dinsdale added.


Calculating the Kube roots: Why 2019’s KubeCon represented a milestone for the industry

The latest iteration of KubeCon and CloudNativeCon, which took place in Barcelona last week, felt like something of a milestone – and not one shoehorned in for marketing purposes, either.

It is true however that Kubernetes came into being five years ago this June, so for those at Google Cloud, it was a time of reflection. From the acorn which was Eric Brewer’s presentation at Dockercon 2014, a veritable forest has grown. “We’re delighted to see Kubernetes become core to the creation and operation of modern software, and thereby a key part of the global economy,” wrote Brian Grant and Jaice Singer DuMars of Google Cloud in a blog post.

“Like any important technology, Kubernetes has become about more than just itself; it has positively affected the environment in which it arose, changing how software is deployed at scale, how work is done, and how corporations engage with big open source projects,” Grant and DuMars added.

2019’s KubeCon saw a smattering of news which represented a sense of continued maturation. In other words, various cloud providers queued up to boast about how advanced their Kubernetes offerings were. OVH claimed it was the only European cloud provider to offer Kubernetes deployment on multiple services, for instance, while DigitalOcean unveiled its managed Kubernetes service, now generally available. On Google’s side, new capabilities arrived for its Google Kubernetes Engine (GKE) managed service, from greater control over releases to experimentation with Windows Server containers – of which more later.

These are all clues which point to how Kubernetes has evolved since 2014 – and will continue to do so.

Analysis: The past, present and future

At the start of this year, for this publication’s 2019 outlook, Lee James, CTO EMEA at Rackspace, put it simply: “I will call it and say that Kubernetes has officially won the battle for containers orchestration.”

If 2018 was the year that the battle had been truly won, 2017 was where most of the groundwork took place. At the start of 2017, Google and IBM were the primary stakeholders; Google had of course developed the original technology, while IBM held a close relationship with the Cloud Native Computing Foundation (CNCF), its VP Todd Moore chairing the CNCF governing board. By the year’s end, Amazon Web Services, Microsoft, Salesforce and more had all signed up with the CNCF. Managed services duly followed.

Last year saw Kubernetes become the first technology to ‘graduate’ from the CNCF; monitoring tool Prometheus has since joined it, but the graduation was a key milestone. The award was a recognition that Kubernetes had achieved business-grade competency, with an explicitly defined project governance and committer process and solid customer credentials. According to Redmonk at the time, almost three quarters (71%) of the Fortune 100 were using containers in some capacity.

One of the key reasons this convergence occurred was that the business case associated with the technology became much more palatable. Docker first appeared on the scene in 2013, with containerised applications promising easier management and scalability for developers. Many enterprises back then were merely dipping their toes into the cloud ecosystem, agonising between public and private deployments, with cloud-first eventually moving to cloud-only.

As the infrastructure became better equipped to support it, the realisation dawned that businesses needed to become cloud-native, with hybrid cloud offering the best of both worlds. More sophisticated approaches followed, as multiple cloud providers were deployed across an organisation’s IT stack for different workloads, be they identity, databases, or disaster recovery.

This need for speed was, of course, catnip for container technologies – and as Ali Golshan, co-founder and CTO at StackRox wrote for this publication in January: “Once we started using containers in great volume, we needed a way to automate the setup, tear down, and management of containers. That’s what Kubernetes does.”
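To make Golshan’s point concrete, here is a hypothetical sketch using the official Kubernetes Python client: a single declarative API call asks the cluster to run three replicas of a container, and Kubernetes then handles scheduling and replaces failed instances on its own. The names and image are illustrative, and a reachable cluster with a local kubeconfig is assumed.

```python
# Sketch: declaring a containerised workload through the Kubernetes API.
# Assumes a reachable cluster and local credentials; names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read credentials from ~/.kube/config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the cluster keeps three pods running, replacing failures
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Tear-down is equally declarative:
# apps.delete_namespaced_deployment(name="web", namespace="default")
```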

The Docker story is an interesting one to tie up. The company had a presence at this year’s KubeCon, announcing an extension of its partnership with Tigera around support for Kubernetes on Windows in Docker Enterprise. Yet the consensus across much of the industry was that Docker had simply run its course. In October 2017, Docker announced it was supporting Kubernetes orchestration; at the end of that year, Chris Short, ambassador at the CNCF – though he was swift to point out this was not the foundation’s opinion – wrote a piece headlined simply “Docker, Inc is Dead”, adding that ‘Docker’s doom [had] been accelerated by the rise of Kubernetes.’

One area of potential, however, is Windows. In December Docker announced a collaboration with Microsoft on what was dubbed a ‘container for containers’: a cloud-agnostic tool aimed at packaging and running distributed applications through a single all-in-one packaging format. Kubernetes 1.14 brought support for Windows nodes, and Google referenced this in its Windows Server offering for GKE. “We heard you – being able to easily deploy Windows containers is critical for enterprises looking to modernise existing applications and move them towards cloud-native technology,” the company wrote.

Docker secured $92 million in new funding in October. As TechCrunch put it, “while Docker may have lost its race with Kubernetes over whose toolkit would be the most widely adopted, the company has become the champion for businesses that want to move to the modern hybrid application development and information technology operations model of programming.”

This is where things stand right now. As for the future, more use cases will come along and, much like cloud has become, Kubernetes will stop being spoken of and just ‘be’. “Kubernetes may be most successful if it becomes an invisible essential of daily life,” wrote Grant and DuMars. “True standards are dramatic, but they are also taken for granted… Kubernetes is going to become boring, and that’s a good thing, at least for the majority of people who don’t have to care about container management.”

“In other ways, it is just the start,” the two added. “New applications such as machine learning, edge computing, and the Internet of Things are finding their way into the cloud-native ecosystem. Kubernetes is almost certain to be at the heart of their success.”


New figures show increasing Chinese influence across Asia Pacific cloud markets

Amazon may still have an iron grip on cloud infrastructure across all geographies – but in Asia Pacific (APAC) at least, Chinese cloud providers are closing the gap.

That’s according to a new study from Synergy Research, which argues the top three players in China are now in the top six across APAC as a whole. Alibaba is ranked at #2, while Tencent is at #4 and Sinnet at #6, with Amazon, Microsoft and Google filling the odd-numbered slots in that order.

Across China, where local business reigns supreme, it may not be a surprise that the top six players are all local; Baidu sits just outside the medals, with China Telecom and China Unicom rounding out the six. For the rest of APAC, positions four to six are also held by Asian vendors, but only one of them is Chinese: Alibaba (#4) is followed by Japanese firms Fujitsu and NTT.

The analysis makes for an interesting exploration of market drivers across the Asia Pacific region. As far as budgets go, Synergy notes that China is ‘by far’ the largest country market and is growing ‘much faster’ than the rest of the region. Tencent, while not in the top six for the rest of APAC, is noted to be ‘moving beyond its home market.’ Synergy rates Alibaba as the seventh largest player taking into account both public infrastructure as a service (IaaS) and platform as a service (PaaS); on IaaS alone, its ranking would evidently be higher, if Gartner and other Synergy research are anything to go by.

According to the most recent analysis from the Asia Cloud Computing Association (ACCA), this time last year, China ranked a lowly #13 in cloud readiness, with only Vietnam stopping it from propping up the table altogether. Naturally, issues such as connectivity and sustainability scored poorly given the vastness and disparity of the country. Freedom of information was another weakness.

As Synergy noted this time last year, Alibaba had moved into second place across APAC. Yet while the potential is there, a bumpy road lies ahead. IDC argued in July that the vast majority of Asia Pacific organisations remained early in their cloud maturity with either ‘ad hoc’ or ‘opportunistic’ initiatives most likely.

“While China remains a very tough proposition for the world’s largest cloud providers, the Chinese cloud providers are riding on the back of huge growth in their local cloud market,” said John Dinsdale, a chief analyst and research director at Synergy. “Language, cultural and business barriers will cause some of those Chinese companies to remain tightly focused on their home market, but others are determined to become major players on the global stage.”


HSBC focuses cloud and DevOps vision with $10 million investment

HSBC has been moving towards a cloud-first world – and the bank's latest endeavour has shed light on how it is pushing ahead in the DevOps sphere.

The company is making a $10 million (£7.8m) capital investment in CloudBees, the continuous delivery software provider and steward of the open source automation server Jenkins.

This is by no means an entirely altruistic act: HSBC has used CloudBees significantly since 2015 to bolster its software delivery system. The companies had previously gone public about their relationship; HSBC appeared at a CloudBees event in April, as reported by Computerworld UK.

Regular readers of this publication will be aware of the bank's cloudy aspirations, in particular its relationship with Google Cloud. In 2017 Darryl West, HSBC CIO, took to the stage at Google Next in San Francisco to discuss the companies' collaboration. West noted that the bank held more than 100 petabytes of data at the time and that, having dipped its toes into the Hadoop ecosystem as far back as 2014, it had been a 'tough road' in some places.

Nevertheless, the DevOps side continues to expand. Only last week the company began advertising a big data DevOps engineer role. The job, based at Canary Wharf, requires experience with Google Cloud or another suitable cloud vendor, as well as skills in Java, Scala and Spark on the programming side, alongside SQL, relational database, and Elasticsearch expertise.

"We invest in technologies which are strategically important to our business, and which help us serve our customers better," said Dinesh Keswani, chief technology officer for HSBC shared services. "The DevOps market is growing fast, as organisations like us drive automation, intelligence and security into the way we deliver software. CloudBees is already a strategic business partner of HSBC; we are excited by our investment and by the opportunity to be part of the story of continuous delivery."

From CloudBees' perspective, the investment takes the company's overall funding to more than $120 million. The firm's recent bets include the acquisition of Electric Cloud in April, as well as leading the launch of the Continuous Delivery Foundation in March alongside Google and the Linux Foundation. CEO Sacha Labourey said the funding would be used to grow strategic partnerships and accelerate business growth.
