All posts by James

HPE puts $4 billion aside to invest in the intelligent edge

Hewlett Packard Enterprise (HPE) has seen the future – and it’s all about the intelligent edge.

The company has announced a $4 billion (£3.04bn) investment over four years in intelligent edge technologies and services – delivering personalised user experiences and seamless interactions, with artificial intelligence (AI) and machine learning used to improve customer experiences and adapt in real time.

HPE cited Gartner figures which argue that by 2022 three quarters of enterprise-generated data will be created and processed outside of the traditional data centre or cloud, up significantly from 10% this year. It’s a race against time for companies to put proper processes in place and draw actionable insights from their data wherever it lies – and HPE believes it has the solutions to those problems.

“Data is the new intellectual property, and companies that can distil intelligence from their data – whether in a smart hospital or an autonomous car – will be the ones to lead,” said Antonio Neri, HPE president and CEO. “HPE has been at the forefront of developing technologies and services for the intelligent edge, and with this investment, we are accelerating our ability to drive this growing category for the future.”

Details are a little scant on where this money will go; however, HPE did note that it will ‘invest in research and development to advance and innovate new products’ as well as ‘continue to invest in open standards and open source technologies, cultivate communities of software, AI and network engineers, and further develop its ecosystem through new and expanded partnerships.’

The $4bn HPE is putting aside for this investment is not quite the $5bn Microsoft announced back in April focusing on the Internet of Things. Microsoft also favours the term ‘intelligent edge’ when discussing the future of technology. In February, during the company’s Q2 financial report, CEO Satya Nadella told analysts that the ‘intelligent cloud and intelligent edge platform [was] fast becoming a reality.’

Kubernetes skills demand continues to soar – but are organisations dropping the ball on security?

If you have Kubernetes skills then you will almost certainly be in demand from employers, as a new survey from CyberArk has found that IT jobs with the container orchestration tool in the title have soared year on year. But beware the security risks when getting involved.

According to the company, which has crunched data from IT Jobs Watch, roles involving Kubernetes have broken into the top 250 most popular IT vacancies, having been around the 1000 mark this time last year. The most likely job title for potential applicants is either DevOps engineer (40%) or developer (23%).

Regular readers of this publication will be more than aware of the initiatives taking place within the industry over the past year. The leading cloud providers are getting on board: Amazon Web Services (AWS) and Microsoft both made their managed Kubernetes services generally available this month, while back in March Kubernetes itself ‘graduated’ from incubation at its steward, the Cloud Native Computing Foundation (CNCF), recognising the technology’s maturity.

Those with product to shift are eating their own dog food, making their own internal processes container-based. IBM, as John Considine, general manager of cloud infrastructure services, told CloudTech earlier this year, and Google, as Diane Greene told Cisco Live attendees last week, are but two examples. Alongside this, customers are putting containers at the forefront of their buying decisions; GoDaddy said as much when it was announced the hosting provider would be going all-in on AWS.

Yet with so many organisations diving in at the deep end, there is a danger of getting out of their depth.

In a report published this week, security firm Lacework assessed there were 21,169 publicly facing container orchestration platform dashboards, 300 of which were completely open. Whether Weight Watchers’ Kubernetes admin console – which researchers from Kromtech Security found earlier this month to be accessible without password protection – was among them, we will of course never know. Another widely publicised story concerned Tesla: back in February, research from RedLock found hackers had been running crypto mining scripts on unsecured Kubernetes instances owned by the electric car firm.

“During our research we learned that there are a lot of different ways to manage your containers, and that they are all incredibly flexible and powerful,” the Lacework report notes. “With each one you essentially have the keys to the castle from deployment, discovery, deletion, and manageability.

“We suggest that if you are a security professional and you don’t know you are running a container orchestration system, you should definitely find out ASAP.”

CyberArk offers a similar message of concern. “There is a very real danger that the rush to achieve IT and business advantages will outpace awareness of the security risks,” said Josh Kirkwood, CyberArk DevOps security lead. “If privileged accounts in Kubernetes are left unmanaged, and attackers get inside the control panel, they could gain control of an organisation’s entire IT infrastructure.

“Many organisations simply task the same DevOps hires – often with no security experience – to protect these new Kubernetes environments, in addition to the numerous other responsibilities they have to deliver,” added Kirkwood. “That’s no longer sufficient, and security teams need to get more closely involved to support the platform.”

According to the Lacework report, if you’re running Kubernetes you need to build a pod security policy, configure your pods to run read-only file systems and restrict privilege escalation. More general container advice doubles up essentially as good security practice; multi-factor authentication at all times, using SSL for all servers, and using valid certificates with proper expiration and enforcement policies.
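The Lacework recommendations above map directly onto a pod manifest. A minimal sketch – the pod name and image are illustrative, not from the report:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                       # illustrative name
spec:
  automountServiceAccountToken: false      # avoid exposing API credentials unnecessarily
  containers:
  - name: app
    image: registry.example.com/app:1.0    # illustrative image
    securityContext:
      readOnlyRootFilesystem: true         # run with a read-only file system
      allowPrivilegeEscalation: false      # restrict privilege escalation
      runAsNonRoot: true                   # refuse to start as root
```

A cluster-wide pod security policy can then enforce the same settings, so individual manifests cannot quietly opt out.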

So is it time to take a step back? If you have Kubernetes skills then you’re in a good place – but get some security smarts alongside it and you’ll be in an even better one.

Google Cloud opens up in Finland, uses natural cooling for sustainability

Google Cloud is now open for business in Finland – the 16th region the company has launched, across five continents.

The move was first announced at the start of the year when Google significantly expanded its infrastructure, with five new regions and three subsea cables. Facilities in the Netherlands and Montreal have already been opened, with Los Angeles, Hong Kong and Switzerland – the latter announced last month – to follow.

The facilities are located in Google’s existing data centre in Hamina and will utilise the natural environment, including sea water cooling from the Gulf of Finland. “[It] is the first of its kind anywhere in the world,” said Kirill Tropin, product manager at Google Cloud, in a blog post. “This means that when you use this region to run your compute workloads, store your data, and develop your applications, you are doing so sustainably.”

Google said applications hosted in the new region could see latency improvements of up to 65% for end users in the Nordics and up to 88% for users in Eastern Europe.

As is always the way with Google, a gaggle of customers was rolled out, including Rolls-Royce, which focused on sustainability as well as latency. “The road to emission-free and sustainable shipping is a long and challenging one, but thanks to exciting innovation and strong partnerships, Rolls-Royce is well-prepared for the journey,” said Karno Tenovuo, senior vice president of ship intelligence.

“For us, being able to train machine learning models to deliver autonomous vessels in the most effective manner is key to success,” added Tenovuo. “We see the Google Cloud for Finland launch as a great advantage to speed up our delivery of the project.”

Organisations continue to hit cost pitfalls with cloud migration, says Rackspace

The cloud journey is costing a bit more than organisations realise, according to new research from Rackspace.

The findings appear in a new report, titled ‘Maintaining Momentum: Cloud Migration Learnings’. The study, conducted by Forrester and polling more than 300 organisations in the UK, France, Germany and the US, argued that while half of business and IT decision makers polled saw cost reduction as a key driver for cloud adoption, two in five (40%) said their migration costs were still higher than expected.

Part of the issue is the same old story around ‘hidden costs’ and stipulations barely mentioned in the small print. The majority (60%) of those polled said costs were higher than expected around upgrading, culling, or replacing legacy business apps and systems.

Yet other firms simply made miscalculations: issues with data capture and governance in the planning phase saw many companies come a cropper, as did failing to set the right vision and strategy for their cloud transformation. Firms also hit snags after projects were completed. Many did not foresee issues around employee resistance – or at least the ‘hidden’ costs which accompanied it – and change management programmes.

It’s important companies get this right – 71% of those polled said they were now more than two years into their cloud journey, with migrating existing workloads into a public or private cloud environment remaining either ‘critical’ or ‘high’ priority in the coming year for more than four in five (81%).

As regular readers of this publication will testify, gaps usually appear when organisations commence their cloud journeys. A study from Skytap earlier this month cited resistance to change as an issue holding businesses back, pointing to the vagueness of terms such as ‘modernisation’ and ‘digital transformation’.

Writing for CloudTech this month, Thomas La Rock, head geek at IT management software provider SolarWinds, cited the importance of consistency throughout each step of the process. “The skillset needed across IT is becoming increasingly blended, requiring versatile tech professionals that can adapt and flex in accordance with a changing landscape,” La Rock wrote. “Once you’ve got the right team in place, it’s crucial to give the migration project the attention it deserves and, by planning meticulously in advance, you’re setting yourself up for success.”

“As a business generation, we are getting faster at new technology adoption, but we still seem to stumble when it comes to understanding the requirements (and limitations) of the business consuming it,” said Adam Evans, Rackspace director of professional services, in a statement. “Introducing new cloud-based operating practices across an entire organisation is rarely straightforward, as with anything involving people, processes and their relationship with technology.

“Managing the gap between expectation and reality plays a huge role in programme success, so it’s imperative that organisations start with an accurate perspective on their maturity, capability and mindset,” added Evans. “Only then can we start to forecast cost and complexity reliably.”

Microsoft announces general availability of Azure Kubernetes Services

Those who attended or watched Cisco Live’s opening keynotes earlier this week – or indeed read our story on it – will have recognised the importance of Kubernetes to attendees, with Google Cloud’s cameo a particular highlight.

Now Microsoft is making waves of its own – with the general availability of its Azure Kubernetes Services (AKS).

The move will see Microsoft add five new regions of availability – two in the US, two in Europe, including one in the UK, and one in Australia – bringing the total up to 10. Microsoft said it hoped to double its reach in the coming months.

Earlier this month, Amazon Web Services (AWS) announced the general availability of Amazon EKS, its managed Kubernetes service, with regions operational in US East and US West and rapid expansion promised.

The major providers are therefore all in the race to become the most developer- and company-friendly resource on which to build and manage Kubernetes projects. Google, of course, as the original designer of the orchestration system, is a bit further ahead; its Google Kubernetes Engine (GKE) was a focal point of the Cisco and Google partnership elaborated on this week.

“With AKS in all these regions, users from around the world, or with applications that span the world, can deploy and manage their production Kubernetes applications with the confidence that Azure’s engineers are providing constant monitoring, operations, and support for our customers’ fully managed Kubernetes clusters,” wrote Brendan Burns, Microsoft Azure distinguished engineer, in a blog post confirming the news.

“Azure was also the first cloud to offer a free managed Kubernetes service and we continue to offer it for free in GA,” Burns added. “We think you should be able to use Kubernetes without paying for our management infrastructure.”
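For those wanting to try the now generally available service, cluster creation is a handful of Azure CLI calls. A hedged sketch – the resource group and cluster names are illustrative, and exact flags may vary between CLI versions:

```shell
# Create a resource group to hold the cluster (names are illustrative)
az group create --name myResourceGroup --location westeurope

# Create a three-node managed Kubernetes cluster
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 3 --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```

Note that billing applies to the agent nodes themselves; per Burns’ comment above, the managed control plane carries no separate charge.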

DevOps and microservices will be huge for business – but many orgs are nowhere near it yet

Mind the gap please: according to new research from the Ponemon Institute, the gap between organisations’ ideal DevOps and microservices capabilities and what they are actually able to deliver is costing enterprises on average $34 million per year.

The study, which was sponsored by hybrid cloud management provider Embotics and which polled more than 600 cloud management professionals, found three quarters (74%) said DevOps enablement capabilities were either ‘essential’, ‘very important’, or ‘important’ to their organisations. Four in five (80%) said microservices were essential to important. Yet only a third said their company had the ability to push through those capabilities.

Ultimately, the root of the problem is how organisations are managing – and struggling – to cope with how employees are consuming cloud resources. Almost half (46%) of those polled said their company was ‘cloud direct’ – employees bypassing IT to go straight to AWS, Azure et al through native APIs or their own public cloud accounts.

This leads to issues with regard to visibility and management: 70% of respondents said they had no visibility at all into the purpose or ownership of the VMs in their cloud environment, while a similar number (66%) said they were ‘constantly challenged’ with management and tracking of assets in their cloud ecosystem.

The solution? DevOps, DevOps, DevOps, as a former Microsoft chief executive might have put it. The report puts this under the banner of CMP (Cloud Management Platform) 2.0 – a new era of hybrid cloud management. Almost three quarters (71%) of those polled say they have adopted or are planning to adopt DevOps methodologies, with the majority saying it will improve project quality (69%), delivery scheduling (61%), and budgeting (60%).

This is by no means the only study in recent weeks which has come to this conclusion. According to Puppet’s most recent note, issued earlier this month, there was not only a clear difference between high and low performers but a significant gap between lower performers depending on industry.

“To enable true digital business process transformation, enterprises need to find a way to bridge the gap between the speed and agility developers need and the control and governance required by the IT organisation,” said Larry Ponemon, chairman and founder of the Ponemon Institute in a statement. “The report shows that this isn’t happening with current cloud management strategies.”

Google Cloud cameo steals the show at Cisco Live with partnership update top of the agenda

It’s a mad, mad, multi-cloud world all right. At Cisco Live US in Florida yesterday – the company’s flagship jamboree of all things networking – CEO Chuck Robbins was all but upstaged by Google Cloud chief Diane Greene.

Robbins is more than happy to let the great and good share the stage with him at Cisco’s events – Apple CEO Tim Cook appeared at Cisco Live Las Vegas last year to discuss securing the mobile workforce. It is testament to the strength of partnerships at this level from both sides: Apple and Google have plenty of consumer clout, but Cisco’s presence in the enterprise market is vital too.

Yet it was also telling that Robbins mentioned Cisco’s Catalyst 9000 platform after Greene’s cameo. The Catalyst 9000 is, in the words of the CEO, ‘the fastest ramping product in the history of Cisco’. But first, an update on the company’s partnership with Google Cloud – in which the word Kubernetes was said rather a lot.

Robbins asked the audience of Cisco customers and partners how many were testing, piloting, or generally getting a feel for Kubernetes today. The applause which came back suggested a positive uptake. Google and Cisco’s partnership, launched in October last year, aims to give Cisco customers the same experience when running Kubernetes applications either on-premise or in Google Kubernetes Engine (GKE). Or in other words, to enable organisations to tackle their cloud journeys at their own pace.

“It’s helping all of you keep doing what you’re doing – keep innovating, and non-disruptively keep disrupting what your company’s doing,” said Greene. “You can’t just rewrite your applications and move to a new environment, and so what we’re doing here is bringing you Kubernetes containers, and then you can let your application developers concentrate on what they’re doing.”

Greene said there were four stakeholder bases who would see benefit from the partnership; engineers, developers, ops, and security. “For engineers, being able to take this incremental approach, this non-disruptive way to keep disrupting what you’re capable of doing in a fast-moving company – that’s one huge advantage,” said Greene. “It really modernises the developer environment.

“I’ve been involved in software development for a long, long time, and I really think these modern technologies are almost giving a 10x productivity improvement,” added Greene. “The Kubernetes environment and Istio is just taking care of a lot of things that developers used to have to worry about. Now they can focus more on the business of the company.

“For the ops folks, it gives you a consistent environment that you can monitor,” Greene continued. “Istio’s going to be really powerful there. For security, to have one consistent model across everywhere that you’re running, that’s huge – and it’s really good for the developers because you don’t have these lowest common denominator rules that can get in the way of innovation.”

Perhaps talk of upstaging is a little harsh. Cisco’s vision is around how its network architecture underpins the innovation and partnerships taking place. “The reality, I believe, is it’s this architecture that brings together automation, security, analytics,” said Robbins. “For me, that’s what’s made the big difference – because you all understand how this can change the operational paradigm in your organisations and allow you to focus on other strategic things.”

Robbins touched on the importance of emerging technologies and their effects on the business in his opening salvo. “This is going to define how we think about the network’s next act: what does it have to do?” asked Robbins. “When you think about the complexity of the world you’re operating in now – which candidly is more complex than it was even four years ago – and then you introduce these new tech changes that bring incredible capabilities.

“Artificial intelligence, augmented reality, machine learning – you think about the requirements, and what you’re being asked to do by the business,” added Robbins. “Your business leaders in your organisation actually don’t care about the technology. They care deeply about the outcome that technology can deliver. They care deeply about moving faster. They care deeply about being able to execute on a strategy the minute they have a strategy.

“This is at the heart of how we defined our strategy that we first began to launch last year. If you step back and look at all of the connections, you have traffic going to public cloud, SaaS, applications, consuming M2M/IoT connectivity at the edge… the only common denominator is the network. Therefore the network has to become a secure platform that enables you to help your organisation achieve its strategies.”

As regular readers of this publication will be aware, Google’s cloud push has been a serious bet over the past couple of years, with validation of Greene’s work coming from the most recent Gartner Magic Quadrant for cloud IaaS. The analyst firm put Google in its leaders’ section for the first time in five years.

Cisco offered one other piece of news yesterday; the company is working with NetApp to deliver a new managed private cloud FlexPod product. FlexPod ‘combines Cisco UCS integrated infrastructure with NetApp data services to help organisations accelerate application delivery and transition to a hybrid cloud with a trusted platform for innovation’, in the company’s words.

Main picture credits: Cisco

Cohesity secures $250 million in series D round in hyperconverged storage boost

Cohesity, a hyperconverged storage provider, has announced it has raised $250 million (£186.7m) in an oversubscribed series D funding round to help further drive the company’s momentum.

The round was led by the SoftBank Vision Fund – making it only the second time the group has invested in an enterprise software company – with participation from Cisco Investments, Hewlett Packard Enterprise (HPE), Morgan Stanley Expansion Capital, and Sequoia Capital among others.

Cohesity offers hyperconverged storage for secondary data – the backups, archives, test/dev copies and analytics datasets that sit outside primary production systems – which the company says consumes up to 80% of enterprise storage capacity. With that data fragmented across different repositories, the company aims to simplify its management with a single data platform.

The company has had significant success over the past 12 months, with more than 200 new enterprise customers – from Schneider Electric to the San Francisco Giants – coming on board in the past two quarters alone. Cohesity also grew revenues by 600% between 2016 and 2017.

“My vision has always been to provide enterprises with cloud-like simplicity for their many fragmented applications and data – backup, test and development, analytics, and more,” said Mohit Aron, CEO and founder of Cohesity. “Cohesity has built significant momentum and market share during the last 12 months and we are just getting started. We succeed because our customers are some of the world’s brightest and most fanatical IT organisations and are an extension of our development efforts.”

“Cohesity pioneered hyperconverged secondary storage as a first stepping stone on the path to a much larger transformation of enterprise infrastructure spanning public and private clouds,” said Deep Nishar, senior managing partner of SoftBank Investment Advisers. “We believe that Cohesity’s web-scale Google-like approach, cloud-native architecture, and incredible simplicity is changing the business of IT in a fundamental way.”

Despite SoftBank’s leadership being the eye-catching headline, it is interesting to compare some of the other investors.

Cisco and HPE also took stakes last April in Cohesity’s $90 million series C round. As a Business Insider story put it at the time, eyebrows were raised given Cisco and HPE’s fierce competition across multiple business units – storage being one of the hotter ones. Both companies have a clear interest in the space: in January last year, HPE acquired hyperconverged infrastructure (HCI) provider SimpliVity for $650 million in cash, while Cisco caught up by snaffling fellow HCI firm Skyport Systems at the start of this year.

The series D funding has given Cohesity a total of $410 million raised.

Google Cloud launches sole-tenant nodes for improved compliance and utilisation

Google Cloud has announced the launch of sole-tenant nodes on Google Compute Engine – helping customers in various industries around compliance in the process.

The new service, currently in beta, gives customers exclusive use of a physical host – VMs, hypervisor and hardware alike – going against the traditional cloud model of multi-tenant architecture and shared resources.

“Normally, VM instances run on physical hosts that may be shared by many customers,” explained Google’s Manish Dalwadi and Bryan Nairn in a blog post confirming the news. “With sole-tenant nodes, you have the host all to yourself.”

This will potentially be good news for companies in finance and healthcare, along with other firms who adopt an all-data-is-equal-but-some-data-is-more-equal-than-others mindset. Organisations with strict compliance and regulatory requirements can use sole-tenant nodes to ensure physical separation of compute resources in the cloud, while Google also noted that companies can achieve higher levels of utilisation if they are creative with their instance placements and machine types launched on sole-tenant nodes.
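Provisioning follows a node template → node group → instance flow in the gcloud CLI. A hedged sketch – resource names are illustrative, and while the feature remains in beta the commands may sit under `gcloud beta`:

```shell
# Define a node template describing the dedicated host hardware (names illustrative)
gcloud compute sole-tenancy node-templates create my-template \
  --node-type=n1-node-96-624 --region=us-central1

# Create a group of physical hosts from the template
gcloud compute sole-tenancy node-groups create my-node-group \
  --node-template=my-template --target-size=1 --zone=us-central1-a

# Launch a VM pinned to the dedicated hosts
gcloud compute instances create my-vm --zone=us-central1-a \
  --node-group=my-node-group
```

Multiple VMs can be packed onto the same node group, which is where the utilisation gains Google mentions come from.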

The move puts Google in line with Amazon Web Services (AWS) and Microsoft. The former, for example, offers EC2 Dedicated Hosts – a physical server with EC2 instance capacity dedicated to the user – as well as Dedicated Instances. An AWS document outlines the differences between the two: apart from the straightforward difference between per-host and per-instance billing, Dedicated Hosts offer visibility of sockets and physical cores, targeted instance placement, and bring your own licence (BYOL) support.

This is just one of various initiatives Google has put into place this year to beef up its cloudy operations. Last month, this investment was validated in the form of Gartner’s Magic Quadrant for cloud infrastructure as a service. Google made the leaders’ section, which for five years had been the sole domain of AWS and Microsoft, for the first time.

Pricing for Google’s sole-tenant nodes is on a per-second basis with a minimum charge of one minute.

Main picture credit: Google

Why enterprises are creating a self-induced skills gap despite strong cloud appetite

Enterprises have a serious appetite to move their resources to the cloud at one level – but a lack of skills and resistance to change from some quarters is holding organisations back.

That’s the key finding from new research from global cloud provider Skytap. The study, conducted in conjunction with 451 Research, may have many rolling their eyes in a manner suggesting they have seen it all before – yet it still suggests companies are not getting to grips with the change.

More than two thirds (67%) of the 450 C-level and director-level technology leaders polled said they planned to migrate or modernise at least half of their on-premises applications in the next 12-24 months. Yet half (49%) said they wanted to go about migrating to the cloud through refactoring or rewriting applications – the strategies that require the highest degree of IT skill. As the report puts it, organisations are ‘their own worst enemy.’

Part of this is down to the lure of hyperscale cloud providers. Two in three respondents said they use one or more of Amazon Web Services (AWS), Azure, Google or IBM Cloud. Yet the report argues this form of cloud modernisation – focusing predominantly on the front end – neglects the engine room: the enterprise data centre, where gnarled, complex ERP and CRM apps live. They’re critical to the business but, crucially, ill-suited for cloud environments.

This may end up explaining a certain amount of apathy among organisations polled. More than half (55%) of respondents said their most critical recruiting need was ‘people capable of migrating existing applications to the cloud’, while a similar number (54%) said ‘internal resistance to change’ was a key factor holding their firm back from modernisation.

All things considered, the key point of the research is a simple one: don’t believe anyone who tells you that cloud is done, at least in the enterprise. Enterprise approaches continue to be haphazard – a ‘myriad of difficult choices exacerbated by urgent skills needs and the significant challenges created by traditional but mission-critical applications left in the data centre’, as the company puts it.

“Cloud is often overhyped and simplified, while modernisation and digital transformation can be even more vague,” said Brad Schick, Skytap CTO. “Our study cuts through to clarify the fact that technological change is hard and is being further aggravated by cookie-cutter approaches to cloud adoption.

“We want to be part of a conversation that gives enterprises clear choices to manage change and progressively modernise without burning everything to the ground,” added Schick.
