All posts by Adam Shepherd

How Hive keeps the lights on with VMware and AWS


Adam Shepherd

22 Nov, 2018

If you’re a techie living in the UK, you’re almost certainly familiar with Hive.

This home-grown smart home firm was created in 2012 by parent company Centrica – which also owns British Gas – as a dedicated division to handle its burgeoning connected heating project. While it’s technically part of Centrica, it’s run as a separate company, operating as a lean startup independent of the rest of the business.

The venture has clearly proved successful; in the six years since it launched, Hive has expanded its portfolio to include smart lighting, motion sensors, surveillance cameras and more, and in May this year the company reached one million customers. However, supporting one million connected homes and counting requires a robust and scalable IT infrastructure.

Hybrid can still be costly

As you’d expect from a modern smart tech company, Hive’s infrastructure is now entirely cloud-based, running on AWS and VMware. This wasn’t always the case, however; Hive’s infrastructure has evolved as the business and its needs have changed over time.

According to Chris Livermore, Hive’s head of site reliability engineering and the man responsible for provisioning and maintaining the infrastructure on which Hive’s software engineers deploy their code, the company initially started out with a hybrid model. The team used cloud environments to build and deliver Hive’s mobile applications, but also maintained a physical data centre.

The main reason for this, Livermore says, is that AlertMe – a key partner that provided Hive with a platform for remote monitoring and automation services – only supported on-prem deployments, forcing Hive to run its own infrastructure.

“The data centre we had, we put a virtualisation platform on it, we used OpenStack, but we did that to allow our dev teams to interact with it in a cloud-type manner,” explains Livermore. “We wanted them to be able to spin up a virtual environment to work on without having to stop and wait for somebody in my team to do it. It’s all about moving that empowerment to the developers.”
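Livermore doesn’t name the tooling beyond OpenStack itself, but the self-service pattern he describes maps onto OpenStack’s standard compute API. As a minimal, hypothetical sketch using the openstacksdk Python client – the cloud profile, image, flavour and network names below are placeholders, not Hive’s actual configuration:

```python
# A minimal sketch of developer self-service on OpenStack via openstacksdk.
# The cloud profile, image, flavour and network names are hypothetical.
import openstack

conn = openstack.connect(cloud="hive-dev")  # credentials come from clouds.yaml

image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("dev-net")

server = conn.compute.create_server(
    name="feature-branch-test",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
print(server.name, server.status)
```

The point of the pattern is that a developer can run something like this on demand, with no ticket to an operations team.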

Hive was investing a lot of time, effort and manpower in maintaining its data centres, Livermore says, and the company ultimately decided to shutter them around two years ago.

“All of those guys still work for me, they just don’t run a data centre any more – they do other stuff,” he explains. “It’s very interesting. We’ve done a lot of consolidation work, but none of it has been from a cost reduction point-of-view, it’s just been a better deployment of resources.”

IoT built on IoT

Now that it’s ditched its data centres, Hive is all-in on cloud; the company runs exclusively on AWS, with anywhere from 18,000 to 22,000 virtual machines running on VMware’s virtualisation software. It’s also a big user of Lambda, AWS’ serverless computing platform, as well as its IoT platform.

The fact that Hive uses Amazon’s IoT service may sound a little odd, given that Hive actually owns and operates its own IoT platform, but the deal allows the company to focus entirely on its own products, and leave much of the overhead management to AWS.
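The article doesn’t describe Hive’s implementation, but the standard pattern with AWS IoT and Lambda is a topic rule that invokes a function for each device message. A hedged sketch – the topic names and payload fields below are hypothetical, not Hive’s schema:

```python
# Illustrative only: a Lambda handler invoked by an AWS IoT topic rule,
# e.g. SELECT * FROM 'devices/+/telemetry'. Topic and field names are
# hypothetical placeholders, not Hive's actual message schema.
import json
import boto3

iot_data = boto3.client("iot-data")

def lambda_handler(event, context):
    device_id = event["device_id"]
    temperature = event["temperature"]

    # Push a command back down to the device over MQTT if a rule fires,
    # e.g. a frost-protection threshold
    if temperature < 5.0:
        iot_data.publish(
            topic=f"devices/{device_id}/commands",
            qos=1,
            payload=json.dumps({"action": "heat_on"}).encode("utf-8"),
        )
    return {"status": "ok"}
```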

“At the time, it was a means to an end,” Livermore explains. “Five years ago when we started, you couldn’t go out to market and find an IoT platform provider, so in order to deliver Hive we partnered with AlertMe; they had an IoT platform. We subsequently acquired AlertMe and acquired an IoT platform, but you have all the overhead of maintaining and evolving that IoT platform.”

Some products, like the relatively complicated Hive heating system, benefit from running on a custom-made platform, but for simpler devices like smart lights and motion sensors, Livermore says that it makes sense to find a platform provider “and let them do all the hard work… we will wherever possible use best-of-breed and buy-in services”.

Hive has completely embraced the concept of business agility, and is not shy about periodically reinventing its IT. For example, despite the fact that its entire infrastructure runs on AWS, the company is considering moving portions of its workloads from the cloud to the edge, having the device process more instructions locally rather than pushing them to the cloud and back.

This would mean a reduction in Hive’s usage of AWS, but as with the data centre consolidation efforts from previous years, Livermore stresses that this is about technological efficiency rather than cost-cutting. More on-device processing means lower latency for customers, and a better user experience. «There are certain things that make sense to be a lot closer to the customer,» he says.
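As a purely illustrative sketch of that trade-off, the difference is whether a rule like the one below runs on the device itself or behind a cloud endpoint; evaluated locally, the decision costs no network round trip at all:

```python
# Purely illustrative: a motion-triggered lighting rule evaluated on the
# device rather than in the cloud. Names and thresholds are hypothetical.
LUX_THRESHOLD = 50  # only switch the light on when it is actually dark

def on_motion_event(lux_reading: float) -> bool:
    """Decide locally whether to switch the light on.

    Running this on the device avoids a cloud round trip, so the light
    responds in milliseconds even if the broadband connection is down.
    """
    return lux_reading < LUX_THRESHOLD

if __name__ == "__main__":
    for lux in (120.0, 30.0):
        print(f"lux={lux} -> light on: {on_motion_event(lux)}")
```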

Building for scale

This constant pace of change may seem chaotic, but according to Livermore, it’s an essential part of scaling a company. “That presents opportunities to reevaluate what we’re doing and say ‘are there any new suppliers or new services that we can leverage?’”

“We’re part-way through a re-architecting of our platform,” he tells Cloud Pro, “and we now need to be building a platform that will scale with the business aspirations. You get to these milestones in scaling. Up to half a million customers, the system will scale, [but] then you get to bits where you realise the code isn’t quite right, or that database technology choice you’ve made doesn’t work.”

For Livermore, his role is fundamentally about giving Hive’s developers as easy and seamless an experience as possible.

“Essentially, my job is to give my dev teams a platform where they can deploy their code and do their job with the minimum of fuss,” he says. “It’s all about empowering the developers to spend as much time as possible on solving customer problems and as little time as possible worrying about where the server’s going to come from or where they’re going to put their log files or where their monitoring and telemetry goes.”

Pure Storage adds new AWS integrations to support hybrid cloud


Adam Shepherd

19 Nov, 2018

Pure Storage is embracing the public cloud, announcing a new suite of cloud-based services designed to support the hybrid cloud operating models the company’s customers are demanding.

The new capabilities have been collectively dubbed Pure Storage Cloud Data Services, and comprise three new features built around AWS’ public cloud platform.

The first, Cloud Block Store for AWS, is billed as “industrial-strength block storage” for mission-critical apps hosted natively on AWS. It runs the same Purity storage software layer as the company’s on-premise flash hardware, and is managed by its Pure1 cloud management service. The goal is to make it as easy as possible to move data between on-premise storage and AWS environments, using the same APIs, plug-ins and automation tools across both.

Pure is touting benefits for efficiency, reliability and performance thanks to features like thin provisioning and asynchronous replication to AWS. “We really elevate cloud storage up to a product that is suitable for tier one mission-critical applications that are reliant on having consistent, performant access to data,” said Pure Storage field CTO Patrick Smith.
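The practical upshot of a shared API surface is that automation should not care which end it talks to. A hedged sketch using the purestorage Python REST client – the endpoint, API token and volume names are placeholders, and the premise that the same calls target either an on-prem FlashArray or a Cloud Block Store instance rests on Pure’s own claim above:

```python
# A sketch using the purestorage REST client (pip install purestorage).
# Endpoint, API token and volume names are hypothetical placeholders; the
# premise, per Pure's claim, is that the same calls work against an
# on-prem FlashArray or a Cloud Block Store instance in AWS.
import purestorage

array = purestorage.FlashArray(
    "array-or-cbs.example.com",  # on-prem array or Cloud Block Store endpoint
    api_token="YOUR-API-TOKEN",
)

array.create_volume("app-vol-01", "1T")  # thin-provisioned volume
array.create_snapshot("app-vol-01")      # point-in-time snapshot
print(array.get_volume("app-vol-01"))
```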

The company is also adding a new feature to its FlashArray all-flash data centre storage products. CloudSnap for AWS allows FlashArray products to send portable snapshots to AWS S3 as a target, and also allows those snapshots to be restored quickly, either on-prem or in an AWS environment via Cloud Block Store.

“Where we’re differentiated here is that FlashArray will back up to FlashBlade, which is our all-flash object storage platform. It not only backs up incredibly quickly, but it also provides fast recovery,” Smith said. “10x faster, in most cases, than our competition.”

The final feature being announced is StorReduce for AWS, an object storage deduplication engine that Pure picked up when it acquired deduplication specialist StorReduce earlier this year. The new deduplication features will allow Pure’s customers to do away with tape backups for long-term storage, the company says, and embrace a more flexible flash-based architecture.
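The article doesn’t go into StorReduce’s internals, but deduplication engines generally work by splitting data into chunks, hashing each chunk, and storing each unique chunk only once. A toy sketch of the idea – real engines, StorReduce included, use variable-size chunking and far more robust indexing:

```python
# Toy content-addressed deduplication: split data into fixed-size chunks,
# key each chunk by its SHA-256 digest, and store duplicates only once.
# Illustrative only; not StorReduce's actual design.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed chunks, for simplicity

def dedupe_write(path: str, store: dict) -> list:
    """Write a file into `store`, returning a manifest of chunk digests."""
    manifest = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # keep only the first copy seen
            manifest.append(digest)
    return manifest

def dedupe_read(manifest: list, store: dict) -> bytes:
    """Reassemble the original bytes from a manifest of digests."""
    return b"".join(store[d] for d in manifest)
```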

“It allows us to change the economics of FlashBlade; not just make it a gamechanger in terms of rapid recovery, but also allow us to do that at a price point that means it’s not just for those troublesome applications that you can’t restore in time,” Smith told Cloud Pro. “This now makes FlashBlade suitable for all our customers’ on-prem backup requirements.”

CloudSnap is available for FlashArray customers now, while Cloud Block Store and StorReduce are entering a limited public beta, with full public availability planned for both by mid-2019.

The company also told Cloud Pro that while AWS is the only public cloud provider supported at launch, adding other major providers is “a priority” post-launch.

“AWS is the start. We needed to start somewhere and AWS is a good partner with us,” Smith said, “and so they were a logical start – but we will absolutely have plans to add the other large cloud providers.”

Smith also predicted big benefits for Pure Storage’s partner ecosystem, on which Pure depends for its route to market.

“I think in the same way that this opens up new opportunities for Pure Storage, it also opens up new opportunities for our channel partners,” Smith told Cloud Pro. “I think the impact of us supporting the public cloud allows them to benefit as well as us.”

“We are absolutely committed to the channel; it is our go-to-market, and so our Cloud Data Services will all route through our channel partners. We are a partner company.”

VMworld Europe 2018: VMware expands AWS and IBM partnerships to fuel hybrid cloud


Adam Shepherd

7 Nov, 2018

VMware is deepening its partnerships with IBM and AWS in a bid to further increase the adoption and deployment of hybrid cloud. The company has also announced a number of additional partnerships and a new acquisition.

Announced at the company’s annual European conference VMworld Europe, the biggest news was a tie-up with IBM, a company which itself shook the tech world just last week with the announcement that it was set to snap up open source giant Red Hat.

IBM and VMware are teaming up to launch a new fully automated cloud architecture designed to minimise downtime for mission-critical VMware workloads across IBM Cloud’s 18 global zones. Offered through IBM’s Services division, the new architecture will include Intel Optane DC SSD technology, IBM Cloud infrastructure hardware and VMware’s software-defined data centre products, aiming to offer customers 99.99% uptime for their essential workloads with automatic failover.

“We believe this is a great game-changer for enterprise clients,” said Arvind Krishna, IBM’s senior vice president of hybrid cloud.

“The VMware and IBM partnership builds upon the strengths of both companies,” VMware CEO Pat Gelsinger said. “Now with the latest advancements in our relationship, we’re making it possible for customers to move, modernise and operate any application – VM or containerised, traditional or mission-critical – in the IBM Cloud.”

In addition, support was announced for a number of products within IBM and VMware’s respective portfolios. VMware vCenter Server deployments on IBM Cloud now support installation of IBM Cloud Private Hosted, and products under the IBM Cloud for VMware banner can now be integrated with IBM Cloud Kubernetes Service. Meanwhile, vRealize Operations is now compatible with IBM Power Systems servers, and IBM has certified VMware’s NSX-T network virtualisation technology for use as an IBM Cloud Private network stack.

Of course, it wouldn’t be an IBM announcement without Watson, and sure enough, Gelsinger announced that VMware would be integrating IBM’s AI into its customer service portals to allow users to navigate through the support portal using natural language rather than impersonal drop-down interfaces, hopefully giving a better – and faster – support experience.
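Neither company detailed the integration, but for flavour, this is roughly what querying Watson Assistant looks like from Python with IBM’s ibm-watson SDK – the API key, service URL and assistant ID below are placeholders, not VMware’s actual setup:

```python
# Illustrative only: calling IBM Watson Assistant via the ibm-watson SDK
# (pip install ibm-watson). The API key, service URL and assistant ID are
# hypothetical placeholders, not VMware's actual integration.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version="2021-06-14",
    authenticator=IAMAuthenticator("YOUR-API-KEY"),
)
assistant.set_service_url("https://api.eu-gb.assistant.watson.cloud.ibm.com")

session = assistant.create_session(assistant_id="ASSISTANT-ID").get_result()
response = assistant.message(
    assistant_id="ASSISTANT-ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "My support ticket hasn't updated"},
).get_result()
print(response["output"]["generic"][0]["text"])  # the assistant's reply
```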

The two companies are even opening a ‘Joint Innovation Lab’, which will see engineers from both companies collaborating on new products, solutions and technologies.

IBM wasn’t the only company VMware was cosying up to, however; public cloud titan AWS was also singled out as a key partner, with Gelsinger announcing that VMware Cloud on AWS would be coming to 16 new regions worldwide over the next year. Ireland is first up in Q4 2018, followed by Paris in Q1 next year and Sweden in the second half of 2019, with the rest mostly spread across the APAC region.

Not only that, but the company is also expanding its AWS-based DRaaS offering, VMware Site Recovery, doubling the number of supported virtual machines from 500 per software-defined data centre to 1,000. It has also worked with parent company Dell EMC to integrate VMware Site Recovery with VxRail, the hyper-converged infrastructure solution co-designed by Dell and VMware. The integration will allow customers to quickly set up and enact failover from their VxRail appliances to VMware Cloud on AWS instances without having to reconfigure or modify their VMs.

New features were also announced for VMware Horizon 7 installations running on AWS, and customers running VMware Cloud on AWS will also soon have access to customer support from within their VMware environments – a feature that VMware says it’s planning to bring to the rest of its products at some point in the future.

There was a raft of smaller-scale partnership announcements too, including the integration of services from Okta, Carbon Black and Google into VMware’s Workspace ONE VDI platform, which now also supports DeX-enabled Samsung devices like the Galaxy S9. Dell Provisioning for Workspace ONE is also now available as part of the Dell ProDeploy Client Suite, allowing customers to bolt additional deployment services onto their Workspace ONE provisioning orders at a reduced rate.

While not strictly speaking a partnership, one of the most interesting announcements was the news that VMware would be acquiring Heptio, a company specialising in Kubernetes tools and development that was founded by Craig McLuckie and Joe Beda, two of the original founders of Kubernetes. VMware will be looking to use the skills and technologies that it acquires as part of the deal to improve its PKS offering, increasing its strength in the container space.

“The Heptio news this morning made my day,” said Jim Zemlin, executive director of the Linux Foundation. “Craig McLuckie and Joe Beda were instrumental in the creation of Kubernetes and the founding of the Cloud Native Computing Foundation. We are all happy for their success.”

“Following so closely after the IBM/Red Hat news, this is yet another example of a large company that believes open source and open cloud computing are critical to future growth.”

VMworld Europe 2018: VMware beefs up hybrid portfolio with Cloud Foundation 3.5 release


Adam Shepherd

7 Nov, 2018

VMware has beefed up its hybrid cloud offering, announcing the release of VMware Cloud Foundation 3.5 at its annual European conference VMworld Europe, along with updates to its Workspace ONE VDI platform and its VMware Cloud Verified programme.

Cloud Foundation 3.5 for multi-cloud complexity management

Designed to support hybrid cloud deployments, Cloud Foundation 3.5 introduces support for VMware’s latest product versions, including the company’s Kubernetes platform VMware PKS (through integration with VMware NSX-T 2.3), the latest version of vSphere, vRealize Automation 7.5 and vRealize Operations 7.0.

VMware’s parent company Dell EMC is also taking the opportunity to offer a sneak peek at the Cloud Foundation software running on VxRail, the hyper-converged infrastructure appliance co-designed by the two companies.

It’s unclear whether VxRail deployments will be supported by the time Cloud Foundation 3.5 launches (expected before the end of VMware’s fiscal year in February), but it will be validated for use with Dell EMC’s vSAN Ready Nodes running on the company’s PowerEdge MX platform – Dell’s recently-launched ‘kinetic infrastructure’ designed to support software-defined data centre projects.

It will also be heavily integrated with HPE’s composable infrastructure, with customers able to manage hardware run by HPE Synergy Composer and OneView through VMware’s SDDC Manager software. Customers can also deploy Cloud Foundation to the public cloud through some of VMware’s many partners, including IBM and AWS, with which VMware has just deepened its partnerships.

“Fundamentally, we see hybrid clouds as being driven largely by IT operations; proven infrastructure, production environments – and the public cloud are a range of consumers and more driven by developers and line of business,” VMware CEO Pat Gelsinger said during a keynote speech at the event.

“The VMware Cloud Foundation is essentially the full recipe for building a cloud environment. Virtualised compute, storage and networking with a layer of automation and operations – and as I describe it, the rule of the cloud: ‘ruthlessly automate everything’. Every people operation becomes an automation solution.”

Elsewhere, the company has also announced version 4.0 of its vRealize Network Insight product, with the ability to troubleshoot connectivity between apps in hybrid environments, as well as the connection between on-premise VMs and AWS EC2 instances. Support for Cisco ACI underlay and ASA firewall will also be coming in this new version (also set to release before February), alongside new visualisation features for NSX-T topology.

vRealize Operations will also be getting a new feature, in the form of Skyline Proactive Support. This automated support system uses gathered data to provide pre-emptive recommendations to keep customers’ infrastructure ticking over smoothly, and also automates the process of uploading log files to VMware’s technical support staff. Skyline Proactive Support will arrive early next year.

Endpoint device management with Workspace ONE

Workspace ONE has had a number of tweaks and tune-ups as well. Workspace ONE Intelligence, the analytics and automation component of VMware’s VDI solution, has now been updated to support the creation of integrations with third-party systems like service desk platforms.

Workspace ONE also now supports Sensors for macOS, which allows admins to query various details about managed devices, such as configuration, hardware and BIOS information. The feature initially supported Windows 10 devices when it was introduced earlier this year, but has now been expanded to cover Macs as well.
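Sensors are essentially small scripts that run on the managed device and report a single value back to the console by printing it. A hypothetical macOS sensor in Python might report the OS build number, for example:

```python
#!/usr/bin/env python3
# Hypothetical Workspace ONE sensor for macOS: sensors are small scripts
# whose single printed value is reported back to the admin console.
import subprocess

build = subprocess.run(
    ["sw_vers", "-buildVersion"],  # macOS build number, e.g. "22G120"
    capture_output=True,
    text=True,
).stdout.strip()

print(build)  # the sensor's reported value
```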

Elsewhere, Workspace ONE’s Boxer email client now supports G Suite email accounts, and Workspace ONE supports Samsung’s DeX platform, meaning VMware’s VDI platform can be run on devices like the Galaxy S9 and Tab S4 while they’re in desktop mode. Support for Flexera AdminStudio has been added too, allowing devs to export Win32 apps directly to their Workspace ONE catalogue.

“2018 has been a transformative year for our Workspace ONE platform,” said the company’s senior vice president and general manager for end-user computing, Shankar Iyer. “With today’s announcement we continue to deliver new capabilities at a blistering pace that fully embrace the heterogeneity we see across customers in the industry today. And, we have no intention of slowing down.”

VMware’s virtual cloud on AWS

The conference was peppered with announcements regarding VMware’s cloud partners, too. VMware Cloud on AWS deployments running the company’s Horizon 7 endpoint virtualisation software now have support for Instant Clones and App Volumes, offering customers a reduction in storage consumption of up to 80% and a spin-up time of around two seconds per virtual desktop instance.

VMware Cloud on AWS deployments of Horizon 7 will also be integrated with the company’s Horizon Cloud Service for simplified monitoring, and VMware has teased that admins will soon be able to partially automate their installations of the software.

Finally, the company announced that the VMware Cloud Verified Partner programme has swelled from five companies last year to over 27 companies globally, including more than 12 in Europe. In addition, it is launching new VMware Validated Designs to help partners quickly deploy VMware-approved solutions, and has announced the general availability of a number of previously-launched products. These include VMware Cloud Provider Pod, VMware vCloud Director 9.5, VMware vCloud Availability for Cloud-to-Cloud DR, and VMware vCloud Usage Insight Service.

Not everybody wants to rule the world: Why HPE isn’t worried about catching up to Dell


Adam Shepherd

10 Jul, 2018

Looking at the figures from analysts like Gartner and IDC, one could be forgiven for thinking that HPE is in a spot of trouble; according to the latest reports, the company is trailing behind its main rival Dell Technologies in revenues and market share across both servers and storage.

You would imagine HPE would be concerned about this; its market share has shrunk over the past year whilst Dell’s has expanded, and this trend doesn’t show any immediate signs of stopping. Dell has gone from strength to strength since it swallowed EMC in 2016, while the last few years have been turbulent for HPE, to say the least.

However, the company appears to be weathering the storm. New CEO Antonio Neri seems like a strong and confident leader, its recent financial results have been showing improvement, and recent announcements about its intentions to simplify its channel programme have met with approval from partners.

Now that HPE has regained some stability, surely it’s looking to retake its position at the head of the infrastructure market? Yet, according to Mark Linesch, vice president of strategy for HPE, the company isn’t remotely concerned with whether or not it holds the market crown.

“Yeah, Dell’s got a couple of points of share according to Gartner – big deal,” he tells Cloud Pro.

“We’re not worried about Dell in servers at all. They’re a tough competitor, and we take them very seriously, but no – why would we worry about Dell getting a couple of points on us in servers? Who cares?”

Instead of chasing rankings, he says, the company is focusing on delivering maximum value and satisfaction to its customers, trying to help them solve their business problems by building the best infrastructure it possibly can.

This might sound like excuses from a company hoping to save face after losing the top spot that it held for so many years, and that may well be the case. However, downplaying its traditional infrastructure to a certain extent may actually be a sound strategic move for the vendor.

“I think it’s important that at this time of its existence – a new CEO, spin outs complete, et cetera – that HPE demonstrate to the market that it can set realistic goals and achieve them, or over-achieve, even,” says 451 Research co-founder William Fellows. “I don’t think that needs to be about catching Dell.”

On the other hand, Forrester senior analyst Naveen Chhabra warns that Dell is one competitor that shouldn’t be underestimated.

“While there is no doubt that HPE is gaining customers and market share, it absolutely needs to keep an eye on the market momentum,” he says. “Dell has forged a great number of technology partnerships, has a great ecosystem internally and externally.

“Dell has its own share of issues, but nothing notable enough that HPE should not be worried about Dell. Dell has a formidable family of technology offerings across its multitude of businesses.”

A shift to the ‘Intelligent Edge’

Both experts agree, however, that the biggest imminent threat to HPE is not Dell – or any other vendor, for that matter. Instead, it’s the industry’s growing shift towards the cloud.

As cloud infrastructure becomes more robust, more affordable and more popular, HPE needs to change up its strategy. To borrow a phrase from its sister company, it needs to reinvent itself.

HPE is doing this, counterintuitively, by embracing the cloud – or at least certain aspects of it. In particular, it’s adopting cloud-like service models for its on-premise infrastructure, offering consumption-based pricing for its hardware customers through HPE GreenLake. Using its traditional infrastructure business as a bedrock, the company is hoping that it can build long-term services and subscription-based revenue models that will sustain it going forward.

In addition to this new cloud-style go-to-market model, HPE is also putting considerable weight behind what it calls ‘the intelligent edge’ – the mish-mash of connected devices, peripherals, networking hardware and industrial equipment that comprises everything that’s not in the cloud or in the data centre. The company is ploughing $4 billion into the intelligent edge over the next four years, and has indicated that it’s a significant strategic priority.

According to Chhabra, while this is a smart play for the company, it’s not without its risks, and he cautions that the market still isn’t totally mature.

“There is no doubt that the edge business is growing and hence almost all the large infrastructure vendors are putting their bets on ‘expected developments’ on the intelligent edge,” he says. “However, we still need that to mature to levels where their independent and collective losses by adoption of public cloud can be offset.”

“In my humble and honest opinion, the messaging and focus on ‘the intelligent edge’ is directional and still at corporate levels. I don’t see concrete evidences of the developments – like technology and go-to-market partnerships, solution development, et cetera – that the infrastructure vendors are making. These developments are important and critical to ensure they are either ahead of the market, or take the leading position and create a niche for themselves.”

It’s true that HPE is no longer the market leader in server shipments, and that isn’t set to change any time soon – but that might not matter. Market trends suggest that as the traditional on-prem infrastructure business is increasingly eaten by the cloud, pivoting to emerging technologies is going to be the only way that companies like HPE are going to remain relevant.

CEO Antonio Neri says he’s playing the long game with his strategy, and that makes sense. Duking it out with Dell over market share may have been the way things worked with the old HPE, but that’s not the game any more. The two companies may well end up competing on the battlefield of edge computing – Dell has made significant investments in the area itself – but when it comes to old-school infrastructure, HPE may have to lose the battle in order to win the war.


Box CEO Aaron Levie says Facebook data scandals could undermine trust in Silicon Valley


Adam Shepherd

9 Jul, 2018

Box CEO Aaron Levie has warned that the actions of Google and Facebook are a “contagion” which could result in major organisations losing trust in Silicon Valley as a whole.

Speaking to Recode’s Kara Swisher, he said that Box – and, by extension, other enterprise-focused companies – could find themselves suffering if the actions of more well-known tech firms cast doubt over the motivations of Silicon Valley at large.

“The worst-case scenario for us is that Silicon Valley gets so far behind on these issues that we just can’t be trusted as an industry. And then you start to have either companies from other countries,” he said, “or you have just completely different approaches and architectures to technology.”

Even though enterprise-focused tech companies might think they are insulated from the current wave of data-harvesting and privacy scandals – by virtue of the fact that they don’t handle public data in the same way – the blow-back those scandals cause could result in a loss of confidence throughout the market.

“We rely on the Fortune 500 trusting Silicon Valley’s technology, to some extent, for our success,” Levie said. “When you see that these tools can be manipulated or they’re being used in more harmful ways, or regulators are stamping them down, then that impacts anybody, whether you’re consumer or enterprise.”

As a company, Box itself isn’t worried by the looming threat of increased regulation – something that has been mooted as a potential way to curb the excesses of Facebook and Google. By virtue of the fact that many of Box’s customers are in heavily-regulated industries like banking and life sciences, the company is “almost by proxy regulated”, Levie says.

The biggest barrier to regulating the largest tech companies, he argued, is that they’re so broad and diffuse that it’s difficult to apply single regulations to them. Instead, what’s more likely according to Levie is the application of separate pieces of legislation regarding individual issues, such as campaign financing, self-driving vehicles and AI use within healthcare.

In order to successfully achieve this, he said, government and regulatory bodies should be staffed with “super-savvy” individuals who understand the industry and the tech they will be dealing with.

“We have an extremely strong vested interest in ensuring that Silicon Valley and DC are operating effectively,” he said. “We care that we get through this mess, and that Google resolves their issues, and Facebook resolves their issues, and so on.”


View from the airport: HPE Discover 2018


Adam Shepherd

25 Jun, 2018

This year marks my very first HPE Discover, stepping in to cover for IT Pro’s resident HPE expert Jane McCallion, and it’s been a good introduction to the company’s new direction – it’s safe to say that the HPE we saw this week is a rather different beast to the enterprise giant of old.

This year’s event was new CEO Antonio Neri’s first Discover as head of the company, and the first real opportunity for HPE’s customers, partners and staff to get a sense of his leadership style without the shadow of former boss Meg Whitman hanging over him. More than anything else, he came across as profoundly genuine; he’s been with the company for more than 20 years, starting out in the customer service department and working his way up the ranks, and it’s clear that he eats, sleeps, lives and breathes HPE.

He obviously cares deeply about the company, and one of the messages he kept repeating throughout the week was that he’s planning for the long game, rather than chasing short-term successes. As far as I’m concerned, HPE couldn’t be in safer hands from a leadership perspective.

With that said, however, I do have some slight reservations coming away from Discover 2018.

For one thing, the company’s strategy feels somewhat confused – HPE is an infrastructure provider first and foremost, but the company had virtually no new technology to show off. There were some minor updates to its Edgeline systems and new software-defined networking from Aruba, but other than that, the company’s traditional storage and server products hardly got a look-in.

This is slightly troubling for a company whose main business still revolves around these products. HPE has been putting a lot of effort into building out its GreenLake flexible consumption offering – which is a good direction to explore for HPE and its channel partners, especially in light of businesses’ growing desire to shift their spending from CapEx to OpEx.

On the other hand, the fact remains that even with flexible consumption, customers will still need something to consume, and we’re slightly worried that the company may soon end up slipping behind its rivals in traditional infrastructure R&D.

There is one notable exception to this – The Machine.

Long-time HPE followers will know that The Machine is the surprisingly awesome-sounding codename given to the company’s memory-driven computing project, which has had something of a chequered history. Martin Fink, the ex-CTO who was the brains behind the project, retired two years ago, and many believed The Machine had retired with him.

Amazingly, however, this year’s Discover saw HPE actually launch something off the back of the project, in the form of a cloud-based environment designed to let developers play around with memory-driven computing. It may not be quite what we were initially promised – not yet, anyway – but it shows that The Machine is still chugging along.

As for the rest of the show, most of the focus was placed on what HPE is branding ‘the intelligent edge’. Translated, this means ‘anything that’s not a data centre or the cloud’. Astute readers will notice that this covers a pretty huge range of products, environments and use-cases, from industrial IoT systems, to office networking, to connected cars and more.

HPE has committed to a $4 billion investment in ‘the intelligent edge’ over the next four years, and while it’s a smart play for the company (not to mention being in line with its previous strategy), I can’t help but worry that covering such a broad area with a single blanket term runs the risk that it’ll lose all meaning.

One thing that was also repeatedly emphasised was HPE’s renewed focus on customers and partners, and unlike some other enterprise companies, it does seem sincere in this regard. Whether or not its more ambitious bets around edge computing and flexible consumption pay off, it seems like HPE has its heart firmly in the right place, and we’ll be watching with interest when Discover Europe rolls around in Autumn.


Aruba’s SD-Branch hooks SD-WAN, wired and wireless networks together


Adam Shepherd

19 Jun, 2018

Aruba has designed a new software-defined networking (SDN) tool to allow multi-site customers to manage their networking in a simpler and more streamlined way.

The HPE-owned company’s new SD-Branch links SD-WAN, wired and wireless networking infrastructure together, routing them all through Aruba’s new Branch Gateways so they can be managed and controlled through the cloud-based Aruba Central management platform.

In addition, the inclusion of Aruba’s ClearPass policy manager means network policy can be created and enforced remotely and automatically, without administrators having to manually provision equipment or conduct on-site maintenance. For Aruba, the aim is to help businesses cut out inefficiency, speed up deployment and reduce networking complexity.

“First and foremost, this software-defined branch solution and architecture significantly increases IT’s ability to respond in real time to the business’s need to be agile,” Aruba’s Lissa Hollinger said at HPE Discover 2018 yesterday, citing the fact that many customers have 10 to 12 IT staff managing up to 3,000 branches.

“You can imagine how complex that is if you don’t have a centralised way to automate deployment and provisioning and monitoring, so this significantly increases IT’s ability to be agile and to focus on more strategic initiatives as opposed to just keeping the lights on,” she added.

Simple, zero-touch provisioning is another key benefit of the service, and vice-president and general manager of Aruba’s cloud and SD-Branch division, Kishore Seshadri, noted that this is a critical feature for many customers.

“If you own a thousand cafes or a thousand restaurants, and you want to deploy these solutions,” he explained, “previously you could do this across two or three years – now we’re asked to be able to do this in two or three months. You have to assume that there is no technical resource on the ground, there is no IT team on the ground, so it’s just expected that you will ship a device to the location, somebody unpacks it, plugs it in; it just has to work.”
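In Aruba’s case the gateway phones home to Aruba Central; as a generic, purely illustrative sketch of the zero-touch pattern – the activation endpoint and payload below are hypothetical, not Aruba’s actual protocol – the device identifies itself by serial number on first boot and pulls its site-specific configuration down:

```python
# Generic zero-touch provisioning sketch (not Aruba's actual protocol):
# on first boot, the device posts its serial number to a hypothetical
# activation endpoint and receives its site-specific configuration.
import json
import urllib.request

ACTIVATION_URL = "https://provisioning.example.com/activate"  # placeholder

def fetch_site_config(serial_number: str) -> dict:
    req = urllib.request.Request(
        ACTIVATION_URL,
        data=json.dumps({"serial": serial_number}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. WAN settings, tunnels, policy version

if __name__ == "__main__":
    config = fetch_site_config("SN-0001-EXAMPLE")
    print(config)
```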

As with any networking technology, security is a critical feature of SD-Branch. Aruba has partnered with network security vendors including Zscaler, Palo Alto Networks and Check Point to offer cloud-based firewall protections, in addition to the Branch Gateway’s built-in firewall and deep packet inspection tools.

The new branch gateway units also offer context awareness, allowing for dynamic traffic optimisation to ensure maximum quality of service for bandwidth-hungry business-critical devices and applications. This also feeds into policy-based routing tools that ensure organisations can specify exactly which services they want to prioritise.

SD-Branch is hardware-agnostic, in that customers do not necessarily need to deploy Aruba’s switches or access points in order to make use of it – although the company claimed that customers may be limited by the features offered by third-party vendors.

In order to deploy the new package, customers will need to be subscribed to Aruba Central, with a headend gateway in their data centre to manage traffic and a branch gateway unit in each physical location. Prices start at $1,495 each for the physical gateway hardware, plus $450 per gateway per year in subscription fees.

HPE invests $4 billion in edge computing


Adam Shepherd

19 Jun, 2018

HPE is set to spend $4 billion on edge computing over the next four years, underlining the company’s strategic shift away from its traditional datacentre roots.

Speaking at the company’s annual conference, HPE Discover, CEO Antonio Neri yesterday revealed that his company would invest heavily to support the collection, processing and analysis of data outside of datacentre or cloud environments. This investment will be focused on research and development in the pursuit of new products and services in areas including automation, AI, edge computing and security.

“The edge is where we interact with our customers. That’s what the edge is all about,” Neri told attendees. “Actually, the edge is anywhere technology gets put into action. And I believe the edge is the next big opportunity for all of us.

“This next revolution requires what we call an ‘edge-to-cloud’ architecture. A world with millions of clouds distributed everywhere – that’s the future as we see it. And HPE is uniquely positioned to drive this next revolution.”

This move is a direct response to the explosion in data that has occurred over the last few years, Neri said, explaining that a large portion of the data generated at the edge is still being lost or wasted because businesses do not have the capacity to process it, and that the forthcoming development of smart cities, driverless cars and other tech innovations will only increase the amount of data being generated.

“The reality is that two years from now, we are going to generate twice the amount of data we have generated in the entirety of human history,” Neri said, “and that’s an incredible opportunity. Data that actually has the potential value to drive insights and actions across our world. To change our lives and our businesses.”

One example Neri cited was Tottenham Hotspur FC, which is using an ‘edge-to-cloud solution’ delivered by HPE’s PointNext and Aruba divisions to deliver high-speed networking for fans, combined with personalised interactive experiences and new merchandising opportunities.

HPE isn’t the only company that’s putting significant store by edge computing, though; its main rival, Dell Technologies, is also investing in the area through subsidiary VMware. The company launched a suite of new IoT packages for edge compute use cases at this year’s Mobile World Congress, powered by Dell’s hyper-converged infrastructure.

“We believe the enterprise of the future will be edge-centric, cloud-enabled and data-driven,” Neri concluded. “Those that can act with speed and agility on a continuous stream of insights and knowledge will win. That’s why our strategy is to accelerate your enterprise from edge to cloud, helping connect all of your edges, all your clouds, everywhere.”

HPE launches hybrid cloud-as-a-service offering


Adam Shepherd

20 Jun, 2018

HPE has launched a new consumption-based hybrid cloud-as-a-service offering, designed to help customers manage costs and reduce complexity within their hybrid IT infrastructure deployments.

Offered under the company’s IT-as-a-service umbrella brand GreenLake, HPE GreenLake Hybrid Cloud is a managed service that allows customers to more efficiently consume cloud services and on-premise infrastructure as part of a long-term monthly cost rather than a large upfront investment.

GreenLake Hybrid Cloud customers can have their cloud infrastructure – both public and private – designed, configured and deployed by HPE, and then maintained, supported and optimised on an ongoing basis. The company is utilising technology and capabilities from its PointNext consulting division, as well as its recent acquisitions, Cloud Technology Partners and RedPixie.

Similar to the company’s GreenLake Flex Capacity consumption model, the service uses metering technology from Cloud Cruiser, the cloud monitoring firm HPE acquired. Customers can closely monitor the costs of their cloud services and set limits on spending, scaling up and down as necessary.
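Cloud Cruiser’s engine is proprietary, but the underlying pattern is simple: rate each usage record, aggregate the spend, and compare it against a cap. A toy sketch – the record fields and prices below are made up for illustration:

```python
# Toy consumption-metering sketch (not Cloud Cruiser's actual engine):
# rate usage records, total the spend, and flag when a cap is breached.
# Record fields and unit prices are hypothetical.
PRICE_PER_UNIT = {"vm_hours": 0.05, "gb_stored": 0.02}

def metered_spend(usage_records: list) -> float:
    """Total spend across records like {'metric': 'vm_hours', 'units': 740}."""
    return sum(PRICE_PER_UNIT[r["metric"]] * r["units"] for r in usage_records)

def check_cap(usage_records: list, monthly_cap: float) -> None:
    spend = metered_spend(usage_records)
    if spend >= monthly_cap:
        print(f"ALERT: spend ${spend:.2f} has hit the ${monthly_cap:.2f} cap")
    else:
        print(f"OK: spend ${spend:.2f} of ${monthly_cap:.2f}")

if __name__ == "__main__":
    records = [
        {"metric": "vm_hours", "units": 740},
        {"metric": "gb_stored", "units": 5000},
    ]
    check_cap(records, monthly_cap=120.0)
```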

Scott Ramsey, vice president of consumption and managed services for HPE PointNext, said this model can save organisations considerable amounts of money compared to traditional infrastructure procurement models.

“We’ve got strong empirical evidence from our 540 [existing GreenLake] customers that we’ve got, that your total cost of ownership is in the region of 25% to 30% lower in this type of model,” he told Cloud Pro. “If you’re a customer or a business, and you’re not interested in something that can save you 25% to 30% total cost of ownership, then I’ve got to question what you’re doing, to be honest.”

The service brings with it benefits in a number of areas, according to HPE. Aside from the obvious cost control benefits that come from a consumption-based model, GreenLake Hybrid Cloud also allows businesses to trim additional operation costs by reducing the need to train IT staff in deploying and maintaining cloud infrastructure, the vendor claimed. In addition, HPE said it reduces the burden these tasks place on IT staff, freeing them up to work on projects that can deliver more practical business value.

“This model with HPE GreenLake Hybrid Cloud allows us to take out that heavy lifting that isn’t really about driving innovation in the business, but is about operating the underlying infrastructure,” explained John Treadway, senior vice president of Cloud Technology Partners. “It’s not the most value-adding thing that an IT organisation should be focused on. It should be focused on solutions to drive revenue growth and to provide analytics in business decision-making. Us taking that on allows the clients to actually be faster.”

HPE has also used the set of internal rules that it developed as part of its acquisition of Cloud Technology Partners – which PointNext SVP Ana Pinczuk said covers some 1,000 regulatory and compliance standards – to build compliance management capabilities into GreenLake Hybrid Cloud, which will supposedly allow customers to automate much of the work that goes into ensuring compliance.

The new service supports public cloud deployments on Microsoft Azure and AWS, and private cloud infrastructure via Microsoft Azure Stack and HPE ProLiant for Azure Stack, all of which is managed by HPE OneSphere, the company’s over-arching management layer.

“For some of our customers, their hybrid strategy is really Azure on and off-premise,” said Ric Lewis, senior vice president and general manager for HPE’s cloud division. “But what some customers don’t realise is Microsoft Azure Stack and Microsoft Azure are pretty separate. They run the same kind of code on the same base, but it’s not like you can move things back and forth, and it’s difficult to manage between the two.

“With GreenLake Hybrid Cloud and some of the management and analytics that we lift from HPE OneSphere, we can help customers stitch those two fairly separate things together, regardless of the fact that they run the same thing; at least they’ll look like they’re part of the same estate and we can make that seamless for customers.”

Customers using HPE hardware for their private cloud deployments can also take advantage of seamless automatic management and provisioning via the company’s OneView on-prem automation product, which can be controlled directly via an integration with OneSphere.

However, those using alternative hardware vendors to power their infrastructure aren’t left out in the cold; OneSphere is vendor-agnostic, meaning that you can use GreenLake Hybrid Cloud to manage your infrastructure regardless of whether you’re using servers from HPE, Dell EMC, Broadberry, Lenovo or anyone else.

Although customers have a wealth of choice in terms of the infrastructure hardware they want to use to run their private clouds, they are more limited when it comes to which clouds they can actually run.

Out of the box, GreenLake Hybrid Cloud only supports AWS and Azure public cloud deployments, and despite Lewis assuring reporters that HPE is actively working on adding support for Cloud28+ partners, Ramsey told Cloud Pro that in the immediate future, the company won’t be adding support for any additional providers.

“We’ve picked the two giants of the industry to work with,” he said, “and both have a lot of strength in the enterprise arena. Over the next 12 months or so, I would say our focus will be going more vertical with those guys, getting stronger value propositions with AWS and Azure.

“For now, our roadmap is really focused in upon making sure that the AWS and the Azure experience gets better and richer, so we’ll add more features, more functionality, more capability, more tooling into that as we go forward. That’ll be where our primary focus is going to be.”

He did, however, point out that this only applies to GreenLake Hybrid Cloud when viewed as a turnkey solution – if customers come to HPE with specific requirements that can best be met by a local Cloud28+ provider, this will be taken into consideration.

He also stated that the addition of Google Cloud support as standard was “a distinct possibility”, but said that he had no specific plans to include it. “They’re fundamentally still pretty much in the consumer space from a cloud perspective,” he said, “[but] they’re the obvious next big player that we’d want to think about working with.”

Similarly, while Azure Stack is the only private cloud infrastructure officially supported out of the box, Ramsey confirmed that HPE is happy to support other providers if a customer has specific needs.

“We have the GreenLake solutions – which I always describe as the ‘turnkey’ solutions, where we give it to you and it’s ready to go out the box – but if a customer says to me ‘I want to run an OpenStack private cloud’, we will solution that for them, and we have done that [for some customers].”