Why did anyone think HP was in it for public cloud?

HP president and chief executive officer Meg Whitman (pictured right) is leading HP’s largest restructuring ever

Many have jumped on a recently published interview with Bill Hilf, the head of HP’s cloud business, as a sign that HP is finally coming to terms with its inability to make a dent in Amazon’s public cloud business. But what had me scratching my head is not that HP would so blatantly cede ground in this segment – but why so many assumed it wanted that ground in the first place.

For those of you who didn’t see the NYT piece, or the subsequent pieces from the hordes of tech insiders and journalists more or less toeing the “I told you so” line, Hilf was quoted as candidly saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head [with AWS].”

HP has made mistakes in this space – the list is long, and others have done a wonderful job of fleshing out the classic “large incumbent struggles to adapt to new paradigm” narrative that the company’s story, so far, smacks of.

I would only add that it’s a shame HP didn’t pull a “Dell” and publicly get out of the business of directly offering public cloud services to enterprise users – a move that served Dell well. Standing up public cloud services is by most accounts an extremely capital-intensive exercise that a company like HP, given its current state, is simply not best positioned to see through.

But it’s also worth pointing out that a number of interrelated factors have been pushing HP towards private and hybrid cloud for some time now, and despite HP’s insistence that it still runs the largest OpenStack public cloud – a claim other vendors have made in the past – its dedication to public cloud has always seemed superficial at best (particularly if you’ve had the, um, privilege, of sitting through years of sermons from HP executives at conferences and exhibitions).

HP’s heritage is in hardware – desktops, printers and servers – and servers still represent a reasonably large chunk of the company’s revenue, something it has no choice but to keep in mind as it seeks to move up the stack in other areas (its NFV and cloud workload management-focused acquisitions of late attest to this, beyond the broader industry trend). According to the latest Synergy Research figures the company still has a lead in the cloud infrastructure market, but primarily in private cloud.

It wants to keep that lead in private cloud, no doubt, but it also wants to bolster its pitch to the scale-out market specifically (where telcos are quite keen to play) without alienating its enterprise customers. This also means delivering capabilities that are starting to see increased demand among that segment, like hybrid cloud workload management, security and compliance tools, and offering a platform that has enough buy-in to ensure a large ecosystem of applications and services will be developed for it.

Whether OpenStack is the best way of hitting those sometimes competing objectives remains to be seen – HP hasn’t had these products in the market very long, and take-up has been slow – but that’s exactly what Helion is to HP.

Still, it’s worth pointing out that OpenStack, while trying to evolve capabilities that would whet the appetites of communications service providers and others in the scale-out segment (NFV, object storage, etc.), is seeing much more take-up from the private cloud crowd. Indeed one of the key benefits of OpenStack is easy burstability into, and (more of a work in progress) federatability between, OpenStack-based public and private clouds. The latter, by the way, is definitely consistent with the logic underpinning HP’s latest cloud partnership with the European Commission, which looks at – among other things – the potential federatability of regional clouds that have strong security and governance requirements.

Even HP’s acquisition strategy – particularly its purchase of Eucalyptus, a software platform that makes it easy to shift workloads between on-premise systems and AWS – seems in line with the view that a private cloud needs to be able to lean on someone else’s datacentre from time to time.

HP has clearly chosen its mechanism for doing just that, just as VMware looked at the public cloud and thought much the same in terms of extending vSphere and other legacy offerings. Like HP, it wanted to hedge its bets and stand up its own public cloud platform because, apart from the “me too” aspect, it thought doing so was in line with where users were heading, and to a much lesser extent didn’t want to let AWS, Microsoft and Google have all the fun if it didn’t have to. But public cloud definitely doesn’t seem front-of-mind for HP, or VMware, or most other vendors coming at this from an on-premise heritage (HP’s executives mentioned “public cloud” just once in the past three quarterly results calls with journalists and analysts).

Funnily enough, even VMware has come up with its own OpenStack distribution, and now touts a kind of “one cloud, any app, any device” mantra that has hybrid cloud written all over it – ‘hybrid cloud service’ being what the previous incarnation of its public cloud service was called.

All of this is of course happening against the backdrop of the slow crawl up the stack with NFV, SDN, cloud resource management software, PaaS, and so forth – not just at HP. Cisco, Dell and IBM are all looking to make inroads in software, while at the same time on the hardware side fighting off lower-cost Asian ODMs that are – with the exception of IBM – starting to significantly encroach on their turf, particularly in the scale-out markets.

The point is, HP, like many old-hat enterprise vendors, knows that what ultimately makes AWS so appealing isn’t its cost (it can actually be quite expensive, though prices – and margins – are dropping) or ease of procurement as an elastic hosting provider. It’s the massive ecosystem of services that gives the platform so much value, and the ability to tap into them fairly quickly. HP has bet the farm on OpenStack’s capacity to evolve into a formidable competitor to AWS in that sense (IBM and Cisco are also, to varying degrees, toeing a similar line), and it shouldn’t be dismissed outright given the massive buy-in that open source community has.

But – and some would view this as part of the company’s problem – HP’s bread and butter has been and continues to be in offering the technologies and tools to stand up predominantly private clouds, or in the case of service providers, very large private clouds (it’s also big on converged infrastructure), and to support those technologies and tools – which really isn’t, directly, the business that AWS is in, despite there being substantial overlap in the enterprise customers they go after.

AWS, on the other hand, while it started in this space as an elastic hosting provider offering CDN and storage services, has more or less evolved into a kind of application marketplace, where any app can be deployed on almost infinitely scalable compute and storage platforms. Interestingly, AWS’s messaging has shifted from outright hostility towards the private cloud crowd (and private cloud vendors) towards being more open to the idea that some enterprises simply don’t want to expose their workloads or host them on shared infrastructure – in part because it understands there’s growing overlap, and because it wants them to on-board their workloads onto AWS.

HP’s problem isn’t that it tried and failed at the public cloud game – you can’t really fail at something if you don’t have a proper go at it; and on the private cloud front, Helion is still quite young, as are OpenStack, Cloud Foundry, and many of the technologies at the core of its revamped strategy.

Rather, it’s that HP, for all its restructuring efforts, talk of change and trumpeting of cloud, still risks getting stuck in its old-world thinking, which could ultimately hinder the company further as it seeks to transform itself. AWS senior vice president Andy Jassy, who hit out at tech companies like HP at the unveiling of Amazon’s Frankfurt-based cloud service last year, hit the nail on the head: “They’re pushing private cloud because it’s not all that different from their existing operating model. But now people are voting with their workloads… It remains to be seen how quickly [these companies] will change, because you can’t simply change your operating model overnight.”

Can the cloud save Hollywood?

The film and TV industry is warming to cloud

You don’t have to watch the latest ‘Avengers’ film to get the sense that the storage and computational requirements of film and television production are continuing their steady increase. But Guillaume Aubichon, chief technology officer of post-production and visual effects firm DigitalFilm Tree (DFT), says production and post-production outfits may find use in the latest and greatest in open source cloud technologies to help plug the growing gap between technical needs and capabilities – and unlock new possibilities for the medium in the process.

Since its founding in 2000, DFT has done post-production work for a number of motion pictures as well as television shows airing on some of the largest networks in America, including ABC, TNT and TBS. And Aubichon says that, like many in the industry, DFT’s embrace of cloud came about because the company was trying to address a number of pain points.

“The first and the most pressing pain point in the entertainment industry right now is storage – inexpensive, commodity storage that is also internet ready. With 4K becoming more prominent we have some projects that generate about 12TB of content a day,” he says. “The others are cost and flexibility.”

This article appeared in the March/April issue of the BCN Magazine. Click here to download your copy today.

Aubichon explains three big trends are converging in the entertainment and media industry right now that are getting stakeholders from production to distribution interested in cloud.

4K broadcast, a massive step up from high definition in terms of the resources required for rendering, transmission and storage, is becoming more prominent.

Next, IP broadcasters are supplanting traditional broadcasters – Netflix, Amazon and Hulu are taking the place of CBS and ABC, slowly displacing the traditional content distribution model.

And films are no longer exclusively shot in the Los Angeles area – with preferential tax regimes and other cost-based incentives driving production of English-language motion pictures outward to Canada, the UK, Central Europe and parts of New Zealand and Australia.

“With production and notably post-production costs increasing – both in terms of dollars and time – creatives want to be able to make more decisions in real time, or as close to real time as possible, about how a shot will look,” he says.

Can Cloud Save Hollywood?

DFT runs a hybrid cloud architecture based on OpenStack and depending on the project can link up to other private OpenStack clouds as well as OpenStack-based public cloud platforms. For instance, in doing some of the post-production work for Spike Jonze’s HER the company used a combination of Rackspace’s public cloud and its own private cloud instances, including Swift for object storage as well as a video review and approval application and virtual file system application – enabling creatives to review and approve shots quickly.

The company runs most of its application landscape off a combination of Linux and Microsoft virtualised environments, but is also a heavy user of Linux containers – which have benefits as a transmission format and also offer some added flexibility, like the ability to run simple compute processes directly within a storage node.

Processes like video and audio transcoding are a perfect fit for containers because they don’t necessarily warrant an entire virtual machine, and because the compute and storage can be kept so close to one another.

Aubichon: 'My goal is to help make the media and entertainment industry avoid what the music industry did'

“Any TV show or film production and post-production process involves multiple vendors. For instance, on both Mistresses and Perception there was an outside visual effects facility involved as well. So instead of having to take the shots, pull them off an LTO tape, put them on a drive, and send them over to the visual effects company, they can send us a request and we can send them an authorised link that connects back to our Swift object storage, which allows them to pull whatever file we authorise. So there’s a tremendous amount of efficiency gained,” he explains.
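The authorised link Aubichon describes maps closely onto Swift’s temporary URL feature: the object path and an expiry time are signed with a shared key, so the link only works until it expires. A minimal sketch of that signing scheme – the account path, object name and key below are invented for illustration:

```python
import hmac
import time
from hashlib import sha1

def temp_url(path: str, key: str, ttl_seconds: int, method: str = "GET") -> str:
    """Build a Swift-style temporary URL query string for a single object.

    `path` is the object path as it appears in the URL,
    e.g. /v1/AUTH_account/container/object.
    """
    expires = int(time.time()) + ttl_seconds
    # Swift signs "METHOD\nEXPIRES\nPATH" with the account/container temp-url key
    body = f"{method}\n{expires}\n{path}"
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Hypothetical object: a shot handed off to a VFX vendor
link = temp_url("/v1/AUTH_dft/shots/ep101_vfx_0042.mov", "s3cr3t", ttl_seconds=3600)
print(link)
```

Revoking access early is then a matter of rotating the key, which invalidates every link signed with it.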

For an industry just starting to move away from physical transmission, that kind of workflow can bring tremendous benefits to a project. Although much of the post-production work for film and television still happens in LA, an increasing number of shows aren’t shot there; DFT for instance is currently working on shows shot in Vancouver, Toronto, and Virginia. So what the company does is run an instance of OpenStack on-site where the shooting occurs and feed the raw camera footage into an object storage instance, which is then container-synced back to Los Angeles.

“We’ve even been toying with the idea of pushing raw camera files into OpenStack instances, and having those instances transcode those files into H.265 proxies that could theoretically be pushed over a mobile data connection back to the editor in Los Angeles. The editor could then start cutting in proxies, and 12 to 18 hours later, when the two OpenStack instances have synced that material, you can then merge the data to the higher resolution version,” he says.

“We get these kinds of requests often, like when a director is shooting on location and he’s getting really nervous that his editor isn’t seeing the material before he has to move on from the location and finish shooting.”

So for DFT, he says, cloud is solving a transport issue, and a storage issue. “What we’re trying to push into now is solving the compute issue. Ideally we’d like to push all of this content to one single place, have this close to the compute and then all of your manipulation just happens via an automated process in the cloud or via VDI. That’s where we really see this going.”

The other element here, and one that’s undoubtedly sitting heavily on the minds of the film industry in recent months more than ever, is the security issue. Aubichon says that because the information, where it’s stored and how secure that information is, changes over the lifecycle of a project, a hybrid cloud model – or connectable cloud platforms with varying degrees of exposure – is required to support it. That’s where a feature like federated identity, which in OpenStack is still quite nascent, comes into play. It offers a mechanism for linking clouds, granting and authenticating user identity quickly (and taking access away equally fast), and leaves a trail revealing who touches what content.
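Those three properties – fast grants, fast revocation, and an audit trail – can be sketched with a toy access broker. This is purely an illustration of the idea, not Keystone’s actual federation API; every name here is hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class AccessBroker:
    """Toy stand-in for a federated-identity layer: grant access fast,
    revoke it just as fast, and keep a trail of who touched what."""
    grants: dict = field(default_factory=dict)   # user -> set of assets
    audit: list = field(default_factory=list)    # (timestamp, user, action, asset)

    def grant(self, user, asset):
        self.grants.setdefault(user, set()).add(asset)
        self.audit.append((time.time(), user, "grant", asset))

    def revoke(self, user, asset):
        self.grants.get(user, set()).discard(asset)
        self.audit.append((time.time(), user, "revoke", asset))

    def fetch(self, user, asset):
        # Every access attempt, allowed or not, lands in the audit trail
        allowed = asset in self.grants.get(user, set())
        self.audit.append((time.time(), user, "fetch-ok" if allowed else "fetch-denied", asset))
        return allowed

broker = AccessBroker()
broker.grant("vfx_vendor", "ep101_0042.mov")
assert broker.fetch("vfx_vendor", "ep101_0042.mov")       # access granted
broker.revoke("vfx_vendor", "ep101_0042.mov")
assert not broker.fetch("vfx_vendor", "ep101_0042.mov")   # access gone immediately
```

A real deployment would delegate all of this to an identity service; the point is only that grants, revocations and the who-touched-what trail live in one place.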

“You need to be able to migrate authentication and data from a very closed instance out to something more open, and eventually out to public,” he says, adding that he has spent many of the past few years trying to convince the industry to eliminate any distinction between public and private clouds.

“In an industry that’s so paranoid about security, I’ve been trying to say ‘well, if you run an OpenStack instance in Rackspace, that’s really a private instance; they’re a trusted provider, that’s a private instance.’ To me, it’s just about how many people need to touch that material. If you have a huge amount of material then you’re naturally going to move to a public cloud vendor, but just because you’re on a public cloud vendor doesn’t mean that your instance is public.”

“I spend a lot of time just convincing the entertainment industry that this isn’t banking,” he adds. “They are slowly starting to come around; but it takes time.”

It All Comes Back To Data

Aubichon says the company is looking at ways to add value beyond simply cost and time reduction, with data and metadata aggregation figuring front and centre in that pursuit. The company did a proof of concept for Cougar Town where it showed how people watching the show on their iPads could interact with that content – a “second screen” interactive experience of sorts, but on the same viewing platform.

“Maybe a viewer likes the shirt one of the actresses is wearing on the show – they can click on it, and the Amazon or Target website comes up,” he says, adding that it could be a big source of revenue for online commerce channels as well as the networks. “This kind of stuff has been talked about for a while, but metadata aggregation and the process of dynamically seeking correlations in the data, where there have always been bottlenecks, has matured to the point where we can prove to studios they can aggregate all of this information without incurring extra costs on the production side. It’s going to take a while until it is fully mature, but it’s definitely coming.”

This kind of service assumes there exists loads of metadata on what’s happening in a shot (or the ability to dynamically detect and translate that into metadata) and, critically, the ability to detect correlations in data that are tagged differently.

The company runs a big MongoDB backend but has added capabilities from an open source project called Karma, an ontology mapping service that originated in the museum world. It’s a method of taking two MySQL databases and presenting to users correlations in data that are tagged differently.

DFT took that and married it with the text search function in MongoDB, a NoSQL platform, which basically allows it to push unstructured data into the system and find correlations there (the company plans to seed this capability back into the open source MongoDB community).
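The core idea – surfacing correlations between records that tag the same concept differently – can be sketched in miniature. The tag vocabulary and mapping below are invented for illustration, not Karma’s actual ontology model:

```python
# Map each source's local tags onto a shared vocabulary (hypothetical mapping)
ONTOLOGY = {
    "director": "director",
    "helmer": "director",
    "dp": "cinematographer",
    "director_of_photography": "cinematographer",
}

def normalise(record: dict) -> dict:
    """Rewrite a record's keys into the shared vocabulary."""
    return {ONTOLOGY.get(k, k): v for k, v in record.items()}

def correlate(source_a, source_b):
    """Yield pairs of records that agree on a normalised field value."""
    for ra in map(normalise, source_a):
        for rb in map(normalise, source_b):
            shared = {k for k in ra if k in rb and ra[k] == rb[k]}
            if shared:
                yield ra, rb, shared

# Two sources that tag the same concept differently still correlate
crew_db = [{"helmer": "S. Jonze"}]
shot_db = [{"director": "S. Jonze"}]
matches = list(correlate(crew_db, shot_db))
print(matches)
```

Karma does this against full relational schemas rather than flat dictionaries, but the mapping-then-matching shape is the same.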

“Ultimately we can use all of this metadata to create efficiencies in the post-production process, and help generate revenue for stakeholders, which is fairly compelling,” Aubichon says. “My goal is to help make the media and entertainment industry avoid what the music industry did, and to become a more unified industry through software, through everyone contributing. The more information is shared, the more money is made, and everyone is happy. That’s something that philosophically, in the entertainment industry, is only now starting to come to fruition.”

It would seem open source cloud technologies like OpenStack as well as innovations in the Linux kernel, which helped birth Docker and similar containerisation technologies, are also playing a leading role in bringing this kind of change about.

How to achieve success in the cloud

To cloud or not to cloud? With the right strategy, it need not be the question.

There are two sides to the cloud coin: one positive, the other negative, and too many people focus on one at the expense of the other for a variety of reasons ranging from ignorance to wilful misdirection. But ultimately, success resides in embracing both sides and pulling together the capabilities of both enterprises and their suppliers to make the most of the positive and limit the negative.

Cloud services can either alleviate or compound the business challenges identified by Ovum’s annual ICT Enterprise Insights program, based on interviews with 6,500 senior IT executives. On the positive side both public and private clouds, and everything in between, help:

Boost ROI at various levels: From squeezing more utilization from the underlying infrastructure to making it easier to launch new projects with the extra resources exposed as a result.

Deal with the trauma of major organisational/structural changes, as they can adapt to the ups and downs of requirements evolution.

Improve customer/citizen experience, and therefore satisfaction: This has been one of the top drivers for cloud adoption. Cloud computing is at its heart user experience-centric. Unfortunately many forget this, preferring instead to approach cloud computing from a technical perspective.

Deal with security, security compliance, and regulatory compliance: An increasing number of companies acknowledge that public cloud security and compliance credentials are at least as good as, if not better than, their own, particularly in a world where security and compliance challenges are evolving so rapidly. Similarly, private clouds require security to shift from reactive and static to proactive and dynamic, whereby workloads and data need to be secured as they move in and out of internal IT’s boundaries.

On the other hand, cloud services have the potential to compound business challenges. For instance, the rise of public cloud adoption contributes to challenges related to increasing levels of outsourcing. It is all about relationship management, and therefore relates to another business challenge: improving supplier relationships.

In addition to having to adapt to new public cloud offerings (rather than the other way round), once the right contract is signed (another challenging task), enterprises need to proactively manage not only their use of the service but also their relationships with the service provider, if only to be able to keep up with their fast-evolving offerings.

Similarly, cloud computing adds to the age-old challenge of aligning business and IT at two levels: cloud-enabling IT, and cloud-centric business transformation.

From a cloud-enabling IT perspective, the challenge is to understand, manage, and bridge a variety of internal divides and convergences, including consumer versus enterprise IT, developers versus IT operations, and virtualisation ops people versus network and storage ops. As the pace of software delivery accelerates, developers and administrators need not only to learn from and collaborate with one another, but also to deliver the right user experience – not just the right business outcomes. Virtualisation ops people tend to be much more in favour of the software-defined datacentre, storage, and networking (SDDC, SDS, SDN) than network and storage ops people, with a view to increasingly taking control of datacentre and network resources. The storage and network ops people, however, are not so keen on letting the virtualisation people in.

When it comes to cloud-centric business transformation, IT is increasingly defined in terms of business outcomes within the context of its evolution from application siloes to standardised, shared, and metered IT resources, from a push to a pull provisioning model, and more importantly, from a cost centre to an innovation engine.

The challenge, then, is to understand, manage, and bridge a variety of internal divides and convergences including:

Outside-in (public clouds for green-field application development) versus inside-out (private cloud for legacy application modernization) perspectives: Supporters of the two approaches can be found on both the business and IT sides of the enterprise.

Line-of-business executives (CFO, CMO, CSO) versus CIOs regarding cloud-related roles, budgets, and strategies: The up-and-coming role of chief digital officer (CDO) exemplifies the convergence between technology and business C-level executives. All CxOs can potentially fulfil this role, with CDOs increasingly regarded as “CEOs in waiting”. In this context, there is a tendency to describe the role as the object of a war between CIOs and other CxOs. But what digital enterprises need is not CxOs battling each other, but CxOs coordinating their IT investments and strategies. That is easier said than done since, beyond the usual political struggles, there is a disparity between all sides in terms of knowledge, priorities, and concerns.

Top executives versus middle management: Top executives are broadly in favour of cloud computing in all its guises, while middle managers are much less eager to take it on board – but they need to be won over, since they are critical to cloud strategy execution.

Shadow IT versus Official IT: IT acknowledges the benefits of shadow IT (it makes an organisation more responsive and capable of delivering products and services that IT cannot currently support) as well as its shortcomings (in terms of costs, security, and lack of coordination, for example). However, too much focus on control at the expense of user experience and empowerment perpetuates shadow IT.

Only by understanding and bridging these divides will your organisation manage to balance both sides of the cloud coin.

Laurent Lachal leads Ovum Software Group’s cloud computing research. Besides Ovum, where he has spent most of his 20-year career as an analyst, Laurent has also been European software market group manager at Gartner.

Every little helps: How Tesco is bringing the online food retail experience back in-store

Tesco is in the midst of overhauling its connectivity and IT services

Food retailers in the UK have for years spent millions of pounds on going digital and cultivating a web presence, which includes the digitisation of product catalogues and all of the other necessary tools on the backend to support online shopping, customer service and food delivery. But Tomas Kadlec, group infrastructure IT director at Tesco, tells BCN more emphasis is now being placed on bringing the online experience back into physical stores, which is forcing the company to completely rethink how it structures and handles data.

Kadlec, who is responsible for Tesco’s IT infrastructure strategy globally, has spent the better part of the past few years building a private cloud deployment model the company could easily drop into regional datacentres that power its European operations and beyond. This has largely been to improve the services it can provide to clients and colleagues within the company’s brick and mortar shops, and support a growing range of internal applications.

“If you look at what food retailers have been doing for the past few years it was all about building out an online extension to the store. But that trend is reversing, and there’s now a kind of ‘back to store’ movement brewing,” Kadlec says.

“If we have 30,000 to 50,000 SKUs in one store at any given time, how do you handle all of that data in a way that can contribute digital feature-rich services for customers? And how do you offer digital services to customers in Tesco stores that cater to the nuances in how people act in both environments? For instance, people like to browse more in-store, sometimes calling a friend or colleague to ask for advice on what to get or recipes; in a digital environment people are usually just in a rush to head for the checkout. These are all fairly big, critical questions.”

Some of the digital services envisioned are fairly ambitious and include being able to queue up tons of product information – recipes, related products and so forth – on mobile devices by scanning items with built-in cameras, and even, down the line, paying for items on those devices. But the food retail sector is one of the most competitive in the world, and it’s possible these kinds of services could be a competitive differentiator for the firm.

“You should be able to create a shopping list on your phone and reach all of those items in-store easily,” he says. “When you’re online you have plenty of information about those products at your fingertips, but far less when you’re in a physical store. So for instance, if you have special dietary requirement we should be able to illuminate and guide the store experience on these mobile platforms with this in mind.”

“The problem is that in food retail the app economy doesn’t really exist yet. It exists everywhere else, and in food retail the app economy will come – it’s just that we as an industry have failed to make the data accessible, so applications aren’t being developed.”

To achieve this vision, Tesco had to drastically change its approach to data and how it’s deployed across the organisation. The company originally started down the path of building its own API and offering internal users a platform-as-a-service to enable more agile app development, but Kadlec says the project quickly morphed into something much larger.

“It’s one thing to provide an elastic compute environment and a platform for development and APIs – something we can solve in a fairly straightforward way. It’s another thing entirely to expose the information you need for these services to work effectively in such a scalable system.”

Tesco’s systems handle and structure data the way many traditional enterprises within and outside food retail do – segmenting it by department, by function, and in alignment with the specific questions the data needs to answer. But the company is trying to move closer to a ‘store and stream now, ask questions later’ type of data model, which isn’t particularly straightforward.

“Data used to be purpose-built; it had a clearly defined consumer, like ERP data for example. But now the services we want to develop require us to mash up Tesco data and open data in more compelling ways, which forces us to completely re-think the way we store, categorise and stream data,” he explains. “It’s simply not appropriate to just drag and drop our databases into a cloud platform – which is why we’re dropping some of our data systems vendors and starting from scratch.”
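The ‘store and stream now, ask questions later’ model essentially defers schema to read time: events are accepted in whatever shape they arrive, and structure is imposed only when a question is asked. A toy sketch, with invented field names:

```python
import json

# Append-only event log: no schema is enforced on write
event_log = []

def store(event: dict):
    event_log.append(json.dumps(event))

def query(predicate):
    """Impose structure at read time: parse every event, then filter."""
    return [e for e in map(json.loads, event_log) if predicate(e)]

# Events of different shapes are all accepted (field names hypothetical)
store({"sku": 10432, "dept": "bakery", "sold": 3})
store({"sku": 99871, "aisle": 7})
store({"sku": 10432, "dept": "bakery", "sold": 1})

bakery = query(lambda e: e.get("dept") == "bakery")
print(len(bakery))  # → 2
```

The cost of this flexibility is that every consumer of the data has to impose its own structure at query time – which is exactly the re-think Kadlec describes.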

Kadlec says the debate now centres on how the company can effectively democratise data while keeping critical kinds of information – like consumers’ personal information – secure and private: “There should only be two types of data. Data that should be open, and we should make sure we make that accessible, and then there’s the type of data that’s so private people get fired for having made it accessible – and setting up very specific architectural guidelines along with this.”

The company hasn’t yet had the security discussion with its customers, which is why Kadlec says the systems Tesco puts in place initially will likely focus on improving internal efficiency and productivity – “so we don’t have to get into the privacy data nightmare”.

The company also wants to improve connectivity to its stores to better serve both employees and customers. Over the next 18 months the company will implement a complete overhaul of store connectivity and infrastructure, which will centre on delivering low-latency bandwidth for in-store wifi and quadrupling the number of access points. It also plans to install 4G signal booster cells in its stores to improve GSM-based connectivity. Making sure that infrastructure will be secure so that customer data isn’t leaked is a top priority, he says.

Tesco is among a number of retailers to make headlines of late – though not because of datacentre security or customer data loss, but because the company, having overstated its profits by roughly £250m, is in serious financial trouble. But Kadlec says what many may see as a challenge is in fact an opportunity for the company.

One of the things the company is doing is piloting OmniTrail’s indoor location awareness technology to improve how Tesco employees are deployed in stores and optimise how they respond to changes in demand.

“If anything this is an opportunity for IT. If you look at the costs within the store today, there are great opportunities to automate stuff in-store and make colleagues within our stores more focused on customer services. If for instance we’re looking at using location-based services in the store, why do you expect people to clock in and clock out? We still use paper ledgers for holidays – why can’t we move this to the cloud? The opportunities we have in Tesco to optimise efficiency are immense.”

“This will inevitably come back to profits and margins, and the way we do this is to look at how we run operations and save using automation,” he says.

Tomas is speaking at the Telco Cloud Forum in London April 27-29, 2015. To register click here.