
Q&A with Mark Evans, head of IT, RLB

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Mark Evans, head of IT at global property and construction practice Rider Levett Bucknall (RLB), to discuss supporting BYOD, the need for standards in the cloud sector, and the impact of working with large data models on the technology choices the firm has to make.

 

What do you see as the most disruptive trend in enterprise IT today?

I’m not entirely sure that the most disruptive trend in enterprise IT is entirely technical. Admittedly, the driving impetus for change is coming from technology, but it is being driven by non-IT people who are equipping their homes, cars and any one of a multitude of other environments with technology which works for them. The disruption manifests itself in the attitude which is brought to business from these domestic environments; people no longer see the bastion of “Corporate IT” as unassailable as it once was, before the commoditisation of IT equipment became the norm. Domestic procurement cycles are driven in a different manner to those of any business – it’s what the likes of Apple thrive on.

There’s more of a “heart” aspiration than a “head” decision when it comes to buying IT at home. Let’s be honest: who, at home, works out the depreciation of an asset when a loved one is being tugged at by slick marketing and peer pressure? Maybe I’m a misanthrope, but this sort of pressure has a knock-on effect with a lot of people and they seek the flexibility, the performance, the ease of use and (let’s be honest) the flashiness of new toys at work. The person with the keys to the “toy box”, the erstwhile IT director, is seen as a barrier to that oft-quoted, rarely well-informed concept of ‘agility’.

So… BYOD. People bring their home kit to work and expect it to work and to offer them an ‘edge’. I think the disruption is bigger than Dave from Accounts bringing in his shiny new laptop (with added speed stripes). It is the expectation that this is acceptable in the face of business-wide legal constraints of liability, compliance and business planning – the directors of a business set the rules and this new, almost frivolous attitude to the complexity and requirements of corporate IT is a “wolf in sheep’s clothing” in terms of the risk it brings to a business. Where do I sit on this? I say, “bring it on”.

 

What do you think the industry needs to work on in terms of cloud service evolution?

Portability. Standards. Standards of portability. I still believe that there is a general complicity between vendors and purchasers to create a “handcuffs” relationship (“Fifty Shades of Big Blue”?) which is absolutely fine in the early part of a business relationship as it provides a predictable environment from the outset, but this predictability can become moribund and, in an era where business models flex and morph at previously alarming rates, the “handcuffs” agreement can become shackles. Even if the agreement is on a month-by-month basis, it is rarely easy to migrate across Cloud platforms. Ignoring the potential volumes of data which may need to be moved, there is no lingua franca for Cloud services to facilitate the “switch on/switch off” ease of migration one might expect in the Cloud environment, predicated as it is on ease of use and implementation.

Data tends to move slowly in terms of development (after all, that’s where the value is), so maybe as an industry we need to consider a Data Cloud Service which doesn’t require massive agility, but a front-end application environment which is bound by standards of migratability (is that a word? If it isn’t – it should be!) to offer front-end flexibility against a background of data security and accessibility. In that way, adopting new front-end processes would be easier as there would be no requirement to haul terabytes of data across data centres. Two different procurement cycles, aligned to the specific vagaries of their environments.

 

Can you describe some of the unique IT constraints or features particular to your sector?

Acres of huge data structures. When one of the major software suppliers in your industry – Autodesk, in construction’s case – admits that the new modelling environment for buildings goes beyond the computing and data capability in the current market, there are alarm bells. This leads to an environment where the client front end ‘does the walking’ and the data stays in a data centre or the Cloud. Models which my colleagues need to use have a “starting price” of 2GB and escalate incredibly as the model seeks to more accurately represent the intended construction project. In an environment where colleagues would once carry portfolios of A1 or A0 drawings, they now have requirements for portable access to drawings which are beyond the capabilities of even workstation-class laptop equipment. Construction and, weirdly enough, Formula One motorsport are pushing the development of Cloud and virtualisation to accommodate these huge, data-rich, often highly graphical models. Have you ever tried 3D rendering on a standard x64 VMware or Hyper-V box? We needed Nvidia to sort out the graphics side of the hardware environment, and even that isn’t the ‘done deal’ we had hoped.

 

Is the combination of cloud and BYOD challenging your organisation from a security perspective? What kind of advice would you offer to other enterprises looking to secure their perimeter within this context?

Not really. We have a strong, professional and pragmatic HR team who have put in place the necessary constraints to ensure that staff are fully aware of their responsibilities in a BYOD environment. We have backed this up with decent MDM control. Beyond that? I honestly believe that “where there’s a will, there’s a way” and that if MI5 operatives can leave laptops in taxis we can’t legislate for human frailties and failings. Our staff know that there is a ‘cost of admission’ to the BYOD club and it’s almost a no-brainer; MDM controls their equipment within the corporate sphere of influence and their signature on a corporate policy then passes on any breaches of security to the appropriate team, namely, HR.

My advice to my IT colleagues would be – trust your HR team to do their job (they are worth their weight in gold and very often under-appreciated), but don’t give them a ‘hospital pass’ by not doing everything within your control to protect the physical IT environment of BYOD kit.

 

What’s the most challenging part about setting up a hybrid cloud architecture?

Predicting the future. It’s so, so, so easy to map the current operating environment in your business to a hybrid environment (“They can have that, we need to keep this…”) but constraining the environment by creating immovable and impermeable glass walls at the start of the project is an absolutely, 100 per cent easy way to end up frustrated with a vendor in future, and we must be honest and accept that by creating these glass walls we were the architects of our own demise. I can’t mention any names, but a former colleague of mine has found this out to his company’s metaphorical and bottom-line cost. They sought to preserve their operating environment in aspic and have since found it almost soul-destroying to start all over again to move to an environment which supported their new aspirations.

Reading between the lines, I believe that they are now moving because there is a stubbornness on both sides and my friend’s company has made it more of a pain to retain their business than a benefit. They are constrained by a mindset, a ‘groupthink’ which has bred bull-headedness and very constrained thinking. An ounce of consideration of potential future requirements could have built in some considerable flexibility to achieve the aims of the business in changing trading environments. Now? They are undertaking a costly migration in the midst of a potentially high-risk programme of work; it has created stress and heartache within the business which might have been avoided if the initial move to a hybrid environment had considered the future, rather than almost constrained the business to five years of what was a la mode at the time they migrated.

 

What’s the best part about attending Cloud World Forum?

Learning that my answers above may need to be re-appraised because the clever people in our industry have anticipated and resolved my concerns.


Cloud democratises retail investor services

Cloud has the potential to democratize investment services


Cloud services are opening up possibilities for the retail investor to create individual customised funds in a way that was previously the preserve of the super-wealthy. Coupled with UK regulation such as the Retail Distribution Review, the effect has been to make new business models possible, according to Michael Newell, chief executive at InvestYourWay.

“Nobody is really talking about how the cloud is fundamental to what they do, but it is,” said Newell. “Where previously it might have taken days or even weeks to get the information to set up a fund, and to change your portfolio and positions completely, and to activate your account, it now takes just a few seconds thanks to Amazon Cloud.”

Newell previously worked at BATS, where he was involved alongside Mark Hemsley in setting up the exchange’s ETF services. For some time, he had been increasingly aware of the kind of services that high net worth investors were getting and began to form an idea that someone could bring that to the common retail investor. The idea was to create a system where each individual person has their own fund. However, Newell soon realised that to make that possible, it would be necessary to service customers investing smaller amounts at significantly lower cost – something that had never really been viable up to that point.

“You’d never get that kind of individual attention unless you were high net worth,” he said. “If you’ve only got £2,000 to invest, it’s not going to be worth a fund manager spending the time with you and charging just a few pounds for their time, which is what they’d need to do to make it viable. It just didn’t work.”

Cloud services changed both the economics of the situation and the practicality of his original idea. Newell found that by obtaining computing power as a service, calculations that would have taken 48 hours on a laptop could now be completed in 30 seconds. A manual Google search process carried out by an individual to work out how best to invest might take days at the least, or more realistically weeks and even months – but on InvestYourWay, it can be done in seconds because the process is automated.
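The economics Newell describes can be sanity-checked with some back-of-the-envelope arithmetic. The 48-hour and 30-second figures come straight from the article; the assumption of ideal, embarrassingly parallel scaling is mine, purely for illustration:

```python
# Back-of-the-envelope check on the speedup quoted above.
laptop_seconds = 48 * 60 * 60   # 48 hours on a single laptop
cloud_seconds = 30              # the same job on rented cloud capacity

speedup = laptop_seconds / cloud_seconds
print(f"speedup: {speedup:.0f}x")   # 5760x

# Under ideal (embarrassingly parallel) scaling, matching that speedup
# would take at least this many cloud workers running concurrently:
min_workers = laptop_seconds // cloud_seconds
print(f"workers needed under ideal scaling: {min_workers}")
```

A roughly 5,760-fold speedup is exactly the sort of gap that is economic to close by renting thousands of machine-minutes on demand, but never by buying the hardware outright for a retail-scale product.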

Part of the impetus for the new business was also provided by regulatory change, which began to make it easier to compete in the UK with the established fund managers. Specifically, the Retail Distribution Review, which came into effect in January 2013, forced fund managers to unbundle their services, providing transparency into previously opaque business charges. Customers could now see exactly what they were being charged for, which has had the effect of forcing down prices and changing consumer behaviour.

“It’s amazing that it took so long to bring that to the retail investor,” said Newell. “If you think about it, all of this has been happening in the capital markets for years. The focus on greater transparency and unbundling. The clarity on costs and fees.”

However, the idea still needed visibility and a user-base. This was provided when the platform agreed a deal with broker IG, under which InvestYourWay became a service available as an option on the drop-down menu for IG customers. The platform launched in October 2014, offering investment based on indexes rather than single stocks. This was done in part to keep costs down, and partly for ideological reasons. Newell explains that alternative instruments such as ETFs are popular, but would have involved gradually increasing slippage over time due to the costs of middle men. Focusing on indexes removes that problem.

The platform also claims to be the first to offer non-leveraged contract for difference (CFD) trading. While around 40% of trading in London is estimated to be accounted for by CFDs, these are normally leveraged such that an investor who puts in £1,000 controls a £10,000 position – and stands to gain, but also lose, on that larger scale. IYW’s contracts are not leveraged.
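The leverage mechanics are simple enough to sketch. A minimal illustration, assuming a flat 10x leverage ratio and ignoring fees, margin calls and financing costs – the function and figures are illustrative, not IYW’s or any broker’s actual terms:

```python
# Illustrative CFD profit/loss arithmetic (hypothetical terms, no fees).
def cfd_pnl(stake: float, price_move_pct: float, leverage: float = 1.0) -> float:
    """P&L on a CFD position for a given percentage price move."""
    exposure = stake * leverage          # leverage magnifies the exposure
    return exposure * price_move_pct / 100.0

stake = 1_000.0  # £1,000 invested

# A 5% favourable move:
print(cfd_pnl(stake, 5, leverage=10))   # leveraged 10x -> 500.0 (£500 gain)
print(cfd_pnl(stake, 5, leverage=1))    # unleveraged   -> 50.0  (£50 gain)

# The same 5% move against the position wipes out half the stake:
print(cfd_pnl(stake, -5, leverage=10))  # -> -500.0
```

The symmetry is the point of the article’s aside: leverage scales losses exactly as it scales gains, which is why an unleveraged CFD behaves much more like a conventional holding.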

The platform’s interface has quite a bit in common with the latest personal financial management tools. The first page consists of a time slider, a risk slider, and the amount the user wants to invest, as well as preferred geographical focus – Europe, America or Asia. After that, users get a pie chart breaking down how the service has allocated their investment based on the sliders – for example, into categories such as North American fintech startups, Asian banks and European corporates. Users also get bar charts showing the historical performance of the fund they are designing, as they go along. They can also see an Amazon-style recommendation suggesting “People who invested in X, also bought Y…”

After that, the user is presented with optional add-ons such as investment in gold, banks, metals, pharmaceuticals, and other areas that may be of special interest. Hovering the mouse over one of these options allows the user to see what percentage of other funds have used that add-on. Choosing one of the add-ons recalibrates the fund that the user is creating to match, for example adding a bit more Switzerland if the user selected banks.
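The recalibration behaviour described above can be sketched in a few lines: choosing an add-on tilts related categories upward and the weights are renormalised so the pie chart still sums to 100 per cent. The category names, tilt sizes and renormalisation logic here are all assumptions for illustration – the article does not describe InvestYourWay’s actual model:

```python
# Hypothetical sketch of add-on-driven fund recalibration.
def apply_addon(weights: dict, addon_tilts: dict) -> dict:
    """Tilt the selected categories upward, then renormalise to sum to 1."""
    tilted = {k: v + addon_tilts.get(k, 0.0) for k, v in weights.items()}
    total = sum(tilted.values())
    return {k: v / total for k, v in tilted.items()}

# An invented starting allocation (weights sum to 1):
fund = {"European corporates": 0.40, "Asian banks": 0.35, "NA fintech": 0.25}

# Choosing a "banks" add-on might nudge bank-heavy categories upward:
rebalanced = apply_addon(fund, {"Asian banks": 0.10})
print({k: round(v, 3) for k, v in rebalanced.items()})
```

Renormalising after the tilt is what makes the behaviour feel like “adding a bit more Switzerland”: every other slice shrinks proportionally to make room for the add-on.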

In a demonstration seen by Banking Technology, it was possible to adjust a fund by getting out of Europe and moving the user’s investment to Asia in a few clicks. According to Newell, it would take weeks to do that the traditional way; the process might involve moving money from one fund manager to another or starting an entirely new fund. It was also possible to see how much the move cost – on a £10,000 investment, it was £13. Prices are matched to the most recent available end-of-day data.

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there's good reason to believe old architectures are making a comeback


When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments, which today tend to imply an x86-centric stack built to serve up almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre, it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM) – and it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

This article appeared in the March/April edition of BCN Magazine.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem – some long expected to join the fray, others less so – are being enticed to get involved. Mobile chip incumbent Qualcomm announced plans in November last year to enter the server chip market with its own ARM-based offerings at some point over the next two years; the company believes the segment represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement HP unveiled what it called the first “enterprise-grade ARM-based server” in its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a long-time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market, because of the cost-prohibitive nature of making such a move (and the millions of dollars in potential cost savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing-as-a-service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow, vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do, all things considered or remaining equal, was hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers. The diversity of suppliers means spreading that risk and increasing competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on one hand and continue to drive down cost-to-performance on the other. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent share of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project – open source cloud software and hardware projects respectively – is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today and, critically, such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely-tuned to very specific database workloads. Yet despite incurring losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor (along with Motorola), Oracle’s hardware business mostly bucked that trend (one experienced by most high-end server vendors) throughout 2014 and continues to do so.

The company’s 2015 Q2 saw its hardware systems grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one-click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director of EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the development of the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises, much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, along with, to some extent, the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software-defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, due to the expectations people have on resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture, and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last between seven years and possibly up to a decade before it settles down – if it settles down,” Butler concluded.

Why did anyone think HP was in it for public cloud?


HP president and chief executive officer Meg Whitman (pictured right) is leading HP’s largest restructuring ever

Many have jumped on a recently published interview with Bill Hilf, the head of HP’s cloud business, as a sign HP is finally coming to terms with its inability to make a dent in Amazon’s public cloud business. But what had me scratching my head is not that HP would so blatantly seem to cede ground in this segment – but why many assume it wanted to in the first place.

For those of you who didn’t see the NYT piece, or the subsequent pieces from the hordes of tech insiders and journalists more or less toeing the “I told you so” line, Hilf was quoted as candidly saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head [with AWS].”

HP has made mistakes in this space – the list is long, and others have done a wonderful job at fleshing out the classic “large incumbent struggles to adapt to new paradigm” narrative the company’s story, so far, smacks of.

I would only add that it’s a shame HP didn’t pull a “Dell” and publicly get out of the business of directly offering public cloud services to enterprise users – a move that served Dell well. Standing up public cloud services is by most accounts an extremely capital-intensive exercise that a company like HP, given its current state, is simply not best positioned to see through.

But it’s also worth pointing out that a number of interrelated factors have been pushing HP towards private and hybrid cloud for some time now, and despite HP’s insistence that it still runs the largest OpenStack public cloud – a claim other vendors have made in the past – its dedication to public cloud has always seemed superficial at best (particularly if you’ve had the, um, privilege, of sitting through years of sermons from HP executives at conferences and exhibitions).

HP’s heritage is in hardware – desktops, printers and servers, and servers still present a reasonably large chunk of the company’s revenue, something it has no choice but to keep in mind as it seeks to move up the stack in other areas (its NFV and cloud workload management-focused acquisitions as of late attest to this, beyond the broader industry trend). According to the latest Synergy Research figures the company still has a lead in the cloud infrastructure market, but primarily in private cloud.

It wants to keep that lead in private cloud, no doubt, but it also wants to bolster its pitch to the scale-out market (where telcos are quite keen to play) without alienating its enterprise customers. This also means delivering capabilities that are starting to see increased demand among that segment, like hybrid cloud workload management, security and compliance tools, and offering a platform that has enough buy-in to ensure a large ecosystem of applications and services will be developed for it.

Whether OpenStack is the best way of hitting those sometimes competing objectives remains to be seen – HP hasn’t had these products in the market very long, and take-up has been slow – but that’s exactly what Helion is to HP.

Still, it’s worth pointing out that OpenStack, while trying to evolve capabilities that would whet the appetites of communications services providers and others in the scale-out segment (NFV, object storage, etc.), is seeing much more take-up from the private cloud crowd. Indeed, among OpenStack’s key benefits are relatively easy bursting into, and (still more of a work in progress) federation between, OpenStack-based public and private clouds. The latter, by the way, is definitely consistent with the logic underpinning HP’s latest cloud partnership with the European Commission, which looks at – among other things – the potential federation of regional clouds that have strong security and governance requirements.

Even HP’s acquisition strategy – particularly its purchase of Eucalyptus, a software platform that makes it easy to shift workloads between on premise systems and AWS – seems in line with the view that a private cloud needs to be able to lean on someone else’s datacentre from time to time.

HP has clearly chosen its mechanism for doing just that, just as VMware looked at the public cloud and thought much the same in terms of extending vSphere and other legacy offerings. Like HP, it wanted to hedge its bets and stand up its own public cloud platform because, apart from the “me too” aspect, it thought doing so was in line with where users were heading, and to a much lesser extent didn’t want to let AWS, Microsoft and Google have all the fun if it didn’t have to. But public cloud definitely doesn’t seem front-of-mind for HP, or VMware, or most other vendors coming at this from an on-premise heritage (HP’s executives mentioned “public cloud” just once in the past three quarterly results calls with journalists and analysts).

Funnily enough, even VMware has come up with its own OpenStack distribution, and now touts a kind of “one cloud, any app, any device” mantra that has hybrid cloud written all over it – ‘hybrid cloud service’ being what the previous incarnation of its public cloud service was called.

All of this is of course happening against the backdrop of the slow crawl up the stack with NFV, SDN, cloud resource management software, PaaS, and so forth – not just at HP. Cisco, Dell and IBM are all looking to make inroads in software, while at the same time on the hardware side fighting off lower-cost Asian ODMs that are – with the exception of IBM – starting to significantly encroach on their turf, particularly in the scale-out markets.

The point is, HP, like many old-hat enterprise vendors, knows that what ultimately makes AWS so appealing isn’t its cost (it can actually be quite expensive, though prices – and margins – are dropping) or ease of procurement as an elastic hosting provider. It’s the massive ecosystem of services that gives the platform so much value, and the ability to tap into them fairly quickly. HP has bet the farm on OpenStack’s capacity to evolve into a formidable competitor to AWS in that sense (IBM and Cisco are also, to varying degrees, toeing a similar line), and it shouldn’t be dismissed outright given the massive buy-in that open source community has.

But – and some would view this as part of the company’s problem – HP’s bread and butter has been and continues to be in offering the technologies and tools to stand up predominately private clouds, or in the case of service providers, very large private clouds (it’s also big on converged infrastructure), and to support those technologies and tools, which really isn’t – directly – the business that AWS is in, despite there being substantial overlap in the enterprise customers they go after.

While it started in this space as an elastic hosting provider offering CDN and storage services, AWS has more or less evolved into a kind of application marketplace, where any app can be deployed on almost infinitely scalable compute and storage platforms. Interestingly, AWS’s messaging has shifted from outright hostility towards the private cloud crowd (and private cloud vendors) to being more open to the idea that some enterprises simply don’t want to expose their workloads or host them on shared infrastructure – in part because it understands there’s growing overlap, and because it wants them to on-board their workloads onto AWS.

HP’s problem isn’t that it tried and failed at the public cloud game – you can’t really fail at something if you don’t have a proper go at it; and on the private cloud front, Helion is still quite young, as is OpenStack, Cloud Foundry, and many of the technologies at the core of its revamped strategy.

Rather, it’s that HP, for all its restructuring efforts, talk of change and trumpeting of cloud, still risks getting stuck in its old-world thinking, which could ultimately hinder the company further as it seeks to transform itself. AWS senior vice president Andy Jassy, who hit out at tech companies like HP at the unveiling of Amazon’s Frankfurt-based cloud service last year, hit the nail on the head: “They’re pushing private cloud because it’s not all that different from their existing operating model. But now people are voting with their workloads… It remains to be seen how quickly [these companies] will change, because you can’t simply change your operating model overnight.”

Can the cloud save Hollywood?

The film and TV industry is warming to cloud

You don’t have to watch the latest ‘Avengers’ film to get the sense the storage and computational requirements of film and television production are continuing their steady increase. But Guillaume Aubichon, chief technology officer of post-production and visual effects firm DigitalFilm Tree (DFT) says production and post-production outfits may find use in the latest and greatest in open source cloud technologies to help plug the growing gap between technical needs and capabilities – and unlock new possibilities for the medium in the process.

Since its founding in 2000, DFT has done post-production work for a number of motion pictures as well as television shows airing on some of the largest networks in America including ABC, TNT and TBS. And Aubichon says that like many in the industry DFT’s embrace of cloud came about because the company was trying to address a number of pain points.

“The first and the most pressing pain point in the entertainment industry right now is storage – inexpensive, commodity storage that is also internet ready. With 4K becoming more prominent we have some projects that generate about 12TB of content a day,” he says. “The others are cost and flexibility.”

This article appeared in the March/April issue of the BCN Magazine.

Aubichon explains three big trends are converging in the entertainment and media industry right now that are getting stakeholders from production to distribution interested in cloud.

4K broadcast, a massive step up from High Definition in terms of the resources required for rendering, transmission and storage, is becoming more prominent.

Next, IP broadcasters are supplanting traditional broadcasters – Netflix, Amazon and Hulu are taking the place of CBS and ABC, slowly displacing the traditional content distribution model.

And, films are no longer exclusively filmed in the Los Angeles area – with preferential tax regimes and other cost-based incentives driving production of English-speaking motion pictures outward into Canada, the UK, Central Europe and parts of New Zealand and Australia.

“With production and notably post-production costs increasing – both in terms of dollars and time – creatives want to be able to make more decisions in real time, or as close to real time as possible, about how a shot will look,” he says.

Can Cloud Save Hollywood?

DFT runs a hybrid cloud architecture based on OpenStack and depending on the project can link up to other private OpenStack clouds as well as OpenStack-based public cloud platforms. For instance, in doing some of the post-production work for Spike Jonze’s HER the company used a combination of Rackspace’s public cloud and its own private cloud instances, including Swift for object storage as well as a video review and approval application and virtual file system application – enabling creatives to review and approve shots quickly.

The company runs most of its application landscape off a combination of Linux and Microsoft virtualised environments, but is also a heavy user of Linux containers – which have benefits as a transmission format and also offer some added flexibility, like the ability to run simple compute processes directly within a storage node.

Processes like video and audio transcoding are a perfect fit for containers because they don’t necessarily warrant an entire virtual machine, and because the compute and storage can be kept so close to one another.

Aubichon: ‘My goal is to help make the media and entertainment industry avoid what the music industry did’

“Any TV show or film production and post-production process involves multiple vendors. For instance, on both Mistresses and Perception there was an outside visual effects facility involved as well. So instead of having to take the shots, pull them off an LTO tape, put them on a drive, and send them over to the visual effects company, they can send us a request and we can send them an authorised link that connects back to our Swift object storage, which allows them to pull whatever file we authorise. So there’s a tremendous amount of efficiency gained,” he explains.
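The “authorised link” workflow Aubichon describes maps closely onto Swift’s TempURL feature, which signs a time-limited URL with an HMAC so a vendor can fetch one object without ever holding account credentials. A minimal sketch, assuming a hypothetical account path and signing key (in a real cluster the key comes from the account’s Temp-URL-Key metadata):

```python
import hmac
import time
from hashlib import sha1

def swift_temp_url(path, key, ttl=3600, method="GET"):
    """Sign a time-limited Swift TempURL for one object.

    The path and key here are hypothetical; Swift's TempURL
    middleware verifies an HMAC-SHA1 over "METHOD\nexpires\npath".
    """
    expires = int(time.time()) + ttl
    payload = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), payload, sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# e.g. a link a VFX vendor could use for one hour
link = swift_temp_url("/v1/AUTH_dft/shots/ep04_sc12.dpx", "s3cret")
```

The link works without handing out credentials and expires on its own, which is what makes the model so much faster than shipping LTO tapes or drives.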

For an industry just starting to come out of physical transmission, that kind of workflow can bring tremendous benefits to a project. Although much of the post-production work for film and television still happens in LA an increasing number of shows aren’t shot there; DFT for instance is currently working on shows shot in Vancouver, Toronto, and Virginia. So what the company does is run an instance of OpenStack on-site where the shooting occurs and feed the raw camera footage into an object storage instance, which is then container-synced back to Los Angeles.

“We’ve even been toying with the idea of pushing raw camera files into OpenStack instances, and having those instances transcode the files into H.265 proxies that could theoretically be pushed over a mobile data connection back to the editor in Los Angeles. The editor could then start cutting in proxies, and 12 to 18 hours later, when the two OpenStack instances have synced that material, you can then merge the data to the higher resolution version,” he says.

“We get these kinds of requests often, like when a director is shooting on location and he’s getting really nervous that his editor isn’t seeing the material before he has to move on from the location and finish shooting.”
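The proxy pass described above could be driven by something as simple as an ffmpeg invocation inside each transcoding container. A sketch that only assembles the command – the filenames, CRF value and scale are illustrative assumptions, not DFT’s actual pipeline:

```python
def h265_proxy_cmd(src, dst, crf=28, scale="1280:-2"):
    """Assemble an ffmpeg command for a small H.265 proxy.

    Flags are illustrative: libx265 at a high CRF plus a reduced
    frame size keeps the proxy light enough for a mobile data link.
    """
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale={scale}",
        "-c:v", "libx265", "-crf", str(crf),
        "-c:a", "aac",
        dst,
    ]

cmd = h265_proxy_cmd("raw/A004_C012.mov", "proxies/A004_C012.mp4")
```

A job runner would hand this list to `subprocess.run` on a node where ffmpeg (built with libx265) is available – the sort of small, stateless task that fits a container better than a full virtual machine.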

So for DFT, he says, cloud is solving a transport issue, and a storage issue. “What we’re trying to push into now is solving the compute issue. Ideally we’d like to push all of this content to one single place, have this close to the compute and then all of your manipulation just happens via an automated process in the cloud or via VDI. That’s where we really see this going.”

The other element here, and one that’s undoubtedly sitting heavily on the minds of the film industry in recent months more than ever, is the security issue. Aubichon says that because the information, where it’s stored and how secure that information is, changes over the lifecycle of a project, a hybrid cloud model – or connectable cloud platforms with varying degrees of exposure – is required to support them. That’s where features like federated identity, which in OpenStack is still quite nascent, come into play. It offers a mechanism for linking clouds, granting and authenticating user identity quickly (and taking access away equally fast), and leaves a trail revealing who touches what content.
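In OpenStack, identity requests of this kind go through Keystone, which hands out scoped tokens in exchange for a JSON credential document. A minimal sketch of a v3 password-scoped request body – the user, project and domain names are hypothetical:

```python
def keystone_auth_body(user, password, project, domain="Default"):
    """Build a Keystone v3 token request scoped to one project.

    The names are invented for illustration; a real deployment
    POSTs this JSON to /v3/auth/tokens and reads the token from
    the X-Subject-Token response header.
    """
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }
```

Because every request is made against a scoped, revocable token, “taking access away equally fast” amounts to revoking that token – and Keystone’s logs provide the trail of who touched what.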

“You need to be able to migrate authentication and data from a very closed instance out to something more open, and eventually out to public,” he says, adding that he has spent many of the past few years trying to convince the industry to eliminate any distinction between public and private clouds.

“In an industry that’s so paranoid about security, I’ve been trying to say ‘well, if you run an OpenStack instance in Rackspace, that’s really a private instance; they’re a trusted provider, that’s a private instance.’ To me, it’s just about how many people need to touch that material. If you have a huge amount of material then you’re naturally going to move to a public cloud vendor, but just because you’re on a public cloud vendor doesn’t mean that your instance is public.”

“I spend a lot of time just convincing the entertainment industry that this isn’t banking,” he adds. “They are slowly starting to come around; but it takes time.”

It All Comes Back To Data

Aubichon says the company is looking at ways to add value beyond simply cost and time reduction, with data and metadata aggregation figuring front and centre in that pursuit. The company did a proof of concept for Cougar Town where it showed how people watching the show on their iPads could interact with that content – a “second screen” interactive experience of sorts, but on the same viewing platform.

“Maybe a viewer likes the shirt one of the actresses is wearing on the show – they can click on it, and the Amazon or Target website comes up,” he says, adding that it could be a big source of revenue for online commerce channels as well as the networks. “This kind of stuff has been talked about for a while, but metadata aggregation and the process of dynamically seeking correlations in the data, where there have always been bottlenecks, has matured to the point where we can prove to studios they can aggregate all of this information without incurring extra costs on the production side. It’s going to take a while until it is fully mature, but it’s definitely coming.”

This kind of service assumes there exists loads of metadata on what’s happening in a shot (or the ability to dynamically detect and translate that into metadata) and, critically, the ability to detect correlations in data that are tagged differently.

The company runs a big MongoDB backend but has added capabilities from an open source project called Karma, an ontology mapping service that originally came out of the museum world. It’s a method of taking two MySQL databases and presenting to users correlations in data that are tagged differently.

DFT took that and married it with the text search function in MongoDB, a NoSQL platform, which basically allows it to push unstructured data into the system and find correlations there (the company plans to seed this capability back into the open source MongoDB community).
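The core of the ontology-mapping idea can be caricatured in a few lines: map each source’s field names onto a shared vocabulary, then join records on the canonical keys. The tag names and records below are invented for illustration and are not DFT’s actual schema:

```python
# Hypothetical mapping from each source's tags to shared terms.
ONTOLOGY = {"ep": "episode", "episode_no": "episode",
            "shot_id": "shot", "shot_code": "shot"}

def normalise(record):
    """Rewrite a record's keys into the shared vocabulary."""
    return {ONTOLOGY.get(k, k): v for k, v in record.items()}

def correlate(source_a, source_b, key):
    """Pair records from two differently-tagged sources on `key`."""
    index = {}
    for rec in map(normalise, source_a):
        index.setdefault(rec.get(key), []).append(rec)
    return [(rec, index[rec.get(key)])
            for rec in map(normalise, source_b)
            if rec.get(key) in index]

editorial = [{"ep": 4, "shot_id": "sc12"}]
vfx = [{"episode_no": 4, "shot_code": "sc12", "vendor": "ExtFX"}]
pairs = correlate(editorial, vfx, "episode")
```

Karma does this mapping far more rigorously (and MongoDB’s text search extends it to unstructured fields), but the principle – correlating data that is tagged differently by normalising it to one ontology – is the same.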

“Ultimately we can use all of this metadata to create efficiencies in the post-production process, and help generate revenue for stakeholders, which is fairly compelling,” Aubichon says. “My goal is to help make the media and entertainment industry avoid what the music industry did, and to become a more unified industry through software, through everyone contributing. The more information is shared, the more money is made, and everyone is happy. That’s something that philosophically, in the entertainment industry, is only now starting to come to fruition.”

It would seem open source cloud technologies like OpenStack as well as innovations in the Linux kernel, which helped birth Docker and similar containerisation technologies, are also playing a leading role in bringing this kind of change about.

How to achieve success in the cloud

To cloud or not to cloud? With the right strategy, it need not be the question.

There are two sides to the cloud coin: one positive, the other negative, and too many people focus on one at the expense of the other for a variety of reasons ranging from ignorance to wilful misdirection. But ultimately, success resides in embracing both sides and pulling together the capabilities of both enterprises and their suppliers to make the most of the positive and limit the negative.

Cloud services can either alleviate or compound the business challenges identified by Ovum’s annual ICT Enterprise Insights program, based on interviews with 6,500 senior IT executives. On the positive side both public and private clouds, and everything in between, help:

Boost ROI at various levels: From squeezing more utilization from the underlying infrastructure to making it easier to launch new projects with the extra resources exposed as a result.

Deal with the trauma of major organisational/structural changes, as they can adapt to the ups and downs of requirements evolution.

Improve customer/citizen experience, and therefore satisfaction: This has been one of the top drivers for cloud adoption. Cloud computing is at its heart user experience-centric. Unfortunately many forget this, preferring instead to approach cloud computing from a technical perspective.

Deal with security, security compliance, and regulatory compliance: An increasing number of companies acknowledge that public cloud security and compliance credentials are at least as good as, if not better than, their own, particularly in a world where security and compliance challenges are evolving so rapidly. Similarly, private clouds require security to shift from reactive and static to proactive and dynamic, whereby workloads and data need to be secured as they move in and out of internal IT’s boundaries.

On the other hand, cloud services have the potential to compound business challenges. For instance, the rise of public cloud adoption contributes to challenges related to increasing levels of outsourcing. It is all about relationship management, and therefore relates to another business challenge: improving supplier relationships.

In addition to having to adapt to new public cloud offerings (rather than the other way round), once the right contract is signed (another challenging task), enterprises need to proactively manage not only their use of the service but also their relationships with the service provider, if only to be able to keep up with their fast-evolving offerings.

Similarly, cloud computing adds to the age-old challenge of aligning business and IT at two levels: cloud-enabling IT, and cloud-centric business transformation.

From a cloud-enabling IT perspective, the challenge is to understand, manage, and bridge a variety of internal divides and convergences, including consumer versus enterprise IT, developers versus IT operations, and virtualisation ops people versus network and storage ops. As the pace of software delivery accelerates, developers and administrators need not only to learn from and collaborate with one another, but also to deliver the right user experience – not just the right business outcomes. Virtualisation ops people tend to be much more in favour than network and storage ops people of software-defined datacentre, storage, and networking (SDDC, SDS, SDN), with a view to increasingly taking control of datacentre and network resources. The storage and network ops people, however, are not so keen on letting the virtualisation people in.

When it comes to cloud-centric business transformation, IT is increasingly defined in terms of business outcomes within the context of its evolution from application siloes to standardised, shared, and metered IT resources, from a push to a pull provisioning model, and more importantly, from a cost centre to an innovation engine.

The challenge, then, is to understand, manage, and bridge a variety of internal divides and convergences including:

Outside-in (public clouds for green-field application development) versus inside-out (private cloud for legacy application modernization) perspectives. Supporters of the two approaches can be found on both the business and IT sides of the enterprise.

Line-of-business executives (CFO, CMO, CSO) versus CIOs regarding cloud-related roles, budgets, and strategies: The up-and-coming role of chief digital officer (CDO) exemplifies the convergence between technology and business C-level executives. All CxOs can potentially fulfil this role, with CDOs increasingly regarded as “CEOs in waiting”. In this context, there is a tendency to describe the role as the object of a war between CIOs and other CxOs. But what digital enterprises need is not CxOs battling each other, but coordinating their IT investments and strategies. Easier said than done since, beyond the usual political struggles, there is a disparity between all sides in terms of knowledge, priorities, and concerns.

Top executives versus middle management: Top executives, who are broadly in favour of cloud computing in all its guises, versus middle management, who are much less eager to take it on board but need to be won over since they are critical to cloud strategy execution.

Shadow IT versus official IT: IT acknowledges the benefits of shadow IT (it makes an organisation more responsive and capable of delivering products and services that IT cannot currently support) as well as its shortcomings (in terms of costs, security, and lack of coordination, for example). But too much focus on control at the expense of user experience and empowerment only perpetuates shadow IT.

Only then will your organisation manage to balance both sides of the cloud coin.

Laurent Lachal leads Ovum Software Group’s cloud computing research. Besides Ovum, where he has spent most of his 20-year career as an analyst, Laurent has also been European software market group manager at Gartner Ltd.

Every little helps: How Tesco is bringing the online food retail experience back in-store

Tesco is in the midst of overhauling its connectivity and IT services

Food retailers in the UK have for years spent millions of pounds on going digital and cultivating a web presence, which includes the digitisation of product catalogues and all of the other necessary tools on the backend to support online shopping, customer service and food delivery. But Tomas Kadlec, group infrastructure IT director at Tesco, tells BCN more emphasis is now being placed on bringing the online experience back into physical stores, which is forcing the company to completely rethink how it structures and handles data.

Kadlec, who is responsible for Tesco’s IT infrastructure strategy globally, has spent the better part of the past few years building a private cloud deployment model the company could easily drop into regional datacentres that power its European operations and beyond. This has largely been to improve the services it can provide to clients and colleagues within the company’s brick and mortar shops, and support a growing range of internal applications.

“If you look at what food retailers have been doing for the past few years it was all about building out an online extension to the store. But that trend is reversing, and there’s now a kind of ‘back to store’ movement brewing,” Kadlec says.

“If we have 30,000 to 50,000 SKUs in one store at any given time, how do you handle all of that data in a way that can support feature-rich digital services for customers? And how do you offer digital services to customers in Tesco stores that cater to the nuances in how people act in both environments? For instance, people like to browse more in-store, sometimes calling a friend or colleague to ask for advice on what to get or recipes; in a digital environment people are usually just in a rush to head for the checkout. These are all fairly big, critical questions.”

Some of the digital services envisioned are fairly ambitious and include being able to queue up tons of product information – recipes, related products and so forth – on mobile devices by scanning items with built-in cameras, and even, down the line, paying for items on those devices. But the food retail sector is one of the most competitive in the world, and it’s possible these kinds of services could be a competitive differentiator for the firm.

“You should be able to create a shopping list on your phone and reach all of those items in-store easily,” he says. “When you’re online you have plenty of information about those products at your fingertips, but far less when you’re in a physical store. So for instance, if you have special dietary requirement we should be able to illuminate and guide the store experience on these mobile platforms with this in mind.”

“The problem is that in food retail the app economy doesn’t really exist yet. It exists everywhere else, and in food retail the app economy will come – it’s just that we as an industry have failed to make the data accessible, so applications aren’t being developed.”

To achieve this vision, Tesco had to drastically change its approach to data and how it’s deployed across the organisation. The company originally started down the path of building its own API and offering internal users a platform-as-a-service to enable more agile app development, but Kadlec says the project quickly morphed into something much larger.

“It’s one thing to provide an elastic compute environment and a platform for development and APIs – something we can solve in a fairly straightforward way. It’s another thing entirely to expose the information you need for these services to work effectively in such a scalable system.”

Tesco’s systems handle and structure data the way many traditional enterprises within and outside food retail do – segmenting it by department, by function, and in alignment with the specific questions the data needs to answer. But the company is trying to move closer to a ‘store and stream now, ask questions later’ type of data model, which isn’t particularly straightforward.

“Data used to be purpose-built; it had a clearly defined consumer, like ERP data for example. But now the services we want to develop require us to mash up Tesco data and open data in more compelling ways, which forces us to completely re-think the way we store, categorise and stream data,” he explains. “It’s simply not appropriate to just drag and drop our databases into a cloud platform – which is why we’re dropping some of our data systems vendors and starting from scratch.”
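The ‘store and stream now, ask questions later’ model Kadlec describes can be caricatured in a few lines: keep every record raw and schema-free, and only impose a question at read time. A toy sketch, with invented SKU records standing in for Tesco and open data:

```python
import json

class RawStore:
    """Keep records as raw JSON; impose no schema until query time."""

    def __init__(self):
        self._log = []

    def append(self, record):
        # Store the record as-is, with no purpose-built structure.
        self._log.append(json.dumps(record))

    def query(self, predicate):
        """Ask a question after the fact, over every raw record."""
        return [r for r in map(json.loads, self._log) if predicate(r)]

store = RawStore()
store.append({"sku": "123", "event": "scan", "store": "Leeds"})
store.append({"sku": "123", "source": "open_data", "allergens": ["nuts"]})
hits = store.query(lambda r: r.get("sku") == "123")
```

The point of the sketch is the inversion: in the purpose-built model the ERP schema decides up front which questions can be asked, whereas here Tesco data and open data land side by side and the mash-up happens at query time.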

Kadlec says the debate now centres on how the company can effectively democratise data while keeping critical kinds of information – like consumers’ personal information – secure and private: “There should only be two types of data. Data that should be open, and we should make sure we make that accessible, and then there’s the type of data that’s so private people get fired for having made it accessible – and setting up very specific architectural guidelines along with this.”

The company hasn’t yet had the security discussion with its customers, which is why Kadlec says the systems Tesco puts in place initially will likely focus on improving internal efficiency and productivity – “so we don’t have to get into the privacy data nightmare”.

The company also wants to improve connectivity to its stores to better serve both employees and customers. Over the next 18 months the company will implement a complete overhaul of store connectivity and infrastructure, which will centre on delivering low-latency bandwidth for in-store wifi and quadrupling the number of access points. It also plans to install 4G signal booster cells in its stores to improve GSM-based connectivity. Making sure that infrastructure will be secure so that customer data isn’t leaked is top priority, he says.

Tesco is among a number of retailers to make headlines as of late – though not because of datacentre security or customer data loss, but because the company, having significantly inflated its profits by roughly £250m, is in serious financial trouble. But Kadlec says what many may see as a challenge is in fact an opportunity for the company.

One of the things the company is doing is piloting OmniTrail’s indoor location awareness technology to improve how Tesco employees are deployed in stores and optimise how they respond to changes in demand.

“If anything this is an opportunity for IT. If you look at the costs within the store today, there are great opportunities to automate stuff in-store and make colleagues within our stores more focused on customer services. If for instance we’re looking at using location-based services in the store, why do you expect people to clock in and clock out? We still use paper ledgers for holidays – why can’t we move this to the cloud? The opportunities we have in Tesco to optimise efficiency are immense.”

“This will inevitably come back to profits and margins, and the way we do this is to look at how we run operations and save using automation,” he says.

Tomas is speaking at the Telco Cloud Forum in London April 27-29, 2015. To register click here.