Category Archives: Features

Hybrid cloud issues are cultural first, technical second – Ovum

CIOs are still struggling with their hybrid cloud strategies

This week has seen a number of hybrid cloud deals that suggest the industry is making significant progress delivering the platforms, services and tools necessary to make hybrid cloud practical. But if anything they also serve as a reminder that IT will forever be multimodal, which creates challenges that begin with people, not technology, explains Ovum’s principal analyst of infrastructure solutions Roy Illsley.

There has been no shortage of hybrid cloud deals this week.

Rackspace and Microsoft announced a deal that would see the hosting and cloud provider expand its Fanatical Support to Microsoft Azure-based hybrid cloud platforms.

Google announced both that it would support Windows technologies on its cloud platform and that it would formally sponsor the OpenStack Foundation – a move aimed at supporting container portability between multiple cloud platforms.

HP announced it would expand its cloud partner programme to include CenturyLink, which runs much of its cloud platform on HP technology, in a move aimed at bolstering HP’s hybrid cloud business and CenturyLink’s customer reach.

But one of the more interesting hybrid cloud stories this week came from the enterprise side of the industry. Copper and gold producer Freeport-McMoRan announced it is embarking on a massive overhaul of its IT systems. In a bid to become more agile, the firm said it would deploy its entire application estate on a combination of private and public cloud platforms – though, somewhat ironically, the company said the entire project would take five years to wrap up (which, being pragmatic about IT overhauls, could mean far longer).

“The biggest challenge with hybrid cloud isn’t the technology per se – okay, so you need to be able to have one version of the truth, one place where you can manage most of the platforms and applications, one place where to the best of your abilities you can orchestrate resources, and so forth,” Illsley explains.

Of course you need all of those things, he says. There will be some systems that won’t fit into that technology model and will likely be left out (mainframes, for example). But there are tools out there to fit current hybrid use cases.

“When most organisations ‘do’ hybrid cloud, they tend to choose where their workloads will sit depending on their performance needs, scaling needs, cost and application architecture – and then the workloads sit there, with very little live migration of VMs or containers. Managing them while they sit there isn’t the major pain point. It’s about the business processes; it’s the organisational and cultural shifts in the IT department that are required in order to manage IT in a multimodal world.”

“What’s happening in hybrid cloud isn’t terribly different from what’s happening with DevOps. You have developers and you have operations, and sandwiching them together in one unit doesn’t change the fact that they look at the world – and the day-to-day issues they need to manage or solve – in their own developer or operations-centric ways. In effect they’re still siloed.”

The way IT is financed can also create headaches for CIOs intent on delivering a hybrid cloud strategy. Typically IT is funded in an ‘everyone pitches into the pot’ sort of way, but one of the things that led to the rise of cloud in the first place is lines of business allocating their own budgets and going out to procure their own services.

“This can cause both a systems challenge – shadow IT and the security, visibility and management issues that come with that – and a cultural challenge, one where LOB heads see little need to fund a central organisation that is deemed too slow or inflexible to respond to customer needs. So as a result, the central pot doesn’t grow.”

While vendors continue to ease hybrid cloud headaches on the technology front with resource and financial (i.e. chargeback) management tools, app stores or catalogues, and standardised platforms that bridge the on-prem and public cloud divide, it’s less likely the cultural challenges associated with hybrid cloud will find any straightforward solutions in the short term.

“It will be like this for the next ten or fifteen years at least. And the way CIOs work with the rest of the business as well as the IT department will define how successful that hybrid strategy will be, and if you don’t do this well then whatever technologies you put in place will be totally redundant,” Illsley says.

Exclusive: How Virgin Active is getting fit with the Internet of Things

Virgin Active wants to use IoT to make its service more holistic and improve customer retention

Virgin Active is embarking on an ambitious redesign of its facilities that uses the Internet of Things to improve the service it offers to customers and reduce subscriber attrition rates, explains Andy Caddy, chief information officer of Virgin Active.

“Five years ago you didn’t really need to be very sophisticated as a health club operator in terms of your IT and digital capability,” Caddy says. “But now I would argue that things have changed dramatically – and you have to be very smart about how you manage your relationship with customers.”

The health club sector is one of the more unusual subscription-based businesses around, in part because the typical attrition rate is around 50 per cent – meaning by the end of the year the club has lost half of the members it started out with, and needs to sign up at least as many new subscribers simply to maintain its membership, let alone grow it. That’s quite a challenge to tackle.
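To make the arithmetic concrete, here is a minimal sketch of the attrition maths described above; the membership figures are illustrative assumptions, not Virgin Active’s numbers.

```python
# Illustrative attrition arithmetic: with ~50 per cent annual churn, a club must
# sign up at least as many new members as it loses just to stand still.
# All figures below are invented for illustration.

def year_end_members(start_members: int, attrition_rate: float, new_joiners: int) -> int:
    """Members remaining at year end after churn, plus new sign-ups."""
    retained = start_members * (1 - attrition_rate)
    return round(retained + new_joiners)

if __name__ == "__main__":
    start = 10_000
    churn = 0.5  # ~50 per cent annual attrition
    print(year_end_members(start, churn, new_joiners=5_000))  # 10000 -> membership is flat
    print(year_end_members(start, churn, new_joiners=6_000))  # 11000 -> genuine growth
```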

Much of how Virgin Active intends to address this is through cleverer use of data, using cloud-based software and IoT sensors to better understand what its customers are doing inside and beyond the gym. The company’s vision involves creating one consolidated view of the customer, collating information stored on customers’ smartphones with health data generated by wearable sensors and the gym machines those customers use.

The company is already in the process of trialling this vision with a new fitness club at Cannon Street, London, which opens later this month. Originally announced last year, the club, which Caddy says is to be Virgin Active’s flagship technology club, uses membership wristbands with embedded RFID chips that members can use to do everything from entering the gym and logging cardiovascular data from the machines they use, to buying drinks at the café, renting towels and accessing lockers.

“Now we start to see what people are doing in the clubs, which gives us a richer set of data to work with, and it starts to generate insights that are more relevant and engaging and perhaps also feeds our CRM and product marketing,” he says. “Over the next few months we’ll be able to compare this data with what we see at other clubs to find out a few important things – are we becoming more or less relevant to customers? Is customer retention improving?”

Combine that with IoT data from things like smartwatches that are worn outside the confines of the gym, and the company can get a better sense of how to improve what it suggests as a health or fitness activity from a holistic standpoint. It also means more effective marketing, which by Caddy’s own admission demands a more sophisticated way of handling data and acting on it than the company has today.

“The kinds of questions I want to be able to answer for my customers are things like: What’s the kind of lunch I can eat tomorrow based on today’s activity? How should I change my calendar next week based on my current stress levels? These are the really interesting questions that would absolutely add value to [a customer’s] life and also create a reasonable extension of the role we’re already playing as a fitness provider.”

But Caddy says the vendors themselves, while pushing the boundaries in IoT from a technical standpoint, pose the biggest threats to the sector’s development.

“We want standards because it’s very hard to do anything when Nike want to talk about Fuel and Fitbit want to talk about Steps and Apple want to talk about Activity, and none of these things mean the same thing,” he explains. “What we really want is for some of these providers to start thinking about how you do something smart with that information, and what you need in order to do that, but I’m always surprised by how few vendors are asking those kinds of questions.”

“It’s an inevitable race to the bottom in sensor tech; the value is all in the data.”

Companies like Apple and Microsoft know this – and in health specifically are attempting to build out their own data services that developers can tap into for their own applications. But again, those are closed, proprietary systems, and it may be some time before the IoT sector opens up to effectively cater to a multi-device, multi-cloud world.

“We’re lucky in a sense because health and fitness is one of the first places where IoT has taken off in a real sense. But to be honest, we’re still a good way from where we want to be,” he says.

Food retail, robotics, cloud and the Internet of Things

Ocado is developing a white-label grocery delivery service

With a varied and fast-moving supply chain, loads of stock moving quickly through warehouses, delivery trucks and stores, and an increasingly digital mandate, the food retail sector is unlike any other retail segment. Paul Clarke, director of technology at Ocado, a leading online food retailer, explains how the cloud, robotics and the Internet of Things are increasingly at the heart of everything the company does.

Ocado started 13 years ago as an online supermarket where consumers could quickly and easily order food goods. It does not own or operate any brick-and-mortar stores, though it effectively competes with all other food retailers, in some ways now more than ever because of how supermarkets have evolved in the UK. Most of them offer online ordering and food delivery services.

But in 2013 the company struck a £216m deal with Morrisons that would see Ocado effectively operate as the Morrisons online food store, a shift from its previous strategy of offering a standalone end-to-end grocery service with its own brand on the front-end – and a move that would become central to its growth strategy going forward. The day the Morrisons platform went live in early 2014, the company set to work on re-platforming the Ocado service and turning it into the Ocado Smart Platform (OSP), a white-label end-to-end grocery service that can be deployed by food retailers globally. Clarke was fairly tight-lipped about some of the details for commercial reasons, but suggested “there isn’t a continent where the company is not currently in discussions” with a major food retailer to deliver OSP.

The central idea behind this is that standing up a grocery delivery service – the technical infrastructure as well as support services – is hugely expensive for food retailers and involves lots of technical integration, so why not simply deploy a white label end-to-end service that will still retain the branding of said retailer but offer all the benefits?

Paul Clarke is speaking at the Cloud World Forum in London June 24-25. Click here to register!

“In new territories you don’t need the size of facilities that we have here in the Midlands. For instance, our site in the Midlands costs over £230m, and that is fine for the UK which has an established online grocery business and our customer base, but it wouldn’t fit well in a new territory where you’re starting from scratch, nor is there the willingness to spend such sums,” he explains.

The food delivery service operates in a hub-and-spoke model. The cloud service being developed by Ocado connects the ‘spokes’, smaller food depots (which could even be large food delivery trucks) to customer fulfilment centres, which are larger warehouses that house the majority of the stock (the ‘hub’).

The company is developing and hosting the service on a combination of AWS and Google’s cloud platforms – for the compute and data side, respectively.

“The breadth and depth of our estate is huge. You have robotics systems, vision systems, simulation systems, data science applications, and the number of different kinds of use cases we’re putting in the cloud is significant. It’s a microservices architecture that we’re building with hundreds of different microservices. A lot of emphasis is being put on security through design, and robust APIs so it can be integrated with third party products – it’s an end-to-end solution but many of those incumbents will have other supply chain or ERP solutions and will want to integrate it with those.”

AWS and Google complement each other well, he says. “We’re using most things that both of those companies have in their toolbox; there’s probably not much that we’re not using there.”

The warehousing element, including the data systems, will run on a private cloud in the actual product warehouses, so low-latency real-time control systems will run in the private cloud, but pretty much everything else will run in the public cloud.

The company is also looking at technologies like OpenStack, Apache Mesos and CoreOS because it wants to run as many workloads as possible in Linux containers. Containers are more portable than VMs, and because of variations in legislation and performance between the regions where it will operate, the company may need to switch certain workloads between public and private clouds quite quickly.
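As an illustration of that portability argument, the sketch below uses the Docker SDK for Python to run the same container image against either a private, in-warehouse host or a public cloud host simply by switching the endpoint. The image name and host addresses are hypothetical; this is not Ocado’s code.

```python
# A minimal sketch of container portability: the same image can be scheduled on
# a private on-premise Docker host or a public-cloud host just by changing the
# endpoint. Image name and host URLs are hypothetical.
import docker

PRIVATE_HOST = "tcp://10.0.0.5:2376"          # in-warehouse private cloud (hypothetical)
PUBLIC_HOST = "tcp://public-cloud-host:2376"  # public cloud VM (hypothetical)

def run_workload(host_url: str, image: str = "example/route-optimiser:latest"):
    """Run the same container image on whichever environment is appropriate."""
    client = docker.DockerClient(base_url=host_url)
    return client.containers.run(image, detach=True)

def choose_host(data_must_stay_onsite: bool) -> str:
    """Regulation or latency may dictate where a given workload lands."""
    return PRIVATE_HOST if data_must_stay_onsite else PUBLIC_HOST

container = run_workload(choose_host(data_must_stay_onsite=True))
```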

The Internet of Things and the Great Data Lakes

IoT is very important for the company in several areas. Its warehouses are like little IoT worlds all on their own, Clarke says, with lots of M2M, hundreds of kilometres of conveyor, and thousands of things on the move at any one time including automated cranes and robotics.

Then there’s all of the data the company collects from drivers for things like route optimisation and operational improvement – wheel speed, tyre pressure, road speed, engine revs, fuel consumption, cornering performance – all of which is fed back to the company in real-time and used to track driver performance.

There’s also a big role for wearables in those warehouses. Clarke says down the line wearables have the potential to help it improve safety and productivity (“we’re not there yet but there is so much potential.”)

But where IoT can have the biggest impact in food retail, and where it’s most underestimated, Clarke explains, is the customer element: “This is where many companies underestimate the scale of transformation IoT is going to bring, the intersection of IoT and smart machines. In our space we see that in terms of the smart home, smart appliances, smart packaging – it’s all very relevant. The customers living in this world are going to demand this kind of smartness from all the systems they use, so it’s going to raise the bar for all the mobile apps and services we build.”

“Predictive analytics are going to play a big part there, as will machine learning, to help them do their shop in our case, or knowing what they want before they even have a clue themselves. IoT has a very important part to play in that, in terms of delivering that kind of information to the customer to the extent that they wish to share it,” he says.

But challenges that straddle the legal, technical and cultural persist in this nascent space. One of them, largely technical and not insurmountable, is data management. The company has implemented a data lake built on Google BigQuery, where it publishes a log of pretty much every business event onto a backbone persisted through that service, along with data exhaust from its warehouse logs, alerts, driver monitoring information, clickstream data and front-end supply chain information (at the point of order), and it uses technologies like Dataflow and Hadoop for number crunching.
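To give a flavour of what querying such an event log looks like, here is a hedged sketch using the google-cloud-bigquery client. The project, dataset, table and column names are invented for illustration; only the client usage itself is standard.

```python
# A hedged sketch of querying a BigQuery-backed business-event log.
# Table and column names are hypothetical; requires google-cloud-bigquery and
# application default credentials.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
    SELECT event_type, COUNT(*) AS events
    FROM `my-project.event_lake.business_events`   -- hypothetical table
    WHERE DATE(event_timestamp) = CURRENT_DATE()
    GROUP BY event_type
    ORDER BY events DESC
    LIMIT 10
"""

# Run the query and print today's most frequent event types.
for row in client.query(QUERY).result():
    print(f"{row.event_type}: {row.events}")
```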

Generally speaking, Clarke says, grocery is just fundamentally different to non-food retail in ways that have data-specific implications. “When you go and buy stationery or a printer cartridge you usually buy one or two items. With grocery there can often be upwards of 50 items, there are multiple suppliers and multiple people involved, sometimes at different places, often on different devices and different checkouts. So that journey of stitching that order, that journey, together is a challenge from a data perspective in itself.”

Bigger challenges in the IoT arena, where more unanswered questions lie, include security and identity management, discoverability, data privacy and standards – or the lack thereof. These are the problems that aren’t so straightforward.

“A machine is going to have to have an identity. That whole identity management question for these devices is key and so far goes unanswered. It’s also linked to discoverability. How do you find out what the device functions are? Discovery is going to get far too complex for humans. You get into a train station these days and there are already 40 different Wi-Fi networks, and hundreds of Bluetooth devices visible. So the big question is: How do you curate this, on a much larger scale, for the IoT world?”

“The type of service that creates parameters around who you’re willing to talk to as a device, how much you’re willing to pay for communications, who you want to be masked from, and so forth – that’s going to be really key, as well as how you implement this so that you don’t make a mistake and share the wrong kinds of information with the wrong device. It’s core to the privacy issue.”

“The last piece is standardisation. How these devices talk to one another – or don’t – is going to be key. What is very exciting is the role that all the platforms like Intel Edison, Arduino, BeagleBone have played in lowering the barrier by providing amazing Lego with which to prototype, and in some cases build these systems; it has allowed so many people to get involved,” he concluded.

Food retail doesn’t have a large industry-specific app ecosystem, which in some ways has benefited a company like Ocado. And as it makes the transition away from being the sole vendor of its product towards being a platform business, Clarke said the company will inevitably have to develop some new capabilities, from sales to support and consultancy, which it didn’t previously depend so strongly upon. But its core development efforts will only accelerate as it ramps up to launch the platform. It has 610 developers and is looking to expand to 750 by January next year across its main development centre in Hatfield and two others in Poland, one of which is being set up at the moment.

“I see no reason why it has to stop there,” he concludes.

Real-time cloud monitoring too challenging for most providers, TFL tech lead says

Reed says TFL wants to encourage greater use of its data

Getting solid data on what’s happening in your application in real-time seems to be a fairly big challenge for most cloud services providers out there, explains Simon Reed, head of bus systems & technology at Transport for London (TFL).

TFL, the executive agency responsible for transport planning and delivery for the city of London, manages a slew of technologies designed to support over 10 million passenger journeys each day. These include back-office ERP, routing and planning systems, mammoth databases tapped into by line-of-business applications and customer-facing apps (real-time travel planning apps and the journey planner website, for instance), as well as all the vehicle telematics, monitoring and tracking technologies.

A few years ago TFL moved its customer facing platforms – the journey planner, the TFL website, and the travel journey databases – over to a scalable cloud-based platform in a bid to ensure it could deal with massive spikes in demand. The key was to get much of that work completed before the Olympics, including a massive data syndication project so that app developers could more easily tap into all of TFL’s journey data.

“Around the Olympics you have this massive spike in traffic hitting our databases and our website, which required highly scalable front and back-ends,” Reed said. “Typically when we have industrial action or a snowstorm we end up with 10 to 20 times the normal use, often triggered in less than half an hour.”

Simon Reed is speaking at the Cloud World Forum in London June 24-25. Register for the event here.

The organisation processes bus arrival predictions for all 19,000 bus stops in London, which are constantly dumped into the cloud in a ‘leaky tap’ model, and there’s a simple cloud application that allows subscribers to download the data in a number of formats, plus APIs to build access to that data directly into applications. “As long as developers aren’t asking for predictions nanoseconds apart, the service doesn’t really break down – so it’s about designing that out and setting strict parameters on how the data can be accessed and at what frequency.”
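The sketch below illustrates the kind of well-behaved consumer Reed describes: one that polls the prediction feed at a sensible interval rather than “nanoseconds apart”. The endpoint, parameters and response shape are hypothetical placeholders, not TFL’s actual API.

```python
# A hedged sketch of a polite consumer of a bus arrival prediction feed.
# The URL, parameters and JSON shape are hypothetical, not TFL's real API.
import time
import requests

FEED_URL = "https://example.org/bus/predictions"  # hypothetical endpoint
MIN_POLL_SECONDS = 30                              # respect the publisher's frequency limits

def poll_predictions(stop_id: str) -> None:
    """Fetch predictions for one stop, then wait before asking again."""
    while True:
        resp = requests.get(FEED_URL, params={"stop": stop_id}, timeout=10)
        resp.raise_for_status()
        for prediction in resp.json():             # assumed: JSON list of predictions
            print(prediction)
        time.sleep(MIN_POLL_SECONDS)

# poll_predictions("490008660N")  # example stop code, illustrative only
```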

But Reed said gaining visibility into the performance of a cloud service out of the box seems to be a surprisingly difficult thing to do.

“I’m always stunned about how little information there is out of the box though when it comes to monitoring in the cloud. You can always add something in, but really, should I have to? Surely everyone else is in the same position where monitoring actual usage in real-time is fairly important. The way you often have to do this is to specify what you want and then script it, which is a difficult approach to scale,” he said. “You can’t help but think surely this was a ‘must-have’ when people had UNIX systems.”

Monitoring (and analytics) will be important for Reed’s team as they expand their use of the cloud, particularly within the context of the journey data TFL publishes. Reed said those systems, while in a strong position currently, will likely see much more action as TFL pursues a strategy of encouraging use of the data outside the traditional transport or journey planning app context.

“What else can we do to that data? How can we turn it around in other ways? How can other partners do the same? For us it’s a question of exploiting the data capability we have and moving it into new areas,” he said.

“I’m still not convinced of the need to come out of whatever app you’re in – if you’re looking at cinema times you should be able to get the transportation route that gets you to the cinema on time, and not have to come out of the cinema listings app. I shouldn’t have to match the result I get in both apps in order to plan that event – it should all happen in one place. It’s that kind of thinking we’re currently trying to promote, to think more broadly than single purpose apps, which is where the market is currently.”

Bring Your Own Encryption: The case for standards

BYOE is the new black

BYOE is the new black

Being free to choose the most suitable encryption for your business seems like a good idea. But it will only work in a context of recognised standards across encryption systems and providers’ security platforms. Since the start of the 21st century, security has emerged from scare-story status to become one of IT users’ biggest issues – as survey after survey confirms. Along the way a number of uncomfortable lessons are still being learned.

The first lesson is that security technology must always be considered in a human context. No one still believes in a technological fix that will put an end to all security problems, because time and again we hear news of new types of cyber attack that bypass sophisticated and secure technology by targeting human nature – from alarming e-mails ostensibly from official sources, to friendly social invitations to share a funny download; from a harmless-looking USB stick ‘accidentally’ dropped by the office entrance, to the fake policeman demanding a few personal details to verify that you are not criminally liable.

And that explains the article’s heading: a balance must be struck between achieving the desired level of protection and keeping all protection procedures quick and simple. Every minute spent making things secure is a minute lost to productivity – so the heading could equally have said “balancing security with efficiency”.

The second lesson still being learned is never to fully trust to instinct in security matters. It is instinctive to obey instructions that appear to come from an authoritative source, or to respond in an open, friendly manner to a friendly approach – and those are just the sort of instincts that are exploited by IT scams. Instincts can open us to attack, and they can also evoke inappropriate caution.

In the first years of major cloud uptake there was the oft-repeated advice to business that the sensible course would be to use public cloud services to simplify mundane operations, but that critical or high priority data should not be trusted to a public cloud service but kept under control in a private cloud. Instinctively this made sense: you should not allow your secrets to float about in a cloud where you have no idea where they are stored or who is in charge of them.

The irony is that the cloud – being so obviously vulnerable and inviting to attackers – is constantly being reinforced with the most sophisticated security measures: so data in the cloud is probably far better protected than any SME could afford to secure its own data internally. It is like air travel: because flying is instinctively scary, so much has been spent to make it safe that you are less likely to die on a flight than you are driving the same journey in the “safety” of your own car. The biggest risk in air travel is in the journey to the airport, just as the biggest risk in cloud computing lies in the data’s passage to the cloud – hence the importance of a secure line to a cloud service.

So let us look at encryption in the light of those two lessons. Instinctively it makes sense to keep full control of your own encryption and keys, rather than let them get into any stranger’s hands – so how far do we trust that instinct, bearing in mind the need also to balance security against efficiency?

BYOK

Hot on the heels of BYOD – or “Bring Your Own Device” to the workplace – comes the acronym for Bring Your Own Key (BYOK).

The idea of encryption is as old as the concept of written language: if a message might fall into enemy hands, then it is important to ensure that they will not be able to read it. We have recently been told that US forces used Native American communicators in WW2 because the chances of anyone in Japan understanding their language were near zero. More typically, encryption relies on some sort of “key” to unlock and make sense of the message it contains, and that transfers the problem of security to a new level: now that the message is secure, the focus shifts to protecting the key.

In the case of access to cloud services: if we are encrypting data because we are worried about its security in an unknown cloud, why then should we trust the same cloud to hold the encryption keys?

Microsoft for instance recently announced a new solution to this dilemma using HSMs (Hardware Security Modules) within their Windows Azure cloud – so that an enterprise customer can use its own internal HSM to produce a master key that is then transmitted to the HSM within the Windows Azure cloud. This provides secure encryption when in the cloud, but it also means that not even Microsoft itself can read it, because they do not have the master key hidden in the enterprise HSM.
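The principle can be illustrated with a simple key-wrapping (envelope encryption) sketch using the Python cryptography library’s Fernet primitive: a master key that never leaves the customer wraps the data keys, so the provider only ever holds material it cannot read. This is an illustration of the idea, not Microsoft’s HSM implementation.

```python
# Illustration of the BYOK principle: the customer-held master key wraps the
# data keys, so the cloud provider stores only ciphertext it cannot read.
# Requires the 'cryptography' package. Not Microsoft's HSM implementation.
from cryptography.fernet import Fernet

# Master key: in the real scheme this is generated and kept inside the customer's HSM.
master_key = Fernet.generate_key()
master = Fernet(master_key)

# Data key: encrypts the actual data, then is wrapped with the master key.
data_key = Fernet.generate_key()
wrapped_data_key = master.encrypt(data_key)        # safe to hand to the cloud provider

ciphertext = Fernet(data_key).encrypt(b"sensitive enterprise record")

# The provider stores (wrapped_data_key, ciphertext). Only the master key holder
# can unwrap the data key and recover the plaintext.
recovered_key = master.decrypt(wrapped_data_key)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"sensitive enterprise record"
```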

It is not so much that the enterprise cannot trust Microsoft to protect its data from attack; it is more to do with growing legal complexities. In the wake of the Snowden revelations, it is becoming clear that even the most well-protected data might be at risk from a government or legal subpoena demanding to reveal its content. Under this BYOK system, however, Microsoft cannot be forced to reveal the enterprise’s secrets because it cannot access them itself, and the responsibility lies only with the owner.

This is increasingly important because of other legal pressures that insist on restricting access to certain types of data. A government can, for example, forbid anyone from allowing data of national importance to leave the country – not a simple matter in a globally connected IP network. There are also increasing legal pressures on holders of personal data to guarantee levels of privacy.

Instinctively it feels a lot more secure to manage your own key and use BYOK instead of leaving it to the cloud provider. As long as that instinct is backed by a suitable and strict in-house HSM-based security policy, it can be trusted.

BYOE

BYOK makes the best of the cloud provider’s encryption offering, by giving the customer ultimate control over its key. But is the customer happy with the encryption provided?

Bearing in mind that balance between security and efficiency, you might prefer a higher level of encryption than that used by the cloud provider’s security system, or you might find the encryption mechanism is adding latency or inconvenience and would rather opt for greater nimbleness at the cost of lighter encryption. In this case you could go a step further and employ your own encryption algorithms or processes. Welcome to the domain of BYOE (Bring Your Own Encryption).

Again, we must balance security against efficiency. Take the example of an enterprise using the cloud for deep mining its sensitive customer data. This requires so much computing power that only a cloud provider can do the job, and that means trusting private data to be processed in a cloud service. This could infringe regulations, unless the data is protected by suitable encryption. But how can the data be processed if the provider cannot read it?

Taking the WW2 example above: if a Japanese wireless operator was asked to edit the Native American message so a shortened version could be sent to HQ for cryptanalysis, any attempt to edit an unknown language would create gobbledygook, because translation is not a “homomorphic mapping”.

Homomorphic encryption means that one can perform certain processes on the encrypted data, and the same processes will be performed on the source data without any need to decrypt the encrypted data. This usually implies arithmetical processes: so the data mining software can do its mining on the encrypted data file while it remains encrypted, and the output data, when decrypted, will be the same output as if the data had been processed without any intervening encryption.

It is like operating one of those automatic coffee vendors that grinds the beans, heats the water and adds milk and sugar according to which button was pressed: you do not know what type of coffee bean is used, whether the water is tap, filtered or spring, or whether the milk is whole cream, skimmed or soya. All you know is that what comes out will be a cappuccino with no sugar. In the data mining example: what comes out might be a neat spreadsheet summary of customers’ average buying habits based on millions of past transactions, without a single personal transaction detail being visible to the cloud provider.
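A toy way to see the homomorphic property in code is textbook (unpadded) RSA, which is multiplicatively homomorphic: multiplying two ciphertexts multiplies the underlying plaintexts without any decryption. The numbers below are deliberately tiny and the scheme is not secure; it only demonstrates the property, and falls far short of the fully homomorphic schemes a real data-mining scenario would require.

```python
# Toy demonstration of a homomorphic property: textbook RSA is multiplicatively
# homomorphic. Tiny numbers, no padding - an illustration, not a secure scheme.

p, q = 61, 53                      # toy primes
n = p * q                          # 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
product_cipher = (c1 * c2) % n             # operate on encrypted values only
assert decrypt(product_cipher) == 7 * 6    # matches the result on the plaintexts
print(decrypt(product_cipher))             # 42
```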

The problem with the cloud provider allowing users to choose their own encryption is that the provider’s security platform has to be able to support the chosen encryption system. As an interim measure, the provider might offer a choice from a range of encryption offerings that have been tested for compatibility with the cloud offering, but that still requires one to trust another’s choice of encryption algorithms. A full homomorphic offering might be vital for one operation, but a waste of money and effort for a whole lot of other processes.

The call for standards

So what is needed for BYOE to become a practical solution is a global standard cloud security platform with which any encryption offering can be registered for support. The customer chooses a cloud offering for its services and for its certified “XYZ standard” security platform, then goes shopping for an “XYZ certified” encryption system that matches its particular balance between security and practicality.

Just as in the BYOD revolution, this decision need not be made at an enterprise level, or even by the IT department. BYOE, if sufficiently standardised, could become the responsibility of the department, team or individual user: just as you can bring your own device to the office, you could ultimately take personal responsibility for your own data security.

What if you prefer to use your very own implementation of your own encryption algorithms? All the more reason to want a standard interface! This approach is not so new for those of us who remember the Java J2EE Crypto library – as long as we complied with the published interfaces, anyone could use their own crypto functions. This “the network is the computer” ideology becomes all the more relevant in the cloud age. As the computer industry has learned over the past 40 years, commonly accepted standards and architectures (for example the Von Neumann model or J2EE Crypto) play a key role in enabling progress.
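In that spirit, the sketch below shows the shape such a published interface might take: the platform exposes one contract and any certified encryption implementation can be registered against it. Every name here is invented for illustration; no such standard currently exists.

```python
# A minimal sketch of a pluggable "bring your own encryption" interface: the
# platform publishes one contract, and any certified implementation can be
# registered against it. All names are invented; this is the shape of the idea.
from abc import ABC, abstractmethod

class CipherProvider(ABC):
    """The published contract every BYOE implementation must satisfy."""

    @abstractmethod
    def encrypt(self, plaintext: bytes, key: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, ciphertext: bytes, key: bytes) -> bytes: ...

_REGISTRY: dict[str, CipherProvider] = {}

def register_provider(name: str, provider: CipherProvider) -> None:
    """Platform side: accept any implementation that honours the contract."""
    _REGISTRY[name] = provider

class XorDemoCipher(CipherProvider):
    """A trivial (insecure) stand-in for a customer's own algorithm."""
    def encrypt(self, plaintext: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))
    decrypt = encrypt  # XOR is its own inverse

register_provider("acme-xor-demo", XorDemoCipher())
cipher = _REGISTRY["acme-xor-demo"]
assert cipher.decrypt(cipher.encrypt(b"data", b"k3y"), b"k3y") == b"data"
```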

BYOE could prove every bit as disruptive as BYOD – unless the industry can ensure that users choose their encryption from a set of globally sanctioned and standardised encryption systems or processes. If business is to reap the full benefits promised by cloud services, it must have the foundation of such an open cloud environment.

Written by Dr. Hongwen Zhang, chair security working group, CloudEthernet Forum.

Q&A with Mark Evans, head of IT, RLB

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Mark Evans, head of IT at global property and construction practice Rider Levett Bucknall (RLB), to discuss supporting BYOD, the need for standards in the cloud sector, and the impact of working with large data models on the technology choices the firm has to make.

 

What do you see as the most disruptive trend in enterprise IT today?

I’m not entirely sure that the most disruptive trend in enterprise IT is entirely technical. Admittedly, the driving impetus for change is coming from technology, but it is being driven by non-IT people who are equipping their homes, cars and any one of a multitude of other environments with technology which works for them. The disruption manifests itself in the attitude which is brought to business from these domestic environments; people no longer see the bastion of “Corporate IT” as unassailable as it once was, before the commoditisation of IT equipment became the norm. Domestic procurement cycles are driven in a different manner to those of any business – it’s what the likes of Apple thrive on.

There’s more of a “heart” aspiration than a “head” decision when it comes to buying IT at home. Let’s be honest: who – at home – works out depreciation of an asset when a loved one is being tugged at by slick marketing and peer pressure? Maybe I’m a misanthrope, but this sort of pressure has a knock-on effect with a lot of people and they seek the flexibility, the performance, the ease of use and (let’s be honest) the flashiness of new toys at work. The person with the keys to the “toy box”, the erstwhile IT director, is seen as a barrier to that oft-quoted, rarely well-informed concept of ‘agility’.

So… BYOD. People bring their home kit to work and expect it to work and to offer them an ‘edge’. I think the disruption is bigger than Dave from Accounts bringing in his shiny new laptop (with added speed stripes). It is the expectation that this is acceptable in the face of business-wide legal constraints of liability, compliance and business planning – the directors of a business set the rules and this new, almost frivolous attitude to the complexity and requirements of corporate IT is a “wolf in sheep’s clothing” in terms of the risk it brings to a business. Where do I sit on this? I say, “bring it on”.

 

What do you think the industry needs to work on in terms of cloud service evolution?

Portability. Standards. Standards of portability. I still believe that there is a general complicity between vendors and purchasers to create a “handcuffs” relationship (“Fifty Shades of Big Blue”?) which is absolutely fine in the early part of a business relationship as it provides a predictable environment from the outset, but this predictability can become moribund and in an era where business models flex and morph at previously alarming rates, the “handcuffs” agreement can become shackles. If the agreement is on a month-by-month basis, it is rarely easy to migrate across Cloud platforms. Ignoring the potential volumes of data which may need to be moved, there is no lingua franca for Cloud services to facilitate a “switch on/switch off” ease-of-migration one might expect in the Cloud environment, predicated as it is on ease-of-use and implementation.

Data tends to move slowly in terms of development (after all, that’s where the value is), so maybe as an industry we need to consider a Data Cloud Service which doesn’t require massive agility, but a front-end application environment which is bound by standards of migratability (is that a word? If it isn’t – it should be!) to offer front-end flexibility against a background of data security and accessibility. In that way, adopting new front-end processes would be easier as there would be no requirement to haul terabytes of data across data centres. Two different procurement cycles, aligned to the specific vagaries of their environments.

 

Can you describe some of the unique IT constraints or features particular to your sector?

Acres of huge data structures. When one of the major software suppliers in your industry (Autodesk and construction, respectively) admits that the new modelling environment for buildings goes beyond the computing and data capability in the current market – there are alarm bells. This leads to an environment where the client front end ‘does the walking’ and the data stays in a data centre or the Cloud. Models which my colleagues need to use have a “starting price” of 2GB and escalate incredibly as the model seeks to more accurately represent the intended construction project. In an environment where colleagues would once carry portfolios of A1 or A0 drawings, they now have requirements for portable access to drawings which are beyond the capabilities of even workstation-class laptop equipment. Construction and, weirdly enough, Formula One motorsport are pushing the development of Cloud and virtualisation to accommodate these huge, data-rich, often highly graphical models. Have you ever tried 3D rendering on a standard x64 VMware or Hyper-V box? We needed Nvidia to sort out the graphics environment in the hardware and even that isn’t the ‘done deal’ we had hoped.

 

Is the combination of cloud and BYOD challenging your organisation from a security perspective? What kind of advice would you offer to other enterprises looking to secure their perimeter within this context?

Not really. We have a strong, professional and pragmatic HR team who have put in place the necessary constraints to ensure that staff are fully aware of their responsibilities in a BYOD environment. We have backed this up with decent MDM control. Beyond that? I honestly believe that “where there’s a will, there’s a way” and that if MI5 operatives can leave laptops in taxis we can’t legislate for human frailties and failings. Our staff know that there is a ‘cost of admission’ to the BYOD club and it’s almost a no-brainer; MDM controls their equipment within the corporate sphere of influence and their signature on a corporate policy then passes on any breaches of security to the appropriate team, namely, HR.

My advice to my IT colleagues would be – trust your HR team to do their job (they are worth their weight in gold and very often under-appreciated), but don’t give them a ‘hospital pass’ by not doing everything within your control to protect the physical IT environment of BYOD kit.

 

What’s the most challenging part about setting up a hybrid cloud architecture?

Predicting the future. It’s so, so, so easy to map the current operating environment in your business to a hybrid environment (“They can have that, we need to keep this…”) but constraining the environment by creating immovable and impermeable glass walls at the start of the project is an absolutely, 100 per cent easy way to lead to frustration with a vendor in future and we must be honest and accept that by creating these glass walls we were the architect of our own demise. I can’t mention any names, but a former colleague of mine has found this out to his company’s metaphorical and bottom-line cost. They sought to preserve their operating environment in aspic and have since found it almost soul-destroying to start all over again to move to an environment which supported their new aspirations.

Reading between the lines, I believe that they are now moving because there is a stubbornness on both sides and my friend’s company has made it more of a pain to retain their business than a benefit. They are constrained by a mindset, a ‘groupthink’ which has bred bull-headedness and very constrained thinking. An ounce of consideration of potential future requirements could have built in some considerable flexibility to achieve the aims of the business in changing trading environments. Now? They are undertaking a costly migration in the midst of a potentially high-risk programme of work; it has created stress and heartache within the business which might have been avoided if the initial move to a hybrid environment had considered the future, rather than almost constrained the business to five years of what was a la mode at the time they migrated.

 

What’s the best part about attending Cloud World Forum?

Learning that my answers above may need to be re-appraised because the clever people in our industry have anticipated and resolved my concerns.

Cloud democratises retail investor services

Cloud has the potential to democratise investment services

Cloud services are opening up possibilities for the retail investor to create individual customised funds in a way that was previously the preserve of the super-wealthy. Coupled with UK regulation such as the Retail Distribution Review, the effect has been to make new business models possible, according to Michael Newell, chief executive at InvestYourWay.

“Nobody is really talking about how the cloud is fundamental to what they do, but it is,” said Newell. “Where previously it might have taken days or even weeks to get the information to set up a fund, and to change your portfolio and positions completely, and to activate your account, it now takes just a few seconds thanks to Amazon Cloud.”

Newell previously worked at BATS, where he was involved alongside Mark Hemsley in setting up the exchange’s ETF services. For some time, he had been increasingly aware of the kind of services that high net worth investors were getting and began to form an idea that someone could bring that to the common retail investor. The idea was to create a system where each individual person has their own fund. However, Newell soon realised that to make that possible, it would be necessary to service customers investing smaller amounts at significantly lower cost – something that had never really been viable up to that point.

“You’d never get that kind of individual attention unless you were high net worth,” he said. “If you’ve only got £2,000 to invest, it’s not going to be worth a fund manager spending the time with you and charging just a few pounds for their time, which is what they’d need to do to make it viable. It just didn’t work.”

Cloud services changed both the economics of the situation and the practicality of his original idea. Newell found that by obtaining computing power as a service, calculations that would have taken 48 hours on a laptop could now be completed in 30 seconds. A manual Google search process carried out by an individual to work out how best to invest might take days at the least, or more realistically weeks and even months – but on InvestYourWay, it can be done in seconds because the process is automated.

Part of the impetus for the new business was also provided by regulatory change, which began to make it easier to compete in the UK with the established fund managers. Specifically, the Retail Distribution Review which came into effect in January 2013 had the effect of forcing fund managers to unbundle their services, providing transparency into previously opaque business charges. Customers could now see exactly what they were being charged for, and that has had the effect of forcing down prices and changing consumer behaviour.

“It’s amazing that it took so long to bring that to the retail investor,” said Newell. “If you think about it, all of this has been happening in the capital markets for years. The focus on greater transparency and unbundling. The clarity on costs and fees.”

However, the idea still needed visibility and a user-base. This was provided when the platform agreed a deal with broker IG, under which InvestYourWay became a service available as an option on the drop-down menu for IG customers. The platform launched in October 2014, offering investment based on indexes rather than single stocks. This was done in part to keep costs down, and partly for ideological reasons. Newell explains that alternative instruments such as ETFs are popular, but would have involved gradually increasing slippage over time due to the costs of middle men. Focusing on indexes removes that problem.

The platform also claims to be the first to offer non-leveraged contract for difference (CFD) trading. While around 40% of trading in London is estimated to be accounted for by CFDs, these are normally leveraged, such that an investor who puts in £1,000 stands to gain £10,000 (but may also lose on the same scale); IYW’s contracts are not leveraged.

The interface of the platform has quite a bit in common with the latest personal financial management interfaces. The first page consists of a time slider, a risk slider, and the amount the user wants to invest, as well as preferred geographical focus – Europe, America or Asia. After that, users get a pie chart breaking down how the service has allocated their investment based on the sliders. For example, into categories such as North American fintech startups, Asian banks, European corporates, etc. Users also get bar charts showing the historical performance of the fund they are designing, as they go along. They can also see an Amazon-style recommendation suggesting “People who invested in X, also bought Y…”

After that, the user is presented with optional add-ons such as investment in gold, banks, metals, pharmaceuticals, and other areas that may be of special interest. Hovering the mouse over one of these options allows the user to see what percentage of other funds have used that add-on. Choosing one of the add-ons recalibrates the fund that the user is creating to match, for example adding a bit more Switzerland if the user selected banks.
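The sketch below illustrates the sort of recalibration described above: choosing an add-on tilts the allocation towards related exposure and the weights are re-normalised so they still sum to 100 per cent. The categories, weights and tilt rule are invented for illustration and are not InvestYourWay’s actual model.

```python
# Illustrative recalibration of a fund allocation when an add-on is selected.
# Categories, weights and the tilt rule are invented for illustration.

def recalibrate(allocation: dict[str, float], add_on: str, tilt: float = 0.05) -> dict[str, float]:
    """Boost the category linked to the chosen add-on, then re-normalise to 100%."""
    adjusted = dict(allocation)
    adjusted[add_on] = adjusted.get(add_on, 0.0) + tilt
    total = sum(adjusted.values())
    return {name: weight / total for name, weight in adjusted.items()}

base_fund = {"European corporates": 0.40, "Asian banks": 0.35, "North American fintech": 0.25}
with_banks_add_on = recalibrate(base_fund, "Swiss banks")   # "a bit more Switzerland"
print({name: round(weight, 3) for name, weight in with_banks_add_on.items()})
```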

In a demonstration seen by Banking Technology, it was possible to adjust a fund by getting out of Europe and moving the user’s investment to Asia in a few clicks. According to Newell, it would take weeks to do that the traditional way; the process might involve moving money from one fund manager to another or starting an entirely new fund. It was also possible to see how much that move would cost – on a £10,000 investment, £13. Prices are matched to the most recent available end-of-day data.

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments which, today, tends to imply an x86-centric stack built to serve up almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), but it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

This article appeared in the March/April edition of BCN Magazine. Click here to download the issue today.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload-optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem – some that always seemed likely to join the fray at some point, others less so – are being enticed to get involved. Mobile chip incumbent Qualcomm announced in November last year that it plans to enter the server chip market with its own ARM-based offerings at some point over the next two years, a market the company believes represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement HP unveiled what it called the first “enterprise-grade ARM-based server”, its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a long-time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently, optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain-specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload-specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market because of the cost prohibitive nature of making such a move (and the millions of dollars in potential cost-savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive-computing-as-a-service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow and vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do, all things considered or remaining equal, was hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers. The diversity of suppliers means spreading that risk and increasing competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on one hand and continue to drive down cost-to-performance on the other. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent share of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. Rackspace, for instance, which participates in both OpenStack and the Open Compute Project – open source cloud software and hardware projects respectively – is actively working to port OpenStack to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today – and, critically, he argues such openness need not mean loose integration or the performance tax that comes with it.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. And despite several years of losses following Oracle’s acquisition of Sun Microsystems, the architecture’s progenitor, Oracle’s hardware business mostly bucked the decline experienced by most high-end server vendors throughout 2014 and continues to do so.

The company’s Q2 2015 results saw hardware systems revenue grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one-click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director of EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching and revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps means we’re able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes and proprietary Unix systems towards lower-cost commodity x86 machines running Linux or Microsoft software on bare metal, for many large enterprises much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40 to 50-year-old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, along with, to some extent, the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to run the workloads that we can relate to, delivering a given level of service quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads – wrapping some kind of cloud portal access layer around a mainframe application, for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with it; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software-defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, given the expectations people now have around resilience and uptime as well as ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture, and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, and this volatility will likely last seven years, possibly up to a decade, before it settles down – if it settles down,” Butler concludes.

Why did anyone think HP was in it for public cloud?

HP president and chief executive officer Meg Whitman (pictured right) is leading HP’s largest restructuring ever

Many have jumped on a recently published interview with Bill Hilf, the head of HP’s cloud business, as a sign HP is finally coming to terms with its inability to make a dent in Amazon’s public cloud business. But what had me scratching my head is not that HP would so blatantly seem to cede ground in this segment – but why many assume it wanted to in the first place.

For those of you who didn’t see the NYT piece, or the subsequent pieces from the hordes of tech insiders and journalists more or less toeing the “I told you so” line, Hilf was quoted as candidly saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head [with AWS].”

HP has made mistakes in this space – the list is long, and others have done a wonderful job of fleshing out the classic “large incumbent struggles to adapt to a new paradigm” narrative that the company’s story, so far, smacks of.

I would only add that it’s a shame HP didn’t pull a “Dell” and publicly get out of the business of directly offering public cloud services to enterprise users – a move that served Dell well. Standing up public cloud services is by most accounts an extremely capital-intensive exercise that a company like HP, given its current state, is simply not best positioned to see through.

But it’s also worth pointing out that a number of interrelated factors have been pushing HP towards private and hybrid cloud for some time now, and despite HP’s insistence that it still runs the largest OpenStack public cloud – a claim other vendors have made in the past – its dedication to public cloud has always seemed superficial at best (particularly if you’ve had the, um, privilege, of sitting through years of sermons from HP executives at conferences and exhibitions).

HP’s heritage is in hardware – desktops, printers and servers – and servers still account for a reasonably large chunk of the company’s revenue, something it has no choice but to keep in mind as it seeks to move up the stack in other areas (its recent NFV and cloud workload management-focused acquisitions attest to this, beyond the broader industry trend). According to the latest Synergy Research figures the company still has a lead in the cloud infrastructure market, but primarily in private cloud.

It wants to keep that lead in private cloud, no doubt, but it also wants to bolster its pitch to the scale-out market specifically (where telcos are quite keen to play) without alienating its enterprise customers. That also means delivering capabilities that are starting to see increased demand in that segment – hybrid cloud workload management, security and compliance tools – and offering a platform with enough buy-in to ensure a large ecosystem of applications and services will be developed for it.

Whether OpenStack is the best way of hitting those sometimes competing objectives remains to be seen – HP hasn’t had these products in the market very long, and take-up has been slow – but that’s exactly what Helion is to HP.

Still, it’s worth pointing out that OpenStack, while trying to evolve capabilities that would whet the appetites of communications services providers and others in the scale-out segment (NFV, object storage, etc.), is seeing much more take-up from the private cloud crowd. Indeed one of the key benefits of OpenStack is easy burstability into, and (more of a work in progress) federatability between, OpenStack-based public and private clouds. The latter, by the way, is consistent with the logic underpinning HP’s latest cloud partnership with the European Commission, which looks at – among other things – the potential federatability of regional clouds that have strong security and governance requirements.

Even HP’s acquisition strategy – particularly its purchase of Eucalyptus, a software platform that makes it easy to shift workloads between on-premise systems and AWS – seems in line with the view that a private cloud needs to be able to lean on someone else’s datacentre from time to time.

HP has clearly chosen its mechanism for doing just that, much as VMware looked at the public cloud and thought the same about extending vSphere and its other legacy offerings. Like HP, it wanted to hedge its bets and stand up its own public cloud platform because, apart from the “me too” aspect, it thought doing so was in line with where users were heading – and, to a lesser extent, because it didn’t want to let AWS, Microsoft and Google have all the fun if it didn’t have to. But public cloud definitely doesn’t seem front-of-mind for HP, or VMware, or most other vendors coming at this from an on-premise heritage (HP’s executives mentioned “public cloud” just once in the past three quarterly results calls with journalists and analysts).

Funnily enough, even VMware has come up with its own OpenStack distribution, and now touts a kind of “one cloud, any app, any device” mantra that has hybrid cloud written all over it – ‘hybrid cloud service’ being what the previous incarnation of its public cloud service was called.

All of this is of course happening against the backdrop of the slow crawl up the stack with NFV, SDN, cloud resource management software, PaaS and so forth – and not just at HP. Cisco, Dell and IBM are all looking to make inroads in software while, with the exception of IBM, fighting off lower-cost Asian ODMs on the hardware side – ODMs that are starting to significantly encroach on their turf, particularly in the scale-out markets.

The point is, HP, like many old-hat enterprise vendors, knows that what ultimately makes AWS so appealing isn’t its cost (it can actually be quite expensive, though prices – and margins – are dropping) or the ease of procuring it as an elastic hosting provider. It’s the massive ecosystem of services that gives the platform so much value, and the ability to tap into them fairly quickly. HP has bet the farm on OpenStack’s capacity to evolve into a formidable competitor to AWS in that sense (IBM and Cisco are, to varying degrees, toeing a similar line), and it shouldn’t be dismissed outright given the massive buy-in that open source community has.

But – and some would view this as part of the company’s problem – HP’s bread and butter has been, and continues to be, offering the technologies and tools to stand up predominantly private clouds (or, in the case of service providers, very large private clouds – it’s also big on converged infrastructure) and supporting those technologies and tools. That really isn’t, directly, the business AWS is in, despite substantial overlap in the enterprise customers they go after.

AWS, on the other hand, started in this space as an elastic hosting provider offering CDN and storage services but has more or less evolved into a kind of application marketplace, where any app can be deployed on almost infinitely scalable compute and storage platforms. Interestingly, AWS’s messaging has shifted from outright hostility towards the private cloud crowd (and private cloud vendors) towards being more open to the idea that some enterprises simply don’t want to expose their workloads or host them on shared infrastructure – partly because it understands there’s growing overlap, and partly because it wants them to on-board those workloads onto AWS.

HP’s problem isn’t that it tried and failed at the public cloud game – you can’t really fail at something if you don’t have a proper go at it; and on the private cloud front, Helion is still quite young, as are OpenStack, Cloud Foundry and many of the other technologies at the core of its revamped strategy.

Rather, it’s that HP, for all its restructuring efforts, talk of change and trumpeting of cloud, still risks getting stuck in its old-world thinking, which could ultimately hinder the company further as it seeks to transform itself. AWS senior vice president Andy Jassy, who hit out at tech companies like HP at the unveiling of Amazon’s Frankfurt-based cloud service last year, hit the nail on the head: “They’re pushing private cloud because it’s not all that different from their existing operating model. But now people are voting with their workloads… It remains to be seen how quickly [these companies] will change, because you can’t simply change your operating model overnight.”

Can the cloud save Hollywood?

The film and TV industry is warming to cloud

You don’t have to watch the latest ‘Avengers’ film to get the sense the storage and computational requirements of film and television production are continuing their steady increase. But Guillaume Aubichon, chief technology officer of post-production and visual effects firm DigitalFilm Tree (DFT) says production and post-production outfits may find use in the latest and greatest in open source cloud technologies to help plug the growing gap between technical needs and capabilities – and unlock new possibilities for the medium in the process.

Since its founding in 2000, DFT has done post-production work for a number of motion pictures as well as television shows airing on some of the largest networks in America including ABC, TNT and TBS. And Aubichon says that like many in the industry DFT’s embrace of cloud came about because the company was trying to address a number of pain points.

“The first and the most pressing pain point in the entertainment industry right now is storage – inexpensive, commodity storage that is also internet ready. With 4K becoming more prominent we have some projects that generate about 12TB of content a day,” he says. “The others are cost and flexibility.”

This article appeared in the March/April issue of BCN Magazine.

Aubichon explains that three big trends are converging in the entertainment and media industry right now, all of which are getting stakeholders from production to distribution interested in cloud.

First, 4K broadcast – a massive step up from high definition in terms of the resources required for rendering, transmission and storage – is becoming more prominent.

Next, IP broadcasters are supplanting traditional broadcasters – Netflix, Amazon and Hulu are taking the place of the likes of CBS and ABC, slowly displacing the traditional content distribution model.

And, films are no longer shot exclusively in the Los Angeles area, with preferential tax regimes and other cost-based incentives driving production of English-language motion pictures outward to Canada, the UK, Central Europe and parts of New Zealand and Australia.

“With production and notably post-production costs increasing – both in terms of dollars and time – creatives want to be able to make more decisions in real time, or as close to real time as possible, about how a shot will look,” he says.

Can Cloud Save Hollywood?

DFT runs a hybrid cloud architecture based on OpenStack and, depending on the project, can link up to other private OpenStack clouds as well as OpenStack-based public cloud platforms. For instance, in doing some of the post-production work for Spike Jonze’s HER the company used a combination of Rackspace’s public cloud and its own private cloud instances, including Swift for object storage as well as a video review-and-approval application and a virtual file system application – enabling creatives to review and approve shots quickly.

The company runs most of its application landscape on a combination of Linux and Microsoft virtualised environments, but it is also a heavy user of Linux containers – which have benefits as a transmission format and also offer some added flexibility, like the ability to run simple compute processes directly within a storage node.

Processes like video and audio transcoding are a perfect fit for containers because they don’t necessarily warrant an entire virtual machine, and because the compute and storage can be kept so close to one another.
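
To make that concrete, here is a minimal sketch of what a containerised transcode job running next to a storage volume might look like. It is illustrative only – the Docker image, mount paths and codec settings are assumptions, not a description of DFT’s actual pipeline.

```python
# Illustrative sketch only -- not DFT's actual pipeline. Assumes Docker and the
# public jrottenberg/ffmpeg image are available on the storage node, and that
# raw footage is mounted at /srv/footage (both are assumptions).
import subprocess
from pathlib import Path

FOOTAGE_DIR = Path("/srv/footage/raw")      # hypothetical mount point
PROXY_DIR = Path("/srv/footage/proxies")    # hypothetical output location

def transcode_in_container(src: Path, dst: Path) -> None:
    """Run a lightweight ffmpeg transcode in a throwaway container,
    keeping the compute right next to the storage volume."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{src.parent}:/in:ro",
        "-v", f"{dst.parent}:/out",
        "jrottenberg/ffmpeg:4.1-alpine",     # assumed image; entrypoint is ffmpeg
        "-i", f"/in/{src.name}",
        "-c:v", "libx264", "-crf", "23",     # modest-quality proxy settings
        "-c:a", "aac", "-b:a", "128k",
        f"/out/{dst.name}",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    PROXY_DIR.mkdir(parents=True, exist_ok=True)
    for clip in FOOTAGE_DIR.glob("*.mov"):
        transcode_in_container(clip, PROXY_DIR / f"{clip.stem}_proxy.mp4")
```

The point of the sketch is simply that the container is disposable while the data stays put: the job mounts the footage read-only, writes its output alongside it, and never leaves the node.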

Aubichon: 'My goal is to help make the media and entertainment industry avoid what the music industry did'

Aubichon: ‘My goal is to help make the media and entertainment industry avoid what the music industry did’

“Any TV show or film production and post-production process involves multiple vendors. For instance, on both Mistresses and Perception there was an outside visual effects facility involved as well. So instead of having to take the shots, pull it off an LTO tape, put it on a drive, and send it over to the visual effects company, they can send us a request and we can send them an authorised link that connects back to our Swift object storage, which allows them to pull whatever file we authorise. So there’s a tremendous amount of efficiency gained,” he explains.
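
OpenStack Swift supports exactly this kind of expiring, signed download link through its TempURL middleware. The sketch below shows the documented signing scheme; the host, account path and secret key are placeholders, not anything DFT actually uses.

```python
# Minimal sketch of generating a time-limited download link with Swift's
# TempURL middleware. The endpoint, account path and key are placeholders.
import hmac
import time
from hashlib import sha1

SWIFT_HOST = "https://swift.example.com"                          # hypothetical endpoint
OBJECT_PATH = "/v1/AUTH_dft/finished-shots/ep104_vfx_plate.mov"   # hypothetical object
TEMP_URL_KEY = b"super-secret-key"   # would be set via X-Account-Meta-Temp-URL-Key

def make_temp_url(path: str, key: bytes, valid_for: int = 3600) -> str:
    """Return a GET URL that stops working after `valid_for` seconds."""
    expires = int(time.time()) + valid_for
    hmac_body = f"GET\n{expires}\n{path}"
    sig = hmac.new(key, hmac_body.encode("utf-8"), sha1).hexdigest()
    return f"{SWIFT_HOST}{path}?temp_url_sig={sig}&temp_url_expires={expires}"

print(make_temp_url(OBJECT_PATH, TEMP_URL_KEY))
```

The link grants access to one object for a limited window, which is the property Aubichon is describing: the vendor pulls only what has been authorised, and the authorisation expires on its own.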

For an industry only just moving away from physical transmission, that kind of workflow can bring tremendous benefits to a project. Although much of the post-production work for film and television still happens in LA, an increasing number of shows aren’t shot there; DFT, for instance, is currently working on shows shot in Vancouver, Toronto and Virginia. So the company runs an instance of OpenStack on-site where the shooting occurs and feeds the raw camera footage into an object storage instance, which is then synced back to Los Angeles in containers.

“We’ve even been toying with the idea of pushing raw camera files into OpenStack instances, and having those instances transcode the files into H.265 proxies that could theoretically be pushed over a mobile data connection back to the editor in Los Angeles. The editor could then start cutting in proxies, and 12 to 18 hours later, when the two OpenStack instances have synced that material, you can merge the data back to the higher resolution version,” he says.
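
The ingest half of that idea – pushing raw camera files from an on-set instance into Swift object storage – might look something like the following python-swiftclient sketch. The auth endpoint, credentials, container name and file extension are all invented for illustration.

```python
# Sketch of pushing on-set camera files into a Swift container using
# python-swiftclient; endpoint, credentials and names below are made up.
from pathlib import Path
from swiftclient.client import Connection

conn = Connection(
    authurl="https://keystone.onset.example.com:5000/v3",  # hypothetical Keystone endpoint
    user="onset-ingest",
    key="s3cret",
    auth_version="3",
    os_options={"project_name": "show-ep104",
                "user_domain_name": "Default",
                "project_domain_name": "Default"},
)

# One container per episode/shoot day, created if it does not exist yet.
conn.put_container("raw-footage-ep104")

# Stream every camera-card file into object storage.
for card_file in Path("/mnt/camera_cards").glob("**/*.ari"):
    with open(card_file, "rb") as f:
        conn.put_object("raw-footage-ep104", card_file.name, contents=f,
                        content_type="application/octet-stream")
```

From there, the sync back to Los Angeles and the proxy transcode would happen between the two OpenStack instances, as Aubichon describes above.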

“We get these kinds of requests often, like when a director is shooting on location and he’s getting really nervous that his editor isn’t seeing the material before he has to move on from the location and finish shooting.”

So for DFT, he says, cloud is solving a transport issue and a storage issue. “What we’re trying to push into now is solving the compute issue. Ideally we’d like to push all of this content to one single place, have it close to the compute, and then all of your manipulation just happens via an automated process in the cloud or via VDI. That’s where we really see this going.”

The other element here – and one that’s undoubtedly sitting more heavily than ever on the minds of the film industry in recent months – is security. Aubichon says that because the information, where it’s stored and how secure it is, changes over the lifecycle of a project, a hybrid cloud model – or connectable cloud platforms with varying degrees of exposure – is required to support it. That’s where features like federated identity, which in OpenStack is still quite nascent, come into play. Federation offers a mechanism for linking clouds, granting and authenticating user identity quickly (and taking access away equally fast), and leaving a trail revealing who touches what content.

“You need to be able to migrate authentication and data from a very closed instance out to something more open, and eventually out to public,” he says, adding that he has spent many of the past few years trying to convince the industry to eliminate any distinction between public and private clouds.

“In an industry that’s so paranoid about security, I’ve been trying to say ‘well, if you run an OpenStack instance in Rackspace, that’s really a private instance; they’re a trusted provider, that’s a private instance.’ To me, it’s just about how many people need to touch that material. If you have a huge amount of material then you’re naturally going to move to a public cloud vendor, but just because you’re on a public cloud vendor doesn’t mean that your instance is public.”

“I spend a lot of time just convincing the entertainment industry that this isn’t banking,” he adds. “They are slowly starting to come around; but it takes time.”

It All Comes Back To Data

Aubichon says the company is looking at ways to add value beyond simply cost and time reduction, with data and metadata aggregation figuring front and centre in that pursuit. The company did a proof of concept for Cougar Town where it showed how people watching the show on their iPads could interact with that content – a “second screen” interactive experience of sorts, but on the same viewing platform.

“Maybe a viewer likes the shirt one of the actresses is wearing on the show – they can click on it, and the Amazon or Target website comes up,” he says, adding that it could be a big source of revenue for online commerce channels as well as the networks. “This kind of stuff has been talked about for a while, but metadata aggregation and the process of dynamically seeking correlations in the data, where there have always been bottlenecks, has matured to the point where we can prove to studios they can aggregate all of this information without incurring extra costs on the production side. It’s going to take a while until it is fully mature, but it’s definitely coming.”

This kind of service assumes there is plenty of metadata describing what’s happening in a shot (or the ability to dynamically detect and translate that into metadata) and, critically, the ability to detect correlations in data that is tagged differently.

The company runs a big MongoDB backend but has added capabilities from an open source project called Karma, an ontology mapping tool that originally came out of the museum world. It is a method of taking two MySQL databases and presenting users with correlations in data that is tagged differently.

DFT took that and married it with the text search function in MongoDB, a NoSQL platform, which basically allows it to push unstructured data into the system and find correlations there (the company plans to seed this capability back into the open source MongoDB community).
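
As a rough illustration of the kind of query this enables – not DFT’s actual system; the collection and field names here are invented – MongoDB’s text index can rank documents that mention the same term even when they are tagged under different fields:

```python
# Illustrative pymongo sketch: index free-text shot metadata and search it.
# Collection and field names are invented for the example.
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
shots = client["post_production"]["shot_metadata"]

# A compound text index lets differently-tagged documents be searched together.
shots.create_index([("description", TEXT), ("tags", TEXT), ("vendor_notes", TEXT)])

shots.insert_many([
    {"shot": "ep104_042", "description": "actress in plaid shirt at diner",
     "tags": ["wardrobe", "interior"]},
    {"shot": "ep104_043", "vendor_notes": "greenscreen plate, plaid costume cont."},
])

# Find and rank every document mentioning 'plaid', whichever field it lives in.
results = shots.find(
    {"$text": {"$search": "plaid"}},
    {"score": {"$meta": "textScore"}},
).sort([("score", {"$meta": "textScore"})])

for doc in results:
    print(doc["shot"], round(doc["score"], 2))
```

The value is in the second document: the correlation surfaces even though the relevant detail sits in a vendor’s notes field rather than the show’s own tags.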

“Ultimately we can use all of this metadata to create efficiencies in the post-production process, and help generate revenue for stakeholders, which is fairly compelling,” Aubichon says. “My goal is to help make the media and entertainment industry avoid what the music industry did, and to become a more unified industry through software, through everyone contributing. The more information is shared, the more money is made, and everyone is happy. That’s something that philosophically, in the entertainment industry, is only now starting to come to fruition.”

It would seem open source cloud technologies like OpenStack, along with the innovations in the Linux kernel that helped birth Docker and similar containerisation technologies, are playing a leading role in bringing this kind of change about.