Bring Your Own Encryption: The case for standards

BYOE is the new black

Being free to choose the most suitable encryption for your business seems like a good idea. But it will only work in a context of recognised standards across encryption systems and providers’ security platforms. Since the start of the 21st century, security has emerged from scare-story status to become one of IT users’ biggest issues – as survey after survey confirms. Along the way a number of uncomfortable lessons are still being learned.

The first lesson is that security technology must always be considered in a human context. No one still believes in a technological fix that will put an end to all security problems, because time and again we hear news of new types of cyber attack that bypass sophisticated and secure technology by targeting human nature – from alarming e-mails ostensibly from official sources, to friendly social invitations to share a funny download; from a harmless-looking USB stick ‘accidentally’ dropped by the office entrance, to the fake policeman demanding a few personal details to verify that you are not criminally liable.

And that explains the article’s heading: a balance must be struck between achieving the desired level of protection and keeping all protection procedures quick and simple. Every minute spent making things secure is a minute lost to productivity – so the heading could equally have said “balancing security with efficiency”.

The second lesson still being learned is never to fully trust to instinct in security matters. It is instinctive to obey instructions that appear to come from an authoritative source, or to respond in an open, friendly manner to a friendly approach – and those are just the sort of instincts that are exploited by IT scams. Instincts can open us to attack, and they can also evoke inappropriate caution.

In the first years of major cloud uptake there was the oft-repeated advice to business that the sensible course would be to use public cloud services to simplify mundane operations, but that critical or high priority data should not be trusted to a public cloud service but kept under control in a private cloud. Instinctively this made sense: you should not allow your secrets to float about in a cloud where you have no idea where they are stored or who is in charge of them.

The irony is that the cloud – being so obviously vulnerable and inviting to attackers – is constantly being reinforced with the most sophisticated security measures: so data in the cloud is probably far better protected than any SME could afford to secure its own data internally. It is like air travel: because flying is instinctively scary, so much has been spent to make it safe that you are less likely to die on a flight than you are driving the same journey in the “safety” of your own car. The biggest risk in air travel is in the journey to the airport, just as the biggest risk in cloud computing lies in the data’s passage to the cloud – hence the importance of a secure line to a cloud service.

So let us look at encryption in the light of those two lessons. Instinctively it makes sense to keep full control of your own encryption and keys, rather than let them get into any stranger’s hands – so how far do we trust that instinct, bearing in mind the need also to balance security against efficiency?

BYOK

Hot on the heels of BYOD – or “Bring Your Own Device” to the workplace – comes the acronym for Bring Your Own Key (BYOK).

The idea of encryption is as old as the concept of written language: if a message might fall into enemy hands, then it is important to ensure that they will not be able to read it. We have recently been told that US forces used Native American communicators in WW2 because the chances of anyone in Japan understanding their language were near zero. More typically, encryption relies on some sort of “key” to unlock and make sense of the message it contains, and that transfers the problem of security to a new level: now that the message is secure, the focus shifts to protecting the key.

In the case of access to cloud services: if we are encrypting data because we are worried about its security in an unknown cloud, why then should we trust the same cloud to hold the encryption keys?

Microsoft for instance recently announced a new solution to this dilemma using HSMs (Hardware Security Modules) within its Windows Azure cloud – so that an enterprise customer can use its own internal HSM to produce a master key that is then transmitted to the HSM within the Windows Azure cloud. This provides secure encryption in the cloud, but it also means that not even Microsoft itself can read the data, because it does not have the master key hidden in the enterprise HSM.
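To make the mechanism concrete, the sketch below illustrates the general key-wrapping (envelope encryption) idea behind BYOK: a customer-held master key, which in practice never leaves an HSM, wraps the data-encryption keys so that the provider only ever stores wrapped material it cannot open. This is a minimal, hypothetical illustration using the open source cryptography package, not Microsoft’s actual implementation.

```python
# Minimal sketch of the key-wrapping idea behind BYOK (hypothetical, not any
# vendor's API): the customer's master key wraps the data-encryption key, so
# the provider only ever stores wrapped material it cannot unwrap.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = os.urandom(32)   # in practice generated and held inside the customer's HSM
data_key = os.urandom(32)     # per-object data-encryption key

# The cloud provider stores only the wrapped key alongside the ciphertext.
wrapped_key = aes_key_wrap(master_key, data_key)

nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"sensitive customer record", None)

# Only a party holding the master key can unwrap the data key and decrypt.
recovered_key = aes_key_unwrap(master_key, wrapped_key)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"sensitive customer record"
```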

It is not so much that the enterprise cannot trust Microsoft to protect its data from attack; it is more to do with growing legal complexities. In the wake of the Snowden revelations, it has become clear that even the most well-protected data might be at risk from a government or legal subpoena demanding that its content be revealed. Under this BYOK system, however, Microsoft cannot be forced to reveal the enterprise’s secrets because it cannot access them itself, and the responsibility lies solely with the owner.

This is increasingly important because of other legal pressures that insist on restricting access to certain types of data. A government can, for example, forbid anyone from allowing data of national importance to leave the country – not a simple matter in a globally connected IP network. There are also increasing legal pressures on holders of personal data to guarantee levels of privacy.

Instinctively it feels a lot more secure to manage your own key and use BYOK instead of leaving it to the cloud provider. As long as it is backed by a suitably strict in-house, HSM-based security policy, that instinct can be trusted.

BYOE

BYOK makes the best of the cloud provider’s encryption offering, by giving the customer ultimate control over its key. But is the customer happy with the encryption provided?

Bearing in mind that balance between security and efficiency, you might prefer a higher level of encryption than that used by the cloud provider’s security system, or you might find the encryption mechanism is adding latency or inconvenience and would rather accept lighter encryption in exchange for greater nimbleness. In this case you could go a step further and employ your own encryption algorithms or processes. Welcome to the domain of BYOE (Bring Your Own Encryption).

Again, we must balance security against efficiency. Take the example of an enterprise using the cloud for deep mining its sensitive customer data. This requires so much computing power that only a cloud provider can do the job, and that means trusting private data to be processed in a cloud service. This could infringe regulations, unless the data is protected by suitable encryption. But how can the data be processed if the provider cannot read it?

Taking the WW2 example above: if a Japanese wireless operator was asked to edit the Native American message so a shortened version could be sent to HQ for cryptanalysis, any attempt to edit an unknown language would create gobbledygook, because translation is not a “homomorphic mapping”.

Homomorphic encryption means that certain operations can be performed on the encrypted data, and the results, once decrypted, match what the same operations would have produced on the unencrypted source data – without the data ever being decrypted during processing. This usually implies arithmetical processes: the data mining software can do its mining on the encrypted data file while it remains encrypted, and the output, when decrypted, will be the same as if the data had been processed without any intervening encryption.

It is like operating one of those automatic coffee vendors that grinds the beans, heats the water and adds milk and sugar according to which button was pressed: you do not know what type of coffee bean is used, whether the water is tap, filtered or spring, or whether the milk is whole cream, skimmed or soya. All you know is that what comes out will be a cappuccino with no sugar. In the data mining example: what comes out might be a neat spreadsheet summary of customers’ average buying habits based on millions of past transactions, without a single personal transaction detail being visible to the cloud provider.
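For readers who want to see the principle in code, here is a toy sketch of additive homomorphism using the Paillier scheme, with deliberately tiny demo parameters. It only supports adding encrypted values; the fully homomorphic schemes needed for arbitrary data mining workloads are far more involved, so treat this purely as an illustration of the idea that computing on ciphertexts matches computing on plaintexts.

```python
# Toy additively homomorphic encryption (Paillier) with tiny demo parameters.
# Real deployments use 2048-bit primes; this only demonstrates the principle.
import random
from math import gcd

p, q = 293, 433                                  # small demo primes
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)    # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return ((pow(c, lam, n_sq) - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the underlying plaintexts.
a, b = 42, 58
assert decrypt((encrypt(a) * encrypt(b)) % n_sq) == a + b   # 100
```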

The problem with the cloud provider allowing users to choose their own encryption is that the provider’s security platform has to be able to support the chosen encryption system. As an interim measure, the provider might offer a choice from a range of encryption offerings that have been tested for compatibility with the cloud offering, but that still requires one to trust another’s choice of encryption algorithms. A full homomorphic offering might be vital for one operation, but a waste of money and effort for a whole lot of other processes.

The call for standards

So what is needed for BYOE to become a practical solution is a globally standardised cloud security platform with which any encryption offering can be registered for support. The customer chooses a cloud offering for its services and for its certified “XYZ standard” security platform, then goes shopping for an “XYZ certified” encryption system that matches its particular balance between security and practicality.

Just as in the BYOD revolution, this decision need not be made at an enterprise level, or even by the IT department. BYOE, if sufficiently standardised, could become the responsibility of the department, team or individual user: just as you can bring your own device to the office, you could ultimately take personal responsibility for your own data security.

What if you prefer to use your very own implementation of your own encryption algorithms? All the more reason to want a standard interface! This approach is not so new for those of us who remember the Java J2EE Crypto library – as long as we complied with the published interfaces, anyone could use their own crypto functions. This “the network is the computer” ideology becomes all the more relevant in the cloud age. As the computer industry has learned over the past 40 years, commonly accepted standards and architectures (for example the von Neumann model or J2EE Crypto) play a key role in enabling progress.
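As an illustration of what such a pluggable contract might look like, the sketch below defines a provider interface and a registry in the spirit of the J2EE Crypto approach. The names and the “XYZ standard” notion are purely hypothetical; no such certification scheme exists today.

```python
# Hypothetical sketch of a standardised, pluggable encryption interface: the
# platform codes against the contract, and any certified offering (the cloud
# provider's own or a customer-supplied one) is registered behind it.
from abc import ABC, abstractmethod


class EncryptionProvider(ABC):
    """Contract every registered encryption offering must satisfy."""

    @abstractmethod
    def encrypt(self, plaintext: bytes, key_id: str) -> bytes: ...

    @abstractmethod
    def decrypt(self, ciphertext: bytes, key_id: str) -> bytes: ...


_REGISTRY: dict[str, EncryptionProvider] = {}

def register_provider(name: str, provider: EncryptionProvider) -> None:
    """The platform accepts any provider that implements the standard interface."""
    _REGISTRY[name] = provider

def get_provider(name: str) -> EncryptionProvider:
    return _REGISTRY[name]
```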

BYOE could prove every bit as disruptive as BYOD – unless the industry can ensure that users choose their encryption from a set of globally sanctioned and standardised encryption systems or processes. If business is to reap the full benefits promised by cloud services, it must have the foundation of such an open cloud environment.

Written by Dr. Hongwen Zhang, chair security working group, CloudEthernet Forum.

Q&A with Mark Evans, head of IT, RLB

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Mark Evans, head of IT at global property and construction practice Rider Levett Bucknall (RLB), to discuss supporting BYOD, the need for standards in the cloud sector and the impact of working with large data models on the technology choices the firm has to make.

 

What do you see as the most disruptive trend in enterprise IT today?

I’m not entirely sure that the most disruptive trend in enterprise IT is entirely technical. Admittedly, the driving impetus for change is coming from technology, but it is being driven by non-IT people who are equipping their homes, cars and any one of a multitude of other environments with technology which works for them. The disruption manifests itself in the attitude which is brought to business from these domestic environments; people no longer see the bastion of “Corporate IT” as unassailable as it once was, before the commoditisation of IT equipment became the norm. Domestic procurement cycles are driven in a different manner to those of any business – it’s what the likes of Apple thrive on.

There’s more of a “heart” aspiration than a “head” decision when it comes to buying IT at home. Let’s be honest? Who – at home – works out depreciation of an asset when a loved one is being tugged at by slick marketing and peer pressure? Maybe I’m a misanthrope, but this sort of pressure has a knock-on effect with a lot of people and they seek the flexibility, the performance, the ease of use and (let’s be honest) the flashiness of new toys at work. The person with the keys to the “toy box”, the erstwhile IT director, is seen as a barrier to that oft-quoted, rarely well-informed concept of ‘agility’.

So… BYOD. People bring their home kit to work and expect it to work and to offer them an ‘edge’. I think the disruption is bigger than Dave from Accounts bringing in his shiny new laptop (with added speed stripes). It is the expectation that this is acceptable in the face of business-wide legal constraints of liability, compliance and business planning – the directors of a business set the rules and this new, almost frivolous attitude to the complexity and requirements of corporate IT is a “wolf in sheep’s clothing” in terms of the risk it brings to a business. Where do I sit on this? I say, “bring it on”.

 

What do you think the industry needs to work on in terms of cloud service evolution?

Portability. Standards. Standards of portability. I still believe that there is a general complicity between vendors and purchasers to create a “handcuffs” relationship (“Fifty Shades of Big Blue”?) which is absolutely fine in the early part of a business relationship as it provides a predictable environment from the outset, but this predictability can become moribund and in an era where business models flex and morph at previously alarming rates, the “handcuffs” agreement can become shackles. If the agreement is on a month-by-month basis, it is rarely easy to migrate across Cloud platforms. Ignoring the potential volumes of data which may need to be moved, there is no lingua franca for Cloud services to facilitate a “switch on/switch off” ease-of-migration one might expect in the Cloud environment, predicated as it is on ease-of-use and implementation.

Data tends to move slowly in terms of development (after all, that’s where the value is), so maybe as an industry we need to consider a Data Cloud Service which doesn’t require massive agility, but a front-end application environment which is bound by standards of migratability (is that a word? If it isn’t – it should be!) to offer front-end flexibility against a background of data security and accessibility. In that way, adopting new front-end processes would be easier as there would be no requirement to haul terabytes of data across data centres. Two different procurement cycles, aligned to the specific vagaries of their environments.

 

Can you describe some of the unique IT constraints or features particular to your sector?

Acres of huge data structures. When one of the major software suppliers in your industry (Autodesk and construction, respectively) admits that the new modelling environment for buildings goes beyond the computing and data capability in the current market – there are alarm bells. This leads to an environment where the client front end ‘does the walking’ and the data stays in a data centre or the Cloud. Models which my colleagues need to use have a “starting price” of 2GB and escalate incredibly as the model seeks to more accurately represent the intended construction project. In an environment where colleagues would once carry portfolios of A1 or A0 drawings, they now have requirements for portable access to drawings which are beyond the capabilities of even workstation-class laptop equipment. Construction and, weirdly enough, Formula One motorsport, are pushing the development of Cloud and virtualisation to accommodate these huge, data-rich, often highly graphical models. Have you ever tried 3D rendering on a standard x64 VMware or Hyper-V box? We needed Nvidia to sort out the graphics environment in the hardware environment and even that isn’t the ‘done deal’ we had hoped.

 

Is the combination of cloud and BYOD challenging your organisation from a security perspective? What kind of advice would you offer to other enterprises looking to secure their perimeter within this context?

Not really. We have a strong, professional and pragmatic HR team who have put in place the necessary constraints to ensure that staff are fully aware of their responsibilities in a BYOD environment. We have backed this up with decent MDM control. Beyond that? I honestly believe that “where there’s a will, there’s a way” and that if MI5 operatives can leave laptops in taxis we can’t legislate for human frailties and failings. Our staff know that there is a ‘cost of admission’ to the BYOD club and it’s almost a no-brainer; MDM controls their equipment within the corporate sphere of influence and their signature on a corporate policy then passes on any breaches of security to the appropriate team, namely, HR.

My advice to my IT colleagues would be – trust your HR team to do their job (they are worth their weight in gold and very often under-appreciated), but don’t give them a ‘hospital pass’ by not doing everything within your control to protect the physical IT environment of BYOD kit.

 

What’s the most challenging part about setting up a hybrid cloud architecture?

Predicting the future. It’s so, so, so easy to map the current operating environment in your business to a hybrid environment (“They can have that, we need to keep this…”) but constraining the environment by creating immovable and impermeable glass walls at the start of the project is an absolutely, 100 per cent easy way to lead to frustration with a vendor in future and we must be honest and accept that by creating these glass walls we were the architect of our own demise. I can’t mention any names, but a former colleague of mine has found this out to his company’s metaphorical and bottom-line cost. They sought to preserve their operating environment in aspic and have since found it almost soul-destroying to start all over again to move to an environment which supported their new aspirations.

Reading between the lines, I believe that they are now moving because there is a stubbornness on both sides and my friend’s company has made it more of a pain to retain their business than a benefit. They are constrained by a mindset, a ‘groupthink’ which has bred bull-headedness and very constrained thinking. An ounce of consideration of potential future requirements could have built in some considerable flexibility to achieve the aims of the business in changing trading environments. Now? They are undertaking a costly migration in the midst of a potentially high-risk programme of work; it has created stress and heartache within the business which might have been avoided if the initial move to a hybrid environment had considered the future, rather than almost constrained the business to five years of what was a la mode at the time they migrated.

 

What’s the best part about attending Cloud World Forum?

Learning that my answers above may need to be re-appraised because the clever people in our industry have anticipated and resolved my concerns.

CenturyLink acquires Orchestrate to strengthen DBaaS offering

CenturyLink has acquired Orchestrate to strengthen its database-as-a-service proposition

CenturyLink has acquired Orchestrate, a database-as-a-service provider specialising in delivering fully managed, high performance, fault tolerant NoSQL database technologies.

CenturyLink said that Orchestrate, which partners with AWS on public cloud hosting for its clients’ datasets, will help bolster its cloud-based database and managed services propositions.

“CenturyLink’s customers, like most enterprises, are expressing interest in solutions that help them meet the performance, scalability and agile development needs of large-scale big data analytics,” said Glen Post, chief executive officer and president of CenturyLink.

“The Orchestrate database service’s ease of use and ability to support multiple database technologies have emerged as key differentiators that we are eager to offer our customers through the CenturyLink Cloud platform,” Post said.

As for drivers of the acquisition, the company said growing use cases around the Internet of Things are creating more demand for fully managed NoSQL technologies. Orchestrate offers a managed service that abstracts away much of the underlying hardware and database-specific coding and delivers an API that enables developers to store and query JSON data easily.
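As a rough illustration of what that abstraction means for a developer, the hypothetical snippet below stores and queries a JSON document over a generic REST interface. The endpoint paths, parameters and credentials are placeholders for illustration only, not Orchestrate’s actual API.

```python
# Hypothetical sketch of a managed JSON store's REST interface: store a
# document under a key, then run a query. Endpoints, parameters and
# credentials are placeholders, not Orchestrate's actual API.
import requests

BASE = "https://api.example-dbaas.com/v0"   # placeholder base URL
AUTH = ("API_KEY", "")                       # placeholder credentials

# Store a JSON document in a "users" collection under the key "alice".
requests.put(f"{BASE}/users/alice",
             json={"name": "Alice", "plan": "enterprise"},
             auth=AUTH)

# Query the collection; the service handles indexing and storage underneath.
resp = requests.get(f"{BASE}/users", params={"query": "plan:enterprise"}, auth=AUTH)
print(resp.json())
```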

The acquisition will see the Orchestrate services team join CenturyLink’s product development and technology organisation, with Orchestrate co-founders Antony Falco and Ian Plosker as well as vice president of engineering Dave Smith joining the company.

“CenturyLink Cloud features one of the most sophisticated service infrastructures in the market, with a great interface and lots of options for managing complex workflow and third-party applications in the cloud,” Falco said. “Orchestrate’s database service takes the same approach to delivering cost efficiency and ease of use. Enterprise customers are increasingly expecting one global platform to provide these services.”

Cloud democratises retail investor services

Cloud has the potential to democratize investment services

Cloud services are opening up possibilities for the retail investor to create individual customised funds in a way that was previously the preserve of the super-wealthy. Coupled with UK regulation such as the Retail Distribution Review, the effect has been to make new business models possible, according to Michael Newell, chief executive at InvestYourWay.

“Nobody is really talking about how the cloud is fundamental to what they do, but it is,” said Newell. “Where previously it might have taken days or even weeks to get the information to set up a fund, and to change your portfolio and positions completely, and to activate your account, it now takes just a few seconds thanks to Amazon Cloud.”

Newell previously worked at BATS, where he was involved alongside Mark Hemsley in setting up the exchange’s ETF services. For some time, he had been increasingly aware of the kind of services that high net worth investors were getting and began to form an idea that someone could bring that to the common retail investor. The idea was to create a system where each individual person has their own fund. However, Newell soon realised that to make that possible, it would be necessary to service customers investing smaller amounts at significantly lower cost – something that had never really been viable up to that point.

“You’d never get that kind of individual attention unless you were high net worth,” he said. “If you’ve only got £2,000 to invest, it’s not going to be worth a fund manager spending the time with you and charging just a few pounds for their time, which is what they’d need to do to make it viable. It just didn’t work.”

Cloud services changed both the economics of the situation and the practicality of his original idea. Newell found that by obtaining computing power as a service, calculations that would have taken 48 hours on a laptop could now be completed in 30 seconds. A manual Google search process carried out by an individual to work out how best to invest might take days at the least, or more realistically weeks and even months – but on InvestYourWay, it can be done in seconds because the process is automated.

Part of the impetus for the new business was also provided by regulatory change, which began to make it easier to compete in the UK with the established fund managers. Specifically, the Retail Distribution Review, which came into effect in January 2013, forced fund managers to unbundle their services, providing transparency into previously opaque business charges. Customers could now see exactly what they were being charged for, and that has had the effect of forcing down prices and changing consumer behaviour.

“It’s amazing that it took so long to bring that to the retail investor,” said Newell. “If you think about it, all of this has been happening in the capital markets for years. The focus on greater transparency and unbundling. The clarity on costs and fees.”

However, the idea still needed visibility and a user-base. This was provided when the platform agreed a deal with broker IG, under which InvestYourWay became a service available as an option on the drop-down menu for IG customers. The platform launched in October 2014, offering investment based on indexes rather than single stocks. This was done in part to keep costs down, and partly for ideological reasons. Newell explains that alternative instruments such as ETFs are popular, but would have involved gradually increasing slippage over time due to the costs of middle men. Focusing on indexes removes that problem.

The platform also claims to be the first to offer non-leveraged contract for difference (CFD) trading. Around 40% of trading in London is estimated to be accounted for by CFDs, but these are normally leveraged such that an investor who puts in £1,000 stands to gain £10,000 (but may also lose on the same scale). IYW’s contracts are not leveraged.

The interface of the platform has quite a bit in common with the latest personal financial management interfaces. The first page consists of a time slider, a risk slider, and the amount the user wants to invest, as well as preferred geographical focus – Europe, America or Asia. After that, users get a pie chart breaking down how the service has allocated their investment based on the sliders. For example, into categories such as North American fintech startups, Asian banks, European corporates, etc. Users also get bar charts showing the historical performance of the fund they are designing, as they go along. They can also see an Amazon-style recommendation suggesting “People who invested in X, also bought Y…”

After that, the user is presented with optional add-ons such as investment in gold, banks, metals, pharmaceuticals, and other areas that may be of special interest. Hovering the mouse over one of these options allows the user to see what percentage of other funds have used that add-on. Choosing one of the add-ons recalibrates the fund that the user is creating to match, for example adding a bit more Switzerland if the user selected banks.

In a demonstration seen by Banking Technology, it was possible to adjust a fund by getting out of Europe and moving the user’s investment to Asia in a few clicks. According to Newell, it would take weeks to do that the traditional way. The process might involve moving money from one fund manager to another or starting an entirely new fund. It was also possible to see how much that move cost: on a £10,000 investment the cost was £13. Prices are matched to the most recent available end-of-day data.

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there's good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments which, today, tend to imply an x86-centric stack built to serve up almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market, as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), but it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new and, in some cases, old non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem, some long expected to join the fray and others less so, are being enticed to get involved. Mobile chip incumbent Qualcomm announced plans in November last year to enter the server chip market with its own ARM-based offerings at some point over the next two years; the company believes the market represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement HP unveiled what it called the first “enterprise-grade ARM-based server,” its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists his company, a long time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market because of the cost-prohibitive nature of making such a move (and the millions of dollars in potential cost savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing-as-a-service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow and vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do all things considered or remaining equal is hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers. The diversity of suppliers means spreading that risk and increasing competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on one hand and continue to drive down cost-to-performance on the other. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent share of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project, open source cloud software and hardware projects respectively, is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. McCredie says it is that kind of open ecosystem that is essential in cloud today and, critically, that such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. Yet despite Oracle incurring losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor, its hardware business mostly bucked the trend experienced by most high-end server vendors throughout 2014 and continues to do so.

The company’s Q2 2015 results saw its hardware systems revenue grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the development of the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises, much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, as well as to some extent the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, due to the expectations people have on resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last between seven years possibly up to a decade before it settles down – if it settles down,” Butler concluded.

Can the cloud save Hollywood?

The film and TV industry is warming to cloud

You don’t have to watch the latest ‘Avengers’ film to get the sense the storage and computational requirements of film and television production are continuing their steady increase. But Guillaume Aubichon, chief technology officer of post-production and visual effects firm DigitalFilm Tree (DFT) says production and post-production outfits may find use in the latest and greatest in open source cloud technologies to help plug the growing gap between technical needs and capabilities – and unlock new possibilities for the medium in the process.

Since its founding in 2000, DFT has done post-production work for a number of motion pictures as well as television shows airing on some of the largest networks in America including ABC, TNT and TBS. And Aubichon says that like many in the industry DFT’s embrace of cloud came about because the company was trying to address a number of pain points.

“The first and the most pressing pain point in the entertainment industry right now is storage – inexpensive, commodity storage that is also internet ready. With 4K becoming more prominent we have some projects that generate about 12TB of content a day,” he says. “The others are cost and flexibility.”

Aubichon explains three big trends are converging in the entertainment and media industry right now that are getting stakeholders from production to distribution interested in cloud.

4K broadcast, a massive step up from High Definition in terms of the resources required for rendering, transmission and storage, is becoming more prominent.

Next, IP broadcasters are supplanting traditional broadcasters – Netflix, Amazon and Hulu are taking the place of CBS and ABC, slowly displacing the traditional content distribution model.

And films are no longer exclusively shot in the Los Angeles area – with preferential tax regimes and other cost-based incentives driving production of English-language motion pictures outward into Canada, the UK, Central Europe and parts of New Zealand and Australia.

“With production and notably post-production costs increasing – both in terms of dollars and time – creatives want to be able to make more decisions in real time, or as close to real time as possible, about how a shot will look,” he says.

Can Cloud Save Hollywood?

DFT runs a hybrid cloud architecture based on OpenStack and depending on the project can link up to other private OpenStack clouds as well as OpenStack-based public cloud platforms. For instance, in doing some of the post-production work for Spike Jonze’s HER the company used a combination of Rackspace’s public cloud and its own private cloud instances, including Swift for object storage as well as a video review and approval application and virtual file system application – enabling creatives to review and approve shots quickly.

The company runs most of its application landscape off a combination of Linux and Microsoft virtualised environments, but is also a heavy user of Linux containers – which have benefits as a transmission format and also offer some added flexibility, like the ability to run simple compute processes directly within a storage node.

Processes like video and audio transcoding are a perfect fit for containers because they don’t necessarily warrant an entire virtual machine, and because the compute and storage can be kept so close to one another.

Aubichon: ‘My goal is to help make the media and entertainment industry avoid what the music industry did’

“Any TV show or film production and post-production process involves multiple vendors. For instance, on both Mistresses and Perception there was an outside visual effects facility involved as well. So instead of having to take the shots, pull them off an LTO tape, put them on a drive, and send them over to the visual effects company, they can send us a request and we can send them an authorised link that connects back to our Swift object storage, which allows them to pull whatever file we authorise. So there’s a tremendous amount of efficiency gained,” he explains.
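The “authorised link” workflow Aubichon describes maps naturally onto OpenStack Swift’s TempURL feature, which generates expiring, signed download links. The sketch below shows the general signing mechanism; the account path, object name, host and secret are hypothetical, and DFT’s exact configuration is not described in the interview.

```python
# Sketch of generating an expiring, pre-authorised download link with
# OpenStack Swift's TempURL middleware. Host, paths and secret are
# hypothetical placeholders.
import hmac
import time
from hashlib import sha1

temp_url_key = b"secret-set-on-the-swift-account"      # placeholder shared secret
method = "GET"
expires = int(time.time()) + 3600                       # link valid for one hour
path = "/v1/AUTH_dft/vfx-shots/ep101_shot042.exr"       # hypothetical object path

signature = hmac.new(temp_url_key,
                     f"{method}\n{expires}\n{path}".encode(),
                     sha1).hexdigest()

url = (f"https://swift.example.com{path}"
       f"?temp_url_sig={signature}&temp_url_expires={expires}")
print(url)   # this link can be sent to the outside visual effects vendor
```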

For an industry just starting to come out of physical transmission, that kind of workflow can bring tremendous benefits to a project. Although much of the post-production work for film and television still happens in LA, an increasing number of shows aren’t shot there; DFT for instance is currently working on shows shot in Vancouver, Toronto, and Virginia. So what the company does is run an instance of OpenStack on-site where the shooting occurs and feed the raw camera footage into an object storage instance, which is then container-synced back to Los Angeles.

“We’ve even been toying with the idea of pushing raw camera files into OpenStack instances, and having those instances transcode the files into an H.265 proxy that could theoretically be pushed over a mobile data connection back to the editor in Los Angeles. The editor could then start cutting in proxies, and 12 to 18 hours later, when the two OpenStack instances have synced that material, you can then merge the data to the higher resolution version,” he says.

“We get these kinds of requests often, like when a director is shooting on location and he’s getting really nervous that his editor isn’t seeing the material before he has to move on from the location and finish shooting.”

So for DFT, he says, cloud is solving a transport issue, and a storage issue. “What we’re trying to push into now is solving the compute issue. Ideally we’d like to push all of this content to one single place, have this close to the compute and then all of your manipulation just happens via an automated process in the cloud or via VDI. That’s where we really see this going.”

The other element here, and one that’s undoubtedly sitting heavily on the minds of the film industry in recent months more than ever, is the security issue. Aubichon says that because the information, where it’s stored and how secure that information is, changes over the lifecycle of a project, a hybrid cloud model – or connectable cloud platforms with varying degrees of exposure – is required to support it. That’s where a feature like federated identity, which in OpenStack is still quite nascent, comes into play. It offers a mechanism for linking clouds, granting and authenticating user identity quickly (and taking access away equally fast), and leaves a trail revealing who touches what content.

“You need to be able to migrate authentication and data from a very closed instance out to something more open, and eventually out to public,” he says, adding that he has spent many of the past few years trying to convince the industry to eliminate any distinction between public and private clouds.

“In an industry that’s so paranoid about security, I’ve been trying to say ‘well, if you run an OpenStack instance in Rackspace, that’s really a private instance; they’re a trusted provider, that’s a private instance.’ To me, it’s just about how many people need to touch that material. If you have a huge amount of material then you’re naturally going to move to a public cloud vendor, but just because you’re on a public cloud vendor doesn’t mean that your instance is public.”

“I spend a lot of time just convincing the entertainment industry that this isn’t banking,” he adds. “They are slowly starting to come around; but it takes time.”

It All Comes Back To Data

Aubichon says the company is looking at ways to add value beyond simply cost and time reduction, with data and metadata aggregation figuring front and centre in that pursuit. The company did a proof of concept for Cougar Town where it showed how people watching the show on their iPads could interact with that content – a “second screen” interactive experience of sorts, but on the same viewing platform.

“Maybe a viewer likes the shirt one of the actresses is wearing on the show – they can click on it, and the Amazon or Target website comes up,” he says, adding that it could be a big source of revenue for online commerce channels as well as the networks. “This kind of stuff has been talked about for a while, but metadata aggregation and the process of dynamically seeking correlations in the data, where there have always been bottlenecks, has matured to the point where we can prove to studios they can aggregate all of this information without incurring extra costs on the production side. It’s going to take a while until it is fully mature, but it’s definitely coming.”

This kind of service assumes there exists loads of metadata on what’s happening in a shot (or the ability to dynamically detect and translate that into metadata) and, critically, the ability to detect correlations in data that are tagged differently.

The company runs a big MongoDB backend but has added capabilities from an open source project called Karma, an ontology mapping service that originally came out of the museum world. It’s a method of taking two MySQL databases and presenting to users correlations in data that are tagged differently.

DFT took that and married it with the text search function in MongoDB, a NoSQL platform, which basically allows it to push unstructured data into the system and find correlations there (the company plans to seed this capability back into the open source MongoDB community).
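For context, the snippet below is a minimal sketch of the MongoDB text-search capability referred to here: index a free-form metadata field, then find related records by keyword. Collection and field names are hypothetical, and this does not reproduce DFT’s Karma integration.

```python
# Minimal sketch of MongoDB text search over unstructured shot metadata.
# Collection and field names are hypothetical.
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")
shots = client.media_metadata.shots

# Build a text index over the free-form description field.
shots.create_index([("description", TEXT)])

shots.insert_many([
    {"episode": "101", "shot": 42, "description": "actress in red shirt, city street"},
    {"episode": "101", "shot": 43, "description": "wide establishing shot, skyline at dusk"},
])

# Keyword query returns any shot whose metadata mentions the search term.
for doc in shots.find({"$text": {"$search": "shirt"}}):
    print(doc["episode"], doc["shot"])
```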

“Ultimately we can use all of this metadata to create efficiencies in the post-production process, and help generate revenue for stakeholders, which is fairly compelling,” Aubichon says. “My goal is to help make the media and entertainment industry avoid what the music industry did, and to become a more unified industry through software, through everyone contributing. The more information is shared, the more money is made, and everyone is happy. That’s something that philosophically, in the entertainment industry, is only now starting to come to fruition.”

It would seem open source cloud technologies like OpenStack as well as innovations in the Linux kernel, which helped birth Docker and similar containerisation technologies, are also playing a leading role in bringing this kind of change about.

NXP: ‘Industry needs to ensure IoT is simple and secure’

Internet of Things devices need to be simple and secure if customers are to adopt them

The entire telecoms industry needs to focus on ensuring the IoT delivers real value to consumers, and the security and user simplicity of connected devices should be of paramount importance, said Jeff Fonseca, regional sales director for the Americas at chip vendor NXP, in an interview with Telecoms.com.

As an NFC specialist whose customer case examples in the contactless payments space include the London Underground’s contactless travel, the badges at MWC, and several banks’ EMV cards, NXP is increasingly focusing on IoT. According to Fonseca, securing connected devices is something that has to happen for consumers to really get on board with the IoT.

“What we bring in terms of IoT is really the security. All the [secure] stuff we do in passports, all the stuff we do on bank cards, and secure payments, getting you securely onto trains, that type of secure technology, embedding that and infusing that into other categories like IoT [is on our agenda].”

But he said it is not yet clear what exactly is behind the much hyped term. “Honestly, IoT is a big word that I don’t know has a true definition of what’s going to be the one key thing that is IoT. There’s so many moving pieces and parts the difficulty is really unwrapping that, and then making sure we know where we need to be on the trajectory with the right players and partners.

“We need to have ways to execute upon very good security and connectivity that is simple for consumers to use, and that is scalable. It [IoT] shouldn’t be just a buzz word, it should actually have usable value for the consumer.”

Fonseca said there’s not much point in having numerous connected devices in the home unless there’s one common way to communicate with them. “You’re not gonna have 10 different devices that all talk a different language in your home, that’s not gonna scale in the IoT space. But if you have the ability to have a few devices that talk a similar language, then consumers start to see value from the perspective of managing your home with your smartphone, for example.”

But billions of devices connected to the internet bring security implications with them, and Fonseca said ensuring consumers’ security is a key consideration. “How does that work, and how does that work securely? How do you take the cloud and connect it down to these end-point devices in your home and still manage them with your smartphone or your tablet?

“These are the difficult conversations we all have to have as an industry to move in that direction to make sure that in the end it’s all about the consumer, and making sure that there’s an extremely simple and usable product for them. Even though it’s complex underneath to do all this stuff that has to happen in IoT, the consumer doesn’t care, the consumer just wants it to work and they want it to be secure.”

At MWC 2015 NXP was showcasing its product portfolio, which on top of the technology that secures bank cards and passports also includes solutions for the connected car, wireless mobile charging, and ‘smart audio’ technology that enhances voice and call clarity using algorithms designed to recognise the environment from which a call is made. The firm has also developed wireless, magnetic induction-based earbuds as part of a concept it calls ‘true mobility’.

At the beginning of the month NXP announced its plan to acquire competitor Freescale Semiconductor. “We are going to acquire them, and the announcement so far has stated that part of that [acquisition] is this IoT convergence play,” Fonseca said. “Freescale is very strong in that category as well, and we’ll see some obvious synergies from taking what NXP has and from what they can bring to the table towards an IoT play.”


Every little helps: How Tesco is bringing the online food retail experience back in-store

Tesco is in the midst of overhauling its connectivity and IT services

Food retailers in the UK have for years spent millions of pounds on going digital and cultivating a web presence, which includes digitising product catalogues and building all of the other backend tools needed to support online shopping, customer service and food delivery. But Tomas Kadlec, group infrastructure IT director at Tesco, tells BCN that more emphasis is now being placed on bringing the online experience back into physical stores, which is forcing the company to completely rethink how it structures and handles data.

Kadlec, who is responsible for Tesco’s IT infrastructure strategy globally, has spent the better part of the past few years building a private cloud deployment model the company could easily drop into regional datacentres that power its European operations and beyond. This has largely been to improve the services it can provide to clients and colleagues within the company’s brick and mortar shops, and support a growing range of internal applications.

“If you look at what food retailers have been doing for the past few years it was all about building out an online extension to the store. But that trend is reversing, and there’s now a kind of ‘back to store’ movement brewing,” Kadlec says.

“If we have 30,000 to 50,000 SKUs in one store at any given time, how do you handle all of that data in a way that can deliver feature-rich digital services for customers? And how do you offer digital services to customers in Tesco stores that cater to the nuances in how people act in both environments? For instance, people like to browse more in-store, sometimes calling a friend or colleague to ask for advice on what to get or recipes; in a digital environment people are usually just in a rush to head for the checkout. These are all fairly big, critical questions.”

Some of the digital services envisioned are fairly ambitious, and include being able to call up reams of product information – recipes, related products and so forth – on mobile devices by scanning items with their built-in cameras, and even, down the line, paying for items on those devices. And with the food retail sector being one of the most competitive in the world, these kinds of services could become a real differentiator for the firm.

“You should be able to create a shopping list on your phone and reach all of those items in-store easily,” he says. “When you’re online you have plenty of information about those products at your fingertips, but far less when you’re in a physical store. So for instance, if you have special dietary requirement we should be able to illuminate and guide the store experience on these mobile platforms with this in mind.”
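
Purely to illustrate the kind of dietary-aware guidance Kadlec is describing, here is a toy sketch that checks a scanned item against a shopper’s stated requirements. The barcodes, product data and function are invented for the example and do not reflect Tesco’s systems.

```python
# Toy sketch (hypothetical data): resolve a scanned barcode to a product record
# and flag anything that clashes with the shopper's dietary requirements.
PRODUCTS = {
    "5000000000017": {"name": "Penne Pasta 500g", "contains": {"gluten"}},
    "5000000000024": {"name": "Basmati Rice 1kg", "contains": set()},
}

def check_scan(barcode: str, avoid: set) -> str:
    """Return guidance for a scanned item, given ingredients the shopper avoids."""
    product = PRODUCTS.get(barcode)
    if product is None:
        return "Unknown product"
    clash = product["contains"] & avoid
    if clash:
        return f"{product['name']}: contains {', '.join(sorted(clash))}"
    return f"{product['name']}: suitable"

# A gluten-free shopper scans two items.
print(check_scan("5000000000017", {"gluten"}))  # flags the pasta
print(check_scan("5000000000024", {"gluten"}))  # suitable
```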

“The problem is that in food retail the app economy doesn’t really exist yet. It exists everywhere else, and in food retail the app economy will come – it’s just that we as an industry have failed to make the data accessible so applications aren’t being developed.”

To achieve this vision, Tesco had to drastically change its approach to data and how it’s deployed across the organisation. The company originally started down the path of building its own API and offering internal users a platform-as-a-service to enable more agile app development, but Kadlec says the project quickly morphed into something much larger.

“It’s one thing to provide an elastic compute environment and a platform for development and APIs – something we can solve in a fairly straightforward way. It’s another thing entirely to expose the information you need for these services to work effectively in such a scalable system.”

Tesco’s systems handle and structure data the way many traditional enterprises within and outside food retail do – segmenting it by department, by function, and in alignment with the specific questions the data needs to answer. But the company is trying to move closer to a ‘store and stream now, ask questions later’ type of data model, which isn’t particularly straightforward.

“Data used to be purpose-built; it had a clearly defined consumer, like ERP data for example. But now the services we want to develop require us to mash up Tesco data and open data in more compelling ways, which forces us to completely re-think the way we store, categorise and stream data,” he explains. “It’s simply not appropriate to just drag and drop our databases into a cloud platform – which is why we’re dropping some of our data systems vendors and starting from scratch.”
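
One minimal way to picture that ‘store and stream now, ask questions later’ approach is sketched below: heterogeneous events are appended as-is, without a department-specific schema, and the question is only expressed at read time. The file name, event shapes and helper functions are assumptions made for illustration, not Tesco’s architecture.

```python
# Toy sketch of 'store now, ask questions later': append raw events with no
# upfront schema, then filter the stream with whatever question comes up later.
import json

EVENT_LOG = "events.ndjson"  # hypothetical newline-delimited JSON store

def store(event: dict) -> None:
    """Append any event as raw JSON; no purpose or schema is imposed up front."""
    with open(EVENT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def ask(predicate) -> list:
    """Ask a question later by filtering the raw stream with any predicate."""
    with open(EVENT_LOG) as f:
        events = [json.loads(line) for line in f]
    return [e for e in events if predicate(e)]

# Events from different sources land in the same stream, differently shaped.
store({"source": "erp", "sku": "12345", "stock": 40, "store_id": "UK-0017"})
store({"source": "open_data", "sku": "12345", "allergens": ["gluten"]})

# Only now do we decide what to ask: everything known about one SKU.
print(ask(lambda e: e.get("sku") == "12345"))
```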

Kadlec says the debate now centres on how the company can effectively democratise data while keeping critical kinds of information – like consumers’ personal information – secure and private: “There should only be two types of data. Data that should be open, and we should make sure we make that accessible, and then there’s the type of data that’s so private people get fired for having made it accessible – and setting up very specific architectural guidelines along with this.”
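
As a rough sketch of how such a guideline might be enforced in code, the example below classifies every field as either open or private and refuses to publish anything that has not been classified. The field names and the classification itself are hypothetical.

```python
# Minimal sketch of the 'two types of data' guideline: only explicitly open
# fields may leave the internal boundary; unclassified fields block publishing.
OPEN_FIELDS = {"sku", "product_name", "price", "allergens"}
PRIVATE_FIELDS = {"customer_name", "loyalty_card_id", "address"}

def publishable(record: dict) -> dict:
    """Return only the open fields; refuse records with unclassified fields."""
    unknown = set(record) - OPEN_FIELDS - PRIVATE_FIELDS
    if unknown:
        raise ValueError(f"Unclassified fields, cannot publish: {unknown}")
    return {k: v for k, v in record.items() if k in OPEN_FIELDS}

print(publishable({"sku": "12345", "price": 1.99, "loyalty_card_id": "L-42"}))
# -> {'sku': '12345', 'price': 1.99}
```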

The company hasn’t yet had the security discussion with its customers, which is why Kadlec says the systems Tesco puts in place initially will likely focus on improving internal efficiency and productivity – “so we don’t have to get into the privacy data nightmare”.

The company also wants to improve connectivity to its stores to better serve both employees and customers. Over the next 18 months it will implement a complete overhaul of store connectivity and infrastructure, which will centre on delivering low-latency bandwidth for in-store wifi and quadrupling the number of access points. It also plans to install 4G signal booster cells in its stores to improve GSM-based connectivity. Making sure that infrastructure is secure so that customer data isn’t leaked is a top priority, he says.

Tesco is among a number of retailers to make headlines of late – though not because of datacentre security or customer data loss, but because the company, having overstated its profits by roughly £250m, finds itself in serious financial trouble. But Kadlec says what many may see as a challenge is in fact an opportunity for the company.

One of the things the company is doing is piloting OmniTrail’s indoor location awareness technology to improve how Tesco employees are deployed in stores and optimise how they respond to changes in demand.

“If anything this is an opportunity for IT. If you look at the costs within the store today, there are great opportunities to automate stuff in-store and make colleagues within our stores more focused on customer services. If for instance we’re looking at using location-based services in the store, why do you expect people to clock in and clock out? We still use paper ledgers for holidays – why can’t we move this to the cloud? The opportunities we have in Tesco to optimise efficiency are immense.”

“This will inevitably come back to profits and margins, and the way we do this is to look at how we run operations and save using automation,” he says.

Tomas is speaking at the Telco Cloud Forum in London, April 27-29, 2015.