All posts by Jonathan Brandon

eBay chief cloud engineer: ‘OpenStack needs to do more on scalability, upgradability’

eBay aims to move 100 per cent of its ebay.com service onto OpenStack

OpenStack has improved by leaps and bounds in the past four years, but it still leaves much to be desired in terms of upgradability and manageability, according to Subbu Allamaraju, eBay’s top cloud engineer.

Allamaraju, who was speaking at the OpenStack Summit in Vancouver this week, said the ecommerce giant is a big believer in open source tech when it comes to building out its own internal, dev-and-test and customer-facing services.

When the company, a 100 per cent KVM and OVS shop, started looking at OpenStack in 2012, it decided to deploy on around 300 servers. It now runs nearly 12,000 hypervisors on 300,000 cores, spread across 15 virtual private clouds in 10 availability zones.

“In 2012 we had virtually no automation; in 2014 we still needed to worry about configuration drift to keep the fleet of hypervisors in sync. In 2012, there was also no monitoring,” he said. “We built tools to move workloads between deployments because in the early years there was no clear upgrade path.”

eBay has about 20 per cent of its customer-facing website running on OpenStack, and as of the holiday season this past year processed all PayPal transactions on applications deployed on the platform. The company also hosts significant amounts of data – Allamaraju claims eBay runs one of the largest Hadoop clusters in the world at around 120 petabytes.

But he said the company still faces concerns about deploying at scale, and about upgrading, adding that in 2012 eBay had to build a toolset just to migrate its workloads off the Essex release because no clear upgrade path presented itself.

“In most datacentres, cloud is only running in part of it, but we want to go beyond that. We’re not there yet and we’re working on that,” he said, adding that the company’s goal is to go all-in on OpenStack within the next few years. “But at meetings we’re still hearing questions like ‘does Heat scale?’… these are worrying questions from the perspective of a large operator.”

He also said data from recent user surveys suggest manageability, and in particular upgradability, long held to be a significant barrier to OpenStack adoption, are still huge issues.

“Production deployments went up, but 89 per cent are running a code base at least six months old, 55 per cent of operators are running a year-old code base, and 18 per cent are running code bases older than 12 months,” he said. “Lots of people are coming to these summits, but the data suggests many are worried about upgrading.”

“This is an example of manageability missing in action. How do you manage large deployments? How do you manage upgradability?”

OpenStack does some soul searching, finds its core self

Bryce: ‘OpenStack will power the planet’s clouds’

The OpenStack Foundation announced new interoperability and testing requirements as well as enhancements to the software’s implementation of federated identity, which the Foundation’s executive director Jonathan Bryce says will take the open source cloud platform one step closer to world domination.

OpenStack’s key pitch, beyond being able to spin up scalable compute, storage and networking resources fairly quickly, is that OpenStack-based private clouds should be able to burst into public clouds or other private cloud instances if need be. That kind of capability is essential if the platform is going to take on the likes of AWS, VMware and Microsoft, but its implementation has so far been quite basic.

But for that kind of interoperability to happen you need three things: the ability to federate the identity of a cloud user, so permissions and workloads can port over to whatever platforms are being deployed on (and so those workloads stay secure); a definition of what vendors, service providers and customers can reliably call core OpenStack, so they can all expect a standard collection of tools, services and APIs in every distribution; and a way to test the interoperability of OpenStack distributions and appliances.

To that end, the Foundation announced a new OpenStack Powered interoperability testing programme, so users can validate the interoperability of their own deployments as well as gain assurances from vendors that clouds and appliances branded as “OpenStack Powered” meet the same requirements. About 16 companies already have either certified cloud platforms or appliances available on the OpenStack Marketplace as of this week, and Bryce said there’s more to come.

The latest release of OpenStack, Kilo, also brings a number of improvements to federated identity, making it much easier to implement and more dynamic in terms of workload deployment. Bryce said over 30 companies have committed to implementing federated identity (available since the Icehouse release) by the end of this year – meaning the OpenStack cloud footprint just got a whole lot bigger.
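
To make the federation mechanics a little more concrete, below is a minimal sketch of the kind of mapping rule Keystone’s federation support relies on: attributes asserted by an external identity provider are mapped onto a local user and group, whose role assignments then apply to the federated user. The attribute names, group ID and registration command here are illustrative assumptions, not a description of any particular vendor’s deployment.

```python
# Illustrative Keystone federation mapping rule (OpenStack identity v3).
# An external identity provider's assertion is mapped to a local ephemeral
# user plus membership of a local group; the group ID and remote attribute
# names below are placeholders.
import json

mapping_rules = [
    {
        "local": [
            {"user": {"name": "{0}"}},                  # username taken from the first remote match
            {"group": {"id": "abc123examplegroupid"}},  # grants the group's role assignments
        ],
        "remote": [
            {"type": "REMOTE_USER"},                    # value substituted into {0}
            {"type": "entitlement", "any_one_of": ["cloud-users"]},
        ],
    }
]

with open("rules.json", "w") as f:
    json.dump(mapping_rules, f, indent=2)

# The rules would then typically be registered with something like:
#   openstack mapping create --rules rules.json corp-idp-mapping
```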

“It has been a massive effort to come to an agreement on what we need to have in these clouds, how to test it,” Bryce said. “It’s a key step towards the goal of realising an OpenStack-powered planet.”

The challenge is that, as the code gets bulkier and as groups add more services, joining all the bits and making sure they work together without one component or service breaking another becomes much more complex. That said, the move marks a significant milestone for the DefCore group, the internal committee in charge of setting base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. The group has been working for well over a year on developing a standard definition of what a core OpenStack deployment is.

TD Bank uses cloud as catalyst for cultural change in IT

Peacock said TD Bank is using the cloud as a way to help catalyse cultural change at the firm

North American retail banking outfit TD Bank is using OpenStack, among a range of other open source cloud technologies, to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.

Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its 4,000-strong application estate moved over to the cloud within the next five years.

“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.

But the company’s legacy of acquisition over the past decade has shaped the evolution of both the technology and systems in place at the bank as well as the IT culture and the way those systems and technologies are managed.

“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.

“There are a lot of vendors here and I’m fairly certain we’ve bought at least one of everything you’ve ever made. That’s led to the landscape that we’ve had, which has lots of customisation. It’s very expensive and there is little reuse.”

Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services and onto more open, standardised commodity platforms. OpenStack is but one infrastructure-centric tool helping the bank deliver on that goal (it’s using it to stand up an internal private cloud). But to get there the company also has to deal with other legacies of its recent string of acquisitions, including the fact that its development teams are still quite siloed.

In order to standardise and reduce the number of services the firm’s developers use, the bank created an engineering centre in Manhattan and assembled a team of engineers and developers (currently numbering 30, and set to hit roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.

The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.

“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.

“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

Q&A with Mark Evans, head of IT, RLB

Mark Evans

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Mark Evans, head of IT at global property and construction practice Rider Levett Bucknall (RLB), to discuss supporting BYOD, the need for standards in the cloud sector and the impact of working with large data models on the technology choices the firm has to make.

 

What do you see as the most disruptive trend in enterprise IT today?

I’m not entirely sure that the most disruptive trend in enterprise IT is entirely technical. Admittedly, the driving impetus for change is coming from technology, but it is being driven by non-IT people who are equipping their homes, cars and any one of a multitude of other environments with technology which works for them. The disruption manifests itself in the attitude which is brought to business from these domestic environments; people no longer see the bastion of “Corporate IT” as unassailable as it once was, before the commoditisation of IT equipment became the norm. Domestic procurement cycles are driven in a different manner to those of any business – it’s what the likes of Apple thrive on.

There’s more of a “heart” aspiration than a “head” decision when it comes to buying IT at home. Let’s be honest: who – at home – works out depreciation of an asset when a loved one is being tugged at by slick marketing and peer pressure? Maybe I’m a misanthrope, but this sort of pressure has a knock-on effect with a lot of people and they seek the flexibility, the performance, the ease of use and (let’s be honest) the flashiness of new toys at work. The person with the keys to the “toy box”, the erstwhile IT director, is seen as a barrier to that oft-quoted, rarely well-informed concept of ‘agility’.

So… BYOD. People bring their home kit to work and expect it to work and to offer them an ‘edge’. I think the disruption is bigger than Dave from Accounts bringing in his shiny new laptop (with added speed stripes). It is the expectation that this is acceptable in the face of business-wide legal constraints of liability, compliance and business planning – the directors of a business set the rules and this new, almost frivolous attitude to the complexity and requirements of corporate IT is a “wolf in sheep’s clothing” in terms of the risk it brings to a business. Where do I sit on this? I say, “bring it on”.

 

What do you think the industry needs to work on in terms of cloud service evolution?

Portability. Standards. Standards of portability. I still believe that there is a general complicity between vendors and purchasers to create a “handcuffs” relationship (“Fifty Shades of Big Blue”?) which is absolutely fine in the early part of a business relationship as it provides a predictable environment from the outset, but this predictability can become moribund and in an era where business models flex and morph at previously alarming rates, the “handcuffs” agreement can become shackles. If the agreement is on a month-by-month basis, it is rarely easy to migrate across Cloud platforms. Ignoring the potential volumes of data which may need to be moved, there is no lingua franca for Cloud services to facilitate a “switch on/switch off” ease-of-migration one might expect in the Cloud environment, predicated as it is on ease-of-use and implementation.

Data tends to move slowly in terms of development (after all, that’s where the value is), so maybe as an industry we need to consider a Data Cloud Service which doesn’t require massive agility, but a front-end application environment which is bound by standards of migratability (is that a word? If it isn’t – it should be!) to offer front-end flexibility against a background of data security and accessibility. In that way, adopting new front-end processes would be easier as there would be no requirement to haul terabytes of data across data centres. Two different procurement cycles, aligned to the specific vagaries of their environments.

 

Can you describe some of the unique IT constraints or features particular to your sector?

Acres of huge data structures. When one of the major software suppliers in your industry (Autodesk and construction, respectively) admits that the new modelling environment for buildings goes beyond the computing and data capability in the current market – there are alarm bells. This leads to an environment where the client front end ‘does the walking’ and the data stays in a data centre or the Cloud. Models which my colleagues need to use have a “starting price” of 2GB and escalate incredibly as the model seeks to more accurately represent the intended construction project. In an environment where colleagues would once carry portfolios of A1 or A0 drawings, they now have requirements for portable access to drawings which are beyond the capabilities of even workstation-class laptop equipment. Construction and, weirdly enough, Formula One motorsport are pushing the development of Cloud and virtualisation to accommodate these huge, data-rich, often highly graphical models. Have you ever tried 3D rendering on a standard x64 VMware or Hyper-V box? We needed Nvidia to sort out the graphics environment in the hardware and even that isn’t the ‘done deal’ we had hoped.

 

Is the combination of cloud and BYOD challenging your organisation from a security perspective? What kind of advice would you offer to other enterprises looking to secure their perimeter within this context?

Not really. We have a strong, professional and pragmatic HR team who have put in place the necessary constraints to ensure that staff are fully aware of their responsibilities in a BYOD environment. We have backed this up with decent MDM control. Beyond that? I honestly believe that “where there’s a will, there’s a way” and that if MI5 operatives can leave laptops in taxis we can’t legislate for human frailties and failings. Our staff know that there is a ‘cost of admission’ to the BYOD club and it’s almost a no-brainer; MDM controls their equipment within the corporate sphere of influence and their signature on a corporate policy then passes on any breaches of security to the appropriate team, namely, HR.

My advice to my IT colleagues would be – trust your HR team to do their job (they are worth their weight in gold and very often under-appreciated), but don’t give them a ‘hospital pass’ by not doing everything within your control to protect the physical IT environment of BYOD kit.

 

What’s the most challenging part about setting up a hybrid cloud architecture?

Predicting the future. It’s so, so, so easy to map the current operating environment in your business to a hybrid environment (“They can have that, we need to keep this…”) but constraining the environment by creating immovable and impermeable glass walls at the start of the project is an absolutely, 100 per cent easy way to lead to frustration with a vendor in future and we must be honest and accept that by creating these glass walls we were the architect of our own demise. I can’t mention any names, but a former colleague of mine has found this out to his company’s metaphorical and bottom-line cost. They sought to preserve their operating environment in aspic and have since found it almost soul-destroying to start all over again to move to an environment which supported their new aspirations.

Reading between the lines, I believe that they are now moving because there is a stubbornness on both sides and my friend’s company has made it more of a pain to retain their business than a benefit. They are constrained by a mindset, a ‘groupthink’ which has bred bull-headedness and very constrained thinking. An ounce of consideration of potential future requirements could have built in some considerable flexibility to achieve the aims of the business in changing trading environments. Now? They are undertaking a costly migration in the midst of a potentially high-risk programme of work; it has created stress and heartache within the business which might have been avoided if the initial move to a hybrid environment had considered the future, rather than almost constrained the business to five years of what was a la mode at the time they migrated.

 

What’s the best part about attending Cloud World Forum?

Learning that my answers above may need to be re-appraised because the clever people in our industry have anticipated and resolved my concerns.


Box touts new customers as the battle to differentiate continues

Box co-founder and chief executive Aaron Levie briefing journalists and analysts in London this week

Cloud storage incumbent Box announced a slew of new customers this week as the company, which recently went public, continues to nudge its books into the black. Despite strong competition in the segment and the added pressure that comes with being a public company, Box continues to differentiate itself from both traditional and non-traditional competition, said co-founder and chief executive officer Aaron Levie.

Box announced this week it had inked large deployment deals with home and body cosmetics brand Rituals Cosmetics, the University of Dundee and Lancaster University, which cumulatively total close to 50,000 new seats on the cloud storage and collaboration platform.

“You’re seeing all of this disruption from new devices, new employees entering the workforce, new ways of working, new customer and consumer expectations about how they want to interact with your services. Customers really have to go digital with their enterprises,” Levie said.

“From the inside, companies need to get more collaborative, move more quickly, make decisions faster, be able to have better technology for the workforce. It also means you’re going to have all new digital experiences to create your products, and offer an omnichannel customer experience – if you’re in retail, healthcare, this will drive fundamentally new business models.”

The company said it now has over 34 million users and 45,000 organisations globally using its service, with those companies belonging to a broad range of sectors – transportation, logistics, utilities, healthcare, retail, the charity sector, and many more.

It’s planning a big push into Europe. Its UK office is its fastest growing outpost, with over 140 employees, and it recently hired former Microsoft cloud sales exec Jeremy Grinbaum to lead the company’s commercial expansion efforts in France and southern Europe. It’s also looking to deploy international datacentres to power its services outside the US within the next 12 to 18 months.

One of the big areas it’s trying to break into is financial services. The company recently introduced Box for Financial Services as part of its Box for Industries offerings, a growing portfolio of vertically-integrated cloud-based storage and collaboration platforms that bake industry-specific data management, security capabilities and workflow management requirements right into the service.

“Financial services has been slower to adopt the cloud, mostly because of an unclear regulatory environment,” Levie told BCN. “We’ve been working with financial services customers around the regulatory and compliance aspect, and with our encryption key technology we’ve gotten much farther along in terms of giving financial services firms the ability to adhere to their data security controls.”

Levie said the company has recently had some fairly big wins in the financial services space – none that he can mention publicly yet, of course – but some of the company’s customers in the sector already include USAA, US Bank and T. Rowe Price, to name a few.

Box for Industries (it already offers Box for Healthcare and Box for Retail) is central to how the company intends to differentiate itself among a growing sea of competitors – that, and its security investments. Levie said Box is more enterprise-y than Dropbox, widely viewed as one of its largest competitors, and more vertically-integrated than UK-based Huddle. But when asked about competition from non-traditional competitors like banks, some of which are using their substantial datacentre, security and digital service UX investments to provide their own cloud-based storage services to customers, he said he sees Box as more of a partner than rival.

The company recently launched Box Developer Edition, a software development kit that lets partners and customers use APIs to integrate Box’s technology into their own applications. Levie said banks can become Box partners and effectively white-label its offering.
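
As a rough illustration of the kind of integration that SDK enables, the sketch below uses Box’s Python SDK to authenticate and list the contents of a folder, the sort of call a partner application might wrap behind its own front end. The credentials, token handling and folder ID are placeholder assumptions rather than a description of any customer’s implementation.

```python
# Minimal sketch of embedding Box as a back-end content service via its
# Python SDK (pip install boxsdk). Credentials and IDs are placeholders.
from boxsdk import OAuth2, Client

oauth = OAuth2(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    access_token="A_SHORT_LIVED_DEVELOPER_TOKEN",
)
client = Client(oauth)

# List the items in the account's root folder ('0'); a white-labelled app
# would surface these through its own UI rather than Box's.
for item in client.folder(folder_id="0").get_items(limit=100):
    print(item.type, item.id, item.name)
```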

“Box ends up being a natural back-end service in that process. So instead of them having to build out all of the infrastructure, manage all the systems and then essentially recreate what our hundreds of engineers are doing,” he said. “The value proposition for [banks] is going to be the digital experience that allows them to interact with their customers.”

The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments which, today, tends to imply an x86-centric stack built to serve up mostly any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market, as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), but it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.


But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternative architectures to x86 such as ARM, SPARC or Power because they provide better performance to run specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem, some always likely to join the fray and others less so, are being enticed to get involved. Mobile chip incumbent Qualcomm announced plans in November last year to enter the server chip market with its own ARM-based offerings at some point over the next two years, a market the company believes represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement, HP unveiled what it called the first “enterprise-grade ARM-based server”, its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a long-time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resource to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market because of the cost prohibitive nature of making such a move (and the millions of dollars in potential cost-savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing as a service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow, vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. The best thing you could do all things considered or remaining equal is hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers: a diversity of suppliers means spreading risk and increasing competition, which is another good thing for cost-to-performance.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on one hand and continue to drive down cost-to-performance. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent share of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project, open source cloud software and hardware projects respectively, is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today and, critically, he says such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely-tuned to very specific database workloads. Yet despite incurring losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor (along with Motorola), Oracle’s hardware business mostly bucked that trend (one experienced by most high-end server vendors) throughout 2014 and continues to do so.

The company’s 2015 Q2 saw its hardware systems grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the development of the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises, much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But, many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, as well as, to some extent, the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, due to the expectations people have of resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last between seven years and possibly up to a decade before it settles down – if it settles down,” Butler concluded.

Pivotal punts Geode to ASF to consolidate leadership in open source big data

Pivotal is looking to position itself as a front runner in open source big data

Pivotal has proposed “Project Geode” for incubation by the Apache Software Foundation, a project that would focus on developing Geode, the in-memory database technology at the core of Pivotal’s GemFire offering.

Geode can support ACID transactions for large-scale applications such as those used for stock trading, financial payments and ticket sales, and the company said the technology is already proven in customer deployments handling more than 10 million user transactions a day.

In February Pivotal announced it would open source much of its big data suite including GemFire, which the company will continue to support commercially. The move is part of a broader plan to consolidate its leadership in the open source big data ecosystem, where companies like Hortonworks are also trying to make waves.

The company also recently helped launch the Open Data Platform, which seeks to promote big data tech standardisation, and combat fragmentation around how Hadoop is deployed in enterprises and built upon by ISVs.

In the meantime, while the company said it would wait for the ASF’s decision, Pivotal has already put out a call to developers as it seeks early contributions to ensure the project gets a head start.

“The open sourcing of core components of products in the Pivotal Big Data Suite heralds a new era of how big data is done in the enterprise. Starting with core code in Pivotal GemFire, the components we intend to contribute to the open source community are already performing in the most hardened and demanding enterprise environments,” said Sundeep Madra, vice president, Data Product Group at Pivotal.

“Geode is an important part of building solutions for next generation data infrastructures and we welcome the community to join us in furthering Geode’s already compelling capabilities,” Madra said.

Why did anyone think HP was in it for public cloud?

HP president and chief executive officer Meg Whitman (pictured right) is leading HP’s largest restructuring ever

Many have jumped on a recently published interview with Bill Hilf, the head of HP’s cloud business, as a sign HP is finally coming to terms with its inability to make a dent in Amazon’s public cloud business. But what had me scratching my head is not that HP would so blatantly seem to cede ground in this segment – but why many assume it wanted to in the first place.

For those of you that didn’t see the NYT piece, or the subsequent pieces from the hordes of tech insiders and journalists more or less toeing the “I told you so” line, Hilf was quoted as candidly saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head [with AWS].”

HP has made mistakes in this space – the list is long, and others have done a wonderful job of fleshing out the classic “large incumbent struggles to adapt to a new paradigm” narrative that the company’s story, so far, smacks of.

I would only add that it’s a shame HP didn’t pull a “Dell” and publicly get out of the business of directly offering public cloud services to enterprise users – a move that served Dell well. Standing up public cloud services is by most accounts an extremely capital-intensive exercise that a company like HP, given its current state, is simply not best positioned to see through.

But it’s also worth pointing out that a number of interrelated factors have been pushing HP towards private and hybrid cloud for some time now, and despite HP’s insistence that it still runs the largest OpenStack public cloud – a claim other vendors have made in the past – its dedication to public cloud has always seemed superficial at best (particularly if you’ve had the, um, privilege, of sitting through years of sermons from HP executives at conferences and exhibitions).

HP’s heritage is in hardware – desktops, printers and servers – and servers still represent a reasonably large chunk of the company’s revenue, something it has no choice but to keep in mind as it seeks to move up the stack in other areas (its NFV and cloud workload management-focused acquisitions of late attest to this, beyond the broader industry trend). According to the latest Synergy Research figures the company still has a lead in the cloud infrastructure market, but primarily in private cloud.

It wants to keep that lead in private cloud, no doubt, but it also wants to bolster its pitch to the scale-out market specifically (where telcos are quite keen to play) without alienating its enterprise customers. This also means delivering capabilities that are starting to see increased demand among that segment, like hybrid cloud workload management, security and compliance tools, and offering a platform that has enough buy-in to ensure a large ecosystem of applications and services will be developed for it.

Whether OpenStack is the best way of hitting those sometimes competing objectives remains to be seen – HP hasn’t had these products in the market very long, and take-up has been slow – but that’s exactly what Helion is to HP.

Still, it’s worth pointing out that OpenStack, while trying to evolve capabilities that would whet the appetites of communications services providers and others in the scale-out segment (NFV, object storage, etc.), is seeing much more takeup from the private cloud crowd. Indeed one of the key benefits of OpenStack is easy burstability into, and (more of a work in progress) federatability between, OpenStack-based public and private clouds. The latter, by the way, is definitely consistent with the logic underpinning HP’s latest cloud partnership with the European Commission, which looks at – among other things – the potential federatability of regional clouds that have strong security and governance requirements.

Even HP’s acquisition strategy – particularly its purchase of Eucalyptus, a software platform that makes it easy to shift workloads between on-premise systems and AWS – seems in line with the view that a private cloud needs to be able to lean on someone else’s datacentre from time to time.

HP has clearly chosen its mechanism for doing just that, just as VMware looked at the public cloud and thought much the same in terms of extending vSphere and other legacy offerings. Like HP, it wanted to hedge its bets and stand up its own public cloud platform because, apart from the “me too” aspect, it thought doing so was in line with where users were heading – and, to a lesser extent, it didn’t want to let AWS, Microsoft and Google have all the fun if it didn’t have to. But public cloud definitely doesn’t seem front-of-mind for HP, or VMware, or most other vendors coming at this from an on-premise heritage (HP’s executives mentioned “public cloud” just once in the past three quarterly results calls with journalists and analysts).

Funnily enough, even VMware has come up with its own OpenStack distribution, and now touts a kind of “one cloud, any app, any device” mantra that has hybrid cloud written all over it – ‘hybrid cloud service’ being what the previous incarnation of its public cloud service was called.

All of this is of course happening against the backdrop of the slow crawl up the stack with NFV, SDN, cloud resource management software, PaaS, and so forth – not just at HP. Cisco, Dell and IBM are all looking to make inroads in software, while at the same time on the hardware side fighting off lower-cost Asian ODMs that are – with the exception of IBM – starting to significantly encroach on their turf, particularly in the scale-out markets.

The point is, HP, like many old-hat enterprise vendors, knows that what ultimately makes AWS so appealing isn’t its cost (it can actually be quite expensive, though prices – and margins – are dropping) or ease of procurement as an elastic hosting provider. It’s the massive ecosystem of services that gives the platform so much value, and the ability to tap into them fairly quickly. HP has bet the farm on OpenStack’s capacity to evolve into a formidable competitor to AWS in that sense (IBM and Cisco are also, to varying degrees, toeing a similar line), and it shouldn’t be dismissed outright given the massive buy-in that open source community has.

But – and some would view this as part of the company’s problem – HP’s bread and butter has been and continues to be in offering the technologies and tools to stand up predominately private clouds, or in the case of service providers, very large private clouds (it’s also big on converged infrastructure), and to support those technologies and tools, which really isn’t – directly – the business that AWS is in, despite there being substantial overlap in the enterprise customers they go after.

AWS, on the other hand, while it started in this space as an elastic hosting provider offering CDN and storage services, has more or less evolved into a kind of application marketplace, where any app can be deployed on almost infinitely scalable compute and storage platforms. Interestingly, AWS’s messaging has shifted from outright hostility towards the private cloud crowd (and private cloud vendors) towards being more open to the idea that some enterprises simply don’t want to expose their workloads or host them on shared infrastructure – in part because it understands there’s growing overlap, and because it wants them to on-board their workloads onto AWS.

HP’s problem isn’t that it tried and failed at the public cloud game – you can’t really fail at something if you don’t have a proper go at it; and on the private cloud front, Helion is still quite young, as is OpenStack, Cloud Foundry, and many of the technologies at the core of its revamped strategy.

Rather, it’s that HP, for all its restructuring efforts, talk of change and trumpeting of cloud, still risks getting stuck in its old-world thinking, which could ultimately hinder the company further as it seeks to transform itself. AWS senior vice president Andy Jassy, who hit out at tech companies like HP at the unveiling of Amazon’s Frankfurt-based cloud service last year, hit the nail on the head: “They’re pushing private cloud because it’s not all that different from their existing operating model. But now people are voting with their workloads… It remains to see how quickly [these companies] will change, because you can’t simply change your operating model overnight.”

Can the cloud save Hollywood?

The film and TV industry is warming to cloud

You don’t have to watch the latest ‘Avengers’ film to get the sense the storage and computational requirements of film and television production are continuing their steady increase. But Guillaume Aubichon, chief technology officer of post-production and visual effects firm DigitalFilm Tree (DFT) says production and post-production outfits may find use in the latest and greatest in open source cloud technologies to help plug the growing gap between technical needs and capabilities – and unlock new possibilities for the medium in the process.

Since its founding in 2000, DFT has done post-production work for a number of motion pictures as well as television shows airing on some of the largest networks in America including ABC, TNT and TBS. And Aubichon says that like many in the industry DFT’s embrace of cloud came about because the company was trying to address a number of pain points.

“The first and the most pressing pain point in the entertainment industry right now is storage – inexpensive, commodity storage that is also internet ready. With 4K becoming more prominent we have some projects that generate about 12TB of content a day,” he says. “The others are cost and flexibility.”


Aubichon explains three big trends are converging in the entertainment and media industry right now that are getting stakeholders from production to distribution interested in cloud.

First, 4K broadcast – a massive step up from High Definition in terms of the resources required for rendering, transmission and storage – is becoming more prominent.

Next, IP broadcasters are supplanting traditional broadcasters – Netflix, Amazon and Hulu are taking the place of the likes of CBS and ABC, slowly displacing the traditional content distribution model.

And, films are no longer exclusively shot in the Los Angeles area, with preferential tax regimes and other cost-based incentives driving production of English-speaking motion pictures outward into Canada, the UK, Central Europe and parts of New Zealand and Australia.

“With production and notably post-production costs increasing – both in terms of dollars and time – creatives want to be able to make more decisions in real time, or as close to real time as possible, about how a shot will look,” he says.

Can Cloud Save Hollywood?

DFT runs a hybrid cloud architecture based on OpenStack and depending on the project can link up to other private OpenStack clouds as well as OpenStack-based public cloud platforms. For instance, in doing some of the post-production work for Spike Jonze’s HER the company used a combination of Rackspace’s public cloud and its own private cloud instances, including Swift for object storage as well as a video review and approval application and virtual file system application – enabling creatives to review and approve shots quickly.

The company runs most of its application landscape off a combination of Linux and Microsoft virtualised environments, but is also a heavy user of Linux containers – which have benefits as a transmission format and also offer some added flexibility, like the ability to run simple compute processes directly within a storage node.

Processes like video and audio transcoding are a perfect fit for containers because they don’t necessarily warrant an entire virtual machine, and because the compute and storage can be kept so close to one another.
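
A minimal sketch of that pattern is below, assuming a host running Docker, the Docker SDK for Python and an ffmpeg container image (the image name, codecs and paths are all placeholders): the transcode runs in a throwaway container mounted over the media volume, so no full virtual machine is provisioned and the data stays on the storage host.

```python
# Sketch: run a transcode job in a short-lived container co-located with the
# media volume instead of provisioning a full VM for it.
# Assumes the Docker SDK for Python (pip install docker), a local Docker
# daemon and an ffmpeg image; names and paths are placeholders.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="jrottenberg/ffmpeg:4.4-ubuntu",        # any ffmpeg image would do
    command=[
        "-i", "/media/raw/scene042.mov",          # source file on the storage node
        "-c:v", "libx264", "-crf", "23",
        "-c:a", "aac",
        "/media/proxies/scene042_proxy.mp4",
    ],
    volumes={"/srv/media": {"bind": "/media", "mode": "rw"}},
    remove=True,                                  # discard the container afterwards
)
print(logs.decode(errors="replace"))
```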

Aubichon: ‘My goal is to help make the media and entertainment industry avoid what the music industry did’

“Any TV show or film production and post-production process involves multiple vendors. For instance, on both Mistresses and Perception there was an outside visual effects facility involved as well. So instead of having to take the shots, pull them off an LTO tape, put them on a drive, and send them over to the visual effects company, they can send us a request and we can send them an authorised link that connects back to our Swift object storage, which allows them to pull whatever file we authorise. So there’s a tremendous amount of efficiency gained,” he explains.
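
Those authorised links map closely onto Swift’s temporary URL (TempURL) feature, where a time-limited link for a single object is signed with a shared secret and the recipient can fetch just that object without any other credentials. The host, paths and key in the sketch below are placeholders; it is meant only to show roughly how such a link is generated.

```python
# Sketch: generate a time-limited, pre-authorised download link for a single
# object using OpenStack Swift's TempURL middleware. Host, account, container,
# object and key are placeholders.
import hmac
from hashlib import sha1
from time import time

def make_temp_url(host, path, key, ttl_seconds):
    expires = int(time()) + ttl_seconds
    # TempURL signs "<METHOD>\n<expires>\n<object path>" with the shared key
    message = f"GET\n{expires}\n{path}".encode()
    signature = hmac.new(key.encode(), message, sha1).hexdigest()
    return f"https://{host}{path}?temp_url_sig={signature}&temp_url_expires={expires}"

url = make_temp_url(
    host="swift.example.com",
    path="/v1/AUTH_postprod/dailies/ep101_shot_0420.exr",
    key="shared-temp-url-key",
    ttl_seconds=24 * 3600,   # link expires after a day
)
print(url)
```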

For an industry just starting to come out of physical transmission, that kind of workflow can bring tremendous benefits to a project. Although much of the post-production work for film and television still happens in LA, an increasing number of shows aren’t shot there; DFT for instance is currently working on shows shot in Vancouver, Toronto and Virginia. So the company runs an instance of OpenStack on-site where the shooting occurs and feeds the raw camera footage into an object storage instance, which is then synced back to Los Angeles in containers.

“We’ve even been toying with the idea of pushing raw camera files into OpenStack instances, and have those instances transcode those files into an H.265 proxy that could theoretically be pushed over a mobile data connection back to the editor in Los Angeles. The editor could then start cutting in proxies, and 12 to 18 hours later, when the two OpenStack instances have synced that material, you can then merge the data to the higher resolution version,” he says.
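
As a rough sketch of that proxy step (the filenames, scaling and quality settings below are assumptions), a small H.265 proxy suitable for a mobile link might be generated along these lines while the full-resolution material syncs separately:

```python
# Sketch: create a small H.265 (libx265) proxy of a raw camera file so an
# editor can start cutting before the full-resolution media has synced.
# Requires an ffmpeg build with libx265; filenames and settings are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "A001_C002_raw.mov",     # original camera file
        "-vf", "scale=-2:720",         # downscale to a 720p proxy
        "-c:v", "libx265", "-crf", "30", "-preset", "fast",
        "-c:a", "aac", "-b:a", "96k",
        "A001_C002_proxy.mp4",
    ],
    check=True,
)
```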

“We get these kinds of requests often, like when a director is shooting on location and he’s getting really nervous that his editor isn’t seeing the material before he has to move on from the location and finish shooting.”

So for DFT, he says, cloud is solving a transport issue, and a storage issue. “What we’re trying to push into now is solving the compute issue. Ideally we’d like to push all of this content to one single place, have this close to the compute and then all of your manipulation just happens via an automated process in the cloud or via VDI. That’s where we really see this going.”

The other element here, and one that’s undoubtedly sitting heavier than ever on the minds of the film industry in recent months, is security. Aubichon says that because what information is stored, where it lives and how secure it needs to be all change over the lifecycle of a project, a hybrid cloud model – or connectable cloud platforms with varying degrees of exposure – is required to support it. That’s where features like federated identity, which in OpenStack is still quite nascent, come into play. Federated identity offers a mechanism for linking clouds, granting and authenticating user identity quickly (and taking access away equally fast), and leaves a trail revealing who touches what content.
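The federated identity mapping itself is configured on the Keystone side, but the grant-fast, revoke-fast aspect can be illustrated with ordinary project role assignments. A minimal sketch using keystoneauth1 and python-keystoneclient, with hypothetical endpoint, user, project and role names:

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client as ksclient

# Placeholder admin credentials for the post house's private cloud.
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="cloud-admin",
    password="adminpass",
    project_name="admin",
    user_domain_name="Default",
    project_domain_name="Default",
)
keystone = ksclient.Client(session=session.Session(auth=auth))

# Look up the vendor's user, the project holding the show's assets, and a role.
user = keystone.users.find(name="vfx-vendor")      # hypothetical names
project = keystone.projects.find(name="her-post")
role = keystone.roles.find(name="member")

# Grant access for the duration of the engagement...
keystone.roles.grant(role, user=user, project=project)

# ...and take it away just as quickly once the work is delivered.
keystone.roles.revoke(role, user=user, project=project)
```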

“You need to be able to migrate authentication and data from a very closed instance out to something more open, and eventually out to public,” he says, adding that he has spent much of the past few years trying to convince the industry to eliminate any distinction between public and private clouds.

“In an industry that’s so paranoid about security, I’ve been trying to say ‘well, if you run an OpenStack instance in Rackspace, that’s really a private instance; they’re a trusted provider, that’s a private instance.’ To me, it’s just about how many people need to touch that material. If you have a huge amount of material then you’re naturally going to move to a public cloud vendor, but just because you’re on a public cloud vendor doesn’t mean that your instance is public.”

“I spend a lot of time just convincing the entertainment industry that this isn’t banking,” he adds. “They are slowly starting to come around; but it takes time.”

It All Comes Back To Data

Aubichon says the company is looking at ways to add value beyond simply cost and time reduction, with data and metadata aggregation figuring front and centre in that pursuit. The company did a proof of concept for Cougar Town where it showed how people watching the show on their iPads could interact with that content – a “second screen” interactive experience of sorts, but on the same viewing platform.

“Maybe a viewer likes the shirt one of the actresses is wearing on the show – they can click on it, and the Amazon or Target website comes up,” he says, adding that it could be a big source of revenue for online commerce channels as well as the networks. “This kind of stuff has been talked about for a while, but metadata aggregation and the process of dynamically seeking correlations in the data, where there have always been bottlenecks, has matured to the point where we can prove to studios they can aggregate all of this information without incurring extra costs on the production side. It’s going to take a while until it is fully mature, but it’s definitely coming.”

This kind of service assumes that plenty of metadata exists on what’s happening in a shot (or the ability to dynamically detect and translate that into metadata) and, critically, the ability to detect correlations in data that are tagged differently.

The company runs a big MongoDB backend but has added capabilities from an open source project called Karma, an ontology mapping tool that originally came out of the museum world. It’s a method of taking two MySQL databases and presenting users with correlations between data that are tagged differently.

DFT took that and married it with the text search function in MongoDB, a NoSQL platform, which basically allows it to push unstructured data into the system and find correlations there (the company plans to seed this capability back into the open source MongoDB community).
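The MongoDB side of that is straightforward to demonstrate. The sketch below uses pymongo with hypothetical collection and field names (not DFT’s schema): a single text index over the free-form fields lets records that use different vocabulary for the same thing be surfaced by one query.

```python
from pymongo import MongoClient, TEXT

db = MongoClient("mongodb://localhost:27017")["post_metadata"]   # placeholder database
shots = db["shots"]                                              # hypothetical collection

# One text index across the free-form fields lets differently-tagged records
# (e.g. 'wardrobe' vs 'costume') match the same search.
shots.create_index([("description", TEXT), ("tags", TEXT)])

cursor = shots.find(
    {"$text": {"$search": "plaid shirt wardrobe costume"}},
    {"score": {"$meta": "textScore"}, "scene": 1, "tags": 1},
).sort([("score", {"$meta": "textScore"})])

for doc in cursor:
    print(doc.get("scene"), doc.get("tags"))
```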

“Ultimately we can use all of this metadata to create efficiencies in the post-production process, and help generate revenue for stakeholders, which is fairly compelling,” Aubichon says. “My goal is to help make the media and entertainment industry avoid what the music industry did, and to become a more unified industry through software, through everyone contributing. The more information is shared, the more money is made, and everyone is happy. That’s something that philosophically, in the entertainment industry, is only now starting to come to fruition.”

It would seem open source cloud technologies like OpenStack, as well as the Linux kernel innovations that helped birth Docker and similar containerisation technologies, are also playing a leading role in bringing this kind of change about.

Every little helps: How Tesco is bringing the online food retail experience back in-store

Tesco is in the midst of overhauling its connectivity and IT services

Food retailers in the UK have for years spent millions of pounds on going digital and cultivating a web presence, which includes digitising product catalogues and building all of the other backend tools needed to support online shopping, customer service and food delivery. But Tomas Kadlec, group infrastructure IT director at Tesco, tells BCN more emphasis is now being placed on bringing the online experience back into physical stores, which is forcing the company to completely rethink how it structures and handles data.

Kadlec, who is responsible for Tesco’s IT infrastructure strategy globally, has spent the better part of the past few years building a private cloud deployment model the company can easily drop into the regional datacentres that power its European operations and beyond. This has largely been to improve the services it can provide to clients and colleagues within the company’s brick-and-mortar shops, and to support a growing range of internal applications.

“If you look at what food retailers have been doing for the past few years it was all about building out an online extension to the store. But that trend is reversing, and there’s now a kind of ‘back to store’ movement brewing,” Kadlec says.

“If we have 30,000 to 50,000 SKUs in one store at any given time, how do you handle all of that data in a way that can support feature-rich digital services for customers? And how do you offer digital services to customers in Tesco stores that cater to the nuances in how people act in both environments? For instance, people like to browse more in-store, sometimes calling a friend or colleague to ask for advice on what to get or for recipes; in a digital environment people are usually just in a rush to head for the checkout. These are all fairly big, critical questions.”

Some of the digital services envisioned are fairly ambitious and include being able to queue up tons of product information – recipes, related products and so forth – on mobile devices by scanning items with built-in cameras, and even, down the line, paying for items on those devices. But the food retail sector is one of the most competitive in the world, and it’s possible these kinds of services could be a competitive differentiator for the firm.

“You should be able to create a shopping list on your phone and reach all of those items in-store easily,” he says. “When you’re online you have plenty of information about those products at your fingertips, but far less when you’re in a physical store. So for instance, if you have a special dietary requirement we should be able to illuminate and guide the store experience on these mobile platforms with this in mind.”

“The problem is that in food retail the app economy doesn’t really exist yet. It exists everywhere else, and in food retail the app economy will come – it’s just that we as an industry have failed to make the data accessible so applications aren’t being developed.”

To achieve this vision, Tesco had to drastically change its approach to data and how it’s deployed across the organisation. The company originally started down the path of building its own API and offering internal users a platform-as-a-service to enable more agile app development, but Kadlec says the project quickly morphed into something much larger.

“It’s one thing to provide an elastic compute environment and a platform for development and APIs – something we can solve in a fairly straightforward way. It’s another thing entirely to expose the information you need for these services to work effectively in such a scalable system.”

Tesco’s systems handle and structure data the way many traditional enterprises within and outside food retail do – segmenting it by department, by function, and in alignment with the specific questions the data needs to answer. But the company is trying to move closer to a ‘store and stream now, ask questions later’ type of data model, which isn’t particularly straightforward.

“Data used to be purpose-built; it had a clearly defined consumer, like ERP data for example. But now the services we want to develop require us to mash up Tesco data and open data in more compelling ways, which forces us to completely re-think the way we store, categorise and stream data,” he explains. “It’s simply not appropriate to just drag and drop our databases into a cloud platform – which is why we’re dropping some of our data systems vendors and starting from scratch.”
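The article doesn’t name the technologies Tesco is adopting, but the ‘store and stream now, ask questions later’ idea can be sketched with any event-streaming platform. The example below assumes Kafka and kafka-python purely for illustration, with hypothetical broker, topic and field names: raw events are published as-is, and which questions to ask of them is decided later by consumers.

```python
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # kafka-python; illustrative choice only

producer = KafkaProducer(
    bootstrap_servers="broker.internal.example:9092",            # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_store_event(store_id, sku, event_type, payload):
    """Stream the raw event now; which reports need it is decided later."""
    producer.send("store-events", {                               # hypothetical topic
        "store_id": store_id,
        "sku": sku,
        "event": event_type,
        "payload": payload,
        "ts": datetime.now(timezone.utc).isoformat(),
    })

publish_store_event("store-1042", "SKU-000123", "shelf-scan", {"stock_level": 17})
producer.flush()
```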

Kadlec says the debate now centres on how the company can effectively democratise data while keeping critical kinds of information – like consumers’ personal information – secure and private, and on setting very specific architectural guidelines to match: “There should only be two types of data. Data that should be open, and we should make sure we make that accessible, and then there’s the type of data that’s so private people get fired for having made it accessible.”
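A minimal sketch of that two-tier rule, purely illustrative and with hypothetical dataset names: every dataset is explicitly tagged open or private, and anything untagged is refused by default.

```python
from enum import Enum

class DataClass(Enum):
    OPEN = "open"         # safe to expose through internal or external APIs
    PRIVATE = "private"   # e.g. customers' personal data; never exposed

# Hypothetical catalogue of datasets and their classification.
CLASSIFICATION = {
    "product_catalogue": DataClass.OPEN,
    "customer_profiles": DataClass.PRIVATE,
}

def can_expose(dataset: str) -> bool:
    """Only datasets explicitly tagged OPEN may be published; unknown ones are refused."""
    return CLASSIFICATION.get(dataset) is DataClass.OPEN

assert can_expose("product_catalogue")
assert not can_expose("customer_profiles")
assert not can_expose("loyalty_transactions")   # untagged, so denied by default
```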

The company hasn’t yet had the security discussion with its customers, which is why Kadlec says the systems Tesco puts in place initially will likely focus on improving internal efficiency and productivity – “so we don’t have to get into the privacy data nightmare”.

The company also wants to improve connectivity to its stores to better serve both employees and customers. Over the next 18 months it will completely overhaul store connectivity and infrastructure, centring on delivering low-latency, high-bandwidth in-store wifi and quadrupling the number of access points. It also plans to install 4G signal booster cells in its stores to improve GSM-based connectivity. Making sure that infrastructure is secure so that customer data isn’t leaked is a top priority, he says.

Tesco is among a number of retailers to make headlines of late – though not because of datacentre security or customer data loss, but because the company, having overstated its profits by roughly £250m, is in serious financial trouble. But Kadlec says what many may see as a challenge is in fact an opportunity for the company.

One of the things the company is doing is piloting OmniTrail’s indoor location awareness technology to improve how Tesco employees are deployed in stores and optimise how they respond to changes in demand.

“If anything this is an opportunity for IT. If you look at the costs within the store today, there are great opportunities to automate stuff in-store and make colleagues within our stores more focused on customer services. If for instance we’re looking at using location-based services in the store, why do you expect people to clock in and clock out? We still use paper ledgers for holidays – why can’t we move this to the cloud? The opportunities we have in Tesco to optimise efficiency are immense.”

“This will inevitably come back to profits and margins, and the way we do this is to look at how we run operations and save using automation,” he says.

Tomas is speaking at the Telco Cloud Forum in London, April 27-29, 2015.