The cloud beyond x86: How old architectures are making a comeback

x86 is undeniably the king of datacentre compute architecture, but there’s good reason to believe old architectures are making a comeback

When you ask IT pros to think of cloud, the first thing that often comes to mind is web-delivered, meter-billed virtualised compute (and increasingly storage and networking) environments, which today tend to imply an x86-centric stack built to serve up almost any workload. But anyone watching this space closely will see x86 isn’t the only kid on the block, with SPARC, ARM and Power all vying for a large chunk of the scale-out market as enterprises seek to squeeze more power out of their cloud hardware. What will the cloud stack of tomorrow look like?

Despite the dominance of x86 in the datacentre it is difficult to ignore the noise vendors have been making over the past couple of years around non-x86 architectures like ARM (ARM), SPARC (Oracle) and Power (IBM), but it’s easy to understand why: simply put, the cloud datacentre market is currently the dominant server market, with enterprises looking to consume more software as a service and outsource more of their datacentre operations than ever before.

Sameh Boujelbene, director of server research at Dell’Oro Group, says over 50 per cent of all servers will ship to cloud service providers by 2018, and the size of the market (over $40bn annually by some estimates) creates a massive opportunity for new – and in some cases old – non-x86 vendors aiming to nab a large chunk of it.

The nature and number of workloads is also changing. The number of connected devices sending or requesting data that needs to be stored or analysed, along with the number and nature of workloads processed by datacentres, will more than double in the next five years, Boujelbene explains. This increase in connected devices and workloads will drive the need for more computing capacity and more physical servers, while driving exploration of more performant architectures to support this growing workload heterogeneity.

This article appeared in the March/April edition of BCN Magazine.

But it’s also important to recognise how migration to the cloud is impacting the choice of server form factors, choice of server brand and the choice of CPU architecture from the datacentre or cloud service provider perspective. Needless to say, cloud service providers have to optimise their datacentre efficiency at every turn.

“Generally, they are moving from general purpose servers to workload optimised servers,” Boujelbene explains. “We see cloud accounts going directly to white box servers shipped by ODMs, not only to cut costs but also because ODMs allow customisation; traditional server OEMs such as Dell, HP and IBM simply didn’t want to provide customised servers a few years ago.”

Boujelbene sees big opportunities for alternatives to x86 such as ARM, SPARC or Power because they provide better performance for specific types of workloads, and Intel is reacting to that trend by making customised CPUs available to some large cloud accounts. The company has about 35 customised CPU SKUs, and growing, and late last year won a pretty large contract to supply Amazon Web Services, the largest and most established of the public cloud providers, with custom Intel Xeon E5-2666 v3 (Haswell) processors.

Others in the ecosystem – some long expected to join the fray, others less so – are being enticed to get involved. Mobile chip incumbent Qualcomm announced plans in November last year to enter the server chip market with its own ARM-based offerings over the next two years, a market the company believes represents a $15bn opportunity over the next five years.

And about a month before the Qualcomm announcement HP unveiled what it called the first “enterprise-grade ARM-based server”, its Moonshot range – the first to support ARM’s v8 architecture. Around the same time, Dell’s chief executive officer and founder Michael Dell intimated to a room of journalists that his company, a long-time Intel partner, would not be opposed to putting ARM chips in its servers.

SPARC and Power are both very compelling options when it comes to high I/O data analytics – where they are notably more performant than commodity x86. ARM’s key selling points have more to do with the ability to effectively balance licensing, design and manufacturing flexibility with power efficiency and physical density, though the company’s director of server programmes Jeff Underhill says other optimisations – being driven by cloud – are making their way to the CPU level.

“Cloud infrastructure by its very nature is network and storage-centric. So it is essential it can handle large numbers of simultaneous interactions efficiently, optimising for aggregate throughput rather than just focusing on the outright performance of a single server. Solutions with integrated high performance networking, as well as storage and domain specific accelerators augmenting their general processor capabilities, offer significantly improved throughput versus traditional general purpose approaches,” Underhill says.

Underhill explains that servers are actually becoming more specialised, though there is and will continue to be a need for general-purpose servers and architectures to support them.

“The really interesting thing to look at is the area where networking and server technologies are converging towards a more scalable, flexible and dynamic ‘infrastructure’. Servers are becoming more specialised with advanced networking and storage capabilities mixed with workload specific accelerators,” he says, adding that this is pushing consolidation of an increasing number of systems (particularly networking) onto the SoC.

Hedging Their Bets

Large cloud providers – those with enough resources to write their own software and stand up their own datacentres – are the primary candidates for making the architectural shift in the scale-out market because of the cost-prohibitive nature of making such a move (and the millions of dollars in potential cost-savings if it can be pulled off well).

It’s no coincidence Google, Facebook and Amazon have, with varying degrees of openness, flirted with the idea of shifting their datacentres onto ARM-based or other chips. Google for instance is one of several service providers steering the direction of the OpenPower Foundation (Rackspace is another), a consortium set up by IBM in December 2013 to foster cross-industry open source development of the Power architecture.

Power, which for IBM is the core architecture underlying its high-end servers and mainframes as well as its more recently introduced cognitive computing as a service platform Watson, is being pitched by the more than 80 consortium members as the cloud and big data architecture of choice. Brad McCredie, IBM fellow and vice president of IBM Power Systems Development and president of the OpenPower Foundation, says there is a huge opportunity for the Power architecture to succeed because of barriers in how technology cost and performance at the CPU level is scaling.

“If you go back five or six years, when the base transistor was scaling so well and so fast, all you had to do was go to the next-gen processor to get those cost-to-performance takedowns you were looking for. All things being equal, the best thing you could do was hop onto the next-gen processor. Now, service providers are not getting those cost take-down curves they were hoping for with cloud, and a lot of cloud services are run on massive amounts of older technology platforms.”

The result is that technology providers have to pull on more and more levers – like adding GPU acceleration or enabling GPU virtualisation, or enabling FPGA attachment – to get cost-to-performance to come down; that is driving much of the heterogeneity in the cloud – different types of heterogeneity, not just at the CPU level.

There’s also a classic procurement-related incentive for heterogeneity among providers: diversifying suppliers spreads risk and increases competitiveness in the cloud, which is another good thing for cost-to-performance too.

While McCredie says that it’s still early days for Power in the cloud, and that Power is well suited to a particular set of data-centric workloads, he acknowledges it’s very hard to stay small and niche on the one hand and continue to drive down cost-to-performance on the other. The Foundation is looking to drive at least 20 to 30 per cent of the scale-out market, which – considering x86 has about 95 per cent of that market locked up – is fairly ambitious.

“We have our market share in our core business, which for IBM is in the enterprise, but we also want share in the scale-out market. To do that you have to activate the open ecosystem,” he says, alluding to the IBM-led consortium.

It’s clear the increasingly prevalent open source mantra in the tech sector is spreading to pretty much every level of the cloud stack. For instance Rackspace, which participates in both OpenStack and the Open Compute Project, open source cloud software and hardware projects respectively, is actively working to port OpenStack over to the Power architecture, with the goal of having OpenStack running on OpenPower / Open Compute Project hardware in production sometime in the next couple of years. It’s that kind of open ecosystem McCredie says is essential in cloud today – and, critically, such openness need not come at the cost of loose integration or a consequent performance tax.

SPARC, which has its roots in financial services, retail and manufacturing, is interesting in part because it remains a fairly closed ecosystem and largely ends up in machines finely tuned to very specific database workloads. Yet despite incurring losses for several years following its acquisition of Sun Microsystems, the architecture’s progenitor, Oracle’s hardware business mostly bucked the trend experienced by most high-end server vendors throughout 2014 and continues to do so.

The company’s 2015 Q2 saw its hardware systems revenue grow 4 per cent year on year to roughly $717m, with the SPARC-based Exalogic and SuperCluster systems achieving double-digit growth.

“We’ve actually seen a lot of customers that have gone from SPARC to x86 Linux now very strongly come back to SPARC Solaris, in part because the technology has the audit and compliance features built into the architecture, they can do one click reporting, and because the virtualisation overhead with Solaris on SPARC is much lower when compared with other virtualisation platforms,” says Paul Flannery, senior director EMEA product management in Oracle’s server group.

Flannery says openness and heterogeneity don’t necessarily lead to the development of the most performant outcome. “The complexity of having multiple vendors in your stack and then having to worry about the patching, revision labels of each of those platforms is challenging. And in terms of integrating those technologies – the fact we have all of the databases and all of the middleware and the apps – to be able to look at that whole environment.”

Robert Jenkins, chief executive officer of CloudSigma, a cloud service provider that recently worked with Oracle to launch one of the first SPARC-as-a-Service platforms, says that ultimately computing is still very heterogeneous.

“The reality is a lot of people don’t get the quality and performance that they need from public cloud because they’re jammed through this very rigid framework, and computing is very heterogeneous – which hasn’t changed with cloud,” he says. “You can deploy simply, but inefficiently, and the reality is that’s not what most people want. As a result we’ve made efforts to go beyond x86.”

He says the company is currently hashing out a deal with a very large bank that wants to use the latest SPARC architecture as a cloud service – so without having to shell out half a million dollars per box, which is roughly what Oracle charges, or migrate off the architecture altogether, which is costly and risky. Besides capex, SPARC is well suited to be offered as a service because the kinds of workloads that run on the architecture tend to be more variable or run in batches.

“The enterprise and corporate world is still focused on SPARC and other older specialised architectures, mainframes for instance, but it’s managing that heterogeneous environment that can be difficult. Infrastructure as a service is still fairly immature, and combined with the fact that companies using older architectures like SPARC tend not to be first movers, you end up in this situation where there’s a gap in the tooling necessary to make resource and service management easier.”

Does It Stack Up For Enterprises?

Whereas datacentre modernisation during the 90s entailed, among other things, a transition away from expensive mainframes running Unix workloads towards lower-cost commodity x86 machines running Linux or Microsoft-based software packages on bare metal, for many large enterprises much of the 2000s focused on virtualising the underlying hardware platforms in a bid to make them more elastic and more performant. Those hardware platforms were overwhelmingly x86-based.

But, many of those same enterprises refused to go “all-in” on virtualisation or x86, maintaining multiple compute architectures to support niche workloads that ultimately weren’t as performant on commodity kit; financial services and the aviation industry are great examples of sectors where one can still find plenty of workloads running on 40-50 year old mainframe technology.

Andrew Butler, research vice president focusing on servers and storage at Gartner and an IT industry veteran, says the same trend is showing up in the cloud sector, as well as to some extent the same challenges.

“What is interesting is that you see a lot of enterprises claiming to move wholesale into the cloud, which speaks to this drive towards commoditisation in hardware – x86 in other words – as well as services, features and decision-making more generally. But that’s definitely not to say there isn’t room for SPARC, Power, mainframes or ARM in the datacentre, despite most of those – if you look at the numbers – appearing to have had their day,” Butler says.

“At the end of the day, in order to be able to run the workloads that we can relate to, delivering a given amount of service level quality is the overriding priority – which in the modern datacentre primarily centres on uptime and reliability. But while many enterprises were driven towards embracing what at the time was this newer architecture because of flexibility or cost, performance in many cases still reigns supreme, and there are many pursuing the cloud-enablement of legacy workloads, wrapping some kind of cloud portal access layer around a mainframe application for instance.”

“The challenge then becomes maintaining this bi-modal framework of IT, and dealing with all of the technology and cultural challenges that come along with all of this; in other words, dealing with the implications of bringing things like mainframes into direct contact with things like the software defined datacentre,” he explains.

A senior datacentre architect working at a large American airline who insists on anonymity says the infrastructure management, technology and cultural challenges alluded to above are very real. But they can be overcome, particularly because some of these legacy vendors are trying to foster more open exposure of their APIs for management interfaces (easing the management and tech challenge), and because ops management teams do get refreshed from time to time.

What seems to have a large impact is the need to ensure the architectures don’t become too complex, which can occur when old legacy code takes priority simply because the initial investment was so great. This also makes it more challenging for newer generations of datacentre specialists coming into the fold.

“IT in our sector is changing dramatically but you’d be surprised how much of it still runs on mainframes,” he says. “There’s a common attitude towards tech – and reasonably so – in our industry that ‘if it ain’t broke don’t fix it’, but it can skew your teams towards feeling the need to maintain huge legacy code investments just because.”

As Butler alluded to earlier, this bi-modality isn’t particularly new, though there is a sense among some that the gap between all of the platforms and architectures is growing when it comes to cloud, due to the expectations people have on resilience and uptime but also ease of management, power efficiency, cost, and so forth. He says that with IBM’s attempts to gain mindshare around Power (in addition to developing more cloudy mainframes), ARM’s endeavour to do much the same around its processor architecture and Oracle’s cloud-based SPARC aspirations, things are likely to remain volatile for vendors, service providers and IT’ers for the foreseeable future.

“It’s an incredibly volatile period we’re entering, where this volatility will likely last between seven years and possibly up to a decade before it settles down – if it settles down,” Butler concluded.

Can the cloud save Hollywood?

The film and TV industry is warming to cloud

You don’t have to watch the latest ‘Avengers’ film to get the sense that the storage and computational requirements of film and television production are continuing their steady increase. But Guillaume Aubichon, chief technology officer of post-production and visual effects firm DigitalFilm Tree (DFT), says production and post-production outfits may find the latest and greatest open source cloud technologies useful in plugging the growing gap between technical needs and capabilities – and unlocking new possibilities for the medium in the process.

Since its founding in 2000, DFT has done post-production work for a number of motion pictures as well as television shows airing on some of the largest networks in America including ABC, TNT and TBS. And Aubichon says that like many in the industry DFT’s embrace of cloud came about because the company was trying to address a number of pain points.

“The first and the most pressing pain point in the entertainment industry right now is storage – inexpensive, commodity storage that is also internet ready. With 4K becoming more prominent we have some projects that generate about 12TB of content a day,” he says. “The others are cost and flexibility.”

This article appeared in the March/April issue of BCN Magazine.

Aubichon explains three big trends are converging in the entertainment and media industry right now that are getting stakeholders from production to distribution interested in cloud.

First, 4K broadcast, a massive step up from high definition in terms of the resources required for rendering, transmission and storage, is becoming more prominent.

Next, IP broadcasters are supplanting traditional broadcasters – Netflix, Amazon and Hulu are taking the place of CBS and ABC, slowly displacing the traditional content distribution model.

And films are no longer shot exclusively in the Los Angeles area, with preferential tax regimes and other cost-based incentives driving production of English-language motion pictures outward to Canada, the UK, Central Europe and parts of New Zealand and Australia.

“With production and notably post-production costs increasing – both in terms of dollars and time – creatives want to be able to make more decisions in real time, or as close to real time as possible, about how a shot will look,” he says.

Can Cloud Save Hollywood?

DFT runs a hybrid cloud architecture based on OpenStack and depending on the project can link up to other private OpenStack clouds as well as OpenStack-based public cloud platforms. For instance, in doing some of the post-production work for Spike Jonze’s HER the company used a combination of Rackspace’s public cloud and its own private cloud instances, including Swift for object storage as well as a video review and approval application and virtual file system application – enabling creatives to review and approve shots quickly.

The company runs most of its application landscape off a combination of Linux and Microsoft virtualised environments, but is also a heavy user of Linux containers – which have benefits as a transmission format and also offer some added flexibility, such as the ability to run simple compute processes directly within a storage node.

Processes like video and audio transcoding are a perfect fit for containers because they don’t necessarily warrant an entire virtual machine, and because the compute and storage can be kept so close to one another.

Aubichon: ‘My goal is to help make the media and entertainment industry avoid what the music industry did’

“Any TV show or film production and post-production process involves multiple vendors. For instance, on both Mistresses and Perception there was an outside visual effects facility involved as well. So instead of having to take the shots, pull them off an LTO tape, put them on a drive, and send them over to the visual effects company, they can send us a request and we can send them an authorised link that connects back to our Swift object storage, which allows them to pull whatever file we authorise. So there’s a tremendous amount of efficiency gained,” he explains.
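The “authorised link” workflow described above maps naturally onto Swift’s TempURL feature, which signs a time-limited URL with an account-level secret key. Below is a minimal sketch of how such a link could be generated; the host, object path and key are hypothetical, while the signing format (HMAC-SHA1 over method, expiry and path) follows Swift’s documented TempURL scheme:

```python
import hmac
from hashlib import sha1
from time import time

def make_temp_url(host: str, path: str, key: str, ttl_seconds: int) -> str:
    """Build a time-limited, HMAC-signed GET URL for a Swift object.

    `path` is the object path, e.g. /v1/AUTH_account/container/object.
    Anyone holding the URL can fetch exactly that object until it expires.
    """
    expires = int(time()) + ttl_seconds
    # Swift's TempURL middleware signs "<METHOD>\n<expires>\n<path>"
    body = f"GET\n{expires}\n{path}"
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return f"{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Hypothetical values: a VFX vendor gets 24 hours of access to one shot
url = make_temp_url(
    host="https://storage.example.com",
    path="/v1/AUTH_dft/dailies/ep101_shot042.mxf",
    key="shared-account-secret",
    ttl_seconds=24 * 3600,
)
```

The receiving facility needs no Swift credentials of its own; revoking access is a matter of letting the URL expire or rotating the account key.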

For an industry just starting to move away from physical transmission, that kind of workflow can bring tremendous benefits to a project. Although much of the post-production work for film and television still happens in LA, an increasing number of shows aren’t shot there; DFT for instance is currently working on shows shot in Vancouver, Toronto, and Virginia. So what the company does is run an instance of OpenStack on-site where the shooting occurs and feed the raw camera footage into an object storage instance, which is then synced back to Los Angeles.

“We’ve even been toying with the idea of pushing raw camera files into OpenStack instances, and having those instances transcode those files into an H.265 proxy that could theoretically be pushed over a mobile data connection back to the editor in Los Angeles. The editor could then start cutting in proxies, and 12 to 18 hours later, when the two OpenStack instances have synced that material, you can then merge the data to the higher resolution version,” he says.

“We get these kinds of requests often, like when a director is shooting on location and he’s getting really nervous that his editor isn’t seeing the material before he has to move on from the location and finish shooting.”

So for DFT, he says, cloud is solving a transport issue, and a storage issue. “What we’re trying to push into now is solving the compute issue. Ideally we’d like to push all of this content to one single place, have this close to the compute and then all of your manipulation just happens via an automated process in the cloud or via VDI. That’s where we really see this going.”

The other element here, and one that’s undoubtedly sitting heavily on the minds of the film industry in recent months more than ever, is the security issue. Aubichon says that because the information, where it’s stored and how secure that information is, changes over the lifecycle of a project, a hybrid cloud model – or connectable cloud platforms with varying degrees of exposure – is required to support them. That’s where features like federated identity, which in OpenStack is still quite nascent, come into play. It offers a mechanism for linking clouds, granting and authenticating user identity quickly (and taking access away equally fast), and leaves a trail revealing who touches what content.
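The three properties Aubichon wants – granting access quickly, revoking it equally fast, and leaving a trail – can be illustrated independently of OpenStack’s actual federation machinery. The sketch below is purely illustrative (the class name, token scheme and asset names are invented, not any OpenStack API):

```python
import time
import secrets

class AccessBroker:
    """Illustrative token broker: issue, check, revoke, and audit access."""

    def __init__(self):
        self._tokens = {}    # token -> (user, asset, expiry time)
        self._revoked = set()
        self.audit_log = []  # (timestamp, user, asset, outcome)

    def grant(self, user: str, asset: str, ttl: int) -> str:
        """Issue a short-lived token tied to one user and one asset."""
        token = secrets.token_hex(16)
        self._tokens[token] = (user, asset, time.time() + ttl)
        return token

    def revoke(self, token: str) -> None:
        """Take access away immediately, before the token expires."""
        self._revoked.add(token)

    def access(self, token: str, asset: str) -> bool:
        """Check a token and record the attempt in the audit trail."""
        entry = self._tokens.get(token)
        user = entry[0] if entry else "unknown"
        ok = (
            entry is not None
            and token not in self._revoked
            and entry[1] == asset
            and time.time() < entry[2]
        )
        self.audit_log.append((time.time(), user, asset, "ok" if ok else "denied"))
        return ok

broker = AccessBroker()
token = broker.grant("vfx-vendor", "ep101_shot042.mxf", ttl=3600)
granted = broker.access(token, "ep101_shot042.mxf")   # True: valid token
broker.revoke(token)                                  # access taken away
denied = broker.access(token, "ep101_shot042.mxf")    # False: revoked
```

Both the successful and the denied attempt end up in `audit_log`, which is the “trail revealing who touches what content” in miniature.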

“You need to be able to migrate authentication and data from a very closed instance out to something more open, and eventually out to public,” he says, adding that he has spent many of the past few years trying to convince the industry to eliminate any distinction between public and private clouds.

“In an industry that’s so paranoid about security, I’ve been trying to say ‘well, if you run an OpenStack instance in Rackspace, that’s really a private instance; they’re a trusted provider, that’s a private instance.’ To me, it’s just about how many people need to touch that material. If you have a huge amount of material then you’re naturally going to move to a public cloud vendor, but just because you’re on a public cloud vendor doesn’t mean that your instance is public.”

“I spend a lot of time just convincing the entertainment industry that this isn’t banking,” he adds. “They are slowly starting to come around; but it takes time.”

It All Comes Back To Data

Aubichon says the company is looking at ways to add value beyond simply cost and time reduction, with data and metadata aggregation figuring front and centre in that pursuit. The company did a proof of concept for Cougar Town where it showed how people watching the show on their iPads could interact with that content – a “second screen” interactive experience of sorts, but on the same viewing platform.

“Maybe a viewer likes the shirt one of the actresses is wearing on the show – they can click on it, and the Amazon or Target website comes up,” he says, adding that it could be a big source of revenue for online commerce channels as well as the networks. “This kind of stuff has been talked about for a while, but metadata aggregation and the process of dynamically seeking correlations in the data, where there have always been bottlenecks, has matured to the point where we can prove to studios they can aggregate all of this information without incurring extra costs on the production side. It’s going to take a while until it is fully mature, but it’s definitely coming.”

This kind of service assumes there exists loads of metadata on what’s happening in a shot (or the ability to dynamically detect and translate that into metadata) and, critically, the ability to detect correlations in data that are tagged differently.

The company runs a big MongoDB backend but has added capabilities from an open source project called Karma, an ontology mapping service that originally came out of the museum world. It’s a method of taking two MySQL databases and presenting users with correlations in data that are tagged differently.

DFT took that and married it with the text search function in MongoDB, a NoSQL platform, which basically allows it to push unstructured data into the system and find correlations there (the company plans to seed this capability back into the open source MongoDB community).
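The Karma-style approach described above – mapping each source’s tags onto a shared ontology and normalising values before looking for correlations – can be sketched in a few lines. The field names, vocabularies and records below are invented for illustration:

```python
# Two datasets tag the same concepts with different field names and values.
camera_log = [
    {"clip": "A042", "lens_mm": 35, "loc": "VAN-STAGE2"},
    {"clip": "A043", "lens_mm": 85, "loc": "VAN-STAGE2"},
]
script_notes = [
    {"shot": "A042", "focal_length": "35mm", "location": "Vancouver Stage 2"},
]

# A small ontology maps each source's tags onto shared property names.
ONTOLOGY = {
    "camera_log": {"clip": "shot_id", "lens_mm": "focal_mm", "loc": "stage"},
    "script_notes": {"shot": "shot_id", "focal_length": "focal_mm", "location": "stage"},
}

# Per-property normalisers reconcile differing value conventions.
NORMALISE = {
    "focal_mm": lambda v: int(str(v).rstrip("m")),              # "35mm" -> 35
    "stage": lambda v: v.replace("VAN-STAGE2", "Vancouver Stage 2"),
}

def to_shared(source: str, record: dict) -> dict:
    """Translate one record into the shared ontology's vocabulary."""
    mapping = ONTOLOGY[source]
    out = {}
    for field, value in record.items():
        prop = mapping[field]
        out[prop] = NORMALISE.get(prop, lambda x: x)(value)
    return out

merged = [to_shared("camera_log", r) for r in camera_log] + \
         [to_shared("script_notes", r) for r in script_notes]

# Records from different systems now correlate on shared properties.
matches = [r for r in merged if r["shot_id"] == "A042"]
```

Once everything lives in one vocabulary, correlations that were invisible across the original tag sets – here, that the camera log and the script notes describe the same 35mm shot on the same stage – fall out of a simple query.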

“Ultimately we can use all of this metadata to create efficiencies in the post-production process, and help generate revenue for stakeholders, which is fairly compelling,” Aubichon says. “My goal is to help make the media and entertainment industry avoid what the music industry did, and to become a more unified industry through software, through everyone contributing. The more information is shared, the more money is made, and everyone is happy. That’s something that philosophically, in the entertainment industry, is only now starting to come to fruition.”

It would seem open source cloud technologies like OpenStack as well as innovations in the Linux kernel, which helped birth Docker and similar containerisation technologies, are also playing a leading role in bringing this kind of change about.

NXP: ‘Industry needs to ensure IoT is simple and secure’

Internet of Things devices need to be simple and secure if customers are to adopt them

The entire telecoms industry needs to focus on ensuring the IoT delivers real value to consumers, and the security and user simplicity of connected devices should be of paramount importance, said Jeff Fonseca, regional sales director for the Americas at chip vendor NXP, in an interview with Telecoms.com.

As an NFC specialist whose customer case examples in the contactless payments space include the London Underground’s contactless travel scheme, the badges at MWC, and several banks’ EMV cards, NXP is increasingly focusing on IoT. According to Fonseca, securing connected devices is something that has to happen for consumers to really get on board with the IoT.

“What we bring in terms of IoT is really the security. All the [secure] stuff we do in passports, all the stuff we do on bank cards, and secure payments, getting you securely onto trains, that type of secure technology, embedding that and infusing that into other categories like IoT [is on our agenda].”

But he said it is not yet clear what exactly is behind the much hyped term. “Honestly, IoT is a big word that I don’t know has a true definition of what’s going to be the one key thing that is IoT. There’s so many moving pieces and parts the difficulty is really unwrapping that, and then making sure we know where we need to be on the trajectory with the right players and partners.

“We need to have ways to execute upon very good security and connectivity that is simple for consumers to use, and that is scalable. It [IoT] shouldn’t be just a buzz word, it should actually have usable value for the consumer.”

Fonseca said there’s not much point in having numerous connected devices in the home unless there’s one common way to communicate with them. “You’re not gonna have 10 different devices that all talk a different language in your home; that’s not gonna scale in the IoT space. But if you have the ability to have a few devices that talk a similar language, then consumers start to see value from the perspective of managing your home with your smartphone, for example.”

But billions of devices connected to the internet bring security implications, and Fonseca said ensuring consumers’ security is a key consideration. “How does that work, and how does that work securely? How do you take the cloud and connect it down to these end-point devices in your home and still manage them with your smartphone or your tablet?

“These are the difficult conversations we all have to have as an industry to move in that direction to make sure that in the end it’s all about the consumer, and making sure that there’s an extremely simple and usable product for them. Even though it’s complex underneath to do all this stuff that has to happen in IoT, the consumer doesn’t care, the consumer just wants it to work and they want it to be secure.”

At MWC 2015 NXP was showcasing its product portfolio, which on top of the technology to secure bank cards and passports also includes solutions for the connected car, wireless mobile charging, and ‘smart audio’ solutions that enhance voice and call clarity based on information passed on by algorithms designed to recognise the environment from which the call is made. The firm has also developed wireless, magnetic induction-based earbuds as part of a concept it calls ‘true mobility’.

At the beginning of the month NXP announced its plan to acquire competitor Freescale Semiconductor. “We are going to acquire them and the announcement so far has stated that part of that [acquisition] is this IoT convergence play,” Fonseca said. “Freescale is very strong in that category as well, and we’ll see some obvious synergies from taking what NXP has and from what they can bring to the table towards an IoT play.”


Every little helps: How Tesco is bringing the online food retail experience back in-store

Tesco is in the midst of overhauling its connectivity and IT services

Food retailers in the UK have for years spent millions of pounds on going digital and cultivating a web presence, including the digitisation of product catalogues and all of the backend tools needed to support online shopping, customer service and food delivery. But Tomas Kadlec, group infrastructure IT director at Tesco, tells BCN more emphasis is now being placed on bringing the online experience back into physical stores, which is forcing the company to completely rethink how it structures and handles data.

Kadlec, who is responsible for Tesco’s IT infrastructure strategy globally, has spent the better part of the past few years building a private cloud deployment model the company could easily drop into regional datacentres that power its European operations and beyond. This has largely been to improve the services it can provide to clients and colleagues within the company’s brick and mortar shops, and support a growing range of internal applications.

“If you look at what food retailers have been doing for the past few years it was all about building out an online extension to the store. But that trend is reversing, and there’s now a kind of ‘back to store’ movement brewing,” Kadlec says.

“If we have 30,000 to 50,000 SKUs in one store at any given time, how do you handle all of that data in a way that can support feature-rich digital services for customers? And how do you offer digital services to customers in Tesco stores that cater to the nuances in how people act in both environments? For instance, people like to browse more in-store, sometimes calling a friend or colleague to ask for advice on what to get or for recipes; in a digital environment people are usually just in a rush to head for the checkout. These are all fairly big, critical questions.”

Some of the digital services envisioned are fairly ambitious, and include being able to queue up extensive product information – recipes, related products and so forth – on mobile devices by scanning items with built-in cameras, and even, down the line, paying for items on those devices. But the food retail sector is one of the most competitive in the world, and these kinds of services could prove a competitive differentiator for the firm.

“You should be able to create a shopping list on your phone and reach all of those items in-store easily,” he says. “When you’re online you have plenty of information about those products at your fingertips, but far less when you’re in a physical store. So for instance, if you have special dietary requirements we should be able to illuminate and guide the store experience on these mobile platforms with this in mind.”

“The problem is that in food retail the app economy doesn’t really exist yet. It exists everywhere else, and in food retail the app economy will come – it’s just that we as an industry have failed to make the data accessible, so applications aren’t being developed.”

To achieve this vision, Tesco had to drastically change its approach to data and how it’s deployed across the organisation. The company originally started down the path of building its own API and offering internal users a platform-as-a-service to enable more agile app development, but Kadlec says the project quickly morphed into something much larger.

“It’s one thing to provide an elastic compute environment and a platform for development and APIs – something we can solve in a fairly straightforward way. It’s another thing entirely to expose the information you need for these services to work effectively in such a scalable system.”

Tesco’s systems handle and structure data the way many traditional enterprises within and outside food retail do – segmenting it by department, by function, and in alignment with the specific questions the data needs to answer. But the company is trying to move closer to a ‘store and stream now, ask questions later’ type of data model, which isn’t particularly straightforward.

“Data used to be purpose-built; it had a clearly defined consumer, like ERP data for example. But now the services we want to develop require us to mash up Tesco data and open data in more compelling ways, which forces us to completely re-think the way we store, categorise and stream data,” he explains. “It’s simply not appropriate to just drag and drop our databases into a cloud platform – which is why we’re dropping some of our data systems vendors and starting from scratch.”
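The ‘store and stream now, ask questions later’ model Kadlec describes is essentially what data engineers call schema-on-read: events are captured raw at write time, and structure is imposed only when a question is asked. A purely illustrative sketch (this is not Tesco’s actual system, and the event fields are invented for the example) might look like this:

```python
import json

class EventStore:
    """A minimal 'store now, ask questions later' event log.

    Events are kept as raw JSON at write time, rather than being forced
    into a department-specific schema up front.
    """

    def __init__(self):
        self._log = []  # raw JSON strings; no schema imposed on write

    def append(self, event: dict) -> None:
        self._log.append(json.dumps(event))

    def query(self, predicate):
        """Apply an ad-hoc question to the raw log at read time."""
        return [e for e in map(json.loads, self._log) if predicate(e)]

store = EventStore()
store.append({"type": "scan", "sku": "4711", "store": "Leeds"})
store.append({"type": "purchase", "sku": "4711", "store": "Leeds"})
store.append({"type": "scan", "sku": "0815", "store": "York"})

# A question nobody anticipated at write time: which stores saw scans?
scans = store.query(lambda e: e["type"] == "scan")
```

The trade-off is the one Kadlec hints at: writes become trivial and new questions cost nothing to ask, but every read must interpret raw data, which is why simply dragging purpose-built databases into a cloud platform doesn’t get you there.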

Kadlec says the debate now centres on how the company can effectively democratise data while keeping critical kinds of information – like consumers’ personal information – secure and private: “There should only be two types of data. Data that should be open, and we should make sure we make that accessible, and then there’s the type of data that’s so private people get fired for having made it accessible – and setting up very specific architectural guidelines along with this.”

The company hasn’t yet had the security discussion with its customers, which is why Kadlec says the systems Tesco puts in place initially will likely focus on improving internal efficiency and productivity – “so we don’t have to get into the privacy data nightmare”.

The company also wants to improve connectivity to its stores to better serve both employees and customers. Over the next 18 months it will complete an overhaul of store connectivity and infrastructure, centred on delivering low-latency in-store wifi and quadrupling the number of access points. It also plans to install 4G signal booster cells in its stores to improve GSM-based connectivity. Making sure that infrastructure is secure so that customer data isn’t leaked is a top priority, he says.

Tesco is among a number of retailers to make headlines of late – though not because of datacentre security or customer data loss, but because the company, having overstated its profits by roughly £250m, is in serious financial trouble. But Kadlec says what many may see as a challenge is in fact an opportunity for the company.

One of the things the company is doing is piloting OmniTrail’s indoor location awareness technology to improve how Tesco employees are deployed in stores and optimise how they respond to changes in demand.

“If anything this is an opportunity for IT. If you look at the costs within the store today, there are great opportunities to automate stuff in-store and make colleagues within our stores more focused on customer services. If for instance we’re looking at using location-based services in the store, why do you expect people to clock in and clock out? We still use paper ledgers for holidays – why can’t we move this to the cloud? The opportunities we have in Tesco to optimise efficiency are immense.”

“This will inevitably come back to profits and margins, and the way we do this is to look at how we run operations and save using automation,” he says.

Tomas is speaking at the Telco Cloud Forum in London, April 27-29, 2015.