Category archive: Interviews

Exclusive: How Virgin Active is getting fit with the Internet of Things

Virgin Active wants to use IoT to make its service more holistic and improve customer retention

Virgin Active is embarking on an ambitious redesign of its facilities that uses the Internet of Things to improve the service it offers to customers and reduce subscriber attrition rates, explains Andy Caddy, chief information officer of Virgin Active.

“Five years ago you didn’t really need to be very sophisticated as a health club operator in terms of your IT and digital capability,” Caddy says. “But now I would argue that things have changed dramatically – and you have to be very smart about how you manage your relationship with customers.”

The health club sector is an unusual subscription-based business, in part because the typical attrition rate is around 50 per cent – meaning that by the end of the year a club has lost half of the members it started out with, and needs to sign up at least as many new subscribers simply to grow in aggregate. That’s quite a challenge to tackle.

Much of how Virgin Active intends to address this is through cleverer use of data, using cloud-based software and IoT sensors to better understand what its customers are doing inside and beyond the gym. The company’s vision involves creating one consolidated view of the customer, collating information stored on customers’ smartphones with health data generated by wearable sensors and the gym machines those customers use.

The company is already trialling this vision at a new fitness club at Cannon Street, London, which opens later this month. Originally announced last year, the club – which Caddy says will be Virgin Active’s flagship technology club – uses membership wristbands with embedded RFID chips that members can use for everything from entering the gym and logging cardiovascular data from the machines they use, to buying drinks at the café, renting towels and accessing lockers.

“Now we start to see what people are doing in the clubs, which gives us a richer set of data to work with, and it starts to generate insights that are more relevant and engaging and perhaps also feeds our CRM and product marketing,” he says. “Over the next few months we’ll be able to compare this data with what we see at other clubs to find out a few important things – are we becoming more or less relevant to customers? Is customer retention improving?”

Combine that with IoT data from devices like smartwatches worn outside the confines of the gym, and the company can get a better sense of how to improve what it suggests as a health or fitness activity from a holistic standpoint. It also means more effective marketing – which, by Caddy’s own admission, demands a more sophisticated way of handling data and acting on it than the company manages today.

“The kinds of questions I want to be able to answer for my customers are things like: What’s the kind of lunch I can eat tomorrow based on today’s activity? How should I change my calendar next week based on my current stress levels? These are the really interesting questions that would absolutely add value to [a customer’s] life and also create a reasonable extension of the role we’re already playing as a fitness provider.”

But Caddy says the vendors themselves, while pushing the boundaries in IoT from a technical standpoint, pose the biggest threats to the sector’s development.

“We want standards because it’s very hard to do anything when Nike want to talk about Fuel and Fitbit want to talk about Steps and Apple want to talk about Activity, and none of these things equal the same things,” he explains. “What we really want is some of these providers to start thinking about how you do something smart with that information, and what you need in order to do that, but I’m always surprised by how few vendors are asking those kinds of questions.”

“It’s an inevitable race to the bottom in sensor tech; the value is all in the data.”
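Caddy’s point about incompatible vendor metrics is easy to picture in code. The sketch below is purely illustrative: the payload fields and conversion factors are invented (Nike Fuel, Fitbit steps and Apple activity minutes are not directly interchangeable), but it shows the kind of normalisation an aggregator has to do before any cross-device insight is possible.

```python
# Illustrative only: the vendor units and conversion factors below are
# assumptions, not real Nike/Fitbit/Apple figures. The point is that each
# wearable reports activity in its own proprietary unit, so an aggregator
# must normalise readings before it can compare or combine them.

from dataclasses import dataclass

@dataclass
class ActivitySample:
    member_id: str
    source: str      # e.g. "fitbit", "nike", "apple"
    unit: str        # vendor-specific unit name
    value: float

# Hypothetical mapping from each vendor's unit to estimated kilocalories.
KCAL_PER_UNIT = {
    "fitbit_steps": 0.04,       # rough kcal per step (assumption)
    "nike_fuel": 0.25,          # rough kcal per Fuel point (assumption)
    "apple_exercise_min": 7.0,  # rough kcal per exercise minute (assumption)
}

def normalise(sample: ActivitySample) -> float:
    """Convert a vendor-specific reading into a common energy estimate."""
    return sample.value * KCAL_PER_UNIT[sample.unit]

if __name__ == "__main__":
    readings = [
        ActivitySample("m-001", "fitbit", "fitbit_steps", 9500),
        ActivitySample("m-001", "nike", "nike_fuel", 2400),
        ActivitySample("m-001", "apple", "apple_exercise_min", 35),
    ]
    total = sum(normalise(r) for r in readings)
    print(f"Estimated daily burn for m-001: {total:.0f} kcal")
```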

Companies like Apple and Microsoft know this – and in health specifically are attempting to build out their own data services that developers can tap into for their own applications. But again, those are closed, proprietary systems, and it may be some time before the IoT sector opens up to effectively cater to a multi-device, multi-cloud world.

“We’re lucky in a sense because health and fitness is one of the first places where IoT has taken off in a real sense. But to be honest, we’re still a good way from where we want to be,” he says.

Ericsson details strategic plans beyond telecoms sector

Swedish networking giant Ericsson has made no attempt to hide the fact that it needs to diversify in order to survive, and the nature of that diversification just got a bit clearer, explains Telecoms.com.

In his exclusive interview with Telecoms.com late last year, CEO Hans Vestberg detailed the five main areas of diversification his company has identified: IP networks, Cloud, OSS/BSS, TV & Media, and Industry & Society. Ericsson has spoken freely about the first four but chose to keep quiet about its Industry & Society initiative until it was ready.

That moment has now arrived, so Telecoms.com spoke to Nadine Allen, who heads up Industry & Society for Ericsson in Western and Central Europe. She explained that Ericsson sees a massive opportunity in helping other industries capitalize on the way the telecoms and IT industries are evolving and converging, with IoT being a prime example.

“The evolved use of ICT is becoming increasingly important to all industries as they address the opportunities and challenges that the networked society will bring,” said Allen. “There is a growing need for ICT connectivity and services in market segments outside the traditional customer base of Ericsson, such as: utilities, transport and public safety.”

Ericsson has identified five key industries to focus on: Automotive, Energy & Utilities, Road & Rail, Safety & Security and Shipping. As you can see, these are mainly quite industrial sectors, which is in keeping with how things like IoT are evolving, with the main commercial applications being of a B2B type.

“Ericsson has been a transformation partner to our customers for many decades and supported them in shaping their strategies,” said Allen. “This is a key strength relevant to customers inside and outside the telco space as they develop their connected strategies.

“We are a leading software provider and developer across all areas of the network, including OSS and BSS – these capabilities we see as being key to what will be needed to flexibly support the plethora of future use cases, some of which we can only imagine right now.”

Allen brought our attention to some specific use-cases, illustrated in the slide below. In utilities, for example, things like smart grids and smart metering are already emerging as a way to increase efficiency, while intelligent transport systems are doing the same for that sector.

Ericsson industry & society slide

All of this makes a lot of sense on paper, and Ericsson unquestionably has a lot of tools at its disposal to help industries get smarter, but combining these capabilities into coherent solutions and competing against companies such as the big systems integration and consulting firms will be a challenge. The Ericsson brand is strong in telcos, but not necessarily in transport, and it still needs to establish its consulting credentials beyond its home territory.

To conclude we asked Allen how she sees these underlying trends evolving. “We believe the Internet of Things will have a profound impact in the future, enabling anything to be connected, and providing ‘smartness’ to these connected things will bring value across many sectors,” she said.

“The vision of IoT is a key part of the networked society and in one line I would say it is well described by ‘where everything that can benefit from being connected will be connected’. For example in a world of connected things, value will shift from the physical properties of a product to the services that it provides.”

Living in a hybrid world: From public to private cloud and back again

Orlando Bayter, chief exec and founder of Ormuco

The view often propagated by IT vendors is that public cloud is already capable of delivering a seamless extension between on-premise private cloud platforms and public, shared infrastructure. But Orlando Bayter, chief executive and founder of Ormuco, says the industry is only at the outset of delivering a deeply interwoven fabric of private and public cloud services.

Demand for that kind of seamlessness hasn’t been around for very long, admittedly. It’s no great secret that in the early days of cloud, demand for public cloud services was spurred largely by the slow pace at which traditional IT organisations often move. As a result, every time a developer wanted to build an application they would simply swipe the credit card and go, billing back to IT at some later point. So the first big use case for hybrid cloud emerged when developers needed to bring their apps back in-house, where they would live and probably die.

But as the security practices of cloud service providers continue to improve, along with enterprise confidence in cloud more broadly, cloud bursting – the ability to use a mix of public and private cloud resources to fit the utilisation needs of an app – became more widely talked about. It’s usually cost prohibitive and far too time consuming to scale private cloud resources quickly enough to meet the changing demands of today’s increasingly web-based apps, so cloud bursting has become the natural next step in the hybrid cloud world.

Orlando will be speaking at the Cloud World Forum in London June 24-25. Click here to register.

There are, however, still precious few platforms that offer this kind of capability in a fast and dynamic way. Open source projects like OpenStack, or more proprietary variants like VMware’s vCloud or Microsoft’s Azure Stack (and all the tooling around these platforms and architectures), are at the end of the day all being developed with a view towards supporting the deployment and management of workloads that can exist in as many places as possible, whether on-premise or in a cloud vendor’s datacentre.

“Let’s say as a developer you want to take an application you’ve developed in a private cloud in Germany and move it onto a public cloud platform in the US. Even for the more monolithic migration jobs you’re still going to have to do all sorts of re-coding, re-mapping and security upgrades, to make the move,” Bayter says.

“Then when you actually go live, and have apps running in both the private and public cloud, the harsh reality is most enterprises have multiple management and orchestration tools – usually one for the public cloud and one for the private; it’s redundant, and inefficient.”

Ormuco is one company trying to solve these challenges. It has built a platform based on HP Helion OpenStack and offers both private and public instances, which can be managed through a single pane of glass (the company has built its own layer in between to abstract the resources underneath).

It has multiple datacentres in the US and Europe from which it offers both private and public instances, as well as the ability to burst into its cloud platform using on-premise OpenStack-based clouds. The company is also a member of the HP Helion Network, which Bayter says gives it a growing channel and the ability to offer more granular data protection tools to customers.

“The OpenStack community has been trying to bake some of these capabilities into the core open source code, but the reality is it only achieved a sliver of these capabilities by May this year,” he said, alluding to the recent OpenStack Summit in Vancouver where new capabilities around federated cloud identity were announced and demoed.

“The other issue is simplicity. A year and a half ago, everyone was talking about OpenStack but nobody was buying it. Now service providers are buying but enterprises are not. Specifically with enterprises, the belief is that OpenStack will be easier and easier as time goes on, but I don’t think that’s necessarily going to be the case,” he explains.

“The core features may become a bit easier, but the whole solution may not – there are so many things going into it that it’s likely going to get clunkier, more complex, and more difficult to manage. It could become prohibitively complex.”

That’s not to say federated identity or cloud federation is a lost cause – on the contrary, Bayter says it’s the next horizon for cloud. The company is currently working on a set of technologies that would enable any organisation with infrastructure that lies significantly underutilised for long periods to rent out that infrastructure in a federated model.

Ormuco would verify and certify the infrastructure, and allocate a performance rating that would change dynamically along with the demands being placed on that infrastructure – like an AirBnB for OpenStack cloud users. Customers renting cloud resources in this market could also choose where their data is hosted.

“Imagine a university or a science lab that scales and uses its infrastructure at very particular times; the rest of the time that infrastructure is fairly underused. What if they could make money from that?”

There are still many unanswered questions – like whether the returns for renting organisations would justify the extra costs (i.e. energy) associated with running that infrastructure, where the burden of support lies (enterprises need solid SLAs for production workloads), and how that influences what kinds of workloads end up on rented kit – but the idea is interesting and definitely consistent with the line of thinking being promoted by the OpenStack community, among others, in open source cloud.

“Imagine the power, the size of that cloud,” says Bayter. “That’s the cloud that will win out.”

This interview was produced in partnership with Ormuco

Food retail, robotics, cloud and the Internet of Things

Ocado is developing a white-label grocery delivery service

With a varied and fast-moving supply chain, loads of stock moving quickly through warehouses, delivery trucks and stores, and an increasingly digital mandate, the food retail sector is unlike any other retail segment. Paul Clarke, director of technology at Ocado, a leading online food retailer, explains how cloud, robotics and the Internet of Things are increasingly at the heart of everything the company does.

Ocado started 13 years ago as an online supermarket where consumers could quickly and easily order food goods. It does not own or operate any brick-and-mortar stores, though it effectively competes with all other food retailers, in some ways now more than ever because of how supermarkets have evolved in the UK. Most of them offer online ordering and food delivery services.

But in 2013 the company struck a £216m deal with Morrisons that would see Ocado effectively operate as the Morrisons online food store – a shift from its previous strategy of offering a standalone end-to-end grocery service with its own brand on the front-end, and a move that would become central to its growth strategy moving forward. The day the Morrisons platform went live in early 2014 the company set to work on re-platforming the Ocado service and turning it into the Ocado Smart Platform (OSP), a white-label end-to-end grocery service that can be deployed by food retailers globally. Clarke was fairly tight-lipped about some of the details for commercial reasons, but suggested “there isn’t a continent” where the company is not currently in discussions with a major food retailer to deliver OSP.

The central idea behind this is that standing up a grocery delivery service – the technical infrastructure as well as support services – is hugely expensive for food retailers and involves lots of technical integration, so why not simply deploy a white label end-to-end service that will still retain the branding of said retailer but offer all the benefits?

Paul Clarke is speaking at the Cloud World Forum in London June 24-25. Click here to register!

“In new territories you don’t need the size of facilities that we have here in the Midlands. For instance, our site in the Midlands costs over £230m, and that is fine for the UK which has an established online grocery business and our customer base, but it wouldn’t fit well in a new territory where you’re starting from scratch, nor is there the willingness to spend such sums,” he explains.

The food delivery service operates in a hub-and-spoke model. The cloud service being developed by Ocado connects the ‘spokes’, smaller food depots (which could even be large food delivery trucks) to customer fulfilment centres, which are larger warehouses that house the majority of the stock (the ‘hub’).

The company is developing and hosting the service on a combination of AWS and Google’s cloud platforms – for the compute and data side, respectively.

“The breadth and depth of our estate is huge. You have robotics systems, vision systems, simulation systems, data science applications, and the number of different kinds of use cases we’re putting in the cloud is significant. It’s a microservices architecture that we’re building with hundreds of different microservices. A lot of emphasis is being put on security through design, and robust APIs so it can be integrated with third party products – it’s an end-to-end solution but many of those incumbents will have other supply chain or ERP solutions and will want to integrate it with those.”

AWS and Google complement each other well, he says. “We’re using most things that both of those companies have in their toolbox; there’s probably not much that we’re not using there.”

The warehousing element, including the data systems, will run on a private cloud in the actual product warehouses – low-latency, real-time control systems will run in the private cloud, but pretty much everything else will run in the public cloud.

The company is also looking at technologies like OpenStack, Apache Mesos and CoreOS because it wants to run as many workloads as possible in Linux containers. They’re more portable than VMs, and because of the variation in legislation and performance between the regions where it will operate, the company may have to change quite quickly whether it deploys certain workloads in a public or private cloud.
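The kind of placement decision Clarke describes can be sketched as a simple policy function. Everything below – the region rules, the data-residency flags and the workload names – is an assumption for illustration; Ocado’s actual policy engine is not public.

```python
# Sketch of a workload-placement decision driven by per-region rules.
# The regions, rules and workload attributes are invented for illustration;
# they are not Ocado's actual configuration.

REGION_RULES = {
    "uk":        {"data_must_stay_onshore": False, "public_cloud_available": True},
    "country_x": {"data_must_stay_onshore": True,  "public_cloud_available": True},
    "country_y": {"data_must_stay_onshore": True,  "public_cloud_available": False},
}

def placement(workload: dict, region: str) -> str:
    """Return 'private' or 'public' for a containerised workload.

    Low-latency control systems always stay in the private cloud (as the
    article notes); everything else goes public unless local rules forbid it.
    """
    rules = REGION_RULES[region]
    if workload.get("realtime_control"):
        return "private"
    if workload.get("holds_personal_data") and rules["data_must_stay_onshore"]:
        return "private"
    return "public" if rules["public_cloud_available"] else "private"

print(placement({"name": "crane-control", "realtime_control": True}, "uk"))            # private
print(placement({"name": "clickstream-etl", "holds_personal_data": True}, "country_x"))  # private
print(placement({"name": "route-planner"}, "uk"))                                        # public
```

Because containers make the same workload image deployable in either location, switching a workload between the two answers above is a scheduling change rather than a re-architecture – which is the portability argument Clarke makes.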

The Internet of Things and the Great Data Lakes

IoT is very important for the company in several areas. Its warehouses are like little IoT worlds all on their own, Clarke says, with lots of M2M, hundreds of kilometres of conveyor, and thousands of things on the move at any one time including automated cranes and robotics.

Then there’s all the data the company collects from drivers for things like route optimisation and operational improvement – wheel speed, tyre pressure, road speed, engine revs, fuel consumption, cornering performance – all of which is fed back to the company in real time and used to track driver performance.
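As a rough illustration of that telemetry stream, a single reading might carry fields like the ones below. Only the field list comes from the interview; the names, types and units are assumptions.

```python
# Illustrative schema for one telemetry reading from a delivery vehicle.
# The field list follows the interview; names, types and units are assumed.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DriverTelemetry:
    vehicle_id: str
    recorded_at: datetime
    wheel_speed_kph: float
    road_speed_kph: float
    tyre_pressure_bar: float
    engine_rpm: int
    fuel_consumption_lph: float   # litres per hour
    lateral_g: float              # proxy for cornering performance

reading = DriverTelemetry(
    vehicle_id="van-0421",
    recorded_at=datetime.now(timezone.utc),
    wheel_speed_kph=47.5,
    road_speed_kph=46.8,
    tyre_pressure_bar=4.1,
    engine_rpm=1850,
    fuel_consumption_lph=6.2,
    lateral_g=0.18,
)
print(reading)
```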

There’s also a big role for wearables in those warehouses. Clarke says that down the line wearables have the potential to help it improve safety and productivity (“we’re not there yet but there is so much potential”).

But where IoT can have the biggest impact in food retail, and where it’s most underestimated, Clarke explains, is the customer element: “This is where many companies underestimate the scale of transformation IoT is going to bring, the intersection of IoT and smart machines. In our space we see that in terms of the smart home, smart appliances, smart packaging, it’s all very relevant. The customers living in this world are going to demand this kind of smartness from all the systems they use, so it’s going to raise the bar for all the mobile apps and service we build.”

“Predictive analytics are going to play a big part there, as will machine learning, to help them do their shop up in our case, or knowing what they want before they even have a clue themselves. IoT has a very important part to play in that in terms of delivering that kind of information to the customer to the extent that they wish to share it,” he says.

But challenges – legal, technical and cultural – persist in this nascent space. One of them, largely technical and not insurmountable, is data management. The company has implemented a data lake built on Google BigQuery: it publishes a log of pretty much every business event onto a backbone that it persists through that service, along with the data exhaust from its warehouse logs, alerts, driver monitoring information, clickstream data and front-end supply chain information (captured at the point of order), and it uses technologies like Dataflow and Hadoop for number crunching.
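For a sense of what working against such an event log looks like, here is a minimal sketch using the standard google-cloud-bigquery client. The project, dataset and table names and the column schema are invented; this is not Ocado’s actual code.

```python
# Sketch: querying a business-event log held in BigQuery.
# Requires the google-cloud-bigquery package and application credentials.
# The project/dataset/table names and columns are invented for illustration.

from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT event_type, COUNT(*) AS occurrences
    FROM `my-project.data_lake.business_events`   -- hypothetical table
    WHERE DATE(event_time) = CURRENT_DATE()
    GROUP BY event_type
    ORDER BY occurrences DESC
    LIMIT 10
"""

# Run the query and print today's most frequent business events.
for row in client.query(sql).result():
    print(f"{row.event_type}: {row.occurrences}")
```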

Generally speaking, Clarke says, grocery is just fundamentally different from non-grocery retail in ways that have data-specific implications. “When you go to buy stationery or a printer cartridge you usually buy one or two items. With grocery there can often be upwards of 50 items, there are multiple suppliers and multiple people involved, sometimes at different places, often on different devices and different checkouts. So stitching that order, that journey, together is a challenge from a data perspective in itself.”

Bigger challenges in the IoT arena, where more unanswered questions lie, include security and identity management, discoverability, data privacy and standards – or the lack thereof. These are the problems that aren’t so straightforward.

“A machine is going to have to have an identity. That whole identity management question for these devices is key and so far goes unanswered. It’s also linked to discoverability. How do you find out what the device functions are? Discovery is going to get far too complex for humans. You get into a train station these days and there are already 40 different Wi-Fi networks, and hundreds of Bluetooth devices visible. So the big question is: How do you curate this, on a much larger scale, for the IoT world?”

“The type of service that creates parameters around who you’re willing to talk to as a device, how much you’re willing to pay for communications, who you want to be masked from, and so forth – that’s going to be really key, as well as how you implement this so that you don’t make a mistake and share the wrong kinds of information with the wrong device. It’s core to the privacy issue.”

“The last piece is standardisation. How these devices talk to one another – or don’t – is going to be key. What is very exciting is the role that all the platforms like Intel Edison, Arduino, BeagleBone have played in lowering the barrier by providing amazing Lego with which to prototype, and in some cases build these systems; it has allowed so many people to get involved,” he concluded.

Food retail doesn’t have a large industry-specific app ecosystem, which in some ways has benefited a company like Ocado. And as it makes the transition away from being the sole vendor of its product towards being a platform business, Clarke said the company will inevitably have to develop some new capabilities, from sales to support and consultancy, which it didn’t previously depend so strongly upon. But its core development efforts will only accelerate as it ramps up to launch the platform. It has 610 developers and is looking to expand to 750 by January next year across its main development centre in Hatfield and two others in Poland, one of which is being set up at the moment.

“I see no reason why it has to stop there,” he concludes.

Real-time cloud monitoring too challenging for most providers, TFL tech lead says

Reed says TFL wants to encourage greater use of its data

Getting solid data on what’s happening in your application in real time seems to be a fairly big challenge for most cloud services providers out there, explains Simon Reed, head of bus systems & technology at Transport for London (TFL).

TFL, the executive agency responsible for transport planning and delivery for the city of London, manages a slew of technologies designed to support over 10 million passenger journeys each day. These include back office ERP, routing and planning systems, mammoth databases tapped into by line-of-business applications and customer-facing apps (real-time travel planning apps and the journey planner website, for instance), as well as all the vehicle telematics, monitoring and tracking technologies.

A few years ago TFL moved its customer facing platforms – the journey planner, the TFL website, and the travel journey databases – over to a scalable cloud-based platform in a bid to ensure it could deal with massive spikes in demand. The key was to get much of that work completed before the Olympics, including a massive data syndication project so that app developers could more easily tap into all of TFL’s journey data.

“Around the Olympics you have this massive spike in traffic hitting our databases and our website, which required highly scalable front and back-ends,” Reed said. “Typically when we have industrial action or a snowstorm we end up with 10 to 20 times the normal use, often triggered in less than half an hour.”

Simon Reed is speaking at the Cloud World Forum in London June 24-25. Register for the event here.

The organisation processes bus arrival predictions for all 19,000 bus stops in London, which are constantly dumped into the cloud in a leaky-tap model; a simple cloud application allows subscribers to download the data in a number of formats, and APIs let developers build access to that data directly into applications. “As long as developers aren’t asking for predictions nanoseconds apart, the service doesn’t really break down – so it’s about designing that out and setting strict parameters on how the data can be accessed and at what frequency.”
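Reed’s point about “strict parameters on how the data can be accessed and at what frequency” is easy to picture from the subscriber’s side. The endpoint URL, query parameters and response shape below are placeholders rather than TFL’s actual API; the sketch simply shows a well-behaved client enforcing a minimum polling interval.

```python
# Sketch of a well-behaved subscriber polling a bus-prediction feed.
# The URL, parameters and JSON shape are placeholders, not TFL's real API;
# the point is the enforced minimum interval between requests.

import time
import requests

FEED_URL = "https://example.org/bus-predictions"   # placeholder endpoint
MIN_INTERVAL_SECONDS = 30                          # assumed polite polling rate

def poll_forever() -> None:
    while True:
        started = time.monotonic()
        resp = requests.get(FEED_URL, params={"stop": "490000001A"}, timeout=10)
        resp.raise_for_status()
        for prediction in resp.json().get("predictions", []):
            print(prediction.get("line"), prediction.get("expected_arrival"))
        # Never ask for predictions more often than the agreed frequency.
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, MIN_INTERVAL_SECONDS - elapsed))

if __name__ == "__main__":
    poll_forever()
```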

But Reed said gaining visibility into the performance of a cloud service out of the box seems to be a surprisingly difficult thing to do.

“I’m always stunned about how little information there is out of the box though when it comes to monitoring in the cloud. You can always add something in, but really, should I have to? Surely everyone else is in the same position where monitoring actual usage in real-time is fairly important. The way you often have to do this is to specify what you want and then script it, which is a difficult approach to scale,” he said. “You can’t help but think surely this was a ‘must-have’ when people had UNIX systems.”

Monitoring (and analytics) will be important for Reed’s team as they expand their use of the cloud, particularly within the context of the journey data TFL publishes. Reed said it’s likely those systems, while in a strong position currently, will see much more action as TFL pursues a strategy of encouraging use of the data outside the traditional transport or journey planning app context.

“What else can we do to that data? How can we turn it around in other ways? How can other partners do the same? For us it’s a question of exploiting the data capability we have and moving it into new areas,” he said.

“I’m still not convinced of the need to come out of whatever app you’re in – if you’re looking at cinema times you should be able to get the transportation route that gets you to the cinema on time, and not have to come out of the cinema listings app. I shouldn’t have to match the result I get in both apps in order to plan that event – it should all happen in one place. It’s that kind of thinking we’re currently trying to promote, to think more broadly than single purpose apps, which is where the market is currently.”

BMJ CTO: ‘Consumerisation of IT brings massive risks’

Sharon Cooper, CTO of BMJ

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Sharon Cooper, chief technology officer of BMJ, to discuss her views on the risks brought about by the consumerisation of IT.

What do you see as the most disruptive trend in enterprise IT today?

For me it is the consumerisation of IT – but not because I’m worried that the IT department is being put out of business, or because business users don’t know what tools they need to run their business. My concern about the disruption is that there is hidden risk, potentially massive cost and unknown danger, because many of today’s applications and tools are so deceptively simple to use that business users are not aware of things that might be critical to them – in part because the IT department always controlled everything and hid much of the complexity from them.

Tools are so easy to use that someone just signs up with their email address, uploads a large spreadsheet full of personal customer data, and then they leave – forgetting to tell anyone that they have that account, which might even be under their personal email address. So the company has no idea where its corporate assets are being stored, and when a customer asks to be removed from the company’s databases, nobody has any idea that the customer’s details are hidden away in locally used Google Drives, Dropboxes or other applications.

If nobody in the company has a view over what tools are used, by whom and what’s in them, is the company – or the individual employees using those tools – even aware of the risk? Business users are reasonably savvy people, but they probably won’t check the T&Cs or remember that extremely boring mandatory information governance training module they had to complete last year.

I really encourage people in my organisation to find good tools – SaaS, cloud-based, apps – but I ask them to make sure my team knows what they are, so we can give them a quick review to see whether they are genuine and not some sort of route for activists, and check over the T&Cs. I remind them that they are now totally responsible for any personal customer data or sensitive corporate information in those applications, and that they will be the ones impacted if the ICO comes calling.

What do you think the industry needs to work on in terms of cloud service evolution?

Trying to get legislation to catch up with the tech, or even be in the same century.

What does BMJ’s IT estate look like? What are the major services needing support?

We have a bit of everything, like most companies, although I believe we have made fairly significant moves into cloud and SaaS/managed services.

Our desktop IT, which is provided by our parent company, is very much traditional/on-premise, although we have migrated our part of the business to Google Apps for Business, which has dramatically transformed staff’s ability to work anywhere. We’re migrating legacy bespoke CRM systems to cloud-based solutions, and use a number of industry-specific managed services to provide the back office systems that we use directly, rather than via our parent.

Our business is in digital publishing and the tools that we use to create the IP and the products that drive our revenue are predominantly open source, cloud-based, and moving increasingly that way. Our current datacentre estate includes private cloud, with some public cloud, and we believe we will move more towards public over the next 2-3 years.

Can you describe some of the unique IT constraints or features particular to your company or the publishing sector? How are you addressing these?

Our parent company is in effect a UK trade union, and its needs are very, very different from ours; we were originally their publishing department and are now an international publisher with the majority of our revenues coming from outside the UK. There is some overlap but it is diminishing over time.

Our market is relatively slow to change in some ways, so our products are not always driven as fast by changes in technology or in the consumer IT markets.

Traditionally academic publishing is not seen as a huge target for attack, but the nature of what we publish – which some consider dangerous – has the potential to increase our risks above those of some of our peers, for example through controversies over the accuracy of medical treatments. We were the journal that produced the evidence that Andrew Wakefield’s research into MMR was wrong, and he has pursued us through the courts for years. If that story had broken today, would we have been a target of trolling or even hacktivists? We sell products into the Middle East that contain information on alcohol-related diseases, and we’ve been asked to remove them because there is supposedly no alcohol-related disease in those countries (we have not bowed to this government pressure).

As the use of knowledge at the point of care becomes ever more available via devices that can be used by anyone, anywhere, so grows the additional burden of medical device regulation and other challenges which, coming from a print publishing background, were never relevant before.

Are there any big IT initiatives on the horizon at BMJ? What are the main drivers of those?

We have probably under-invested in many applications over the last several years – a policy of really sweating an IT asset was in place for years – and we have a range of systems we will be replacing or consolidating over time. For example, we have five different e-commerce systems, and revenue is processed in more than three applications.

As with most companies a focus on data and analytics in all of its guises will be critical as we move forward.

Why do you think it’s important to attend Cloud World Forum?

It’s always good to see what vendors are offering and to hear what others have done to solve problems in their industries, which might have relevance to yours. Quite often it means you don’t feel quite so bad about your own situation when you hear other people’s tales.

Phil Carnelley, research director at IDC, on cloud, big data and the Internet of Things

Philip Carnelley shares his views on the big disrupters in IT

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Philip Carnelley, software research director at IDC Europe, to discuss his views on the most disruptive trends in IT today.

What do you see as the most disruptive trend in enterprise IT today?

This is a tricky one but I think it’s got to be the Internet of Things – extending the edge of the network, we’re expecting a dramatic rise in internet-connected cars, buildings, homes, sensors for health and industrial equipment, wearables and more.

IDC expects some 28 billion IoT devices to be operational by 2020. Amongst other things, this will change the way a lot of companies operate, changing from device providers to service providers, and allowing device manufacturers to directly sell to, and service, their end customers in the way they didn’t before.

What do you think is lacking in the cloud sector today?

There are two things. First, many organizations still have concerns about security, privacy and compliance in a cloud-centric world. The industry needs to make sure that organizations understand that these needs can be met by today’s solutions.

Second, while most people buy into the cloud vision, it’s often not easy to get there from where they are today. The industry must make it as easy as possible, with simple solutions that don’t require fleets of highly trained people to understand and implement.

Are you seeing more enterprises look to non-relational database tech for transactional uses?

Absolutely. We’re seeing a definite rise in the use of NoSQL databases, as IT and DB architects become much more ready to choose databases on a use-case basis rather than just going for the default choice. A good example is the use of Basho Riak at the National Health Service.

Is cloud changing the way mobile apps and services are developed in enterprises?

Yes, there is a change towards creating mobile apps and services that draw on ‘mobile back-end-as-a-service’ technologies for their creation and operation.

Why do you think it’s important to attend Cloud World Forum?

Because cloud is the fundamental platform for what IDC calls the 3rd Platform of Computing. We are in the middle of a complete paradigm shift to cloud-centric computing – with the associated technologies of mobile, social and big data – which is driving profound changes in business processes and even business models (think Uber, AirBnB, Netflix). Any company that wants to remain competitive in this new era needs to embrace these technologies, to learn more about them, in the way it develops and runs its operations for B2E, B2B and B2C processes.

Bring Your Own Encryption: The case for standards

BYOE is the new black

Being free to choose the most suitable encryption for your business seems like a good idea. But it will only work in a context of recognised standards across encryption systems and providers’ security platforms. Since the start of the 21st century, security has emerged from scare-story status to become one of IT users’ biggest issues – as survey after survey confirms. Along the way a number of uncomfortable lessons are still being learned.

The first lesson is that security technology must always be considered in a human context. No one still believes in a technological fix that will put an end to all security problems, because time and again we hear news of new types of cyber attack that bypass sophisticated and secure technology by targeting human nature – from alarming e-mails ostensibly from official sources, to friendly social invitations to share a funny download; from a harmless-looking USB stick ‘accidentally’ dropped by the office entrance, to the fake policeman demanding a few personal details to verify that you are not criminally liable.

And that explains the article’s heading: a balance must be struck between achieving the desired level of protection and keeping all protection procedures quick and simple. Every minute spent making things secure is a minute lost to productivity – so the heading could equally have said “balancing security with efficiency”.

The second lesson still being learned is never to trust instinct fully in security matters. It is instinctive to obey instructions that appear to come from an authoritative source, or to respond in an open, friendly manner to a friendly approach – and those are just the sort of instincts that are exploited by IT scams. Instincts can open us to attack, and they can also evoke inappropriate caution.

In the first years of major cloud uptake there was the oft-repeated advice to business that the sensible course would be to use public cloud services to simplify mundane operations, but that critical or high priority data should not be trusted to a public cloud service but kept under control in a private cloud. Instinctively this made sense: you should not allow your secrets to float about in a cloud where you have no idea where they are stored or who is in charge of them.

The irony is that the cloud – being so obviously vulnerable and inviting to attackers – is constantly being reinforced with the most sophisticated security measures: so data in the cloud is probably far better protected than any SME could afford to secure its own data internally. It is like air travel: because flying is instinctively scary, so much has been spent to make it safe that you are less likely to die on a flight than you are driving the same journey in the “safety” of your own car. The biggest risk in air travel is in the journey to the airport, just as the biggest risk in cloud computing lies in the data’s passage to the cloud – hence the importance of a secure line to a cloud service.

So let us look at encryption in the light of those two lessons. Instinctively it makes sense to keep full control of your own encryption and keys, rather than let them get into any stranger’s hands – so how far do we trust that instinct, bearing in mind the need also to balance security against efficiency?

BYOK

Hot on the heels of BYOD – or “Bring Your Own Device” to the workplace – comes the acronym for Bring Your Own Key (BYOK).

The idea of encryption is as old as the concept of written language: if a message might fall into enemy hands, then it is important to ensure that they will not be able to read it. We have recently been told that US forces used Native American communicators in WW2 because the chances of anyone in Japan understanding their language were near zero. More typically, encryption relies on some sort of “key” to unlock and make sense of the message it contains, and that transfers the problem of security to a new level: now the message is secure, the focus shifts to protecting the key.

In the case of access to cloud services: if we are encrypting data because we are worried about its security in an unknown cloud, why then should we trust the same cloud to hold the encryption keys?

Microsoft, for instance, recently announced a new solution to this dilemma using HSMs (Hardware Security Modules) within its Windows Azure cloud – so that an enterprise customer can use its own internal HSM to produce a master key that is then transmitted to the HSM within the Windows Azure cloud. This provides secure encryption when in the cloud, but it also means that not even Microsoft itself can read the data, because it does not have the master key hidden in the enterprise HSM.
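The BYOK pattern is essentially envelope encryption: the master key never leaves the customer’s HSM, and only wrapped (encrypted) data keys travel with the data. The sketch below imitates that flow in software with the pyca/cryptography library; in a real deployment the master key would live inside an HSM rather than in memory, and the key names here are purely illustrative.

```python
# Sketch of the envelope-encryption idea behind BYOK, using the
# pyca/cryptography library. In a real BYOK setup the master key lives in
# the customer's HSM; here it is an in-memory stand-in for illustration.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 1. Master key ("held in the enterprise HSM") - never handed to the provider.
master_key = AESGCM.generate_key(bit_length=256)

# 2. A data-encryption key (DEK) is generated per dataset or workload.
dek = AESGCM.generate_key(bit_length=256)

# 3. The DEK is wrapped (encrypted) under the master key before it travels.
wrap_nonce = os.urandom(12)
wrapped_dek = AESGCM(master_key).encrypt(wrap_nonce, dek, b"dek-wrap")

# 4. Data is encrypted with the DEK and stored in the cloud alongside the
#    wrapped DEK; the provider can read neither without the master key.
data_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(data_nonce, b"member records", None)

# 5. Only the key owner can unwrap the DEK and recover the plaintext.
recovered_dek = AESGCM(master_key).decrypt(wrap_nonce, wrapped_dek, b"dek-wrap")
plaintext = AESGCM(recovered_dek).decrypt(data_nonce, ciphertext, None)
assert plaintext == b"member records"
print("round trip ok")
```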

It is not so much that the enterprise cannot trust Microsoft to protect its data from attack; it is more to do with growing legal complexities. In the wake of the Snowden revelations, it is becoming clear that even the most well-protected data might be at risk from a government or legal subpoena demanding that its content be revealed. Under this BYOK system, however, Microsoft cannot be forced to reveal the enterprise’s secrets because it cannot access them itself, and the responsibility lies only with the owner.

This is increasingly important because of other legal pressures that insist on restricting access to certain types of data. A government can, for example, forbid anyone from allowing data of national importance to leave the country – not a simple matter in a globally connected IP network. There are also increasing legal pressures on holders of personal data to guarantee levels of privacy.

Instinctively it feels a lot more secure to manage your own key and use BYOK instead of leaving it to the cloud provider. As long as that instinct is backed by a suitable and strict in-house HSM-based security policy, it can be trusted.

BYOE

BYOK makes the best of the cloud provider’s encryption offering, by giving the customer ultimate control over its key. But is the customer happy with the encryption provided?

Bearing in mind that balance between security and efficiency, you might prefer a higher level of encryption than that used by the cloud provider’s security system, or you might find the encryption mechanism is adding latency or inconvenience and would rather opt for greater nimbleness at the cost of lighter encryption. In this case you could go a step further and employ your own encryption algorithms or processes. Welcome to the domain of BYOE (Bring Your Own Encryption).

Again, we must balance security against efficiency. Take the example of an enterprise using the cloud for deep mining its sensitive customer data. This requires so much computing power that only a cloud provider can do the job, and that means trusting private data to be processed in a cloud service. This could infringe regulations, unless the data is protected by suitable encryption. But how can the data be processed if the provider cannot read it?

Taking the WW2 example above: if a Japanese wireless operator was asked to edit the Native American message so a shortened version could be sent to HQ for cryptanalysis, any attempt to edit an unknown language would create gobbledygook, because translation is not a “homomorphic mapping”.

Homomorphic encryption means that one can perform certain processes on the encrypted data, and the same processes will be performed on the source data without any need to de-crypt the encrypted data. This usually implies arithmetical processes: so the data mining software can do its mining on the encrypted data file while it remains encrypted, and the output data, when decrypted, will be the same output as if the data had been processed without any intervening encryption.
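Fully homomorphic schemes are heavyweight, but the underlying idea can be shown with textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The tiny key below is a classic teaching example, offers no real security, and is only meant to make the property concrete.

```python
# Toy illustration of a homomorphic property using textbook RSA
# (multiplicative only, insecure key size). It shows that work done on
# ciphertexts carries over to the plaintexts once decrypted.

n, e, d = 3233, 17, 2753        # classic toy RSA key: n = 61 * 53

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 12
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % n

# ...and the decrypted result equals the product of the plaintexts.
assert decrypt(c_product) == m1 * m2
print(decrypt(c_product))       # 84
```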

It is like operating one of those automatic coffee vendors that grinds the beans, heats the water and adds milk and sugar according to which button was pressed: you do not know what type of coffee bean is used, whether the water is tap, filtered or spring, or whether the milk is whole cream, skimmed or soya. All you know is that what comes out will be a cappuccino with no sugar. In the data mining example: what comes out might be a neat spreadsheet summary of customers’ average buying habits based on millions of past transactions, without a single personal transaction detail being visible to the cloud provider.

The problem with the cloud provider allowing users to choose their own encryption is that the provider’s security platform has to be able to support the chosen encryption system. As an interim measure, the provider might offer a choice from a range of encryption offerings that have been tested for compatibility with the cloud offering, but that still requires one to trust another’s choice of encryption algorithms. A full homomorphic offering might be vital for one operation, but a waste of money and effort for a whole lot of other processes.

The call for standards

So what is needed for BYOE to become a practical solution is a global standard cloud security platform with which any encryption offering can be registered for support. The customer chooses a cloud offering for its services and for its certified “XYZ standard” security platform, then goes shopping for an “XYZ certified” encryption system that matches its particular balance between security and practicality.

Just as in the BYOD revolution, this decision need not be made at an enterprise level, or even by the IT department. BYOE, if sufficiently standardised, could become the responsibility of the department, team or individual user: just as you can bring your own device to the office, you could ultimately take personal responsibility for your own data security.

What if you prefer to use your very own implementation of your own encryption algorithms? All the more reason to want a standard interface! This approach is not so new for those of us who remember the Java J2EE Crypto library – as long as we complied with the published interfaces, anyone could use their own crypto functions. This “the network is the computer” ideology becomes all the more relevant in the cloud age. As the computer industry has learned over the past 40 years, commonly accepted standards and architecture (for example the Von Neumann model or J2EE Crypto) play a key role in enabling progress.
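The “XYZ standard” idea – any conforming encryption module plugs into any conforming platform – is the same contract-first pattern the author recalls from the J2EE crypto libraries. A minimal sketch of such an interface might look like the following; the registry and the Fernet-backed provider are illustrative assumptions, not an existing standard.

```python
# Sketch of a pluggable encryption interface: the platform codes against the
# abstract contract, and any provider that implements it can be registered.
# The registry and provider names are illustrative, not an existing standard.

from abc import ABC, abstractmethod
from cryptography.fernet import Fernet

class EncryptionProvider(ABC):
    """The 'XYZ standard' contract a cloud platform would code against."""

    @abstractmethod
    def encrypt(self, plaintext: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, ciphertext: bytes) -> bytes: ...

PROVIDERS: dict[str, EncryptionProvider] = {}

def register(name: str, provider: EncryptionProvider) -> None:
    PROVIDERS[name] = provider

class FernetProvider(EncryptionProvider):
    """One possible certified implementation, backed by Fernet (AES-CBC + HMAC)."""

    def __init__(self) -> None:
        self._fernet = Fernet(Fernet.generate_key())

    def encrypt(self, plaintext: bytes) -> bytes:
        return self._fernet.encrypt(plaintext)

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self._fernet.decrypt(ciphertext)

register("fernet", FernetProvider())

# The platform only ever sees the interface, so swapping providers becomes a
# registration change rather than a re-architecture.
chosen = PROVIDERS["fernet"]
token = chosen.encrypt(b"customer record")
assert chosen.decrypt(token) == b"customer record"
```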

BYOE could prove every bit as disruptive as BYOD – unless the industry can ensure that users choose their encryption from a set of globally sanctioned and standardised encryption systems or processes. If business is to reap the full benefits promised by cloud services, it must have the foundation of such an open cloud environment.

Written by Dr. Hongwen Zhang, chair security working group, CloudEthernet Forum.

Q&A with Mark Evans, head of IT, RLB

As we approach Cloud World Forum in London this June, BCN had the opportunity to catch up with one of the conference speakers, Mark Evans, head of IT at global property and construction practice Rider Levett Bucknall (RLB), to discuss supporting BYOD, the need for standards in the cloud sector and the impact of working with large data models on the technology choices the firm has to make.


What do you see as the most disruptive trend in enterprise IT today?

I’m not entirely sure that the most disruptive trend in enterprise IT is entirely technical. Admittedly, the driving impetus for change is coming from technology, but it is being driven by non-IT people who are equipping their homes, cars and any one of a multitude of other environments with technology which works for them. The disruption manifests itself in the attitude which is brought to business from these domestic environments; people no longer see the bastion of “Corporate IT” as unassailable as it once was, before the commoditisation of IT equipment became the norm. Domestic procurement cycles are driven in a different manner to those of any business – it’s what the likes of Apple thrive on.

There’s more of a “heart” aspiration than a “head” decision when it comes to buying IT at home. Let’s be honest: who, at home, works out the depreciation of an asset when a loved one is being tugged at by slick marketing and peer pressure? Maybe I’m a misanthrope, but this sort of pressure has a knock-on effect with a lot of people, and they seek the flexibility, the performance, the ease of use and (let’s be honest) the flashiness of new toys at work. The person with the keys to the “toy box”, the erstwhile IT director, is seen as a barrier to that oft-quoted, rarely well-informed concept of ‘agility’.

So… BYOD. People bring their home kit to work and expect it to work and to offer them an ‘edge’. I think the disruption is bigger than Dave from Accounts bringing in his shiny new laptop (with added speed stripes). It is the expectation that this is acceptable in the face of business-wide legal constraints of liability, compliance and business planning – the directors of a business set the rules and this new, almost frivolous attitude to the complexity and requirements of corporate IT is a “wolf in sheep’s clothing” in terms of the risk it brings to a business. Where do I sit on this? I say, “bring it on”.


What do you think the industry needs to work on in terms of cloud service evolution?

Portability. Standards. Standards of portability. I still believe that there is a general complicity between vendors and purchasers to create a “handcuffs” relationship (“Fifty Shades of Big Blue”?) which is absolutely fine in the early part of a business relationship as it provides a predictable environment from the outset, but this predictability can become moribund and in an era where business models flex and morph at previously alarming rates, the “handcuffs” agreement can become shackles. If the agreement is on a month-by-month basis, it is rarely easy to migrate across Cloud platforms. Ignoring the potential volumes of data which may need to be moved, there is no lingua franca for Cloud services to facilitate a “switch on/switch off” ease-of-migration one might expect in the Cloud environment, predicated as it is on ease-of-use and implementation.

Data tends to move slowly in terms of development (after all, that’s where the value is), so maybe as an industry we need to consider a Data Cloud Service which doesn’t require massive agility, but a front-end application environment which is bound by standards of migratability (is that a word? If it isn’t – it should be!) to offer front-end flexibility against a background of data security and accessibility. In that way, adopting new front-end processes would be easier as there would be no requirement to haul terabytes of data across data centres. Two different procurement cycles, aligned to the specific vagaries of their environments.


Can you describe some of the unique IT constraints or features particular to your sector?

Acres of huge data structures. When the major software supplier in your industry (Autodesk and construction, respectively) admits that the new modelling environment for buildings goes beyond the computing and data capability in the current market, there are alarm bells. This leads to an environment where the client front end ‘does the walking’ and the data stays in a data centre or the Cloud. Models which my colleagues need to use have a “starting price” of 2GB and escalate incredibly as the model seeks to more accurately represent the intended construction project. In an environment where colleagues would once carry portfolios of A1 or A0 drawings, they now need portable access to drawings which are beyond the capabilities of even workstation-class laptop equipment. Construction and, weirdly enough, Formula One motorsport are pushing the development of Cloud and virtualisation to accommodate these huge, data-rich, often highly graphical models. Have you ever tried 3D rendering on a standard x64 VMware or Hyper-V box? We needed Nvidia to sort out the graphics environment in the hardware and even that isn’t the ‘done deal’ we had hoped.


Is the combination of cloud and BYOD challenging your organisation from a security perspective? What kind of advice would you offer to other enterprises looking to secure their perimeter within this context?

Not really. We have a strong, professional and pragmatic HR team who have put in place the necessary constraints to ensure that staff are fully aware of their responsibilities in a BYOD environment. We have backed this up with decent MDM control. Beyond that? I honestly believe that “where there’s a will, there’s a way” and that if MI5 operatives can leave laptops in taxis we can’t legislate for human frailties and failings. Our staff know that there is a ‘cost of admission’ to the BYOD club and it’s almost a no-brainer; MDM controls their equipment within the corporate sphere of influence and their signature on a corporate policy then passes on any breaches of security to the appropriate team, namely, HR.

My advice to my IT colleagues would be – trust your HR team to do their job (they are worth their weight in gold and very often under-appreciated), but don’t give them a ‘hospital pass’ by not doing everything within your control to protect the physical IT environment of BYOD kit.


What’s the most challenging part about setting up a hybrid cloud architecture?

Predicting the future. It’s so, so, so easy to map the current operating environment in your business to a hybrid environment (“They can have that, we need to keep this…”) but constraining the environment by creating immovable and impermeable glass walls at the start of the project is an absolutely, 100 per cent easy way to lead to frustration with a vendor in future and we must be honest and accept that by creating these glass walls we were the architect of our own demise. I can’t mention any names, but a former colleague of mine has found this out to his company’s metaphorical and bottom-line cost. They sought to preserve their operating environment in aspic and have since found it almost soul-destroying to start all over again to move to an environment which supported their new aspirations.

Reading between the lines, I believe that they are now moving because there is a stubbornness on both sides and my friend’s company has made it more of a pain to retain their business than a benefit. They are constrained by a mindset, a ‘groupthink’ which has bred bull-headedness and very constrained thinking. An ounce of consideration of potential future requirements could have built in some considerable flexibility to achieve the aims of the business in changing trading environments. Now? They are undertaking a costly migration in the midst of a potentially high-risk programme of work; it has created stress and heartache within the business which might have been avoided if the initial move to a hybrid environment had considered the future, rather than almost constrained the business to five years of what was a la mode at the time they migrated.


What’s the best part about attending Cloud World Forum?

Learning that my answers above may need to be re-appraised because the clever people in our industry have anticipated and resolved my concerns.


CenturyLink acquires Orchestrate to strengthen DBaaS offering

CenturyLink has acquired Orchestrate to strengthen its database-as-a-service proposition

CenturyLink has acquired Orchestrate to strengthen its database-as-a-service proposition

CenturyLink has acquired Orchestrate, a database-as-a-service provider specialising in delivering fully managed, high performance, fault tolerant NoSQL database technologies.

CenturyLink said that Orchestrate, which partners with AWS on public cloud hosting for its clients’ datasets, will help bolster its cloud-based database and managed services propositions.

“CenturyLink’s customers, like most enterprises, are expressing interest in solutions that help them meet the performance, scalability and agile development needs of large-scale big data analytics,” said Glen Post, chief executive officer and president of CenturyLink.

“The Orchestrate database service’s ease of use and ability to support multiple database technologies have emerged as key differentiators that we are eager to offer our customers through the CenturyLink Cloud platform,” Post said.

As for drivers of the acquisition, the company said growing use cases around the Internet of Things are creating more demand for fully managed NoSQL technologies. Orchestrate offers a managed service that abstracts away much of the underlying hardware and database-specific coding and delivers an API that enables developers to store and query JSON data easily.
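As a rough illustration of what a managed JSON database service of this kind looks like from a developer’s point of view, the snippet below stores and then searches a JSON document over REST. The base URL, paths and authentication scheme are placeholders rather than Orchestrate’s documented API.

```python
# Sketch of using a managed JSON database service over REST, in the style
# described above. The base URL, paths and auth are placeholders, not
# Orchestrate's documented API.

import requests

BASE = "https://api.example-dbaas.com/v0"   # placeholder endpoint
AUTH = ("my-api-key", "")                    # placeholder credentials

# Store a JSON document under a collection/key pair.
doc = {"name": "Ada Lovelace", "interests": ["iot", "analytics"]}
resp = requests.put(f"{BASE}/users/ada", json=doc, auth=AUTH, timeout=10)
resp.raise_for_status()

# Query the collection with a full-text search expression.
resp = requests.get(
    f"{BASE}/users",
    params={"query": "interests:iot"},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result)
```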

The acquisition will see the Orchestrate services team join CenturyLink’s product development and technology organisation, with Orchestrate co-founders Antony Falco and Ian Plosker as well as vice president of engineering Dave Smith joining the company.

“CenturyLink Cloud features one of the most sophisticated service infrastructures in the market, with a great interface and lots of options for managing complex workflow and third-party applications in the cloud,” Falco said. “Orchestrate’s database service takes the same approach to delivering cost efficiency and ease of use. Enterprise customers are increasingly expecting one global platform to provide these services.”