All posts by Jonathan Brandon

Hybrid cloud issues are cultural first, technical second – Ovum

CIOs are still struggling with their hybrid cloud strategies

This week has seen a number of hybrid cloud deals that suggest the industry is making significant progress delivering the platforms, services and tools necessary to make hybrid cloud practical. But if anything they also serve as a reminder that IT will forever be multimodal, which creates challenges that begin with people, not technology, explains Roy Illsley, principal analyst of infrastructure solutions at Ovum.

There has been no shortage of hybrid cloud deals this week.

Rackspace and Microsoft announced a deal that would see the hosting and cloud provider expand its Fanatical Support to Microsoft Azure-based hybrid cloud platforms.

Google announced both that it would support Windows technologies on its cloud platform and that it would formally sponsor the OpenStack Foundation – a move aimed at supporting container portability between multiple cloud platforms.

HP announced it would expand its cloud partner programme to include CenturyLink, which runs much of its cloud platform on HP technology, in a move aimed at bolstering HP’s hybrid cloud business and CenturyLink’s customer reach.

But one of the more interesting hybrid cloud stories this week came from the enterprise side of the industry. Copper and gold producer Freeport-McMoRan announced it is embarking on a massive overhaul of its IT systems. In a bid to become more agile, the firm said it would deploy its entire application estate on a combination of private and public cloud platforms – though, somewhat ironically, the company said the entire project would wrap up in five years (which, being pragmatic about IT overhauls, could mean far later).

“The biggest challenge with hybrid cloud isn’t the technology per se – okay, so you need to be able to have one version of the truth, one place where you can manage most of the platforms and applications, one place where to the best of your abilities you can orchestrate resources, and so forth,” Illsley explains.

Of course you need all of those things, he says. There will be some systems that won’t fit into that technology model and will likely be left out (e.g. mainframes). But there are tools out there to fit current hybrid use cases.

“When most organisations ‘do’ hybrid cloud, they tend to choose where their workloads will sit depending on their performance needs, scaling needs, cost and application architecture – and then the workloads sit there, with very little live migration of VMs or containers. Managing them while they sit there isn’t the major pain point. It’s about the business processes; it’s the organisational and cultural shifts in the IT department that are required in order to manage IT in a multimodal world.”

“What’s happening in hybrid cloud isn’t terribly different from what’s happening with DevOps. You have developers and you have operations, and sandwiching them together in one unit doesn’t change the fact that they look at the world – and the day-to-day issues they need to manage or solve – in their own developer or operations-centric ways. In effect they’re still siloed.”

The way IT is financed can also create headaches for CIOs intent on delivering a hybrid cloud strategy. Typically IT is funded in an ‘everyone pitches into the pot’ sort of way, but one of the things that led to the rise of cloud in the first place is lines of business allocating their own budgets and going out to procure their own services.

“This can cause both a systems challenge – shadow IT and the security, visibility and management issues that come with that – and a cultural challenge, one where LOB heads see little need to fund a central organisation that is deemed too slow or inflexible to respond to customer needs. So as a result, the central pot doesn’t grow.”

While vendors continue to ease hybrid cloud headaches on the technology front with resource and financial (i.e. chargeback) management tools, app stores or catalogues, and standardised platforms that bridge the on-prem and public cloud divide, it’s less likely the cultural challenges associated with hybrid cloud will find any straightforward solutions in the short term.

“It will be like this for the next ten or fifteen years at least. And the way CIOs work with the rest of the business as well as the IT department will define how successful that hybrid strategy will be, and if you don’t do this well then whatever technologies you put in place will be totally redundant,” Illsley says.

CSA lends prototype compliance tool to six-year cloud security project

The CSA is part of the STRATUS project, a six-year cybersecurity project

The Cloud Security Alliance (CSA) said this week that it is lending a prototype data auditing and compliance regulation tool to the STRATUS initiative, a six-year multi-million dollar cybersecurity project funded by New Zealand’s Ministry of Business, Innovation, and Employment.

STRATUS, which stands for Security Technologies Returning Accountability, Transparency and User-centric Services in the Cloud, is a project led by the University of Waikato that intends to develop a series of security tools, techniques and capabilities to give cloud users more control over how they secure the cloud services they use.

As part of the project the CSA showed how cloud data governance could be automated by applying auditing guidelines (CSA Cloud Control Matrix, ISO standards, etc.) and compliance regulations using a recently developed online tool.

The organisation, which is leading the data governance and accountability subproject within STRATUS, said it would also help support STRATUS’ commercialisation efforts.

“STRATUS’ approach to research commercialisation is different from typical scientific research grants,” said Dr. Ryan Ko, principal investigator of STRATUS, and CSA APAC research advisor.

“STRATUS understands that for cloud security innovation to reach a global audience, it will require a platform which will allow these cutting-edge cloud services to quickly align to global best practices and requirements – a core CSA strength given its strong research outputs such as the Cloud Controls Matrix and the Cloud Data Governance Working Group,” Ko said.

Aloysius Cheang, managing director for CSA APAC, said: “We have developed a prototype tool based on our work so far, which has received positive reviews. In addition, we are working to connect STRATUS and New Zealand to the CSA eco-system through our local chapter. More importantly, we are beginning to see some preliminary results of the efforts to connect the dots to commercialisation efforts as well as standardisation efforts.”

The organisation reckons it should be able to show off the “fruit of these efforts” in November this year.

Living in a hybrid world: From public to private cloud and back again

Orlando Bayter, chief exec and founder of Ormuco

The view often propagated by IT vendors is that public cloud is already capable of delivering a seamless extension between on-premise private cloud platforms and public, shared infrastructure. But Orlando Bayter, chief executive and founder of Ormuco, says the industry is only at the outset of delivering a deeply interwoven fabric of private and public cloud services.

Demand for that kind of seamlessness hasn’t been around for very long, admittedly. It’s no great secret that in the early days of cloud, demand for public cloud services was spurred largely by the slow pace at which traditional IT organisations often move. As a result, every time a developer wanted to build an application they would simply swipe the credit card and go, billing back to IT at some later point. So the first big use case for hybrid cloud emerged when developers needed to bring their apps back in-house, where they would live and probably die.

But as the security practices of cloud service providers have improved, along with enterprise confidence in cloud more broadly, cloud bursting – the ability to use a mix of public and private cloud resources to fit the utilisation needs of an app – has become more widely talked about. It is usually cost-prohibitive and far too time-consuming to scale private cloud resources quickly enough to meet the changing demands of today’s increasingly web-based apps, so cloud bursting has become the natural next step in the hybrid cloud world.
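
To make the bursting idea concrete, here is a minimal sketch of the placement logic involved. The threshold, the utilisation check and the placement calls are all hypothetical stand-ins for whatever monitoring and orchestration tooling an organisation actually runs.

```python
# Illustrative cloud-bursting placement logic. All functions and the
# threshold below are hypothetical stand-ins, not a real scheduler.

BURST_THRESHOLD = 0.80  # burst once private utilisation passes 80%


def private_cloud_utilisation() -> float:
    """Hypothetical: return current private cloud utilisation (0.0-1.0)."""
    return 0.85


def place_workload(workload: str, target: str) -> None:
    """Hypothetical: hand the workload to the named platform's scheduler."""
    print(f"scheduling {workload} on {target}")


def schedule(workload: str) -> None:
    # Keep the workload on-premise while there is headroom; otherwise
    # burst it out to public cloud capacity to absorb the spike.
    if private_cloud_utilisation() < BURST_THRESHOLD:
        place_workload(workload, "private-cloud")
    else:
        place_workload(workload, "public-cloud")


if __name__ == "__main__":
    schedule("web-frontend")
```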

Orlando will be speaking at the Cloud World Forum in London June 24-25. Click here to register.

There are, however, still precious few platforms that offer this kind of capability in a fast and dynamic way. Open source projects like OpenStack, or more proprietary variants like VMware’s vCloud or Microsoft’s Azure Stack (and all the tooling around these platforms or architectures), are at the end of the day all being developed with a view towards supporting the deployment and management of workloads that can exist in as many places as possible, whether on-premise or in a cloud vendor’s datacentre.

“Let’s say as a developer you want to take an application you’ve developed in a private cloud in Germany and move it onto a public cloud platform in the US. Even for the more monolithic migration jobs you’re still going to have to do all sorts of re-coding, re-mapping and security upgrades, to make the move,” Bayter says.

“Then when you actually go live, and have apps running in both the private and public cloud, the harsh reality is most enterprises have multiple management and orchestration tools – usually one for the public cloud and one for the private; it’s redundant, and inefficient.”

Ormuco is one company trying to solve these challenges. It has built a platform based on HP Helion OpenStack that offers both private and public instances, which can be managed through a single pane of glass; the company has built its own layer in between to abstract the resources underneath.

It has multiple datacentres in the US and Europe from which it offers both private and public instances, as well as the ability to burst into its cloud platform using on-premise OpenStack-based clouds. The company is also a member of the HP Helion Network, which Bayter says gives it a growing channel and the ability to offer more granular data protection tools to customers.

“The OpenStack community has been trying to bake some of these capabilities into the core open source code, but the reality is it only achieved a sliver of these capabilities by May this year,” he said, alluding to the recent OpenStack Summit in Vancouver where new capabilities around federated cloud identity were announced and demoed.

“The other issue is simplicity. A year and a half ago, everyone was talking about OpenStack but nobody was buying it. Now service providers are buying but enterprises are not. Specifically with enterprises, the belief is that OpenStack will be easier and easier as time goes on, but I don’t think that’s necessarily going to be the case,” he explains.

“The core features may become a bit easier but the whole solution may not; there are so many things going into it that it’s likely going to get clunkier, more complex, and more difficult to manage. It could become prohibitively complex.”

That’s not to say federated identity or cloud federation is a lost cause – on the contrary, Bayter says it’s the next horizon for cloud. The company is currently working on a set of technologies that would enable any organisation with infrastructure that lies significantly underutilised for long periods to rent out that infrastructure in a federated model.

Ormuco would verify and certify the infrastructure, and allocate a performance rating that would change dynamically along with the demands being placed on that infrastructure – like an Airbnb for OpenStack cloud users. Customers renting cloud resources in this market could also choose where their data is hosted.

“Imagine a university or a science lab that scales and uses its infrastructure at very particular times; the rest of the time that infrastructure is fairly underused. What if they could make money from that?”

There are still many unanswered questions – like whether the returns for renting organisations would justify the extra costs (e.g. energy) associated with running that infrastructure, or where the burden of support lies (enterprises need solid SLAs for production workloads) and how that influences what kinds of workloads end up on rented kit – but the idea is interesting and definitely consistent with the line of thinking being promoted by the OpenStack community, among others in open source cloud.

“Imagine the power, the size of that cloud,” says Bayter. “That’s the cloud that will win out.”

This interview was produced in partnership with Ormuco

Food retail, robotics, cloud and the Internet of Things

Ocado is developing a white-label grocery delivery service

With a varied and fast-moving supply chain, loads of stock moving quickly through warehouses, delivery trucks and stores, and an increasingly digital mandate, the food retail sector is unlike any other retail segment. Paul Clarke, director of technology at Ocado, a leading online food retailer, explains how cloud, robotics and the Internet of Things are increasingly at the heart of everything the company does.

Ocado started 13 years ago as an online supermarket where consumers could quickly and easily order food goods. It does not own or operate any brick-and-mortar stores, though it effectively competes with all other food retailers, in some ways now more than ever because of how supermarkets have evolved in the UK. Most of them offer online ordering and food delivery services.

But in 2013 the company struck a £216m deal with Morrisons that would see Ocado effectively operate as the Morrisons online food store, a shift from its previous strategy of offering a standalone end-to-end grocery service with its own brand on the front-end – and a move that would become central to its growth strategy moving forward. The day the Morrisons platform went live in early 2014 the company set to work on re-platforming the Ocado service and turning it into the Ocado Smart Platform (OSP), a white-label end-to-end grocery service that can be deployed by food retailers globally. Clarke was fairly tight-lipped about some of the details for commercial reasons, but suggested “there isn’t a continent where the company is not currently in discussions” with major food retailers to deliver OSP.

The central idea behind this is that standing up a grocery delivery service – the technical infrastructure as well as support services – is hugely expensive for food retailers and involves lots of technical integration, so why not simply deploy a white label end-to-end service that will still retain the branding of said retailer but offer all the benefits?

Paul Clarke is speaking at the Cloud World Forum in London June 24-25. Click here to register!

“In new territories you don’t need the size of facilities that we have here in the Midlands. For instance, our site in the Midlands costs over £230m, and that is fine for the UK which has an established online grocery business and our customer base, but it wouldn’t fit well in a new territory where you’re starting from scratch, nor is there the willingness to spend such sums,” he explains.

The food delivery service operates in a hub-and-spoke model. The cloud service being developed by Ocado connects the ‘spokes’, smaller food depots (which could even be large food delivery trucks) to customer fulfilment centres, which are larger warehouses that house the majority of the stock (the ‘hub’).

The company is developing and hosting the service on a combination of AWS and Google’s cloud platforms – for the compute and data side, respectively.

“The breadth and depth of our estate is huge. You have robotics systems, vision systems, simulation systems, data science applications, and the number of different kinds of use cases we’re putting in the cloud is significant. It’s a microservices architecture that we’re building with hundreds of different microservices. A lot of emphasis is being put on security through design, and robust APIs so it can be integrated with third party products – it’s an end-to-end solution but many of those incumbents will have other supply chain or ERP solutions and will want to integrate it with those.”

AWS and Google complement each other well, he says. “We’re using most things that both of those companies have in their toolbox; there’s probably not much that we’re not using there.”

The warehousing element, including the data systems, will run on a private cloud in the actual product warehouses – low-latency, real-time control systems will stay in the private cloud – but pretty much everything else will run in the public cloud.

The company is also looking at technologies like OpenStack, Apache Mesos and CoreOS because it wants to run as many workloads as possible in Linux containers: they are more portable than VMs, and because of the variation between the regions where it will operate (in legislation and performance) the company may have to change quite quickly whether it deploys certain workloads in a public or private cloud.
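
As a rough illustration of why containers appeal here, the sketch below runs the same image against two different Docker hosts – say one on-premise and one in a public cloud – without touching the application. The host URLs are placeholders, and it assumes the Docker SDK for Python and reachable Docker daemons; Ocado’s actual tooling is not described in this detail.

```python
# Portability sketch: run one container image on two Docker hosts.
# Host URLs are placeholders; assumes the Docker SDK for Python ("docker").
import docker

HOSTS = {
    "private": "tcp://docker.onprem.example:2376",
    "public": "tcp://docker.cloud.example:2376",
}


def run_everywhere(image: str = "nginx:alpine") -> None:
    # The same image runs unchanged regardless of which daemon hosts it.
    for name, url in HOSTS.items():
        client = docker.DockerClient(base_url=url)
        container = client.containers.run(image, detach=True)
        print(f"{name}: started {container.short_id} from {image}")


if __name__ == "__main__":
    run_everywhere()
```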

The Internet of Things and the Great Data Lakes

IoT is very important for the company in several areas. Its warehouses are like little IoT worlds all on their own, Clarke says, with lots of M2M, hundreds of kilometres of conveyor, and thousands of things on the move at any one time including automated cranes and robotics.

Then there’s all of the data the company collects from drivers for route optimisation and operational improvement – wheel speed, tyre pressure, road speed, engine revs, fuel consumption, cornering performance – which is all fed back to the company in real-time and used to track driver performance.

There’s also a big role for wearables in those warehouses. Clarke says down the line wearables have the potential to help it improve safety and productivity (“we’re not there yet but there is so much potential.”)

But where IoT can have the biggest impact in food retail, and where it’s most underestimated, Clarke explains, is the customer element: “This is where many companies underestimate the scale of transformation IoT is going to bring, the intersection of IoT and smart machines. In our space we see that in terms of the smart home, smart appliances, smart packaging, it’s all very relevant. The customers living in this world are going to demand this kind of smartness from all the systems they use, so it’s going to raise the bar for all the mobile apps and services we build.”

“Predictive analytics are going to play a big part there, as will machine learning, to help them do their shop up in our case, or knowing what they want before they even have a clue themselves. IoT has a very important part to play in that in terms of delivering that kind of information to the customer to the extent that they wish to share it,” he says.

But challenges that straddle the legal, technical and cultural persist in this nascent space. One of them, largely technical and not insurmountable, is data management. The company has implemented a data lake built on Google BigQuery: it publishes a log of pretty much every business event onto a backbone that it persists through that service, along with data exhaust from its warehouse logs, alerts, driver monitoring information, clickstream data and front-end supply chain information (at the point of order), and it uses technologies like Dataflow and Hadoop for number crunching.
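
As a rough sketch of what publishing a business event into a BigQuery-backed data lake can look like, the snippet below streams a single row into a table. The project, dataset, table and event fields are invented for illustration, and it assumes the google-cloud-bigquery client library with default credentials; Ocado’s actual schema and pipeline are not public.

```python
# Minimal streaming-insert sketch for a BigQuery-backed event log.
# Project, dataset, table and fields are hypothetical placeholders.
from datetime import datetime, timezone

from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.event_lake.business_events"  # placeholder table

rows = [
    {
        "event_type": "order_picked",  # hypothetical business event
        "payload": '{"order_id": "A123", "items": 48}',
        "occurred_at": datetime.now(timezone.utc).isoformat(),
    }
]

# insert_rows_json streams rows into the table; errors come back per row
# rather than being raised as an exception.
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("insert failed:", errors)
```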

Generally speaking, Clarke says, grocery is just fundamentally different to non-grocery retail in ways that have data-specific implications. “When you go buy stationery or a printer cartridge you usually buy one or two items. With grocery there can often be upwards of 50 items, there are multiple suppliers and multiple people involved, sometimes at different places, often on different devices and different checkouts. So that journey of stitching that order, that journey together, is a challenge from a data perspective in itself.”

Bigger challenges in the IoT arena, where more unanswered questions lie, include security and identity management, discoverability, data privacy and standards – or the lack thereof. These are the problems that aren’t so straightforward.

“A machine is going to have to have an identity. That whole identity management question for these devices is key and so far goes unanswered. It’s also linked to discoverability. How do you find out what the device functions are? Discovery is going to get far too complex for humans. You get into a train station these days and there are already 40 different Wi-Fi networks, and hundreds of Bluetooth devices visible. So the big question is: How do you curate this, on a much larger scale, for the IoT world?”

“The type of service that creates parameters around who you’re willing to talk to as a device, how much you’re willing to pay for communications, who you want to be masked from, and so forth – that’s going to be really key, as well as how you implement this so that you don’t make a mistake and share the wrong kinds of information with the wrong device. It’s core to the privacy issue.”

“The last piece is standardisation. How these devices talk to one another – or don’t – is going to be key. What is very exciting is the role that all the platforms like Intel Edison, Arduino, BeagleBone have played in lowering the barrier by providing amazing Lego with which to prototype, and in some cases build these systems; it has allowed so many people to get involved,” he concluded.

Food retail doesn’t have a large industry-specific app ecosystem, which in some ways has benefited a company like Ocado. And as it makes the transition away from being the sole vendor of its product towards being a platform business, Clarke said the company will inevitably have to develop some new capabilities, from sales to support and consultancy, which it didn’t previously depend so strongly upon. But its core development efforts will only accelerate as it ramps up to launch the platform. It has 610 developers and is looking to expand to 750 by January next year across its main development centre in Hatfield and two others in Poland, one of which is being set up at the moment.

“I see no reason why it has to stop there,” he concludes.

Philips health cloud lead: ‘Privacy, compliance, upgradability shaping IoT architecture’

Ad Dijkhoff says the company’s healthcare cloud ingests petabytes of data, experiencing 140 million device calls on its servers each day

Data privacy, compliance and upgradeability are having a deep impact on the architectures being developed for the Internet of Things, according to Ad Dijkhoff, platform manager HealthSuite Device Cloud, Philips.

Dijkhoff, who formerly helped manage the electronics giant’s infrastructure as the company’s datacentre programme manager, helped develop and now manages the company’s HealthSuite device cloud, which links over 7 million healthcare devices and sensors in conjunction with social media and electronic medical health record data to a range of backend data stores and front-end applications for disease prevention and social healthcare provision.

It collects all of the data for analysis and to help generate algorithms to improve the quality of the medical advice that can be generated from it; it also opens those datastores to developers, which can tap into the cloud service using purpose-built APIs.

“People transform from being consumers to being patients, and then back to being consumers. This is a tricky problem – because how do you deal with privacy? How do you deal with identity? How do you manage all of the service providers?” Dijkhoff said.

On the infrastructure side for its healthcare cloud service Philips is working with Rackspace and Alibaba’s cloud computing unit; it started in London and the company also has small instances deployed in Chicago, Houston and Hong Kong. It started with a private cloud, in part because the technologies used meant the most straightforward transition from its hosting provider at the time, and because it was the most feasible way to adapt the company’s existing security and data privacy policies.

“These devices are all different but they all share similar challenges. They all need to be identified and authenticated, first of all. Another issue is firmware downloadability – what we saw with consumer devices and what we’re seeing in professional spaces is that these devices will be updated a number of times during a lifetime, so you need that process to be cheap and easy.”

“Data collection is the most important service of them all. It’s around getting the behaviour of the device, or sensor behavior, or the blood pressure reading or heart rate reading into a back end, but doing it in a safe and secure way.”

Dijkhoff told BCN that these issues had a deep influence architecturally, and explained that it had to adopt a more modular approach to how it deployed each component so that it could drop in cloud services where feasible – or use on-premise alternatives where necessary.

“Having to deal with legislation in different countries on data collection, how it can be handled, stored and processed, had to be built into the architecture from the very beginning, which created some pretty big challenges, and it’s probably going to be a big challenge for others moving forward with their own IoT plans,” he said. “How do you create something architecturally modular enough for that? We effectively treat data like a post office treats letters, but sometimes the addresses change and we have to account for that quickly.”
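
A toy sketch of that ‘post office’ idea follows: each reading is routed to a country-appropriate store according to where that data is allowed to live. The mapping, store names and record fields are all hypothetical; Philips’ actual rules and architecture are not described in this detail.

```python
# Illustrative data-residency routing. The mapping, store names and
# record fields are hypothetical, not Philips' actual configuration.
DATA_RESIDENCY = {
    "DE": "eu-store",  # e.g. German readings must stay in an EU region
    "NL": "eu-store",
    "US": "us-store",
}
DEFAULT_STORE = "global-store"


def route_reading(reading: dict) -> str:
    """Pick a destination store from the reading's country of origin."""
    country = reading.get("country", "")
    store = DATA_RESIDENCY.get(country, DEFAULT_STORE)
    print(f"routing reading from {country or 'unknown'} to {store}")
    return store


if __name__ == "__main__":
    route_reading({"device": "bp-monitor-42", "country": "DE", "systolic": 120})
```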

Containers ready for the primetime, Rackspace CTO says

John Engates was in London for the Rackspace Solve conference

Linux containers have been around for some time but only now is the technology reaching a level of maturity enterprise cloud developers are comfortable with, explained John Engates, Rackspace’s chief technology officer.

Linux containers have been all the rage the past year, and Engates told BCN the volume of the discussion is only likely to increase as the technology matures. But the technology is still young.

“We tried to bring support for containers to OpenStack around three or four years back,” Engates said. “But I think that containers are finally ready for cloud.”

One of the projects Engates cited to illustrate this is Project Magnum, a young OpenStack sub-project that builds on Heat to produce Nova instances on which to run application containers. It creates native capabilities (such as support for different scheduling techniques), effectively enabling users and service providers to offer containers-as-a-service, and improves the portability of containers between different cloud platforms.

“While containers have been around for a while they’ve only recently become the darling of the enterprise cloud developers, and part of that is because there’s a growing ecosystem out there working to build the tools needed to support them,” he said.

A range of use cases for Linux containers has emerged over the years – as a transport method, as a way of quickly deploying and porting apps between different sets of infrastructure, and as a way of standing up a cloud service that offers greater billing granularity (more accurate, more efficient usage) – but the technology is still maturing and has suffered from a lack of tooling. Doing anything like complex service chaining is still challenging with existing tools, though that’s improving.

Beyond LXC, one of the earliest Linux container projects, there’s now CoreOS, Docker, Mesos, Kubernetes, and a whole host of container-like technologies that bring the microservices / OS ‘light’ architecture as well as deployment scheduling and cluster management tools to market.

“We’re certainly hearing more about how we can help support containers, so we see it as pretty important from a service perspective moving forward,” he added.

Real-time cloud monitoring too challenging for most providers, TFL tech lead says

Reed says TFL wants to encourage greater use of its data

Getting solid data on what’s happening in your application in real-time seems to be a fairly big challenge for most cloud service providers out there, explains Simon Reed, head of bus systems & technology at Transport for London (TFL).

TFL, the executive agency responsible for transport planning and delivery for the city of London, manages a slew of technologies designed to support over 10 million passenger journeys each day. These include back-office ERP, routing and planning systems, mammoth databases tapped into by line-of-business applications and customer-facing apps (i.e. real-time travel planning apps and the journey planner website), as well as all the vehicle telematics, monitoring and tracking technologies.

A few years ago TFL moved its customer facing platforms – the journey planner, the TFL website, and the travel journey databases – over to a scalable cloud-based platform in a bid to ensure it could deal with massive spikes in demand. The key was to get much of that work completed before the Olympics, including a massive data syndication project so that app developers could more easily tap into all of TFL’s journey data.

“Around the Olympics you have this massive spike in traffic hitting our databases and our website, which required highly scalable front and back-ends,” Reed said. “Typically when we have industrial action or a snowstorm we end up with 10 to 20 times the normal use, often triggered in less than half an hour.”

Simon Reed is speaking at the Cloud World Forum in London June 24-25. Register for the event here.

The organisation processes bus arrival predictions for all 19,000 bus stops in London, which are constantly dumped into the cloud in a leaky-tap model; there’s a simple cloud application that allows subscribers to download the data in a number of formats, and APIs to build access to that data directly into applications. “As long as developers aren’t asking for predictions nanoseconds apart, the service doesn’t really break down – so it’s about designing that out and setting strict parameters on how the data can be accessed and at what frequency.”
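
By way of illustration, a subscriber pulling those predictions over HTTP might look like the sketch below. The endpoint, parameters and response shape are hypothetical rather than TFL’s actual feed contract, and it assumes the requests library.

```python
# Hypothetical polling client for a bus arrival prediction feed.
# The URL, query parameters and response format are placeholders only.
import requests

FEED_URL = "https://feeds.example.net/bus/arrivals"  # placeholder endpoint


def fetch_predictions(stop_id: str) -> list:
    """Fetch current arrival predictions for a single stop."""
    response = requests.get(FEED_URL, params={"stop": stop_id}, timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Poll at a sensible interval rather than asking for updates
    # "nanoseconds apart", per the access parameters Reed describes.
    for prediction in fetch_predictions("490008660N"):
        print(prediction)
```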

But Reed said gaining visibility into the performance of a cloud service out of the box seems to be a surprisingly difficult thing to do.

“I’m always stunned about how little information there is out of the box though when it comes to monitoring in the cloud. You can always add something in, but really, should I have to? Surely everyone else is in the same position where monitoring actual usage in real-time is fairly important. The way you often have to do this is to specify what you want and then script it, which is a difficult approach to scale,” he said. “You can’t help but think surely this was a ‘must-have’ when people had UNIX systems.”

Monitoring (and analytics) will be important for Reed’s team as they expand their use of the cloud, particularly within the context of the journey data TFL publishes. Reed said it’s likely those systems, while in a strong position currently, will see much more action as TFL pursues a strategy of encouraging use of the data outside the traditional transport or journey planning app context.

“What else can we do to that data? How can we turn it around in other ways? How can other partners do the same? For us it’s a question of exploiting the data capability we have and moving it into new areas,” he said.

“I’m still not convinced of the need to come out of whatever app you’re in – if you’re looking at cinema times you should be able to get the transportation route that gets you to the cinema on time, and not have to come out of the cinema listings app. I shouldn’t have to match the result I get in both apps in order to plan that event – it should all happen in one place. It’s that kind of thinking we’re currently trying to promote, to think more broadly than single purpose apps, which is where the market is currently.”

HP targets hybrid cloud users with CloudSystem, Helion updates

HP has updated its CloudSystem converged infrastructure offerings

HP has updated its CloudSystem platform to include its distribution of OpenStack in a converged private cloud offering. Paul Morgan, HP’s cloud head in EMEA, told BCN the company is looking to broaden its support for hybrid cloud deployments.

The HP Helion CloudSystem includes all of HP’s Helion software, including CloudSystem (its own private cloud software), Helion OpenStack, the Helion Development Platform (its Cloud Foundry distribution) and HP Helion Eucalyptus (for AWS workload portability).

The offering comes in two flavours: the CS200-HC, which is being pitched as an entry-level hyper-converged system aimed at SMBs and enterprises and can scale up to 32 nodes (pricing will be announced later this year, but Morgan suggested the cost would float around the $2,000 mark for three years with support and maintenance); and the CS700x/700, which is cabinet-sized, aimed at larger enterprises, and can scale up to 100 blade servers.

The software loaded on top comes in two versions: Foundation, which includes the Development Pack and OpenStack; and Enterprise, which ships with everything the Foundation package offers as well as OneView, Matrix, and Eucalyptus service templates, and includes more robust architecting and publishing capabilities.

The company said it has expanded support for Hyper-V, enhanced VMware networking, and added a number of ancillary OpenStack services under the hood, including Heat (for orchestration) and Horizon (for dashboarding). It has also added OneView – its converged infrastructure operations management software – to the mix.

“We definitely think that down the road many of the applications and workloads we see today will be hosted in the public cloud,” Morgan explained. “But in reality many of those applications don’t move over so easily. The cloud journey really does start with hybrid, which is where we think we can add value.”

Morgan said that converged offerings can help IT departments save big because they improve automation and deliver orchestration and automation without needing to radically change applications. He added that some of its customers have saved upwards of 30 to 40 per cent using its CloudSystem offerings.

“That’s where these converged offerings can play a role – in delivering all of the agility and cost savings cloud brings and which enterprises are looking for when they refresh their hardware, but not necessarily forcing them to rush off and overhaul the application landscape at the same time.”

Google’s IoT land grab: will Brillo help or hinder?

Google’s having a go at the Internet of Things, but how will it sit with developers and device manufacturers?

The long-rumoured Project Brillo, Google’s answer to the Internet of Things, was finally unveiled this week at the company’s annual I/O conference, and while the project shows promise it comes at a time when device manufacturers and developers are increasingly being forced to choose between IoT ecosystems. Contrary to Google’s stated aims, Brillo could – for the same reason – hinder interoperability and choice in IoT rather than facilitate it.

It’s difficult to see Project Brillo as anything more than it really is – an attempt at grabbing highly sought-after ground in the IoT space. It has two key components. There’s Brillo, which is essentially a mini Android OS (made up of some of the services the fully fledged OS abstracts), which Google claims can run on tiny embeddable IP-connected devices (critically, the company hasn’t revealed the minimum specs for those devices); and Weave, a proprietary set of APIs that help developers manage the communications layer linking apps on mobile phones to sensors via the cloud.

Brillo will also come with metrics and crash reporting to help developers test and de-bug their IoT services.

The company claims the Weave programme, which will see manufacturers certify to run Brillo on their embeddable devices in much the same way Google works with handset makers to certify Android-based mobile devices, will help drive interoperability and quality – two things IoT desperately needs.

The challenge is that it’s not entirely clear how Google’s Brillo will deliver on either front. Full-whack Android is almost a case in point in itself. Despite having had more than a few years to mature, the Android ecosystem is still plagued by fragmentation, which produces its fair share of headaches for developers. As we recently alluded to in an article about Google trying to tackle this problem, developing for a multitude of platforms running Android can be a nightmare; an app running smoothly on an LG G3 can be prone to crashing on a Xiaomi or Sony device because of architectural or resource-constraint differences.

This may be further complicated in the IoT space by the fact that embeddable software is, at least currently, much more difficult to upgrade than Android, likely leading to even more software heterogeneity than one currently finds in the Android ecosystem.

Another thing to consider is that most embeddable IoT devices currently in the market or planned for deployment are so computationally and power-constrained (particularly for industrial applications, which is where most IoT stuff is happening these days) that it’s unclear whether there will be a market for Brillo to tap into anytime soon. This isn’t really much use for developers – the cohort Google’s trying to go after.

For device manufacturers, the challenge will be whether building to Google’s specs will be worth the added cost of building alongside ARM, Arduino, Intel Edison or other IoT architectures. History would suggest that it’s always cheaper to build to one architecture rather than multiple (which is what’s driving standards development in this nascent space), and while Google tries to ease the pain of dealing with different manufacturers on the developer side by abstracting lower level functions through APIs, it could create a situation where manufacturers will have to choose which ecosystem they play in – leading to more fragmentation and as a result more frustration for developers. For developers, at least those unfamiliar with Android, it comes at the cost of being locked into a slew of proprietary (or at least Google-owned) technologies and APIs rather than open technologies that could – in a truly interoperable way – weave Brillo and non-Brillo devices with cloud services and mobile apps.

Don’t get me wrong – Google’s reasoning is sound. The Internet of Things is the cool new kid on the block with forecast revenues so vast they could make a grown man weep. There are a fleet of developers building apps and services for Android and the company has great relationships with pretty much every silicon manufacturer on the planet. It seems reasonable to believe that the company which best captures the embeddable software space stands a pretty good chance at winning out at other levels of the IoT stack. But IoT craves interoperability and choice (and standards) more than anything, which even in the best of circumstances can create a tenuous relationship between developers and device manufacturers, where their respective needs stand in opposition. Unfortunately, it’s not quite clear whether Brillo or Weave will truly deliver on the needs of either camp.

Google, OpenStack target containers as Project Magnum gets first glimpse

Otto, Collier and Parikh demoing Magnum at the OpenStack Summit in Vancouver this week

Google and OpenStack are working together to use Linux containers as a vehicle for integrating their respective cloud services and bolstering OpenStack’s appeal to hybrid cloud users.

The move follows a similar announcement made earlier this year by pure-play OpenStack vendor Mirantis and Google to commit to integrating Kubernetes with the OpenStack platform.

OpenStack chief operating officer Mark Collier said the platform needs to embrace heterogeneous workloads as it moves forward, with both containers and bare-metal solidly on the agenda for future iterations.

To that end, the foundation showed off Magnum, which in March became an official OpenStack project. Magnum builds on Heat to produce Nova instances on which to run application containers, and it creates native capabilities (like support for different scheduling techniques) that enable users and service providers to offer containers-as-a-service.

“As we think about Magnum and how that can take container support to the next level, you’ll hear more about all the different types of technologies available under one common set of APIs. And that’s what users are looking for,” Collier said. “You have a lot of workloads requiring a lot of different technologies to run them at their best, and putting them all together in one platform is a very powerful thing.”

Google’s technical solutions architect Sandeep Parikh and Magnum project leader Adrian Otto (an architect at Rackspace) were on hand to demo a Kubernetes cluster deployment in both Google Compute Engine and the Rackspace public cloud using the exact same code and Keystone identity federation.

“We’ve had container support in OpenStack for some time now. Recently there’s been NovaDocker, which is for containers we treat as machines, and that’s fine if you just want a small place to put something,” Otto said.

Magnum uses the concept of a bay – where the orchestration layer goes – that Otto said can be used to manipulate pretty much any Linux container technology, whether it’s Docker, Kubernetes or Mesos.
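
As a rough sketch of what requesting a bay might look like against Magnum’s early REST API, the snippet below posts a bay definition to a Magnum endpoint. The URL, token, bay model ID and field names are placeholders based on the bay concept described above, and the project’s API has evolved since, so treat this as illustrative rather than a reference call. It assumes the requests library.

```python
# Illustrative request to create a Magnum "bay". Endpoint, token and
# bay model ID are placeholders; the real API and auth flow may differ.
import requests

MAGNUM_URL = "https://openstack.example.com:9511/v1"  # placeholder endpoint
TOKEN = "keystone-token-goes-here"                    # placeholder token

bay = {
    "name": "k8s-demo-bay",
    "baymodel_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "node_count": 2,
}

response = requests.post(
    f"{MAGNUM_URL}/bays",
    json=bay,
    headers={"X-Auth-Token": TOKEN},
    timeout=30,
)
response.raise_for_status()
print("bay requested:", response.json())
```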

“This gives us the ability to offer a hybrid approach. Not everything is great for private cloud, and not everything is great for public [cloud],” Parikh said. “If I want to run a highly available deployment, I can now run my workload in multiple places and if something were to go down the workload will still stay live.”