Category Archives: OpenStack

Will Microsoft’s ‘walled-garden’ approach to virtualisation pay off?

Microsoft’s approach to virtualisation: Strategic intent or tunnel vision?

While the data centre of old played host to an array of physical technologies, the data centre of today and of the future is based on virtualisation, public or private clouds, containers, converged servers, and other forms of software-defined solutions. Eighty per cent of workloads are now virtualised, with most companies using heterogeneous environments.

As the virtual revolution continues, new industry players are emerging, ready to take on the market’s dominant forces. Now is the time for innovators to strike and stake a claim in this lucrative and growing movement.

Since its inception, VMware has been the 800 lb gorilla of virtualisation. Yet even VMware’s market dominance is under pressure from open source offerings such as KVM, RHEV-M, OpenStack, Linux Containers and Docker. There can be no doubting the challenge these open virtualisation options present to VMware; among other things, they feature REST APIs that allow easy integration with other management tools and applications, regardless of platform.

I see it as a form of natural selection; new trends materialise every few years and throw down the gauntlet to prevailing organisations – adapt, innovate or die. Each time this happens, some new players will rise and other established players will sink.

VMware is determined to remain afloat and has responded to the challenge by creating an open REST API for vSphere and other components of the VMware stack. While I don’t personally believe that this attempt has resulted in the most elegant API, there can be no arguing that it is at least accessible and well documented, allowing for integration with almost anything in a heterogeneous data centre. For that, I must applaud them.
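
As a quick illustration of why a platform-neutral REST API matters, the sketch below lists virtual machines from any operating system using nothing more than HTTP. It assumes the endpoint paths of VMware’s later vSphere Automation REST interface (create a session, then query the inventory); the vCenter address and credentials are placeholders, so check the documentation for the version you actually run.

```python
# Minimal sketch: listing VMs over VMware's REST interface from any platform.
# Endpoint paths follow the vSphere Automation REST API and are assumptions
# here; the vCenter address and credentials are placeholders.
import requests

VCENTER = "https://vcenter.example.local"

# Exchange basic-auth credentials for a session token.
resp = requests.post(f"{VCENTER}/rest/com/vmware/cis/session",
                     auth=("administrator@vsphere.local", "password"),
                     verify=False)
resp.raise_for_status()
token = resp.json()["value"]

# Any REST-capable tool on any OS can now query the inventory.
vms = requests.get(f"{VCENTER}/rest/vcenter/vm",
                   headers={"vmware-api-session-id": token},
                   verify=False)
for vm in vms.json()["value"]:
    print(vm["name"], vm["power_state"])
```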

So what of the other giants of yore? Will Microsoft, for example, retain its regal status in the years to come? Not if the Windows-specific API it has lumbered itself with is anything to go by! While I understand why Microsoft has aspired to take on VMware in the enterprise data centre, its API, utilising WMI (Windows Management Instrumentation), only runs on Windows! As far as I’m concerned this makes it as useless as a chocolate teapot. What on earth is the organisation’s end-goal here?

Two possible answers spring to mind: either this is a strategic move, or Microsoft’s eyesight is failing.

Could the Windows-only approach to integrating with Microsoft’s Hyper-V virtualisation platform be an intentional strategic move on its part? Is the long-game for Windows Server to take over the enterprise data centre?

In support of this, I have noticed Microsoft sales reps encouraging customers to switch from VMware products to Microsoft Hyper-V. In this exchange on Microsoft’s Technet forum, a forum user asked how to integrate Hyper-V with a product running on Linux. A Microsoft representative then responded saying (albeit in a veiled way) that you can only interface with Hyper-V using WMI, which only runs on Windows…
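
To see the constraint in practice, here is a minimal sketch of enumerating Hyper-V guests through WMI, using the third-party Python wmi package and the root\virtualization\v2 namespace. The detail worth noticing is not the query itself but the prerequisite: this, like any WMI client, will only ever run on a Windows host.

```python
# Illustrative only: listing Hyper-V guests via WMI. Requires the "wmi"
# package (and pywin32), plus a Windows host to run on, which is the point.
import wmi

conn = wmi.WMI(namespace=r"root\virtualization\v2")

# Msvm_ComputerSystem covers the Hyper-V host itself plus every guest VM;
# guests are distinguished by their Caption.
for system in conn.Msvm_ComputerSystem():
    if system.Caption == "Virtual Machine":
        print(system.ElementName, system.EnabledState)
```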

But what if this isn’t one part of a much larger scheme? The only alternative I can fathom then is that this is a case of extreme tunnel vision, the outcome of a technology company that still doesn’t really get the tectonic IT disruptions and changes happening in the outside world. If it turns out that Microsoft really does want Windows Server to take over the enterprise data centre…well, all I can say is, good luck with that!

Don’t get me wrong. I am a great believer in competition; it is vital for the progression of both technology and markets. And it certainly is no bad thing when an alpha gorilla faces a challenger from the troop. It’s what stops them getting stale, invigorating them and forcing them to prove why they deserve their silver back.

In reality, Microsoft probably is one of the few players that can seriously threaten VMware’s near-monopolistic dominance of server virtualisation. But it won’t do it like this. So unless new CEO Satya Nadella’s company moves to provide platform-neutral APIs, I am sad to say that its offering will be relegated to the museum of IT applications.

To end with a bit of advice to all those building big data and web-scale applications, with auto-scaling orchestration between applications and virtualisation hypervisors: skip Hyper-V and don’t go near Microsoft until it “gets it” when it comes to open APIs.

Written by David Dennis, vice president, marketing & products, GroundWork

Rackspace, Intel to coordinate ‘world’s largest OpenStack dev team’

Intel and Rackspace claim the centre will house the world’s largest OpenStack development team

Rackspace and Intel are teaming up to launch an OpenStack Innovation Centre aimed at bolstering upstream development of the cloud platform.

The centre, housed at Rackspace’s corporate HQ in San Antonio, Texas, will bring together technical specialists from Rackspace and Intel to co-develop new features and functions for OpenStack and fix bugs in the code base, with the fruits of their efforts being contributed back upstream.

The companies will also offer OpenStack training to engineers and developers; they claim the centre will house the world’s largest dedicated OpenStack development team.

“We are excited to collaborate with Intel and look forward to working with the OpenStack community to make the world’s leading open-source cloud operating system even stronger,” said Scott Crenshaw, senior vice president of product and strategy at Rackspace.

“We don’t create proprietary OpenStack distributions. Rackspace delivers its customers four-nines availability using entirely upstream trunk code. All of the Innovation Centre’s contributions will be made available freely, to everyone,” Crenshaw said.

Jason Waxman, vice president of the Cloud Platforms Group at Intel said: “This announcement demonstrates our continued support and commitment to open source projects. Our ongoing collaboration with Rackspace and the OpenStack community represents an ideal opportunity to accelerate the enterprise appeal of OpenStack.”

OpenStack, which this week celebrated its fifth anniversary, was founded by a few engineers from Rackspace and NASA but has since swelled to more than 520 member companies and 27,000 individual contributors globally. While the open source cloud platform is young, it has matured significantly during its brief existence and has become a de facto cloud standard embraced by many if not most of the big IT incumbents.

“The community’s goal is to foster collaboration and spur innovation that drives broad adoption,” said Jonathan Bryce, executive director of the OpenStack Foundation. “The depth of experience and community engagement that Rackspace and Intel offer makes this an exciting project, as the code contributions and large-scale testing will benefit everyone who uses OpenStack.”

Google joins OpenStack to build bridges between public and private clouds

Google has joined the OpenStack Foundation, a big sign of support for the open source software organisation

Google has officially signed up to sponsor the OpenStack Foundation, the first of the big three – Google, Microsoft and AWS – to formally throw its weight behind the open source cloud orchestration software. Analysts believe the move will improve support for Linux containers across public and private cloud environments.

Google has already set to work with pure-play OpenStack software vendor Mirantis on integrating Kubernetes with OpenStack, a move the company said would help bolster its hybrid cloud capabilities.

While the company has had some engineers partnering with the Foundation on Magnum and Murano, container-focused toolsets baked into the open source platform, Google said it plans to significantly bolster the engineering resource it devotes to getting Linux containers – and particularly its open source scheduling and deployment platform Kubernetes – integrated with OpenStack.

The formal sign of support from such a big incumbent in the cloud space is a big win for OpenStack.

“We are excited about becoming active participants in the OpenStack community,” said Craig McLuckie, product manager at Google. “We look forward to sharing what we’ve learned and hearing how OpenStack users are thinking about containers and other technologies to support cloud-native apps.”

Mark Collier, chief operating officer of the OpenStack Foundation said: “OpenStack is a platform that frees users to run proven technologies like VMs as well as new technologies like containers. With Google committing unequaled container and container management engineering expertise to our community, the deployment of containers via proven orchestration engines like Kubernetes will accelerate rapidly.”

Although Google has a long history of open sourcing some of the tools it uses to stand up its own cloud and digital services like search, it hasn’t always participated much in open source forums per se.

In a sense Kubernetes marked a departure from its previous trajectory, and as Ovum’s lead software analyst Laurent Lachal explained to BCN, it seems to be focusing on containers as a means of building a bridge between private and public clouds.

“Google knows that it needs to play nice with cloud platforms like OpenStack and VMware, two platforms that are primarily private cloud-centric, if it wants to get workloads onto its public cloud,” he explained.

“Joining OpenStack is exactly that – a means to building a bridge between private and public clouds, and supporting containers within the context of OpenStack may be both a means of doing that and generating consensus around how best to support containers in OpenStack, something that could also work in its favour.”

“There’s also a big need for that kind of consensus. Currently, everyone wants to join the containers initiatives in the open source project but there isn’t much backing for one particular way of delivering the container-related features users need,” he added.

Canonical appoints ex-Microsoft UK dev lead as EVP of cloud

Krishnan will lead Canonical’s cloud efforts

Canonical has appointed former Microsoft exec Anand Krishnan to the role of executive vice president for cloud, where he will lead most of the company’s cloud-related efforts globally, including business development, marketing, engineering and customer delivery activities.

Krishnan most recently served as UK General Manager for Microsoft’s Developer Platform division where he was responsible in part for scaling the Azure business, which by most measures seems to be growing at record pace. Before joining Microsoft in 2004 he spent about five years at Trilogy, a Texas-based software firm specialising in lead generation solutions for the automotive, insurance and telecoms sectors.

“Great businesses make an extraordinary difference to the customers they serve. Canonical has the products and the momentum to do exactly that,” Krishnan said.

“I couldn’t be more excited to be joining the team at this time and helping shape the next phase of our journey.”

Canonical has in recent months moved to bolster its cloud strategy with BootStack, its managed private cloud offering, and its own distribution of OpenStack. Its Linux distro Ubuntu is the most popular OS in use on AWS EC2 (though other Linux incumbents have questioned those claims), and it also recently launched Ubuntu Core, a slimmed-down, re-architected version of the Ubuntu operating system that borrows heavily from the Linux container (isolated frameworks) and mobile (transactional updates) worlds.

Living in a hybrid world: From public to private cloud and back again

Orlando Bayter, chief exec and founder of Ormuco

The view often propagated by IT vendors is that public cloud is already capable of delivering a seamless extension between on-premise private cloud platforms and public, shared infrastructure. But Orlando Bayter, chief executive and founder of Ormuco, says the industry is only at the outset of delivering a deeply interwoven fabric of private and public cloud services.

Demand for that kind of seamlessness hasn’t been around for very long, admittedly. It’s no great secret that in the early days of cloud, demand for public cloud services was spurred largely by the slow pace at which traditional IT organisations tend to move. As a result, every time a developer wanted to build an application they would simply swipe the credit card and go, billing back to IT at some later point. So the first big use case for hybrid cloud emerged when developers needed to bring their apps back in-house, where they would live and probably die.

But as the security practices of cloud service providers continue to improve, along with enterprise confidence in cloud more broadly, cloud bursting – the ability to use a mix of public and private cloud resources to fit the utilisation needs of an app – became more widely talked about. It’s usually cost prohibitive and far too time consuming to scale private cloud resources quickly enough to meet the changing demands of today’s increasingly web-based apps, so cloud bursting has become the natural next step in the hybrid cloud world.
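
At its core, bursting comes down to a placement decision: keep workloads on the private cloud while there is headroom, and spill over to public capacity once utilisation crosses a threshold. The sketch below shows that decision in isolation; the function, threshold and numbers are hypothetical and not taken from any vendor’s API.

```python
# Hypothetical sketch of the placement decision behind cloud bursting.
BURST_THRESHOLD = 0.80  # start bursting at 80% private-cloud utilisation


def choose_target(private_used_cores: int, private_total_cores: int,
                  requested_cores: int) -> str:
    """Return 'private' or 'public' for the next workload placement."""
    projected = (private_used_cores + requested_cores) / private_total_cores
    return "private" if projected <= BURST_THRESHOLD else "public"


# Example: a 1,000-core private cloud that is already 78% utilised.
print(choose_target(780, 1000, 16))  # -> 'private'
print(choose_target(780, 1000, 64))  # -> 'public' (would push it past 80%)
```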

Orlando will be speaking at the Cloud World Forum in London, June 24-25.

There are, however, still precious few platforms that offer this kind of capability in a fast and dynamic way. Open source projects like OpenStack and more proprietary variants like VMware’s vCloud or Microsoft’s Azure Stack (and all the tooling around these platforms and architectures) are, at the end of the day, being developed with a view towards supporting the deployment and management of workloads that can exist in as many places as possible, whether on-premise or in a cloud vendor’s datacentre.

“Let’s say as a developer you want to take an application you’ve developed in a private cloud in Germany and move it onto a public cloud platform in the US. Even for the more monolithic migration jobs you’re still going to have to do all sorts of re-coding, re-mapping and security upgrades, to make the move,” Bayter says.

“Then when you actually go live, and have apps running in both the private and public cloud, the harsh reality is most enterprises have multiple management and orchestration tools – usually one for the public cloud and one for the private; it’s redundant, and inefficient.”

Ormuco is one company trying to solve these challenges. It has built a platform based on HP Helion OpenStack and offers both private and public instances, both of which can be managed through a single pane of glass (it has built its own layer in between to abstract the resources underneath).

It has multiple datacentres in the US and Europe from which it offers both private and public instances, as well as the ability to burst into its cloud platform using on-premise OpenStack-based clouds. The company is also a member of the HP Helion Network, which Bayter says gives it a growing channel and the ability to offer more granular data protection tools to customers.

“The OpenStack community has been trying to bake some of these capabilities into the core open source code, but the reality is it only achieved a sliver of these capabilities by May this year,” he said, alluding to the recent OpenStack Summit in Vancouver where new capabilities around federated cloud identity were announced and demoed.

“The other issue is simplicity. A year and a half ago, everyone was talking about OpenStack but nobody was buying it. Now service providers are buying but enterprises are not. Specifically with enterprises, the belief is that OpenStack will be easier and easier as time goes on, but I don’t think that’s necessarily going to be the case,” he explains.

“The core features may become a bit easier but the whole solution may not, but there are so many things going into it that it’s likely going to get clunkier, more complex, and more difficult to manage. It could become prohibitively complex.”

That’s not to say federated identity or cloud federation is a lost cause – on the contrary, Bayter says it’s the next horizon for cloud. The company is currently working on a set of technologies that would enable any organisation whose infrastructure lies significantly underutilised for long periods to rent it out in a federated model.

Ormuco would verify and certify the infrastructure, and allocate a performance rating that would change dynamically along with the demands being placed on that infrastructure – like an Airbnb for OpenStack cloud users. Customers renting cloud resources in this market could also choose where their data is hosted.

“Imagine a university or a science lab that scales and uses its infrastructure at very particular times; the rest of the time that infrastructure is fairly underused. What if they could make money from that?”

There are still many unanswered questions – like whether the returns for renting organisations would justify the extra costs (i.e. energy) associated with running that infrastructure, where the burden of support lies (enterprises need solid SLAs for production workloads), and how that influences what kinds of workloads end up on rented kit. But the idea is interesting, and definitely consistent with the line of thinking being promoted by the OpenStack community among others in open source cloud.

“Imagine the power, the size of that cloud,” says Bayter. “That’s the cloud that will win out.”

This interview was produced in partnership with Ormuco

IBM releases tool to advance cloud app development on OpenPower, OpenStack

IBM has announced a service to help others develop and test OpenPower-based apps

IBM has announced the launch of SuperVessel, an open-access cloud service developed by the company’s China-based research outfit and designed for developing and testing cloud services based on the OpenPower architecture.

The service, developed by Beijing’s IBM Research and IBM Systems Labs, is open to business partners, application developers and university students for testing and piloting emerging applications that use deep analytics, machine learning and the Internet of Things.

The cloud service is based on the latest Power8 processors (with FPGAs and GPU-based acceleration) and uses OpenStack to orchestrate the underlying cloud resources. The SuperVessel service is sliced up into various “labs”, each focusing on a specific area, and is initially launching with four: Big Data, Internet of Things, Acceleration and Virtualization.

“With the SuperVessel open computing platform, students can experience cutting-edge technologies and turn their fancy ideas into reality. It also helps make our teaching content closer to real life,” said Tsinghua University faculty member Wei Xu. “We want to make better use of SuperVessel in many areas, such as on-line education.”

Terri Virnig, IBM Vice President of Power Ecosystem and Strategy said: “SuperVessel is a significant contribution by IBM Research and Development to OpenPower. Combining advanced technologies from IBM R&D labs and business partners, SuperVessel is becoming the industry’s leading OpenPower research and development environment. It is a way IBM commits to and supports OpenPower ecosystem development, talent growth and research innovation.”

The move is part of a broader effort to cultivate mindshare around IBM’s Power architecture, which it open sourced two years ago; it’s positioning the architecture as an ideal platform for cloud and big data services. Since the launch of the OpenPower Foundation, the group tasked with coordinating development around Power, it has also been actively working with vendors and cloud service providers to mash up a range of open source technologies – for instance, getting OpenStack to work on OpenPower and Open Compute-based hardware.

Nokia eyes the cloud infrastructure market with OpenStack, VMware-based servers

Nokia is offering up its own blade servers to the telco world

Nokia Networks revealed its AirFrame datacentre solutions this week, high-density blade servers running a combination of OpenStack and VMware software and designed to support Nokia’s virtualised network services for telcos.

“We are taking on the IT-telco convergence with a new solution to challenge the traditional IT approach of the datacentre,” said Marc Rouanne, executive vice president, Mobile Broadband at Nokia Networks.

“This newest solution brings telcos carrier-grade high availability, security-focused reliability as well as low latency, while leveraging the company’s deep networks expertise and strong business with operators to address an increasingly cloud-focused market valued in the tens of billions of euros.”

The servers, which come pre-integrated with Nokia’s own switches, are based on Intel’s x86 chips and run OpenStack as well as VMware, and can be managed using Nokia’s purpose-built cloud management solution. The platforms are ETSI NFV / OPNFV-certified, so they can run Nokia’s own VNFs as well as those developed by certified third parties.

The company’s orchestration software can also manage the split between virtualised and legacy network functions in either centralised or distributed network architectures.

Phil Twist, vice president of Portfolio Marketing at Nokia Networks told BCN the company designed the servers specifically for the telco world, adding things like iNICs and accelerators to handle the security, encryption, virtual routing, digital signal processing (acceleration for radio) that otherwise would tie up processor capacity in a telco network.

But he also said the servers could be leveraged for standing up its own cloud services, or for the wider scale-out market.

“Our immediate ambition is clear: to offer a better alternative for the build-out of telco clouds optimized for that world.  But of course operators have other in-house IT requirements which could be hosted on this same cloud, and indeed they could then offer cloud services to their enterprise customers on this same cloud,” he explained.

“We could potentially build our own cloud to host SaaS propositions to our customers, or in theory potentially offer the servers for enterprise applications but that’s not our initial focus,” he added.

Though Twist didn’t confirm whether this was indeed Nokia’s first big move towards the broader IT infrastructure market outside networking, the announcement does mean the company will be brought into much closer competition with both familiar (Ericsson, Cisco) and less familiar (HP) incumbents offering their own OpenStack-integrated cloud kit.

CERN, Rackspace to harden federated cloud reference architecture

CERN and Rackspace want to create standard templates for an OpenStack cloud of clouds

Rackspace and CERN openlab announced plans to redouble their efforts to create a reference architecture for a federated cloud service model.

The earliest implementations of Keystone – the mechanism in OpenStack for OpenStack-to-OpenStack identity authentication and cloud federation – came out of a collaboration between CERN and Rackspace, and now the two organisations plan to extend those efforts and create standardised templates for cloud orchestration.
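
To make the Keystone-to-Keystone idea concrete, the sketch below exchanges a token from a “home” OpenStack cloud for one scoped to a remote, federated cloud using the keystoneauth1 library. The URLs, credentials and service-provider name are placeholders, and the exact scoping arguments can vary by release, so treat it as a rough outline rather than a recipe.

```python
# Rough sketch of Keystone-to-Keystone (K2K) federation with keystoneauth1.
# All endpoints, credentials and the service-provider ID are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3

# 1. Ordinary password authentication against the "home" Keystone.
local = v3.Password(auth_url="https://keystone.home.example:5000/v3",
                    username="researcher", password="secret",
                    project_name="lhc-compute",
                    user_domain_id="default", project_domain_id="default")

# 2. Swap the local token for one valid on the remote cloud, which must be
#    registered as a service provider in the home cloud's Keystone catalogue.
#    (Scoping keyword arguments may differ between releases.)
remote = v3.Keystone2Keystone(local, "remote-cloud-sp",
                              project_name="lhc-compute",
                              project_domain_id="default")

sess = session.Session(auth=remote)
print(sess.get_token())  # a token usable against the remote cloud's APIs
```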

“More companies are now looking to use multiple clouds to effectively serve the range of workloads they run – blending low-cost, high-performance, enhanced security and optimised environments,” says Giri Fox, Rackspace’s director of customer technology services. “But, we are still seeing the complexity businesses are facing to integrate just one cloud into their business. Federation is an opportunity to re-use that initial integration for future clouds you want to run your business on, making multi-cloud a business benefit choice rather than a business cost one.”

For those unfamiliar with CERN, the European Organization for Nuclear Research: it operates the Large Hadron Collider, which during its intermittent test runs spits out over 30 petabytes of raw data per year – data that then needs to be processed and made available in near real-time to physicists around the world.

But CERN, like many research organisations, is resource constrained, so relying on a federated set of infrastructure to get all of that processing done can help it overcome the capacity limitations of its own datacentres. The organisation relies on multiple OpenStack clouds based in Europe that need to be accessed by thousands of researchers, so it has a strong incentive to develop a robust, open model for cloud federation.

“Our CERN openlab mission is to work with industry partners to develop open, standard solutions to the challenges faced by the worldwide LHC community. These solutions also often play a key role in addressing tomorrow’s business challenges,” said Tim Bell, infrastructure manager in the IT department at CERN.

“After our work on identity federation with Rackspace, this is a very important step forward. For CERN, being able to move compute workloads around the world is essential for ongoing collaboration and discovery,” Bell said.

Google, OpenStack target containers as Project Magnum gets first glimpse

Otto, Collier and Parikh demoing Magnum at the OpenStack Summit in Vancouver this week

Google and OpenStack are working together to use Linux containers as a vehicle for integrating their respective cloud services and bolstering OpenStack’s appeal to hybrid cloud users.

The move follows a similar announcement made earlier this year by pure-play OpenStack vendor Mirantis and Google to commit to integrating Kubernetes with the OpenStack platform.

OpenStack chief operating officer Mark Collier said the platform needs to embrace heterogeneous workloads as it moves forward, with both containers and bare metal solidly on the agenda for future iterations.

To that end, the community revealed Magnum, which in March became an official OpenStack project. Magnum builds on Heat to produce Nova instances on which to run application containers, and it creates native capabilities (like support for different scheduling techniques) that enable users and service providers to offer containers-as-a-service.

“As we think about Magnum and how that can take container support to the next level, you’ll hear more about all the different types of technologies available under one common set of APIs. And that’s what users are looking for,” Collier said. “You have a lot of workloads requiring a lot of different technologies to run them at their best, and putting them all together in one platform is a very powerful thing.”

Google’s technical solutions architect Sandeep Parikh and Magnum project leader Adrian Otto (an architect at Rackspace) were on hand to demo a Kubernetes cluster deployment in both Google Compute Engine and the Rackspace public cloud using the exact same code and Keystone identity federation.

“We’ve had container support in OpenStack for some time now. Recently there’s been NovaDocker, which is for containers we treat as machines, and that’s fine if you just want a small place to put something,” Otto said.

Magnum uses the concept of a bay – the place where the orchestration layer goes – which Otto said can be used to manipulate pretty much any Linux container technology, whether it’s Docker, Kubernetes or Mesos.
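
As a rough sketch of what that looks like to a user, the snippet below defines a bay model and asks Magnum to stand up a two-node Kubernetes bay on top of it. It assumes the bay-era python-magnumclient interface from around this time; manager and parameter names may well differ between releases, so read it as illustrative rather than as a reference.

```python
# Assumed bay-era python-magnumclient usage; names are illustrative only.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from magnumclient.client import Client

auth = v3.Password(auth_url="https://keystone.example:5000/v3",
                   username="demo", password="secret", project_name="demo",
                   user_domain_id="default", project_domain_id="default")
magnum = Client("1", session=session.Session(auth=auth))

# A baymodel captures the "how": which COE (Kubernetes, Swarm or Mesos),
# image, flavour and keypair the bay should be built from.
model = magnum.baymodels.create(name="k8s-model", coe="kubernetes",
                                image_id="fedora-atomic",
                                flavor_id="m1.small", keypair_id="mykey",
                                external_network_id="public")

# The bay itself: Magnum drives Heat to stand up Nova instances and wires
# the chosen orchestration engine on top of them.
bay = magnum.bays.create(name="k8s-bay", baymodel_id=model.uuid, node_count=2)
print(bay.status)
```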

“This gives us the ability to offer a hybrid approach. Not everything is great for private cloud, and not everything is great for public [cloud],” Parikh said. “If I want to run a highly available deployment, I can now run my workload in multiple places and if something were to go down the workload will still stay live.”

eBay chief cloud engineer: ‘OpenStack needs to do more on scalability, upgradability’

eBay aims to move 100 per cent of its ebay.com service onto OpenStack

OpenStack has improved by leaps and bounds in the past four years but it still leaves much to be desired in terms of upgradeability and manageability, according to Subbu Allamaraju, eBay’s top cloud engineer.

Allamaraju, who was speaking at the OpenStack Summit in Vancouver this week, said the ecommerce giant is a big believer in open source tech when it comes to building out its own internal, dev-and-test and customer-facing services.

In 2012 when the company, which is a 100 per cent KVM and OVS shop, started looking at OpenStack, it decided to deploy on around 300 servers. Now the company has deployed nearly 12,000 hypervisors on 300,000 cores, including 15 virtual private clouds, in 10 availability zones.

“In 2012 we had virtually no automation; in 2014 we still needed to worry about configuration drift to keep the fleet of hypervisors in sync. In 2012, there was also no monitoring,” he said. “We built tools to move workloads between deployments because in the early years there was no clear upgrade path.”

eBay has about 20 per cent of its customer-facing website running on OpenStack, and as of the holiday season this past year processed all PayPal transactions on applications deployed on the platform. The company also hosts significant amounts of data – Allamaraju claims eBay runs one of the largest Hadoop clusters in the world at around 120 petabytes.

But he said the company still faces concerns about deploying at scale, and about upgrading, adding that in 2012 eBay had to build a toolset just to migrate its workloads off the Essex release because no clear upgrade path presented itself.

“In most datacentres, cloud is only running in part of it, but we want to go beyond that. We’re not there yet and we’re working on that,” he said, adding that the company’s goal is to go all-in on OpenStack within the next few years. “But at meetings we’re still hearing questions like ‘does Heat scale?’… these are worrying questions from the perspective of a large operator.”

He also said data from recent user surveys suggests manageability and, in particular, upgradeability – long held to be a significant barrier to OpenStack adoption – are still huge issues.

“Production deployments went up, but 89 per cent are running a code base at least 6 months old, while 55 per cent of operators are running a year-old code base, and 18 per cent are running code bases older than 12 months,” he said. “Lots of people are coming to these summits, but the data suggests many are worried about upgrading.”

“This is an example of manageability missing in action. How do you manage large deployments? How do you manage upgradeability?”