Category Archives: Open Source

Google, OpenStack target containers as Project Magnum gets first glimpse

Otto, Collier and Parikh demoing Magnum at the OpenStack Summit in Vancouver this week

Google and OpenStack are working together to use Linux containers as a vehicle for integrating their respective cloud services and bolstering OpenStack’s appeal to hybrid cloud users.

The move follows a similar announcement made earlier this year by pure-play OpenStack vendor Mirantis and Google to commit to integrating Kubernetes with the OpenStack platform.

OpenStack chief operating officer Mark Collier said the platform needs to embrace heterogeneous workloads as it moves forward, with both containers and bare metal solidly on the agenda for future iterations.

To that end, the community revealed Magnum, which in March became an official OpenStack project. Magnum builds on Heat to provision Nova instances on which to run application containers, and it adds native capabilities (such as support for different scheduling techniques) that enable users and service providers to offer containers-as-a-service.

“As we think about Magnum and how that can take container support to the next level, you’ll hear more about all the different types of technologies available under one common set of APIs. And that’s what users are looking for,” Collier said. “You have a lot of workloads requiring a lot of different technologies to run them at their best, and putting them all together in one platform is a very powerful thing.”

Google’s technical solutions architect Sandeep Parikh and Magnum project leader Adrian Otto (an architect at Rackspace) were on hand to demo a Kubernetes cluster deployment in both Google Compute Engine and the Rackspace public cloud, using the exact same code and Keystone identity federation.

“We’ve had container support in OpenStack for some time now. Recently there’s been NovaDocker, which is for containers we treat as machines, and that’s fine if you just want a small place to put something,” Otto said.

Magnum uses the concept of a bay – the layer where orchestration runs – which Otto said can be used to manage pretty much any Linux container technology, whether it’s Docker, Kubernetes or Mesos.
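To make the bay workflow concrete, here is a rough sketch based on the magnum command-line client as documented around this period. This is a hedged illustration, not the exact commands from the demo; names such as `k8sbaymodel`, the image and the keypair are placeholders.

```shell
# Illustrative Magnum workflow (circa the Kilo cycle); resource names
# below are placeholders, not values from the summit demo.

# 1. Define a baymodel: a template naming the container orchestration
#    engine (COE), guest image and networking to use for bays.
magnum baymodel-create --name k8sbaymodel \
  --image-id fedora-21-atomic-5 \
  --keypair-id testkey \
  --external-network-id public \
  --coe kubernetes

# 2. Create a bay from that model; behind the scenes Magnum drives Heat
#    to boot the Nova instances the containers will run on.
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 2

# 3. Inspect the bay; once it is ready, native tooling such as kubectl
#    can be pointed at the cluster as usual.
magnum bay-show k8sbay
```

Because the COE is just a field on the baymodel, the same workflow applies whether the bay runs Kubernetes, Docker Swarm or Mesos – which is the point of putting all of these behind one set of APIs.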

“This gives us the ability to offer a hybrid approach. Not everything is great for private cloud, and not everything is great for public [cloud],” Parikh said. “If I want to run a highly available deployment, I can now run my workload in multiple places and if something were to go down the workload will still stay live.”

eBay chief cloud engineer: ‘OpenStack needs to do more on scalability, upgradability’

eBay aims to move 100 per cent of its ebay.com service onto OpenStack

OpenStack has improved by leaps and bounds over the past four years, but it still leaves much to be desired in terms of upgradability and manageability, according to Subbu Allamaraju, eBay’s top cloud engineer.

Allamaraju, who was speaking at the OpenStack Summit in Vancouver this week, said the ecommerce giant is a big believer in open source tech when it comes to building out its own internal, dev-and-test and customer-facing services.

In 2012, when the company – a 100 per cent KVM and OVS shop – started looking at OpenStack, it decided to deploy on around 300 servers. The company has since deployed nearly 12,000 hypervisors on 300,000 cores, spanning 15 virtual private clouds in 10 availability zones.

“In 2012 we had virtually no automation; in 2014 we still needed to worry about configuration drift to keep the fleet of hypervisors in sync. In 2012, there was also no monitoring,” he said. “We built tools to move workloads between deployments because in the early years there was no clear upgrade path.”

eBay has about 20 per cent of its customer-facing website running on OpenStack and, as of this past holiday season, processed all PayPal transactions on applications deployed on the platform. The company also hosts significant amounts of data – Allamaraju claims eBay runs one of the largest Hadoop clusters in the world, at around 120 petabytes.

But he said the company still faces concerns about deploying at scale, and about upgrading, adding that in 2012 eBay had to build a toolset just to migrate its workloads off the Essex release because no clear upgrade path presented itself.

“In most datacentres, cloud is only running in part of it, but we want to go beyond that. We’re not there yet and we’re working on that,” he said, adding that the company’s goal is to go all-in on OpenStack within the next few years. “But at meetings we’re still hearing questions like ‘does Heat scale?’… these are worrying questions from the perspective of a large operator.”

He also said data from recent user surveys suggests manageability, and in particular upgradability – long held to be a significant barrier to OpenStack adoption – are still huge issues.

“Production deployments went up, but 89 per cent of operators are running a code base at least six months old, 55 per cent are running a year-old code base, and 18 per cent are running code bases older than 12 months,” he said. “Lots of people are coming to these summits, but the data suggests many are worried about upgrading.”

“This is an example of manageability missing in action. How do you manage large deployments? How do you manage upgradability?”

OpenStack does some soul searching, finds its core self

Bryce: ‘OpenStack will power the planet’s clouds’

The OpenStack Foundation has announced new interoperability and testing requirements, as well as enhancements to the software’s implementation of federated identity, which the Foundation’s executive director Jonathan Bryce says will take the open source cloud platform one step closer to world domination.

OpenStack’s key pitch, beyond being able to spin up scalable compute, storage and networking resources fairly quickly, is that OpenStack-based private clouds should be able to burst into the public cloud or other private cloud instances if need be. That kind of capability is essential if the project is to take on the likes of AWS, VMware and Microsoft, but implementations have so far been quite basic.

But for that kind of interoperability to happen you need three things: the ability to federate the identity of a cloud user, so permissions and workloads can port over to whatever platforms are being deployed on (and so those workloads remain secure); a definition of what vendors, service providers and customers can reliably call core OpenStack, so they can all expect a standard collection of tools, services and APIs in every distribution; and a way to test the interoperability of OpenStack distributions and appliances.
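On the first of those points, Keystone’s federation support works by mapping attributes asserted by an external identity provider onto local users and groups. As a rough, hedged sketch – the group ID and remote attribute below are illustrative placeholders, not values from the deployments described here – a minimal Keystone mapping rule looks something like this:

```json
{
  "rules": [
    {
      "local": [
        {"user": {"name": "{0}"}},
        {"group": {"id": "abc123-federated-users"}}
      ],
      "remote": [
        {"type": "REMOTE_USER"}
      ]
    }
  ]
}
```

Here the identity provider’s `REMOTE_USER` assertion supplies the local user name, and every federated user lands in one pre-created local group whose role assignments determine what the ported workload is allowed to do in the target cloud.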

To that end, the Foundation announced a new OpenStack Powered interoperability testing programme, so users can validate the interoperability of their own deployments and gain assurance that clouds and appliances branded “OpenStack Powered” meet the same requirements. About 16 companies already have certified cloud platforms or appliances available on the OpenStack Marketplace as of this week, and Bryce said there are more to come.

The latest release of OpenStack, Kilo, also brings a number of improvements to federated identity, making it much easier to implement as well as more dynamic in terms of workload deployment, and Bryce said that over 30 companies have committed to implementing federated identity (which has been available since the Icehouse release) by the end of this year – meaning the OpenStack cloud footprint just got a whole lot bigger.

“It has been a massive effort to come to an agreement on what we need to have in these clouds, how to test it,” Bryce said. “It’s a key step towards the goal of realising an OpenStack-powered planet.”

The challenge is that as the code gets bulkier and groups add more services, joining all the bits and making sure they work together – without one component or service breaking another – becomes much more complex. That said, the move marks a significant milestone for DefCore, the internal committee in charge of setting base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. The group has been working for well over a year on a standard definition of what a core OpenStack deployment is.

Dell partners with Pivotal on Cloud Foundry

Dell Services will resell Pivotal CF and advise customers on implementation, app development and migration to different cloud platforms

Dell Services announced a partnership with Pivotal this week that will see the company include Pivotal CF in its digital services portfolio.

The deal will see Dell Services resell Pivotal’s Cloud Foundry distribution as well as advise clients on application development, integration and multi-cloud migration using both Pivotal’s and open source Cloud Foundry.

The companies said the move will help customers enable a DevOps culture within their organisations and speed up application deployment.

“Digital transformation is driving enterprises to develop and deploy applications in an agile manner, thereby creating the need for a new generation of application platforms,” said Raman Sapra, executive director and global head, Dell Digital Business Services.

“Our collaboration with Pivotal expands our digital services portfolio to include development of next-generation, enterprise-class solutions using a leading platform like Pivotal Cloud Foundry to help customers unlock the power of innovation and fast track their digital transformation journey,” Sapra said.

Scott Aronson, senior vice president, worldwide field operations at Pivotal said: “Pivotal Cloud Foundry is emerging as a fundamental enabler of digital transformation as companies are under increased pressure to leverage software to differentiate their business models. Our partnership with Dell Services, a leading and trusted global services provider, will help our customers accelerate their digital transformation journey.”

Mirantis, Pivotal team up on OpenStack, Cloud Foundry integration

Mirantis and Pivotal are working to integrate their commercial deployments of OpenStack and Cloud Foundry, respectively

Pivotal and Mirantis announced this week that the two companies are teaming up to accelerate integration of Cloud Foundry and OpenStack.

As part of the move Pivotal will support Pivotal CF, the company’s commercial distribution of the open source platform-as-a-service, on Mirantis’ distribution of OpenStack.

“Our joint customers are seeking open, bleeding-edge technologies to accelerate their software development and bring new products to market faster,” said James Watters, vice president and general manager of the Cloud Platform Group at Pivotal.

“Now, with Pivotal Cloud Foundry and Mirantis OpenStack, enterprises across various industries can rapidly deliver cloud-native, scalable applications to their customers with minimal risk and maximum ROI,” Watters said.

The move comes just one month after Mirantis announced it would join the Cloud Foundry Foundation in a bid to help drive integration between the two open source platforms. At the time, Alex Freedland, Mirantis co-founder and chairman, said that an essential part of rolling out software to help organisations build their own clouds is making it as easy as possible to deploy and manage technologies “higher up the stack” like Cloud Foundry.

“Enterprises everywhere are adopting a new generation of tools, processes and platforms to help them compete more effectively,” said Boris Renski, Mirantis chief marketing officer and co-founder. “Mirantis and Pivotal have made Pivotal Cloud Foundry deployable on Mirantis OpenStack at the click of a button, powering continuous innovation.”

Joint customers can install Pivotal Cloud Foundry onto Mirantis OpenStack using the companies’ deployment guide, but the two firms are also working towards adding a full Pivotal CF installation to Murano, the OpenStack application catalogue, in the next release.

Citrix, bowing to momentum, joins OpenStack

Citrix is rejoining OpenStack, the open source cloud project it abandoned for its own rival initiative

Virtualisation specialist Citrix has announced it is officially joining the OpenStack Foundation as a corporate sponsor – the open source organisation it left four years ago in order to pursue the rival CloudStack initiative.

Citrix said it had contributed to the OpenStack community fairly early on, but wanted to re-join the community in order to more formally demonstrate its commitment towards cloud interoperability and standards development.

As part of the announcement the company also said it has integrated NetScaler and XenServer with OpenStack.

“We’re pleased to formally sponsor the OpenStack Foundation to help drive cloud interoperability standards. Citrix products like NetScaler, through the recently announced NetScaler Control Center, and XenServer, are already integrated with OpenStack,” said Klaus Oestermann, senior vice president and general manager, delivery networks at Citrix.

“Our move to support the OpenStack community reflects the great customer and partner demand for Citrix to bring the value of our cloud and networking infrastructure products to customers running OpenStack,” Oestermann added.

Citrix is one of the biggest backers of CloudStack, an Apache open source project that rivals OpenStack. Citrix was aligned with OpenStack at the outset but in 2012 ended its commitment to that project in order to pursue CloudStack development.

That said, the move would suggest Citrix is aware it can’t continue going against the grain too long when it comes to vendor and customer mind-share. OpenStack, despite all of its own internal politics and technological gaps, seems to have far more developers involved than CloudStack. It also has more buy-in from vendors.

All of this is to say that going the CloudStack route exclusively is counterintuitive, especially in cloud, which is all about heterogeneity (meaning interoperability is, or should be, among the top priorities of the vendors involved). But Citrix maintains that it will continue to invest in CloudStack development.

Laurent Lachal, lead analyst in Ovum’s software practice, told BCN the move is a classic case of “if you can’t beat ’em, join ’em”.

“But there needs to be more clarity around how OpenStack fits with CloudStack,” he explained. “The CloudStack initiative is no longer as dependent on Citrix as it used to be, which is a good thing. But the project still needs to get its act together.”

VMware open sources IAM, cloud OS tools

VMware is open sourcing cloud tools

VMware has open sourced two sets of tools that the company said will accelerate cloud adoption in the enterprise and improve enterprises’ security posture.

The company announced Project Lightwave, which the company is pitching as the industry’s first container identity and access management tool for cloud-native applications, and Project Photon, a lightweight Linux operating system optimised for running these kinds of apps in vSphere and vCloud Air.

The move follows Pivotal’s recent launch of Lattice, a container cluster scheduler for Cloud Foundry that the software firm is pitching as a more modular way of building apps, exposing CF components as standalone microservices (thus making apps built with Lattice easier to scale).

“Through these projects VMware will deliver on its promise of support for any application in the enterprise – including cloud-native applications – by extending our unified platform with Project Lightwave and Project Photon,” said Kit Colbert, vice president and chief technology officer for Cloud-Native Applications, VMware.

“Used together, these new open source projects will provide enterprises with the best of both worlds. Developers benefit from the portability and speed of containerized applications, while IT operations teams can maintain the security and performance required in today’s business environment,” Colbert said.

Earlier this year VMware went on the container offensive, announcing an updated vSphere platform that would enable users to run Linux containers side by side with traditional VMs as well as its own distribution of OpenStack.

The latest announcement – particularly Lattice – is part of a broader industry trend that sees big virtualisation incumbents embrace a more modular, cloud-friendly architecture (which many view as synonymous with containers) in their offerings. This week one of VMware’s chief rivals in this area, Microsoft, announced its own container-like architecture for Azure following a series of moves to improve support for Docker on its on-premise and cloud platforms.

Microsoft debuts container-like architecture for cloud

Microsoft is trying to push more cloud-friendly architectures

Microsoft has announced Azure Service Fabric, a framework for ISVs and startups developing highly scalable cloud applications which combines a range of microservices, orchestration, automation and monitoring tools. The move comes as the software company looks to deepen its use of – and ties to – open source tech.

Azure Service Fabric, which is based in part on technology included in Azure App Fabric, breaks apart apps into a wide range of small, independently versioned microservices, so that apps created on the platform don’t need to be re-coded in order to scale past a certain point. The result, the company said, is the ability to develop highly scalable applications while enabling low-level automation and orchestration of its constituent services.

“Service Fabric was born from our years of experience delivering mission-critical cloud services and has been in production for more than five years. It provides the foundational technology upon which we run our Azure core infrastructure and also powers services like Skype for Business, Intune, Event Hubs, DocumentDB, Azure SQL Database (across more than 1.4 million customer databases) and Bing Cortana – which can scale to process more than 500 million evaluations per second,” explained Mark Russinovich, chief technology officer of Microsoft Azure.

“This experience has enabled us to design a platform that intrinsically understands the available infrastructure resources and needs of applications, enabling automatically updating, self-healing behaviour that is essential to delivering highly available and durable services at hyper-scale.”

A preview of the service will be released to developers at the company’s Build conference next week.

The move is part of a broader architectural shift in the software stack powering cloud services today. It’s clear the traditional OS / hypervisor model is limited in its ability to keep services scalable and resilient for high I/O applications, which has manifested in, among other things, a shift towards breaking applications down into a series of connected microservices – something many equate with Docker and OpenStack, among other open source software projects.

Speaking of open source, the move comes just days after Microsoft announced MS Open Tech, the standalone open source subsidiary of Microsoft, will re-join the company, in a move the company hopes will drive further engagement with open source communities.

“The goal of the organization was to accelerate Microsoft’s open collaboration with the industry by delivering critical interoperable technologies in partnership with open source and open standards communities. Today, MS Open Tech has reached its key goals, and open source technologies and engineering practices are rapidly becoming mainstream across Microsoft. It’s now time for MS Open Tech to rejoin Microsoft Corp, and help the company take its next steps in deepening its engagement with open source and open standards,” explained Jean Paoli, president of Microsoft Open Technologies.

“As MS Open Tech rejoins Microsoft, team members will play a broader role in the open advocacy mission with teams across the company, including the creation of the Microsoft Open Technology Programs Office. The Programs Office will scale the learnings and practices in working with open source and open standards that have been developed in MS Open Tech across the whole company.”

Pivotal punts Geode to ASF to consolidate leadership in open source big data

Pivotal is looking to position itself as a front runner in open source big data

Pivotal has proposed “Project Geode” for incubation by the Apache Software Foundation; the project would focus on developing Geode, the in-memory database technology at the core of Pivotal’s GemFire offering.

Geode can support ACID transactions for large-scale applications such as those used for stock trading, financial payments and ticket sales, and the company said the technology is already proven in customer deployments handling more than 10 million user transactions a day.

In February Pivotal announced it would open source much of its big data suite including GemFire, which the company will continue to support commercially. The move is part of a broader plan to consolidate its leadership in the open source big data ecosystem, where companies like Hortonworks are also trying to make waves.

The company also recently helped launch the Open Data Platform, which seeks to promote big data tech standardisation, and combat fragmentation around how Hadoop is deployed in enterprises and built upon by ISVs.

In the meantime, while the company said it would wait for the ASF’s decision, Pivotal has already put out a call to developers as it seeks early contributions to give the project a head start.

“The open sourcing of core components of products in the Pivotal Big Data Suite heralds a new era of how big data is done in the enterprise. Starting with core code in Pivotal GemFire, the components we intend to contribute to the open source community are already performing in the most hardened and demanding enterprise environments,” said Sundeep Madra, vice president, Data Product Group at Pivotal.

“Geode is an important part of building solutions for next generation data infrastructures and we welcome the community to join us in furthering Geode’s already compelling capabilities,” Madra said.

Converged OpenStack cloud pioneer Nebula closes its doors

Nebula, an OpenStack pioneer, is closing its doors

Converged infrastructure vendor Nebula, one of the first companies to pioneer integrated OpenStack-based private cloud hardware, announced it will close its doors this week.

A notice posted by the Nebula management team on its website says the company had no choice but to cease operations after exhaustively searching for alternative arrangements that would allow the company to keep operating.

“When we started this journey four years ago, we set out to usher in a new era of cloud computing by curating and productizing OpenStack for the enterprise. We are incredibly proud of the role we had in establishing Nebula as the leading enterprise cloud computing platform. At the same time, we are deeply disappointed that the market will likely take another several years to mature. As a venture backed start up, we did not have the resources to wait.”

“Nebula private clouds deployed at customer sites will continue to operate normally, however support will no longer be available. Nebula is based on OpenStack and is compatible with OpenStack products from vendors including Red Hat, IBM, HP and others, providing customers with a number of choices moving forward.”

One of the original contributors to the OpenStack codebase, Nebula offered Nebula Cosmos, a fast and secure deployment, management and monitoring tool for enterprise-grade OpenStack private clouds, as well as converged infrastructure based on x86 servers running OpenStack – the Nebula One.

Nearly five years after the creation of OpenStack, the market is clearly still in its early stages despite loads of vendor hype and a flurry of acquisitions in the space. Indeed, the first challenge for independents like Nebula is gaining critical mass and maintaining operations – at least before being acquired by the likes of Cisco, Red Hat and HP, which have snapped up OpenStack startups in recent years in a bid to grow their portfolios around the open source platform; the second is, of course, competing with the Ciscos, Red Hats and HPs of the world, which is no small feat.