Category archive: OpenStack

OpenStack does some soul searching, finds its core self

Bryce: 'OpenStack will power the planet's clouds'

The OpenStack Foundation announced new interoperability and testing requirements as well as enhancements to the software’s implementation of federated identity which the Foundation’s executive director Jonathan Bryce says will take the open source cloud platform one step closer to world domination.

OpenStack’s key pitch, beyond being able to spin up scalable compute, storage and networking resources fairly quickly, is that OpenStack-based private clouds should be able to burst into public clouds or other private cloud instances if need be. That kind of capability is essential if the platform is going to take on the likes of AWS, VMware and Microsoft, but its implementation has so far been quite basic.

But for that kind of interoperability to happen you need three things: the ability to federate the identity of a cloud user, so permissions and workloads can port over to whatever platforms are being deployed on (and so those workloads remain secure); a definition of what vendors, service providers and customers can reliably call core OpenStack, so they can all expect a standard collection of tools, services and APIs in every distribution; and a way to test the interoperability of OpenStack distributions and appliances.
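The federation piece hinges on mapping rules in Keystone, OpenStack's identity service, which translate assertions from an external identity provider into local users and groups. As a rough illustration only (the group ID below is hypothetical, and real deployments pair this with a SAML or OpenID Connect module), a mapping rule has roughly this shape, expressed here as a Python dict:

```python
# Sketch of a Keystone federation mapping rule. It maps an attribute
# asserted by an external identity provider (REMOTE_USER, as set by a
# SAML web-server module) onto a local user and group membership.
# The group ID is a hypothetical placeholder.
mapping = {
    "rules": [
        {
            "local": [
                {"user": {"name": "{0}"}},             # {0} = first remote match below
                {"group": {"id": "EXAMPLE_GROUP_ID"}}  # hypothetical group
            ],
            "remote": [
                {"type": "REMOTE_USER"}                # attribute asserted by the IdP
            ]
        }
    ]
}

# The "{0}" placeholder is substituted with the value matched by the
# first entry in the "remote" list, i.e. the federated username.
print(mapping["rules"][0]["remote"][0]["type"])
```

Once a rule like this is registered against an identity provider, a user authenticated elsewhere can be issued a scoped OpenStack token without a local account being pre-created, which is what makes workload portability across clouds practical.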

To that end, the Foundation announced a new OpenStack Powered interoperability testing programme, so users can validate the interoperability of their own deployments as well as gain assurances from vendors that clouds and appliances branded as “OpenStack Powered” meet the same requirements. About 16 companies already have either certified cloud platforms or appliances available on the OpenStack Marketplace as of this week, and Bryce said there’s more to come.

The latest release of OpenStack, Kilo, also brings a number of improvements to federated identity, making it much easier to implement as well as more dynamic in terms of workload deployment. Bryce said over 30 companies have committed to implementing federated identity (which has been available since the Icehouse release) by the end of this year – meaning the OpenStack cloud footprint just got a whole lot bigger.

“It has been a massive effort to come to an agreement on what we need to have in these clouds, how to test it,” Bryce said. “It’s a key step towards the goal of realising an OpenStack-powered planet.”

The challenge is, as the code gets bulkier and as groups add more services, joining all the bits and making sure they work together without one component or service breaking another becomes much more complex. That said, the move marks a significant milestone for the DefCore group, the internal committee in charge of setting base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. The group have been working for well over a year on developing a standard definition of what a core OpenStack deployment is.

TD Bank uses cloud as catalyst for cultural change in IT

Peacock said TD Bank is using the cloud as a way to help catalyse cultural change at the firm

North American retail banking outfit TD Bank is using OpenStack among a range of other open source cloud technologies to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.

Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its 4,000-strong application estate moved over to the cloud over the next five years.

“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.

But the company’s legacy of acquisition over the past decade has shaped the evolution of both the technology and systems in place at the bank as well as the IT culture and the way those systems and technologies are managed.

“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.

“There are a lot of vendors here and I’m fairly certain we’ve bought at least one of everything you’ve ever made. That’s led to the landscape that we’ve had, which has lots of customisation. It’s very expensive and there is little reuse.”

Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services and onto more open, standardised commodity platforms. OpenStack is but one infrastructure-centric tool helping the bank deliver on that goal (it’s using it to stand up an internal private cloud). But to reach its goals the company also has to deal with other legacies of its recent string of acquisitions, including the fact that its development teams are still quite siloed.

In order to standardise and reduce the number of services the firm’s developers use, the bank created an engineering centre in Manhattan and assembled a team of engineers and developers (currently numbering 30, expected to hit roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.

The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.

“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.

“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

Mirantis, Pivotal team up on OpenStack, Cloud Foundry integration

Mirantis and Pivotal are working to integrate their commercial deployments of OpenStack and Cloud Foundry, respectively

Pivotal and Mirantis announced this week that the two companies are teaming up to accelerate integration of Cloud Foundry and OpenStack.

As part of the move Pivotal will support Pivotal CF, the company’s commercial distribution of the open source platform-as-a-service, on Mirantis’ distribution of OpenStack.

“Our joint customers are seeking open, bleeding-edge technologies to accelerate their software development and bring new products to market faster,” said James Watters, vice president and general manager of the Cloud Platform Group at Pivotal.

“Now, with Pivotal Cloud Foundry and Mirantis OpenStack, enterprises across various industries can rapidly deliver cloud-native, scalable applications to their customers with minimal risk and maximum ROI,” Watters said.

The move comes just one month after Mirantis announced it would join the Cloud Foundry Foundation in a bid to help drive integration between the two open source platforms. At the time, Alex Freedland, Mirantis co-founder and chairman said an essential part of rolling out software to help organisations build their own clouds includes making it as easy as possible to deploy and manage technologies “higher up the stack” like Cloud Foundry.

“Enterprises everywhere are adopting a new generation of tools, processes and platforms to help them compete more effectively,” said Boris Renski, Mirantis chief marketing officer and co-founder. “Mirantis and Pivotal have made Pivotal Cloud Foundry deployable on Mirantis OpenStack at the click of a button, powering continuous innovation.”

Joint customers can install Pivotal Cloud Foundry onto Mirantis OpenStack using the companies’ deployment guide, but the two companies are working towards adding a full Pivotal CF installation to Murano, OpenStack’s application catalogue, in the next OpenStack release.

Microsoft targets customer datacentres with Azure Stack

Microsoft is bolstering its hybrid cloud appeal on the one hand, and going head to head with other large incumbents on the other

Microsoft revealed a series of updates to its server and cloud technologies aimed at bridging the divide between Azure and Windows Server.

The company announced Azure Stack, software that consists of the architecture and microservices deployed by Microsoft to run its public-cloud version of Azure, including some of the latest updates to the platform like Azure Service Fabric and Azure App Fabric – which have made the architecture much more container-like.

Built on the same core technology as Azure but deployed in a customer’s datacentre, the company said Azure Stack makes critical use of among other things some of the company’s investments in software-defined networking.

The company also said it has worked a number of bugs out of the next version of Windows Server (2016), with the second preview being made available this week. Windows Server 2016 will include a number of updates announced last month, including Hyper-V containers and Nano Server, which are effectively Dockerised and slimmed-down Windows Server images, respectively.

Azure Stack will preview this summer and Windows Server 2016 is already available for preview.

The company also announced Microsoft Operations Management Suite (OMS), a hybrid cloud management service that supports Azure, AWS, Windows Server, Linux, VMware, and OpenStack.

For Microsoft the updates are a sign of a significant push into hybrid cloud as it looks to align the architecture of its Windows Server and Azure offerings and help customers manage workloads and operations in a multi-cloud world. Interestingly, by taking the Azure architecture directly to customer datacentres it’s effectively going head-to-head with other IaaS software vendors selling alternatives like OpenStack and CloudStack – Dell, HP, Cisco, Red Hat, IBM and so forth – which is in some ways new territory for the cloud giant.

OpenStack Kilo ships with identity federation, storage improvements, bare-metal service

OpenStack Kilo is out, but some challenges persist

OpenStack Kilo is out, but some organisations may think twice about deployment

The OpenStack community released the eleventh version of the open source platform this week, codenamed Kilo, which ships with loads of improvements including new management APIs, security improvements for NFV, and the first full release of the bare metal cloud service. But a number of challenges still conspire to make the platform difficult to implement for some organisations.

The organisation has added improvements across the board, including:

  • Nova Compute: New API versioning management with v2.1 and microversions, which makes it easier to write long-lived applications against compute functionality. Operational improvements include live upgrades when a database schema change is required and better support for changing the resources of a running VM.
  • Swift Object Storage: Erasure coding provides efficient and cost-effective storage, and container-level temporary URLs allow time-limited access to a set of objects in a container. The latest release also brings improvements to global cluster replication and storage policy metrics.
  • Cinder Block Storage: Major updates to testing and validation requirements for backend storage systems across 70 options. Users can now attach a volume to multiple compute instances for high-availability and migration use cases.
  • Neutron Networking: The load-balancing-as-a-service API is now in its second version. The community also added features supporting NFV, such as port security for Open vSwitch, VLAN transparency and MTU API extensions.
  • Ironic Bare-Metal Provisioning: The first full release of the Ironic bare-metal provisioning project, supporting existing VM-style workloads as well as Linux containers, platform-as-a-service and NFV.
  • Keystone Identity Service: Identity federation enhancements to support hybrid workloads in multi-cloud environments.
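The erasure coding mentioned in the Swift bullet above trades a little CPU for far lower storage overhead than full replication: an object is split into data fragments plus parity fragments, and any lost fragment can be rebuilt from the survivors. Swift itself uses Reed-Solomon-style schemes via liberasurecode, but the underlying idea can be sketched with a single XOR parity fragment (a toy illustration, not Swift's actual code):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal fragments plus one XOR parity fragment.
    Toy scheme: tolerates the loss of any single fragment."""
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    frags = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:
        parity = xor_bytes(parity, f)
    return frags, parity

def recover(frags, parity, lost_index):
    """Rebuild the fragment at lost_index by XOR-ing all survivors
    with the parity fragment."""
    acc = parity
    for i, f in enumerate(frags):
        if i != lost_index:
            acc = xor_bytes(acc, f)
    return acc

frags, parity = encode(b"swift object payload", k=4)
rebuilt = recover(frags, parity, lost_index=2)
assert rebuilt == frags[2]
```

Production erasure codes generalise this to m parity fragments, tolerating m simultaneous losses, so the storage overhead is (k+m)/k of the object size rather than the 3x cost of triple replication – which is why the feature is pitched as cost-effective.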

Kilo succeeds Juno, which was released in October last year. Liberty, the version currently under development, is due out in October this year.

The OpenStack Foundation said deployments of OpenStack are growing, with production deployments accounting for about half of those. But the project still needs to improve in some areas – which reflects the maturity of the platform more than anything.

“One thing I would definitely call out when considering OpenStack versus something like AWS is that with the former we have the ability to take advantage of innovation more rapidly than would be the case with the latter,” Michael Yoon, chief technology officer of MiMedia, an OpenStack user, told BCN. “So specifically, things like erasure coding, SMR technology, kinetic drive and object storage at the drive level, these are all making a very serious impact when comparing one solution with another.”

“But the container strategy hasn’t really been there in Swift. Also upgradability is certainly a challenge – one that’s been improved upon since the early releases, but still needs a lot of work.”

One of the biggest challenges, Yoon explained, is that there aren’t enough vendor-agnostic best-practice and architectural guides out there today. So the overhead, in terms of the initial research required to stand up an OpenStack cloud in the first place, is high.

“There’s a decent amount of research you need to do if you’re looking to get into this, and the problem is every vendor has their version, stocked with their own IP, of what makes a performant OpenStack distribution; each has a set of best practices and there’s still a fair amount of having to wade through it all to make things less vendor specific,” Yoon added.

This challenge seems to be highlighted in the latest cloud research from 451 Research. According to the analyst house’s latest Cloud Price Index (CPI) report, the TCO of proprietary commercial cloud management offerings is less than that of OpenStack distributions because of the cost of hiring additional manpower to implement the latter.

“The proprietary offering’s TCO benefit is simply the result of the high cost of OpenStack engineers – the distributions themselves are priced lower than the proprietary offerings,” the report explained. “With OpenStack, migration should, on paper, be less expensive, but it will be made more difficult than necessary due to a lack of federation among providers and the numerous OpenStack reference architectures.”

Nevertheless, the newly announced features may help improve the attractiveness of the platform among organisations.

“OpenStack continues to grow, and features like federated identity and bare metal provisioning support make the platform more compelling for enterprise IT leadership and application developers who want a stable, open source alternative to proprietary options,” said Al Sadowski, research director at 451 Research and one of the lead authors on the CPI report.

Telefónica: Unifying cloud and comms can make us more efficient

Juan Manuel Moreno, global cloud director at Telefónica

Telefónica is working to roll out a wide range of internally focused and customer-facing cloud services on both its own UNICA and OpenStack platforms, but the company’s global cloud director Juan Manuel Moreno said the challenges of shifting such a large company aren’t purely technical.

Moreno, who was speaking at the Telco Cloud Forum in London Tuesday, said the company has been in the process of transforming its technology platforms and internal operations for some time.

It’s no secret telcos like Telefónica have in recent years worked to strengthen their cloud capabilities in a bid to broaden their service portfolios and battle dwindling voice and data revenues – and to regain ground lost to more nimble OTT players.

“We are reliant on the full ecosystem of providers of technology… using technology partners to build these new services. But it’s a challenge because these players are moving so quickly,” Moreno said. “One of the biggest challenges in this space is that everyone and everything is moving so quickly.”

Much of what the company has done externally was intricately linked with its efforts to virtualise core elements of its own networks, plans it originally set out in 2014, with the aim of virtualising 30 per cent of all new infrastructure by 2016. The company took many of those platforms to use as the foundation for its customer-facing cloud services.

“Unifying communications and cloud or IT is an opportunity to be more efficient, to discover more services, and of course to develop a broad portfolio of services,” he explained.

But he was fairly candid about the challenges involved with moving Telefónica, a large, global service provider with very heterogeneous technology platforms and business processes, over to a more automated, cloud-centric operational model and product portfolio.

He said overcoming the business process-related challenges would also be key if the company is to effectively capture more revenue from its traditional base as well as new segments (he said Telefónica sees SMBs as the largest commercial opportunity for the foreseeable future).

“In cloud the technology is available for everyone now… But we still have to transform internally in order to really strengthen our market position,” he added.

Citrix, bowing to momentum, joins OpenStack

Citrix is rejoining OpenStack, the open source cloud project it abandoned for its own rival initiative

Virtualisation specialist Citrix has announced it is officially joining the OpenStack Foundation as a corporate sponsor, the open source organisation it left four years ago in order to pursue the rival CloudStack initiative.

Citrix said it had contributed to the OpenStack community fairly early on, but wanted to re-join the community in order to more formally demonstrate its commitment towards cloud interoperability and standards development.

As part of the announcement the company also said it has integrated NetScaler and XenServer with OpenStack.

“We’re pleased to formally sponsor the OpenStack Foundation to help drive cloud interoperability standards. Citrix products like NetScaler, through the recently announced NetScaler Control Center, and XenServer, are already integrated with OpenStack,” said Klaus Oestermann, senior vice president and general manager, delivery networks at Citrix.

“Our move to support the OpenStack community reflects the great customer and partner demand for Citrix to bring the value of our cloud and networking infrastructure products to customers running OpenStack,” Oestermann added.

Citrix is one of the biggest backers of CloudStack, an Apache open source project that rivals OpenStack. Citrix was aligned with OpenStack at the outset but in 2012 ended its commitment to that project in order to pursue CloudStack development.

That said, the move would suggest Citrix is aware it can’t continue going against the grain too long when it comes to vendor and customer mind-share. OpenStack, despite all of its own internal politics and technological gaps, seems to have far more developers involved than CloudStack. It also has more buy-in from vendors.

All of this is to say, going the CloudStack route exclusively is counterintuitive, especially in cloud, which is all about heterogeneity – meaning interoperability is, or should be, among the top priorities of the vendors involved. But Citrix maintains that it will continue to invest in CloudStack development.

Laurent Lachal, lead analyst in Ovum’s software practice told BCN the move is a classic case of “if you can’t beat ‘em, join ‘em.”

“But there needs to be more clarity around how OpenStack fits with CloudStack,” he explained. “The CloudStack initiative is no longer as dependent on Citrix as it used to be, which is a good thing. But the project still needs to get its act together.”

VMware open sources IAM, cloud OS tools

VMware is open sourcing cloud tools

VMware has open sourced two sets of tools the company said would accelerate cloud adoption in the enterprise and improve organisations’ security posture.

The company announced Project Lightwave, which the company is pitching as the industry’s first container identity and access management tool for cloud-native applications, and Project Photon, a lightweight Linux operating system optimised for running these kinds of apps in vSphere and vCloud Air.

The move follows Pivotal’s recent launch of Lattice, a container cluster scheduler for Cloud Foundry that the software firm is pitching as a more modular way of building apps, exposing CF components as standalone microservices (thus making apps built with Lattice easier to scale).

“Through these projects VMware will deliver on its promise of support for any application in the enterprise – including cloud-native applications – by extending our unified platform with Project Lightwave and Project Photon,” said Kit Colbert, vice president and chief technology officer for Cloud-Native Applications, VMware.

“Used together, these new open source projects will provide enterprises with the best of both worlds. Developers benefit from the portability and speed of containerized applications, while IT operations teams can maintain the security and performance required in today’s business environment,” Colbert said.

Earlier this year VMware went on the container offensive, announcing an updated vSphere platform that would enable users to run Linux containers side by side with traditional VMs as well as its own distribution of OpenStack.

The latest announcement – particularly Lattice – is part of a broader industry trend that sees big virtualisation incumbents embrace a more modular, cloud-friendly architecture (which many view as synonymous with containers) in their offerings. This week one of VMware’s chief rivals in this area, Microsoft, announced its own container-like architecture for Azure following a series of moves to improve support for Docker on its on-premise and cloud platforms.

Microsoft debuts container-like architecture for cloud

Microsoft is trying to push more cloud-friendly architectures

Microsoft has announced Azure Service Fabric, a framework for ISVs and startups developing highly scalable cloud applications which combines a range of microservices, orchestration, automation and monitoring tools. The move comes as the software company looks to deepen its use of – and ties to – open source tech.

Azure Service Fabric, which is based in part on technology included in Azure App Fabric, breaks apart apps into a wide range of small, independently versioned microservices, so that apps created on the platform don’t need to be re-coded in order to scale past a certain point. The result, the company said, is the ability to develop highly scalable applications while enabling low-level automation and orchestration of its constituent services.

“Service Fabric was born from our years of experience delivering mission-critical cloud services and has been in production for more than five years. It provides the foundational technology upon which we run our Azure core infrastructure and also powers services like Skype for Business, Intune, Event Hubs, DocumentDB, Azure SQL Database (across more than 1.4 million customer databases) and Bing Cortana – which can scale to process more than 500 million evaluations per second,” explained Mark Russinovich, chief technology officer of Microsoft Azure.

“This experience has enabled us to design a platform that intrinsically understands the available infrastructure resources and needs of applications, enabling automatically updating, self-healing behaviour that is essential to delivering highly available and durable services at hyper-scale.”

A preview of the service will be released to developers at the company’s Build conference next week.

The move is part of a broader architectural shift in the software stack powering cloud services today. It’s clear the traditional OS / hypervisor model is limited in its ability to ensure services are scalable and resilient for high-I/O applications, which has manifested in, among other things, a shift towards breaking applications down into a series of connected microservices – a shift many associate with Docker and OpenStack, among other open source software projects.

Speaking of open source, the move comes just days after Microsoft announced MS Open Tech, the standalone open source subsidiary of Microsoft, will re-join the company, in a move the company hopes will drive further engagement with open source communities.

“The goal of the organization was to accelerate Microsoft’s open collaboration with the industry by delivering critical interoperable technologies in partnership with open source and open standards communities. Today, MS Open Tech has reached its key goals, and open source technologies and engineering practices are rapidly becoming mainstream across Microsoft. It’s now time for MS Open Tech to rejoin Microsoft Corp, and help the company take its next steps in deepening its engagement with open source and open standards,” explained Jean Paoli, president of Microsoft Open Technologies.

“As MS Open Tech rejoins Microsoft, team members will play a broader role in the open advocacy mission with teams across the company, including the creation of the Microsoft Open Technology Programs Office. The Programs Office will scale the learnings and practices in working with open source and open standards that have been developed in MS Open Tech across the whole company.”

Hortonworks buys SequenceIQ to speed up cloud deployment of Hadoop

SequenceIQ will help boost Hortonworks’ position in the Hadoop ecosystem

Hortonworks has acquired SequenceIQ, a Hungary-based startup delivering infrastructure agnostic tools to improve Hadoop deployments. The company said the move will bolster its ability to offer speedy cloud deployments of Hadoop.

SequenceIQ’s flagship offering, Cloudbreak, is a Hadoop-as-a-Service API for multi-tenant clusters that applies some of the capabilities of Ambari Blueprints (which let you create a Hadoop cluster without having to use the Ambari Cluster Install Wizard) and Periscope (autoscaling for Hadoop YARN) to help speed up deployment of Hadoop on different cloud infrastructures.
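An Ambari blueprint is essentially a JSON document declaring which stack components land on which host groups; a tool like Cloudbreak pairs such a blueprint with cloud credentials to stamp out identical clusters on any infrastructure. A minimal sketch follows, built as a Python dict – the stack version, component lists and group names are illustrative, not a tested production layout:

```python
import json

# Minimal sketch of an Ambari-style blueprint: one master host group
# and one scalable worker group. Component names and stack version are
# illustrative placeholders, not a vetted topology.
blueprint = {
    "Blueprints": {
        "blueprint_name": "tiny-hdp",   # hypothetical name
        "stack_name": "HDP",
        "stack_version": "2.2",
    },
    "host_groups": [
        {"name": "master", "cardinality": "1",
         "components": [{"name": "NAMENODE"},
                        {"name": "RESOURCEMANAGER"}]},
        {"name": "worker", "cardinality": "1+",  # grown/shrunk by autoscaling policies
         "components": [{"name": "DATANODE"},
                        {"name": "NODEMANAGER"}]},
    ],
}

# Serialised, this is the sort of payload a deployment tool would
# register with Ambari before requesting a cluster from it.
payload = json.dumps(blueprint, indent=2)
print(len(payload) > 0)
```

Because the blueprint names host groups rather than specific machines, the same document works against Azure, AWS, GCP or OpenStack instances – which is what makes the "infrastructure agnostic" claim above concrete.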

The two companies have partnered extensively in the Hadoop community, and Hortonworks said the move will enhance its position among a growing number of Hadoop incumbents.

“This acquisition enriches our leadership position by providing technology that automates the launching of elastic Hadoop clusters with policy-based auto-scaling on the major cloud infrastructure platforms including Microsoft Azure, Amazon Web Services, Google Cloud Platform, and OpenStack, as well as platforms that support Docker containers. Put simply, we now provide our customers and partners with both the broadest set of deployment choices for Hadoop and quickest and easiest automation steps,” Tim Hall, vice president of product management at Hortonworks, explained.

“As Hortonworks continues to expand globally, the SequenceIQ team further expands our European presence and firmly establishes an engineering beachhead in Budapest. We are thrilled to have them join the Hortonworks team.”

Hall said the company also plans to contribute the Cloudbreak code back into the Apache Foundation sometime this year, though whether it will do so as part of an existing project or standalone one seems yet to be decided.

Hortonworks’ bread and butter is in supporting enterprise adoption of Hadoop and bringing the services component to the table, but it’s interesting to see the company commit to feeding the Cloudbreak code – which could, at least temporarily, give it a competitive edge – back into the ecosystem.

“This move is in line with our belief that the fastest path to innovation is through open source developed within an open community,” Hall explained.

The big data M&A space has seen more consolidation over the past few months, with Hitachi Data Systems acquiring big data and analytics specialist Pentaho and Infosys’ $200m acquisition of Panaya.