CERN, Rackspace to harden federated cloud reference architecture

CERN and Rackspace want to create standard templates for an OpenStack cloud of clouds

Rackspace and CERN openlab announced plans to redouble their efforts to create a reference architecture for a federated cloud service model.

The earliest implementations of Keystone federation – the mechanism in OpenStack for OpenStack-to-OpenStack identity authentication and cloud federation – came out of a collaboration between CERN and Rackspace, and the two organisations now plan to extend those efforts and create standardised templates for cloud orchestration.
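For readers who want a sense of what Keystone-to-Keystone federation looks like in practice, the sketch below uses the keystoneauth1 Python library to trade a token from a "home" cloud for one on a federated remote cloud. It is illustrative only: the URLs, credentials and service provider ID are hypothetical, and plugin behaviour can vary between OpenStack releases.

```python
# Minimal sketch of Keystone-to-Keystone (K2K) federation with the
# keystoneauth1 library. All URLs, names and IDs are hypothetical.
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Step 1: authenticate normally against the home cloud's Keystone.
home_auth = v3.Password(
    auth_url='https://home-cloud.example.com:5000/v3',
    username='researcher',
    password='secret',
    project_name='physics',
    user_domain_id='default',
    project_domain_id='default',
)

# Step 2: exchange that identity for a scoped token on a federated
# remote cloud, identified by the service provider ID registered in
# the home cloud's Keystone.
k2k_auth = v3.Keystone2Keystone(
    home_auth,
    'remote-cloud-sp',            # hypothetical service provider ID
    project_name='physics',       # scope requested in the remote cloud
    project_domain_id='default',
)

sess = session.Session(auth=k2k_auth)
print(sess.get_token())  # token usable against the remote cloud's APIs
```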

“More companies are now looking to use multiple clouds to effectively serve the range of workloads they run – blending low-cost, high-performance, enhanced security and optimised environments,” says Giri Fox, Rackspace’s director of customer technology services. “But, we are still seeing the complexity businesses are facing to integrate just one cloud into their business. Federation is an opportunity to re-use that initial integration for future clouds you want to run your business on, making multi-cloud a business benefit choice rather than a business cost one.”

For those unfamiliar with CERN, the European Organization for Nuclear Research: it operates the Large Hadron Collider, which during its intermittent test runs produces over 30 petabytes of raw data per year, all of which needs to be processed and made available in near real time to physicists around the world.

But CERN, like many research organisations, is resource constrained, so relying on a federated set of infrastructure to get all of that processing done can help it overcome the capacity limits of its own datacentres. The organisation already runs multiple OpenStack clouds in Europe that need to be accessed by thousands of researchers, so it has a strong incentive to develop a robust, open model for cloud federation.

“Our CERN openlab mission is to work with industry partners to develop open, standard solutions to the challenges faced by the worldwide LHC community. These solutions also often play a key role in addressing tomorrow’s business challenges,” said Tim Bell, infrastructure manager in the IT department at CERN.

“After our work on identity federation with Rackspace, this is a very important step forward. For CERN, being able to move compute workloads around the world is essential for ongoing collaboration and discovery,” Bell said.

SingleHop buys Datagram to bolster enterprise private cloud strategy

SingleHop has acquired Datagram to strengthen its private cloud strategy

Hosting and cloud service provider SingleHop acquired infrastructure specialist Datagram this week, a move the company says will allow it to expand more quickly into the US hosted private cloud market.

The acquisition comes just a couple of months after SingleHop acquired a similar infrastructure specialist, Server Intellect, which gave the company strong expertise in legacy Microsoft server technology and in Microsoft’s cloud services, two areas that increasingly overlap.

Datagram provides (mostly VMware-based) hosted private cloud services as well as disaster recovery and colocation, and operates out of five datacentres based in New York, Connecticut, Chicago, Phoenix and Amsterdam, with additional POPs in New Jersey and California.

“Datagram and SingleHop share the same vision of making best-of-breed technology easy to deploy and use for enterprise customers,” said Zak Boca, chief executive of SingleHop.

“The acquisition comes at a time when many enterprises, especially those in the media and entertainment space, are looking for ways to reduce their capital costs, increase their agility and offload routine IT functions to providers. As we move forward, SingleHop is actively considering additional acquisitions that add strategic, accretive benefits to our long term mission of providing the most complete suite of managed hosting and private cloud solutions,” Boca added.

The financial terms of the purchase were not disclosed, and SingleHop said that once the deal closes Datagram will continue to operate as an independent business unit of the company, but with added sales and marketing resources from SingleHop.

Alex Reppen, chief executive of Datagram, said: “Two decades of exponential growth in data creation has spurred the need for solutions that allow organizations to both store and manage increasing volumes of data in a cost-efficient manner. Together with SingleHop, we are now able to offer our customers a far greater degree of management and control over their data at a time when the demands of workload management are stifling innovation in many organisations.”

Google, OpenStack target containers as Project Magnum gets first glimpse

Otto, Collier and Parikh demoing Magnum at the OpenStack Summit in Vancouver this week

Google and OpenStack are working together to use Linux containers as a vehicle for integrating their respective cloud services and bolstering OpenStack’s appeal to hybrid cloud users.

The move follows a similar announcement made earlier this year by Google and pure-play OpenStack vendor Mirantis, which committed to integrating Kubernetes with the OpenStack platform.

OpenStack chief operating officer Mark Collier said the platform needs to embrace heterogeneous workloads as it moves forward, with both containers and bare-metal solidly on the agenda for future iterations.

To that end, the community revealed Magnum, which in March became an official OpenStack project. Magnum builds on Heat to provision Nova instances on which to run application containers, and it adds native capabilities (like support for different scheduling techniques) that enable users and service providers to offer containers-as-a-service.

“As we think about Magnum and how that can take container support to the next level, you’ll hear more about all the different types of technologies available under one common set of APIs. And that’s what users are looking for,” Collier said. “You have a lot of workloads requiring a lot of different technologies to run them at their best, and putting them all together in one platform is a very powerful thing.”

Google’s technical solutions architect Sandeep Parikh and Magnum project leader Adrian Otto (an architect at Rackspace) were on hand to demo a Kubernetes cluster deployment in both Google Compute Engine and the Rackspace public cloud using exactly the same code and Keystone identity federation.

“We’ve had container support in OpenStack for some time now. Recently there’s been NovaDocker, which is for containers we treat as machines, and that’s fine if you just want a small place to put something,” Otto said.

Magnum uses the concept of a bay – the place where the orchestration layer goes – which Otto said can be used to manage pretty much any Linux container technology, whether it’s Docker, Kubernetes or Mesos.
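As a rough illustration of that bay workflow, the sketch below drives Magnum through its Python client to stand up a Kubernetes bay, following the Kilo-era quickstart pattern of creating a baymodel (a reusable template) and then a bay from it. The image, keypair, network and flavor names are hypothetical, and manager and parameter names may differ between magnumclient releases; it also assumes a client build that accepts a keystoneauth session.

```python
# Hedged sketch of creating a Kubernetes "bay" via Magnum's Python client.
# All IDs and names are hypothetical; APIs may vary between releases.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from magnumclient.v1 import client as magnum_client

auth = v3.Password(
    auth_url='https://cloud.example.com:5000/v3',
    username='demo', password='secret', project_name='demo',
    user_domain_id='default', project_domain_id='default',
)
magnum = magnum_client.Client(session=session.Session(auth=auth))

# A baymodel is a reusable template describing how bays should be built.
model = magnum.baymodels.create(
    name='k8s-model',
    image_id='fedora-21-atomic',      # hypothetical Glance image
    keypair_id='testkey',
    external_network_id='public',
    flavor_id='m1.small',
    coe='kubernetes',                 # the container orchestration engine;
)                                     # 'swarm' and 'mesos' are also options

# The bay hosts the orchestration layer; containers are scheduled onto it.
bay = magnum.bays.create(name='k8s-bay', baymodel_id=model.uuid, node_count=2)
```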

“This gives us the ability to offer a hybrid approach. Not everything is great for private cloud, and not everything is great for public [cloud],” Parikh said. “If I want to run a highly available deployment, I can now run my workload in multiple places and if something were to go down the workload will still stay live.”

eBay chief cloud engineer: ‘OpenStack needs to do more on scalability, upgradability’

eBay aims to move 100 per cent of its ebay.com service onto OpenStack

OpenStack has improved by leaps and bounds in the past four years, but it still leaves much to be desired in terms of upgradability and manageability, according to Subbu Allamaraju, eBay’s top cloud engineer.

Allamaraju, who was speaking at the OpenStack Summit in Vancouver this week, said the ecommerce giant is a big believer in open source tech when it comes to building out its own internal, dev-and-test and customer-facing services.

When the company, a 100 per cent KVM and OVS shop, started looking at OpenStack in 2012, it decided to deploy on around 300 servers. Now it has deployed nearly 12,000 hypervisors on 300,000 cores, spanning 15 virtual private clouds in 10 availability zones.

“In 2012 we had virtually no automation; in 2014 we still needed to worry about configuration drift to keep the fleet of hypervisors in sync. In 2012, there was also no monitoring,” he said. “We built tools to move workloads between deployments because in the early years there was no clear upgrade path.”

eBay has about 20 per cent of its customer-facing website running on OpenStack, and as of the holiday season this past year processed all PayPal transactions on applications deployed on the platform. The company also hosts significant amounts of data – Allamaraju claims eBay runs one of the largest Hadoop clusters in the world at around 120 petabytes.

But he said the company still faces concerns about deploying at scale, and about upgrading, adding that in 2012 eBay had to build a toolset just to migrate its workloads off the Essex release because no clear upgrade path presented itself.

“In most datacentres, cloud is only running in part of it, but we want to go beyond that. We’re not there yet and we’re working on that,” he said, adding that the company’s goal is to go all-in on OpenStack within the next few years. “But at meetings we’re still hearing questions like ‘does Heat scale?’… these are worrying questions from the perspective of a large operator.”

He also said data from recent user surveys suggests that manageability, and in particular upgradeability (long held to be a significant barrier to OpenStack adoption), are still huge issues.

“Production deployments went up, but 89 per cent are running a code base at least six months old, 55 per cent of operators are running a year-old code base, and 18 per cent are running code bases older than 12 months,” he said. “Lots of people are coming to these summits, but the data suggests many are worried about upgrading.”

“This is an example of manageability missing in action. How do you manage large deployments? How do you manage upgradeability?”

CloudBees buys ClinkerHQ to strengthen Jenkins cloud

CloudBees has acquired ClinkerHQ to strengthen its cloud-based Jenkins CI service

Belgium-based CloudBees has acqui-hired ClinkerHQ, a continuous delivery and open source software development specialist based in Spain.

CloudBees was originally founded as a Java platform-as-a-service provider, but the company now focuses almost exclusively on solutions based on Jenkins CI, including a cloud-based version of the continuous integration and job execution monitoring platform. ClinkerHQ offers a software development and monitoring ecosystem with strong native integration with Jenkins CI, among other open source technologies; the two platforms are in fact quite similar.
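For context on the platform both companies build around: Jenkins exposes a remote API that can be scripted against, which is the foundation of this kind of hosted CI tooling. The sketch below uses the open source python-jenkins library with a hypothetical server and job; it is generic Jenkins automation, not CloudBees- or ClinkerHQ-specific code.

```python
# Minimal sketch of driving a Jenkins CI server remotely via the open
# source python-jenkins library. Server URL, credentials and job name
# are hypothetical.
import jenkins

server = jenkins.Jenkins(
    'https://ci.example.com',
    username='build-bot',
    password='api-token',
)

# Kick off a parameterised build, then inspect the job's state.
server.build_job('webapp-deploy', {'BRANCH': 'master'})
info = server.get_job_info('webapp-deploy')
print(info['lastBuild'])  # metadata for the most recent build
```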

CloudBees said the two DevOps-focused companies will complement one another and help extend the reach of CloudBees’ service.

“To serve the growing requirements of our customers and meet the needs of organizations investing in continuous delivery, CloudBees needs to extend its talent base and development resources,” said Sacha Labourey, chief executive officer and founder of CloudBees.

“ClinkerHQ’s experience in product development and consulting on Jenkins and CD-related projects will bring a unique combination of deep industry experience to the CloudBees product management and engineering teams,” Labourey added.

The acquisition will see ClinkerHQ’s seven-person team join CloudBees, including the company’s founders Antonio Muniz and Manuel Recena.

“We are excited to join such a highly respected organization as CloudBees and contribute to the industry-leading work being done in the continuous delivery area with Jenkins,” Muniz said.

The companies said ClinkerHQ customers will be given the option of moving to the CloudBees platform or staying on ClinkerHQ until the end of their contracts.

CloudBees’ $23.5m funding round in January this year put the company in a good position to make small acquisitions like ClinkerHQ. In fact, back in February, when Jenkins celebrated its 10th anniversary as an open source project, a blog post penned by ClinkerHQ co-founder and chief executive Recena alluded to the two companies’ overlapping offerings while thanking firms committed to open source technology.

“And, on that last point, we must give a special mention to CloudBees. We say this last point in a low voice so no-one can hear us, as we say in Spain. More than a few times we’ve had to answer the questions: ‘What does ClinkerHQ provide compared to CloudBees?’ and ‘Why would ClinkerHQ be a better solution?’”

IBM backs WayBlazer, Sellpoints to show its commitment to Watson

IBM is investing in companies that use the Watson cloud service

IBM announced this week it has invested in two companies, WayBlazer and Sellpoints, which are using cognitive computing to enhance their travel planning and shopping applications. The move seems intended to show IBM’s commitment to Watson-as-a-service, the company’s cloud-based cognitive computing service which launched last year.

WayBlazer, a travel planning and shopping service, uses IBM’s Watson cloud service to create personalised holiday and travel recommendations for each customer from a slew of social and financial data.

Sellpoints uses Watson to do much the same thing, but for large retail and manufacturing firms looking to bolster their ecommerce sites without having to invest heavily in internal development resources.

IBM said the investments in WayBlazer and Sellpoints were part of a $5m series A and a $7.5m series C funding round, respectively, but the company declined to disclose the financial terms of its involvement.

“IBM is committed to helping our partners accelerate the development and delivery of Watson-enabled apps into a market where we see endless opportunities for cognitive computing to transform entire industries,” said Stephen Gold, vice president, IBM Watson. “WayBlazer and Sellpoints are terrific examples of how cognitive computing technology can be used to help organizations redefine customer engagement and drive much deeper, meaningful and relevant consumer experiences.”

Brian O’Keefe, chief executive officer of Sellpoints said: “With the natural language and cognitive computing capabilities of Watson, we’re able to deliver a more personalized, relevant and enjoyable experience, and drive a much deeper level of engagement with customers.”

IBM said the investments were part of the $100m it committed to Watson last year. But it hasn’t always made clear that it was pursuing direct investments in companies willing to use its technology, which could be an expensive proposition in the long run. The company continues to be relatively quiet on the financial performance of the Watson unit.

OpenStack does some soul searching, finds its core self

Bryce: ‘OpenStack will power the planet’s clouds’

The OpenStack Foundation announced new interoperability and testing requirements as well as enhancements to the software’s implementation of federated identity, which the Foundation’s executive director Jonathan Bryce says will take the open source cloud platform one step closer to world domination.

OpenStack’s key pitch, beyond being able to spin up scalable compute, storage and networking resources fairly quickly, is that OpenStack-based private clouds should be able to burst into public clouds or other private cloud instances if need be. That kind of capability is essential if the platform is going to take on the likes of AWS, VMware and Microsoft, but implementations of it have so far been quite basic.

But for that kind of interoperability to happen you need three things: the ability to federate the identity of a cloud user, so permissions and workloads can port over to whatever platforms are being deployed on (and so those workloads stay secure); a definition of what vendors, service providers and customers can reliably call core OpenStack, so they can all expect a standard collection of tools, services and APIs in every distribution; and a way to test the interoperability of OpenStack distributions and appliances, as the sketch below suggests.
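What that adds up to, from a user’s point of view, is the ability to run the same client code against any conforming cloud. The sketch below is written against the openstacksdk Python library (which postdates this announcement but expresses the idea cleanly); the cloud, image and flavor names are hypothetical entries in a local clouds.yaml.

```python
# Minimal sketch: one function, two interoperable OpenStack clouds.
# 'private' and 'public' are hypothetical entries in a local clouds.yaml,
# and the image/flavor names are hypothetical too.
import openstack

def launch(cloud_name, server_name):
    conn = openstack.connect(cloud=cloud_name)  # credentials from clouds.yaml
    image = conn.compute.find_image('ubuntu-14.04')
    flavor = conn.compute.find_flavor('m1.small')
    # A real deployment would usually also pass networks= here.
    return conn.compute.create_server(
        name=server_name, image_id=image.id, flavor_id=flavor.id)

launch('private', 'worker-1')  # run in the on-premise cloud
launch('public', 'worker-2')   # burst the same workload to a public provider
```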

To that end, the Foundation announced a new OpenStack Powered interoperability testing programme, so users can validate the interoperability of their own deployments as well as gain assurances from vendors that clouds and appliances branded as “OpenStack Powered” meet the same requirements. About 16 companies already have either certified cloud platforms or appliances available on the OpenStack Marketplace as of this week, and Bryce said there’s more to come.

The latest release of OpenStack, Kilo, also brings a number of improvements to federated identity, making it much easier to implement and more dynamic in terms of workload deployment. Bryce said that over 30 companies have committed to implementing federated identity (available since the Icehouse release) by the end of this year – meaning the OpenStack cloud footprint just got a whole lot bigger.
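Under the hood, Keystone federation relies on JSON mapping rules that translate attributes asserted by an external identity provider into local users and groups. A representative rule, shown here as a Python dict with hypothetical names, maps the asserted username into a local federated group:

```python
# Representative Keystone federation mapping (Keystone consumes this as
# JSON). Attribute and group names are hypothetical.
mapping = {
    "rules": [
        {
            # What to create locally: an ephemeral user named after the
            # asserted identity, placed into a designated local group.
            "local": [
                {"user": {"name": "{0}"}},
                {"group": {"id": "federated-researchers"}},  # normally a UUID
            ],
            # Which assertion attributes feed the rule: {0} is bound to
            # the first remote match, here the REMOTE_USER attribute.
            "remote": [
                {"type": "REMOTE_USER"},
            ],
        }
    ]
}
```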

“It has been a massive effort to come to an agreement on what we need to have in these clouds, how to test it,” Bryce said. “It’s a key step towards the goal of realising an OpenStack-powered planet.”

The challenge is that as the code base gets bulkier and as teams add more services, joining all the pieces and making sure one component or service doesn’t break another becomes much more complex. Still, the move marks a significant milestone for the DefCore group, the internal committee in charge of setting base requirements by defining the capabilities, code and must-pass tests for all OpenStack products. The group has been working for well over a year on a standard definition of what a core OpenStack deployment is.

TD Bank uses cloud as catalyst for cultural change in IT

Peacock said TD Bank is using the cloud as a way to help catalyse cultural change at the firm

North American retail banking outfit TD Bank is using OpenStack, among a range of other open source cloud technologies, to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained Graeme Peacock, the bank’s group vice president of engineering.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.

Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its 4,000-application estate moved to the cloud over the next five years.

“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.

But the bank’s decade of growth by acquisition has shaped the evolution of its technology and systems as well as its IT culture and the way those systems are managed.

“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.

“There are a lot of vendors here and I’m fairly certain we’ve bought at least one of everything you’ve ever made. That’s led to the landscape that we’ve had, which has lots of customisation. It’s very expensive and there is little reuse.”

Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services and onto more open, standardised commodity platforms. OpenStack is but one infrastructure-centric tool helping the bank deliver on that goal (it is using it to stand up an internal private cloud). But to reach its goals the company also has to deal with other legacies of its recent string of acquisitions, including development teams that are still quite siloed.

In order to standardise and reduce the number of services the firm’s developers use, the bank created an engineering centre in Manhattan and assembled a team of engineers and developers (currently numbering 30, and set to reach roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.

The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.

“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.

“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

IBM, Deloitte to jointly develop risk management, compliance solutions

IBM and Deloitte are partnering to use big data for compliance in financial services

IBM and Deloitte are partnering to develop risk management and compliance solutions for the financial services sector, the companies said this week.

The partnership will see Deloitte offer up its financial services and risk management consulting expertise to help IBM develop a range of cloud-based risk management services that combine the technology firm’s big data analytics capabilities with its Watson-as-a-service cognitive computing platform.

Deloitte will also work with joint customers to help integrate the solutions into their organisations’ technology landscapes.

“The global enterprise risk management domain is undergoing significant transformation, and emerging technologies like big data and predictive analytics can be used to address complex regulatory requirements,” said Tom Scampion, global risk analytics leader at Deloitte UK. “We are excited to be working with IBM to apply their market leading technologies and platforms to enable faster, more insightful business decisions. This alliance aims to completely re-frame and re-shape the risk space.”

“Financial services firms are under tremendous pressure, which has forced them to spend the majority of their IT budgets addressing regulatory requirements. There is an opportunity to transform the approach organizations are taking and leverage the same investments to go beyond compliance and deliver real business value,” said Alistair Rennie, general manager of analytics solutions, IBM. “Combining [Deloitte’s] knowledge with our technology will provide our clients with breakthrough capabilities and deliver risk and regulatory intelligence in ways previously not possible.”

Deloitte’s no stranger to partnering with large incumbents to bolster its appeal to financial services clients. Last year the company partnered with SAP to help develop custom ERP platforms based on HANA for the financial services sector, and partnered with NetSuite to help it target industry verticals more effectively.

Dropbox the latest to adopt public cloud privacy standard

Dropbox is the latest to adopt one of the first public cloud-focused data privacy standards

Cloud storage provider Dropbox said it has adopted ISO 27018, among the first international standards focusing on the protection of personal data in the public cloud.

The standard, published in August 2014, is aimed at clarifying the roles of data controllers and data processors in keeping personally identifiable information (PII) private and secure in public cloud environments. It builds on other information security standards in the ISO 27000 family and is, specifically, an enhancement to the ISO 27001 standard.

ISO 27018 also broadly requires adopting cloud providers to be more transparent about what they do with customer data and where they host it.

In a statement the company said the move would give users, particularly enterprise users, more confidence in its platform.

“We’re pleased to be one of the first companies to achieve ISO 27018 certification. Privacy and data protection regulations and norms vary around the world, and we’re confident this certification will help our customers meet their global compliance needs,” it said.

Mark van der Linden, Dropbox country manager for the UK, said: “Businesses in the UK and all over the world are trusting Dropbox to make collaboration easier and boost productivity. Our ISO 27018 accreditation shows we put users in control of their data, we are transparent about where we store it, and we operate to the highest standards of security.”

Earlier this year Microsoft certified Azure, Intune, Office 365 and Dynamics CRM Online under the new ISO standard. At the time the company also said it was hopeful certifying under the standard would make it easier to satisfy compliance requirements, which can be trickier in some verticals than others.