Category archive: network function virtualisation

Cisco to buy Embrane in NFV automation play

Cisco is consolidating its NFV portfolio with an increasing focus on automation

Networking giant Cisco announced its intent to acquire network function virtualisation (NFV) specialist Embrane for an undisclosed sum this week, a move intended to bolster the company’s network automation capabilities.

“With agility and automation as persistent drivers for IT teams, the need to simplify application deployment and build the cloud is crucial for the datacentre,” explained Cisco’s corporate development lead Hilton Romanski.

“As we continue to drive virtualization and automation, the unique skillset and talent of the Embrane team will allow us to move more quickly to meet customer demands. Together with Cisco’s engineering expertise, the Embrane team will help to expand our strategy of offering freedom of choice to our customers through the Nexus product portfolio and enhance the capabilities of Application Centric Infrastructure (ACI),” he said, adding that the purchase also builds on previous commitments to open standards, open APIs, and playing nicely in multi-vendor environments.

Beyond complementing Cisco’s ACI efforts, Dante Malagrinò, Embrane co-founder and chief product officer, said the move will help further the company’s goal of driving software-hardware integration in the networking space, and offers Embrane a level of scale few vendors in this space can match.

“Joining Cisco gives us the opportunity to continue our journey and participate in one of the most significant shifts in the history of networking: leading the industry to better serve application needs through integrated software-hardware models,” he explained.

“The networking DNA of Cisco and Embrane together drives our common vision for an Application Centric Infrastructure. We both believe that innovation must be evolutionary and enable IT organizations to transition to their future state on their own terms – and with their own timelines. It’s about coexistence of hardware with software and of new with legacy in a way that streamlines and simplifies operations.”

Cisco is quickly working to consolidate its NFV offerings, and more recently its OpenStack services, as the vendor continues to target cloud service providers and telcos looking to revamp their datacentres. In March it was revealed that Cisco had struck a major deal with T-Systems, Deutsche Telekom’s enterprise-focused subsidiary, which will see the German incumbent roll out Cisco’s OpenStack-based infrastructure in its datacentre in Biere, near Magdeburg, as well as a virtual hotspot service for SMEs.

Open Networking Foundation wary of ‘big vendor’ influence on SDN

Pitt said networking has remained too proprietary for too long

Dan Pitt, executive director of the Open Networking Foundation (ONF), has warned of the dangers of allowing the big networking vendors to have too much influence over the development of SDN, arguing they have a strong interest in maintaining the proprietary status quo.

In an exclusive interview with Telecoms.com, Pitt recalled that the non-profit ONF was born of frustration at the proprietary nature of the networking industry. “We came out of research that was done at Stanford University and UC Berkeley that was trying to figure out why networking equipment isn’t programmable,” he said.

“The networking industry has been stuck back in the mainframe days; you buy a piece of equipment from one company and its hardware, chips and operating system are all proprietary. The computing industry got over that a long time ago – basically when the PC came out – but the networking industry hasn’t.

“So out of frustration at not being able to programme the switches and with faculties wanting to experiment with protocols beyond IP, they decided to break open the switching equipment and have a central place that sees the whole network, figures out how the traffic should be routed and tells the switches what to do.”
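
Stripped of protocol detail, the split Pitt describes is easy to sketch: the switches keep only simple forwarding tables, while a controller that sees the whole topology computes routes and installs rules on each switch along the path. The short Python sketch below is purely illustrative; the class and method names are invented for this example and are not the OpenFlow API.

from collections import deque

# Toy model of the controller/switch split described above. All names are
# hypothetical and exist only for illustration; a real SDN controller speaks
# a protocol such as OpenFlow to the switches rather than calling them directly.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # destination -> next hop

    def install_rule(self, dst, next_hop):
        # The controller "tells the switch what to do".
        self.flow_table[dst] = next_hop

class Controller:
    def __init__(self, links):
        # links: (switch_a, switch_b) pairs giving the controller a global view
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, []).append(b)
            self.adj.setdefault(b, []).append(a)

    def shortest_path(self, src, dst):
        # Plain breadth-first search over the global topology.
        prev, seen, queue = {}, {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nxt in self.adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    prev[nxt] = node
                    queue.append(nxt)
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path))

    def program_path(self, switches, src, dst):
        # Push a forwarding rule for dst onto every switch along the route.
        path = self.shortest_path(src, dst)
        for here, nxt in zip(path, path[1:]):
            switches[here].install_rule(dst, nxt)

switches = {n: Switch(n) for n in ("s1", "s2", "s3", "s4")}
controller = Controller([("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s1")])
controller.program_path(switches, "s1", "s3")
print(switches["s1"].flow_table)          # {'s3': 's2'}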

Disruptive change, by definition, is bound to threaten a lot of incumbents, and Pitt identifies this as a major reason why networking stayed in the proprietary era for so long. “Originally we were a bunch of people that had been meeting on Tuesday afternoons to work out this OpenFlow protocol and we said we should make it an industrial-strength standard,” said Pitt. “But if we give it to the IETF they’re dominated by a small number of very large switching and routing companies and they will kill it.”

“This is very disruptive to some of the traditional vendors that have liked to maintain a proprietary system and lock in their customers to end-to-end solutions you have to buy from them. Some have jumped on it, but some of the big guys have held back. They’ve opened their own interfaces but they still define the interface and can make it so you still need their equipment. We’re very much the advocates of open SDN, where you don’t have a single party or little cabal that owns and controls something to disadvantage their competitors.”

Ultimately it’s hard to argue against open standards, as they increase the size of the industry for everyone. But equally it’s not necessarily in the short-term interest of companies already in a strong position in a sector to encourage its evolution. What is becoming increasingly clear, however, is that the software genie is out of the bottle in the networking space, and the signs are that it’s a positive trend for all concerned.