Category Archives: Software Defined Networking

Software Defined Networking: Improved Compliance & Customer Experience

Software Defined Networking Upgrade

Check out the infographic below to learn how GreenPages recently helped a global billion-dollar banking client provide more flexibility and support business scalability with a software defined networking upgrade, while also meeting strict compliance mandates around micro-segmentation. If you are interested in learning more, check out the full summary here:

Engagement Summary: Software Defined Networking

Learn how we can help you lower costs, reduce risk, and increase service efficiency.

If you have any more questions, reach out to us.

By Jake Cryan, Digital Marketing Specialist

How SD-WAN Enables Digital Transformation

Wide Area Networks are a critical component of today’s enterprise computing infrastructure, but WANs suffer from many problems, including latency, congestion, jitter, packet loss, and outages. Erratic performance frustrates users, especially for real-time applications like VoIP calling, video conferencing, video streaming, and virtualized applications and desktops. And complex WANs are difficult to manage and troubleshoot. SD-WAN products address these problems.

Citrix does a fantastic job at explaining how Software-Defined WAN enables digital transformation and can securely deliver a consistent user experience.

To download the full white paper, What to Look For When Considering an SD-WAN Solution, click here!

What is the role of SDN in data centre security?

Software Defined Networking (SDN) is a breakthrough which is seemingly on everyone’s technology roadmap, yet not ‘sexy’ enough to have commanded column inches in recent months. At Telco Cloud, Juniper Cloud Automation Architect Scott Alexander argued the use case for security.

Companies striving towards 100% security are likely to be disappointed, as most within the industry now accept this is not achievable. Regardless of how many advances are made to secure the data centre, there will always be individuals who dedicate their time to finding new weaknesses. The new objective for the majority is to remain as secure as possible, consistently, reacting as quickly as possible to new threats as they emerge.

One of the main challenges for the data centre is its traditional defence. Many data centres have one large firewall around the perimeter, which can be effective at keeping threats out, but when something does breach that defence, the traditionally flat data centre lets it roam freely. Larger segments of the data centre may be ring-fenced, but the same principle applies there: once you crack that defence you are again free to roam.

Alexander highlighted that once you write SDN policies, you can define which applications can ‘talk’ to each other. Until this is defined through an effective SDN policy, any application can talk to any other application, creating the free-roaming problem. Once a threat is inside the data centre, damage control becomes very difficult.

If every application is a room with several doors, Alexander said, then by implementing SDN you can keep the relevant doors open and close the doors to areas a given application has no need to access. Spinning up applications this way allows you to retain internal perimeters and create a policy of damage control.
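
To make the ‘doors’ analogy concrete, here is a minimal sketch of the default-deny, allow-list idea Alexander describes. The application names, rule structure, and helper function are illustrative assumptions, not any particular vendor’s API.

```python
# Illustrative sketch of micro-segmentation as an allow-list: every
# application "door" is closed unless a rule explicitly opens it.
# Names and structure are invented for the example, not a vendor API.

# Each rule opens one door: (source app, destination app, destination port).
ALLOWED_FLOWS = {
    ("web-frontend", "order-service", 8443),
    ("order-service", "payments-db", 5432),
}

def is_flow_permitted(src_app: str, dst_app: str, port: int) -> bool:
    """Default-deny: traffic passes only if a rule explicitly allows it."""
    return (src_app, dst_app, port) in ALLOWED_FLOWS

# The front end can reach the order service...
assert is_flow_permitted("web-frontend", "order-service", 8443)
# ...but a compromised front end cannot "roam" straight to the database.
assert not is_flow_permitted("web-frontend", "payments-db", 5432)
```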

Virtualizing a company’s assets can be a painful process, as it has to be done application by application. This, however, can be an advantage: as Alexander highlighted, to understand which doors are open and which are closed you have to analyse the applications individually, since there is currently no method for doing a blanket risk assessment of your applications. And as you are migrating the applications individually during the virtualization effort in any case, it shouldn’t be too much of a task to understand which doors are open.

For the most part, the concept of 100% security has been eradicated from the industry; most have accepted it is all but impossible. However, segmented security can aid a team in driving towards the objective of remaining as secure as possible, consistently.

SDN on the rise and cloud still not understood – survey

Research from Viavi Solutions indicates SDN technologies are on the rise within enterprise organizations, but also that a number of organizations might be implementing the cloud for the wrong reasons.

In its ninth annual State of the Network study, the team highlighted that enterprise organizations are increasing deployment of 100 Gigabit Ethernet (100 GbE), public and private cloud, and software-defined networking technologies. Two-thirds of respondents indicated they had some kind of deployment in the production environment, and 35% have implemented an SDN underlay.

“There is a growing trend of enterprise customers realizing how they can improve the operations of their network,” said Steve Brown, Director of Enterprise Solutions at Viavi. “It’s been a slow burner, but SDN is beginning to break through into the mainstream. While encouraging, the statistics are a little higher than we expected. After comparing the adoption rates from the last couple of years, we expected SDN to be around 50%, but the survey does highlight some real momentum in the industry on the whole.”

The findings also highlighted that while cloud adoption continues to rise (90% have at least one application in the cloud, and 28% said the majority of their applications are there), there is still a level of immaturity in understanding the perceived and realized benefits of cloud computing. Lower operating costs was listed as the top reason for the transition, at 63%, while the faster delivery of new services and the ability to dynamically adapt to changes in business demand were the least popular reasons, each accounting for 39%.

“If you look at the chief benefits people are seeing, we get a lot of feedback on the CAPEX/OPEX reductions,” said Brown. “This is great, but that’s not really the point of cloud. Expenditure reduction is something the top decision makers in the business want to see; it’s more of a tactical play. If this is the end objective, these companies are not really seeing the promise of cloud and what makes me excited about it.

Top reasons for adopting the cloud

“The areas where I see the key benefits are the ones which scored lowest in the results: delivering services faster and dynamically changing to meet the needs of the business. I found it quite surprising that these were so low. The results show that the decision to enter the cloud is, for the majority of consumers, more tactical than strategic.”

One conclusion that can be drawn from the findings is a lack of understanding of what the cloud can offer. Gartner’s ‘Cost Optimization Secrets’ highlighted the average cost reduction for companies implementing cloud propositions was just over 14%. While this is encouraging, whether cloud adoption would remain an attractive option for organizations if they knew expenditure would be reduced by just 14% remains to be seen.

“There’s more to adopting the cloud than just cost saving,” said Brown. “Are there savings? Absolutely, but the majority don’t come upfront. If you’re going to be running applications which could see aggressive spikes, the flexibility and agility of the cloud will reduce the cost. But at the same time it’s difficult to justify the cost savings, because you may be taking on new projects due to the fact that you can scale your capacity at a moment’s notice.

“Rather than thinking about it as a cost saver, hybrid cloud should be seen as an initiative enabler. Until this idea is recognised by the industry, adoption may continue to struggle to penetrate the mainstream.”

For Brown and the team at Viavi, the benefits of cloud computing centre on the business capabilities it enables in the medium and long term. Cloud offers companies the opportunity to react faster to changing market conditions and to ensure products remain relevant on an ongoing basis.

“In my own personal opinion, I would like to see people embrace a hybrid cloud model because it enables them to develop competitive edges,” said Brown. “This also justifies future investment in technology; it moves these new concepts and implementations from ‘nice to have’ to ‘must have’, as technology will then be one of the supporting pillars of the business strategy. Cloud has the ability to do this and to be a competitive enabler.”

China Mobile revamps private cloud with Nuage SDN

China Mobile, Alcatel Lucent and their respective subsidiaries are working together on SDN in many contexts

China Mobile’s IT subsidiary Suzhou Software Technology Company has baked Nuage Networks’ software-defined networking technology into its private cloud architecture to enable federation across multiple China Mobile subsidiaries. The move comes the same week the two parent companies, China Mobile and Alcatel Lucent, demoed a virtualised radio access network (RAN), a key component of the mobile network.

The company deployed Nuage’s Virtualised Services Platform (VSP) and Virtual Services Assurance Platform (VSAP) for its internal private cloud platform in a bid to improve the scalability and flexibility of its infrastructure, and enable infrastructure federation between the company’s various subsidiaries.

Each subsidiary is allocated its own virtual private cloud with its own segmented chunk of the network, but enabling infrastructure federation between them means underutilised assets can be deployed in other parts of the company as needed.

“China Mobile is taking a visionary approach in designing and building its new DevOps private cloud architecture,” said Nuage Networks chief executive officer Sunil Khandekar.

“By combining open source software with Nuage Networks VSP, China Mobile is replacing and surpassing its previous legacy architecture in terms of power, sophistication and choice. It will change the way China Mobile operates internally and, ultimately, the cloud services they can provide to customers,” Khandekar said.

The move comes the same week China Mobile and Alcatel Lucent trialled what the companies claimed to be the industry’s first virtualised RAN, which for an operator with over 800 million subscribers has the potential to deliver significant new efficiencies across its datacentres if deployed at scale.

Reader Question: NSX Riding on Physical Infrastructure?

There’s been a lot of traction and interest around software defined networking lately. I posted a video blog last week comparing the features and functionality of VMware NSX vs. Cisco ACI. A reader left a really interesting question in a comment on the post. Since I have heard similar questions lately, I figured it would be worth addressing in its own post.

The question was:

“Great discussion – one area that needs more exploration is when NSX is riding on top of any physical infrastructure – how is the utilization and capacity of the physical network made known to NSX so that it can make intelligent decisions about routing to avoid congestion?”

Here was my response:

“You bring up an interesting point that I hear come up quite a bit lately. I say interesting because it seems like everyone has a different answer to this challenge and a couple of the major players in this space seem to think they have the only RIGHT answer.

If you talk to the NSX team at VMware, they would argue that since the hypervisor is the closest thing to your applications, you’re better off determining network flow requirements there and dictating the behavior of that traffic over the network, as opposed to making reactive adjustments for what could be micro-burst traffic, which could lead to a lot of reaction and not much impact.

If you were to pose the same challenge to the ACI team at Cisco, they would argue that without intimate visibility, control and automated provisioning of active network traffic AND resources, you can’t make intelligent decisions about behavior of application flows, regardless of how close you are to the applications themselves.

I think the short answer, in my mind anyway, to the challenge you outline lies within the SDN/API integration side of the NSX controller. I always need to remind myself that NSX is a mix of SDN and SDN-driven Network Virtualization (NV) and Network Function Virtualization (NFV). That being the case, the behavior of the NSX NV components can be influenced by more than just the NSX controller. Through mechanisms native to the network like NetFlow, NBAR2, IPFIX, etc., we can get extremely granular application visibility and control throughout the network itself and, by combining that with NSX API integration, we can evolve the NSX solution to include intelligence from the physical network, enabling it to make decisions based on that information.”
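
As a rough illustration of that last idea, the hedged sketch below polls a flow collector for link utilisation and asks an overlay controller to steer traffic away from congested links. Both endpoints and the payload shapes are invented for the example; they are not real NSX, NetFlow, or IPFIX APIs.

```python
# Hypothetical sketch of the feedback loop described above: pull link
# utilisation from a flow collector (fed by NetFlow/IPFIX/NBAR2) and tell
# the overlay controller to steer traffic away from congested physical
# links. Endpoints and payload shapes are invented for the example.
import requests

COLLECTOR = "https://flow-collector.example.com/api/links"   # placeholder
CONTROLLER = "https://overlay-controller.example.com/api/paths"  # placeholder
CONGESTION_THRESHOLD = 0.85  # deprefer links above 85% utilisation

def rebalance_overlay() -> None:
    links = requests.get(COLLECTOR, timeout=5).json()
    for link in links:
        if link["utilisation"] > CONGESTION_THRESHOLD:
            # Ask the controller to route overlay traffic around the hot link.
            requests.post(
                CONTROLLER,
                json={"avoid_link": link["id"], "reason": "congestion"},
                timeout=5,
            )

if __name__ == "__main__":
    rebalance_overlay()
```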

Like I said, an interesting question. There’s a lot to talk about here and everyone (myself included) has a lot to learn. If you have any more questions around software defined networking, leave a comment or reach out to us at socialmedia@greenpages.com and I’ll get back to you.

By Nick Phelps, Principal Architect

Comcast, Lenovo join OpenDaylight SDN effort

Comcast and Lenovo have thrown their weight behind the OpenDaylight Project

Comcast and Lenovo have joined the OpenDaylight Project, an open source collaboration between many of the industry’s major networking incumbents on the core architectures enabling software defined networking (SDN) and network function virtualisation (NFV).

The recent additions bring the OpenDaylight Project, a Linux Foundation Collaborative Project, to just over the fifty-member mark. The community is developing an open source SDN architecture and software (Helium) that supports a wide range of protocols, including OpenFlow, the southbound protocol around which most vendors have consolidated.
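
For a taste of what that northbound programmability looks like, here is a hedged sketch that reads the operational topology from an OpenDaylight controller over RESTCONF. Port 8181 and the admin/admin credentials are Helium-era defaults, the hostname is a placeholder, and exact paths vary by release.

```python
# Hedged sketch: list nodes from an OpenDaylight controller's operational
# topology via its northbound RESTCONF API. Port 8181 and the admin/admin
# credentials are Helium-era defaults; paths vary by release, and the
# hostname is a placeholder.
import requests

ODL = "http://odl-controller.example.com:8181"  # placeholder host
AUTH = ("admin", "admin")  # default credentials; change in production

def list_nodes() -> None:
    url = f"{ODL}/restconf/operational/network-topology:network-topology"
    resp = requests.get(url, auth=AUTH, timeout=10)
    resp.raise_for_status()
    for topology in resp.json()["network-topology"]["topology"]:
        for node in topology.get("node", []):
            # OpenFlow switches show up with ids like "openflow:1"
            print(node["node-id"])

if __name__ == "__main__":
    list_nodes()
```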

“We’re seeing more end users starting to adopt OpenDaylight and participate in its development as the community sharpens its focus on stability, scalability, security and performance,” said Neela Jacques, executive director, OpenDaylight.

“Comcast has been testing ODL and working with our community since launch and the team at Lenovo were heavily involved in ODL’s foundation through their roots at IBM. Our members see the long-term value of creating a rich ecosystem around open systems and OpenDaylight,” Jacques said.

Igor Marty, chief technology officer, Lenovo Worldwide SDN and NFV said: “We believe that the open approach is the faster way to deploy solutions, and what we’ve seen OpenDaylight achieve in just two years has been impressive. The OpenDaylight community is truly leading the path toward interoperability by integrating legacy and emerging southbound protocols and defining northbound APIs for orchestration.”

The move will no doubt give the project more credibility in both carrier and enterprise segments.

Since Lenovo’s acquisition of IBM’s low-end x86 server unit it has been pushing heavily to establish itself as a serious player among global enterprises, where open standards continue to gain favour when it comes to pretty much every layer of the technology stack.

Comcast is also placing SDN at the core of its long-term network strategy and has already partnered with CableLabs, a non-profit R&D outfit investigating technology innovation and jointly owned by operators globally, on developing southbound plugins for OpenDaylight’s architecture.

“Like many service providers, Comcast is motivated to reduce the operational complexity of our networks. In the near-term this involves significant improvements to network automation under what we call our Programmable Network Platform. This framework outlines a stack of behaviors and abstraction layers that software uses to interact with the network,” explained Chris Luke, senior principal engineer, Comcast and OpenDaylight Advisory Group member.

“Some of our key objectives are to simplify the handoffs from the OSS/BSS systems, empower engineers to rapidly develop and deploy new services and to improve the operational support model. It is our hope that by harmonizing on a common framework and useful abstractions, more application groups within the company will be able to make use of better intelligence and more easily interact with the network.”

Luke said the company already has several proof-of-concepts in place, including an app that provides network intelligence abstraction in a way that allows it to treat its internal network like a highly elastic CDN, and mechanisms to integrate overlay edge services with legacy network architectures like MPLS.

“When ODL was launched we were excited to see that the industry was moving to a supportable open source model for SDN. There were a growing number of proprietary SDN controllers at the time and that had service providers like us questioning the direction of the market and whether it made sense to us. We were pleased to see an open source platform come forward aiming to provide a neutral playing field with support for more than just OpenFlow.”

How to Prepare Your Environment for the Software Defined Networking Era

Whether it’s VMware NSX or Cisco ACI, adopting any software defined networking solution requires a lot of backend work. Before you get into the weeds around specific products, take a step back. To be successful, you’re going to need a level of understanding of your applications that you’ve never needed before. The key is to take the proper steps now so you can adopt software defined networking technologies when the time comes.
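
One practical way to start building that application-level understanding is to mine existing flow exports for who actually talks to whom. The sketch below is illustrative only: the CSV file name and its columns are assumptions, standing in for whatever NetFlow/IPFIX or packet-capture tooling you already have.

```python
# Illustrative sketch: build a "who talks to whom" view of your
# applications from exported flow records, as groundwork for writing
# segmentation policies. The CSV file name and columns are assumptions.
import csv
from collections import Counter

talks_to = Counter()

with open("flows.csv", newline="") as f:  # hypothetical NetFlow/IPFIX export
    for row in csv.DictReader(f):  # assumes src_app, dst_app, dst_port columns
        talks_to[(row["src_app"], row["dst_app"], row["dst_port"])] += 1

# Observed flows become candidate allow-rules; anything you never see
# is a door you can consider closing.
for (src, dst, port), count in talks_to.most_common():
    print(f"{src} -> {dst}:{port} ({count} flows)")
```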

Preparing Your Environment for the Software Defined Networking Era

//www.youtube.com/watch?v=Y6pVmNrOnCA

If you’re interested in speaking to Nick in more detail about software defined technology, reach out!

By Nick Phelps, Principal Architect

Open Networking Foundation wary of ‘big vendor’ influence on SDN

Pitt said networking has remained too proprietary for too long

Dan Pitt, executive director of the Open Networking Foundation (ONF), has warned of the dangers of allowing the big networking vendors to have too much influence over the development of SDN, arguing they have a strong interest in maintaining the proprietary status quo.

In an exclusive interview with Telecoms.com, Pitt recalled the non-profit ONF was born of frustration at the proprietary nature of the networking industry. “We came out of research that was done at Stanford University and UC Berkeley that was trying to figure out why networking equipment isn’t programmable,” he said.

“The networking industry has been stuck in the mainframe days: you buy a piece of equipment from one company and its hardware, chips, and operating system are all proprietary. The computing industry got over that a long time ago, basically when the PC came out, but the networking industry hasn’t.

“So out of frustration at not being able to programme the switches and with faculties wanting to experiment with protocols beyond IP, they decided to break open the switching equipment and have a central place that sees the whole network, figures out how the traffic should be routed and tells the switches what to do.”

Disruptive change, by definition, is bound to threaten incumbents, and Pitt identifies this as a major reason why networking stayed in the proprietary era for so long. “Originally we were a bunch of people that had been meeting on Tuesday afternoons to work out this OpenFlow protocol, and we said we should make it an industrial-strength standard,” said Pitt. “But if we give it to the IETF, they’re dominated by a small number of very large switching and routing companies, and they will kill it.”

“This is very disruptive to some of the traditional vendors that have liked to maintain a proprietary system and lock in their customers to end-to-end solutions you have to buy from them. Some have jumped on it, but some of the big guys have held back. They’ve opened their own interfaces but they still define the interface and can make it so you still need their equipment. We’re very much the advocates of open SDN, where you don’t have a single party or little cabal that owns and controls something to disadvantage their competitors.”

Ultimately it’s hard to argue against open standards, as they increase the size of the industry for everyone. But equally, it’s not necessarily in the short-term interest of companies already in a strong position in a sector to encourage its evolution. What is becoming increasingly clear, however, is that the software genie is out of the bottle in the networking space, and the signs are that it’s a positive trend for all concerned.

2015 Predictions: Cloud and Software-Defined Technologies

As we kick off the new year, it’s time for us to get our 2015 predictions in. Today, I’ll post predictions from John Dixon on the future of cloud computing as well as from our CTO Chris Ward on software-defined technologies. Later this week, we’ll get more predictions around security, wireless, end-user computing, and more from some of our other experts.

John Dixon, Director, Cloud Services

On the Internet of Things (IoT) and Experimentation…

In 2015, I expect to see more connected devices and more discussion of IoT strategy. I think this is where cloud computing gets really interesting. The accessibility of compute and storage resources on the pay-as-you-go model supports experimentation with a variety of applications and devices. Will consumers want a connected toaster? In years past, companies might form focus groups, do some market research, and so on to pitch the idea to management, get funding, build a team, acquire equipment, and then figure out the details of how to do this. Now, it’s entirely possible to assign one individual to experiment and prototype the connected toaster and its associated cloud applications. Here’s the thing: the connected toaster probably holds about zero interest for the consumer appliance market. However, the experiment might have produced a pattern for a cloud-based application that authenticates and consumes data from a device with little or no compute power. And this pattern is perhaps useful for other products that DO have real applications. In fact, I put together a similar experiment last week with a $50 Raspberry Pi and about $10 of compute from AWS: the application reports on the temperature of my home-brew fermentation containers and activates a heating source when needed. And I did indeed discover that the pattern is really, really scalable and useful in general. Give me a call if you want to hear the details!
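
John’s Raspberry Pi experiment is a nice example of the device-to-cloud pattern he describes. Below is a hedged reconstruction of that pattern, not his actual code: the endpoint URL and payload shape are illustrative, and the sensor and heater functions are stubs standing in for real GPIO or 1-Wire code.

```python
# Hedged reconstruction of the device-to-cloud pattern described above,
# not the author's actual code. The endpoint URL and payload shape are
# illustrative assumptions; sensor and heater functions are stubs.
import time
import requests

ENDPOINT = "https://api.example.com/fermenter/readings"  # placeholder
TARGET_TEMP_C = 20.0

def read_temperature_c() -> float:
    """Stub sensor read; on a Raspberry Pi this would poll a 1-Wire
    or I2C temperature probe."""
    return 18.5

def set_heater(on: bool) -> None:
    """Stub actuator; on a Pi this would drive a relay via GPIO."""
    print("heater", "on" if on else "off")

while True:
    temp = read_temperature_c()
    # Report the reading to the cloud application, which can log,
    # chart, or alert on it.
    requests.post(ENDPOINT, json={"temp_c": temp}, timeout=5)
    # Simple local control decision: heat when below target.
    set_heater(temp < TARGET_TEMP_C)
    time.sleep(60)
```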

On the declining interest in “raw” IaaS and the “cloud as a destination” perspective…

I’ve changed my opinion on this over the past year or so. I had thought that the declining price of commodity compute, network, and storage in the cloud meant that organizations would eventually prefer to “forklift move” their infrastructure to a cloud provider. To prepare for this, organizations should design their infrastructure with portability in mind and NOT make use of proprietary features of certain cloud providers (like AWS). As of the end of 2014, I’m thinking differently: DO consider the tradeoff between portability and optimization, but go with optimization. Optimization is more important than infrastructure portability. By optimization, in AWS terms, I mean taking advantage of things like Auto Scaling, CloudWatch, S3, SQS, SNS, CloudFront, etc. Pivotal and Cloud Foundry offer similar optimizations. Siding with optimization enables reliability, performance, fault tolerance, and scalability that are not possible in a customer-owned datacenter. I think we’ll see more of this “how do I optimize for the cloud?” discussion in 2015.
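
To make the optimization-over-portability tradeoff concrete, here is a minimal sketch, assuming boto3 and a placeholder queue URL, of leaning on SQS as a managed queue rather than running a portable, self-hosted broker. You give up some portability; in exchange, durability, scaling, and fault tolerance become AWS’s problem.

```python
# Minimal sketch of "optimizing for the cloud": lean on SQS as a managed
# queue instead of running a portable, self-hosted broker. Uses boto3;
# the queue URL and account number are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Producer: hand the message to the managed service and move on.
# Durability, scaling, and fault tolerance are AWS's problem, not ours.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"order_id": 42}')

# Consumer: long-poll for work, then delete what we've processed.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                           WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```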

Chris & John presenting a breakout session at our 2014 Summit Event

Chris Ward, CTO

On SDN…

We’ll see much greater adoption of SDN solutions in 2015. We already saw good adoption of VMware’s NSX solution in the second half of 2014 around the micro-segmentation use case. I see that expanding in 2015, plus broader use cases with both NSX and Cisco’s ACI. The expansion of SDN will drag with it an expansion of automation/orchestration adoption, as these technologies are required to fully realize the benefits of broader SDN use cases.

On SDS…

Software defined storage solutions will become more mainstream by the end of 2015. We’re already seeing a ton of new and interesting SDS solutions in the market, and I see 2015 being a year of maturation. We’ll see several of these solutions drop off the radar while others gain traction, and I have no doubt it will be a very active M&A year in the storage space in general.

What do you think about Chris and John’s predictions?

If you would like to hear more from these guys, you can download Chris’ whitepaper on data center migrations and John’s eBook on the evolution of cloud.

By Ben Stephenson, Emerging Media Specialist