Archivo de la categoría: SDN

Part 2: Cisco Live 2015 Recap – AWS Direct Connect, VIRL Facelift & More!

It was another great Cisco Live event this year! My colleague Dan Allen wrote a post summarizing the key takeaways he got out of the event. I wanted to add some of my own to supplement his. As you probably know, it was John Chambers' last Cisco Live event as CEO – which makes it especially cool that I got this picture taken with him!


Expanded DevNet Zone

Last year Cisco introduced the DevNet Zone, which focused on giving people hands-on access to Cisco's most groundbreaking technology – the kind of thing that could be mistaken for science fiction until Cisco opened its toy box and let people see and touch what it had been hiding in there. This year we got to play with Internet of Things development environments, API-driven SDN solutions, virtual network simulation toolkits and drone technologies hosted by the co-founder of iRobot. Last year it was four little booths in between two restrooms, with giveaways to get people to come in. This year it consumed a whole section of the convention center, with over 20 booths, six interactive labs and a variety of exhibits and guest speakers delivering presentations on the future of technology.

Programmability and automation were a part of every session no matter what the topic was

It didn’t matter if you were attending entry-level or advanced breakout sessions, IT management track courses or developer workshops; everything you attended at Cisco Live this year had something to do with automation, programmability, cloud connectivity or application awareness. This was very different from any of the 8 Cisco Live events I’ve attended throughout my career. If you’re a technologist and have any doubt in your mind that this is where the industry is headed, you’d better start learning new skills because, like it or not, our customers and the customers of our customers are, or will soon be, believers and consumers of these technologies and consumption models.

Cisco and Amazon TEAM up to BEEF up AWS Direct Connect

AWS Direct Connect is a part of Amazon's APN Partner program and consists of ISPs that provide WAN circuits directly connected to AWS datacenters. That means if you're a Level 3 or AT&T MPLS customer and you have 10 offices and 2 datacenters on that MPLS network, Amazon AWS can now become another site on that private WAN. That's HUGE! Just look at a small portion of their ISP partner list:

  • AT&T
  • Cinenet
  • Datapipe
  • Equinix, Inc.
  • FiberLight
  • Fiber Internet Center
  • First Communications
  • Global Capacity
  • Global Switch
  • Global Telecom & Technology, Inc. (GTT)
  • Interxion
  • InterCloud
  • Level 3 Communications, Inc.
  • Lightower
  • Masergy
  • Maxis
  • Megaport
  • MTN Business
  • NTT Communications Corporation
  • Sinnet
  • Sohonet
  • Switch SUPERNAP
  • Tata Communications
  • tw telecom
  • Verizon
  • Vocus
  • XO Communications

 

Combine that with a CSR1000v and an ASAv and you have a public cloud that can be managed and utilized exactly like a physical colo, completely transparent to both your network teams and users.

ASAv in AWS

This little announcement slipped under the radar when it was made a week before Cisco Live but was definitely front and center in the Cisco Solutions Theater. The ASA1000v has been Cisco's only answer to a full-featured virtual security appliance for the past two years or so. The only problem is that it required the Nexus 1000v, which the industry as a whole has been reluctant to embrace (particularly in the public cloud space). Well, good news: the ASAv doesn't require the Nexus 1000v and has therefore opened the doors for the likes of Amazon AWS and Microsoft Azure to let us build an all-Cisco Internet and WAN edge within an AWS Virtual Private Cloud (VPC). This means you can manage the edge of your AWS VPC the same way you manage the edge of your datacenters and offices. The ASAv supports everything an ASA supports, which will soon include the full FirePOWER feature set. Have you ever tried building a VPN tunnel to an ASA at a customer's datacenter from the AWS VPC Customer Gateway? I have – not the best experience. Well, not any more – it's pretty cool!

ACI was big this year, but not as big as last year

I was expecting more of the same from last year on this one. Just about everywhere you looked last year, you saw something about ACI. This year was a more targeted effort, both in the breakout sessions and in the Cisco Solutions Theater. I'm not saying it didn't get a lot of attention, just not as much as last year and certainly not more. This shouldn't come as too big of a surprise to anyone used to Cisco's marketing and positioning tactics, however. Last year was geared toward awareness of the new technology; this year was more geared toward applying the technology to very specific use cases and toward advances in its capabilities. The honeymoon is clearly over, and everyone was focused on how to live everyday life with ACI being a part of it.

APIC can interact with ASA and other non-Cisco devices

The ACI APIC is slowly gaining more and more capabilities for programmatic interaction with other Cisco and non-Cisco appliances. For example, it can now instantiate policies and other configuration elements on ASA, FortiGate, F5 and Radware appliances as part of its policy-driven infrastructure.
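As a rough illustration of what that policy-driven interaction looks like, here is a sketch in Python that assembles a firewall policy payload of the kind a controller might push to a managed appliance. The structure and field names are invented for illustration; this is not the actual APIC object model or device-package API.

```python
import json

def build_fw_policy(tenant, contract, acl_entries):
    """Assemble an illustrative JSON payload resembling what a controller
    might push to a firewall appliance. Field names are hypothetical."""
    return {
        "tenant": tenant,
        "contract": contract,
        "rules": [
            {"name": name, "action": action, "proto": proto, "port": port}
            for (name, action, proto, port) in acl_entries
        ],
    }

# Hypothetical example: permit SQL from web tier to DB tier, deny the rest.
payload = build_fw_policy(
    "prod", "web-to-db",
    [("allow-sql", "permit", "tcp", 1433),
     ("deny-all", "deny", "ip", None)],
)
print(json.dumps(payload, indent=2))
```

The point is simply that the appliance's configuration becomes data the controller renders and pushes, rather than CLI someone types.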

iWAN almost officially tested and supported on CSR1000v

As of next month, the iWAN suite of technologies will be officially tested and supported on the CSR1000v platform which means all of that functionality will now be available in public cloud environments. More to come on iWAN in another post.

CSR1000v

The CSR1000v (Cloud Services Router) is Cisco's answer to a virtual router. Until now, it's been something of an "Oh ya? We can do that too" project. Now it's a full-fledged product with a dedicated product team. It's supported across just about every public cloud provider and by every Cisco Powered Cloud partner (Cirrity, Peak 10, etc.).

Additionally, I managed to get the product team to pull back the covers on the roadmap a bit and reveal that Dynamic Multipoint VPN (DMVPN) will be supported on the CSR1000v soon, along with a number of other ISR/ASR features, which will make for a truly seamless WAN that includes your public cloud resources.

Non-Cisco Cloud News – Azure Virtual Network now supports custom gateways

A big challenge to real adoption of non-Microsoft application workloads in Azure has been the inability to use anything but Azure's gateway services at the edge of your Azure Virtual Network. Well, Cisco let the cat out of the bag on this one: Cisco CSRs and ASRs will soon be supported as gateway devices in an Azure Virtual Network. For me, this really brings Azure into focus when selecting a public cloud partner.

APIC-EM has more uses than ever

Cisco Application Policy Infrastructure Controller Enterprise Module (rolls right off the tongue, right?), or APIC-EM, is Cisco's answer to an SDN controller. It's part of Cisco's ONE software portfolio and has more uses than ever. Don't confuse the APIC-EM with the ACI APIC, however. The ACI APIC is the controller and central point of interaction for Cisco's ACI solution and runs on Cisco C-Series servers. The APIC-EM, however, is a free SDN controller that can run as a VM and interact with just about anything that has an API. That's right.
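To give a feel for the "interact with just about anything that has an API" point, here is a hedged Python sketch that parses an inventory-style response of the shape a controller such as APIC-EM might return. The field names are assumptions for illustration; a real deployment would fetch this over HTTPS with an authentication ticket rather than from a canned string.

```python
import json

# Canned response shaped like a controller inventory call. The structure
# and field names here are illustrative assumptions, not a documented API.
sample = json.loads("""
{"response": [
  {"hostname": "edge-rtr-01", "managementIpAddress": "10.0.0.1", "family": "Routers"},
  {"hostname": "core-sw-01",  "managementIpAddress": "10.0.0.2", "family": "Switches"}
]}
""")

def inventory(resp):
    """Flatten the controller response into (hostname, mgmt-ip) pairs."""
    return [(d["hostname"], d["managementIpAddress"]) for d in resp["response"]]

print(inventory(sample))
```

Once device inventory is just JSON, any script or tool can consume it, which is the whole appeal of an API-driven controller.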

VIRL got a facelift

Cisco's Virtual Internet Routing Lab (VIRL) is getting some real attention. It's an application, unveiled to Cisco DevNet partners last year, that lets you virtually build Cisco networks with VMs running real IOS and NX-OS code to simulate a design and test its functionality. As a partner, this is huge, as we can virtually replicate customer environments as a proof of concept or troubleshooting tool. It's getting more development support within Cisco.

 

A lot of crucial information and updates came out of this event. If you would like to discuss any in more detail, feel free to reach out!

 

By Nick Phelps, Principal Architect

Real World Example: Deploying VMware NSX in the Financial Sector

I recently finished up a project implementing VMware’s NSX and wanted to take a minute to recap my experience. The client I worked with provides call center services in the financial sector. They have to be able to securely access systems that have the ability to see credit card information along with other personal, sensitive information.

The customer is building out new facilities to host their primary, PCI-related applications. In this environment, they have to provide the highest levels of security while delivering high-performing networking services. To meet those requirements, they had to purchase new infrastructure: blade center systems, networking infrastructure (Nexus 5672s, Nexus 6000s, Nexus 7710s, Juniper SRXs, F5 load balancers, etc.) and software licensing, among other things.

They then found they needed to purchase additional pairs of F5 load balancers but were up against their budget. When this happened, the Director / VP in charge of the project evaluated VMware's NSX technology. After some initial discussions, he realized that NSX could not only provide the type of security the environment needed and drive higher efficiencies, but could also deliver some of the general networking services he was looking for.

Previous network designs included the need for complete isolation of some workloads; to achieve this, the design called for trusted traffic to traverse a separate pair of distribution/access layer switches to reach external networks. This design also made it necessary to acquire separate F5 load balancers, as specific traffic was not allowed to commingle on the same physical infrastructure due to the way the security team wanted to steer trusted and untrusted traffic. This meant the team was required to purchase twice the hardware: separate Nexus 6000s and separate F5 load balancers.

Because of the NSX Distributed Firewall capabilities, security teams can place required rules and policies closer to applications than was previously achievable. As a result, the networking designs changed, alleviating infrastructure requirements previously deemed necessary. The ability to stop untrusted traffic before it ever reaches a logical or physical wire gave the team the opportunity to converge more of their networking equipment, eliminating the need for separate Nexus 6000s. In addition, with the NSX Edge Services Gateway able to provide network load balancing, they were no longer required to purchase additional physical equipment for this service. With the budget they put toward NSX licensing, they got all the security and load-balancing services they were looking for and also put money back into their budget.

The Engagement:

Over the span of approximately one month, the security team, networking team, server / virtualization team, and an auditing team worked together in designing what the NSX solution needed to achieve and how it would be implemented. I believe this to be an important aspect of NSX projects because of the misconception that the server / virtualization teams are trying to take over everything. Without each team, this project would have been a disaster.

As requirements were put forth, we built out NSX in building blocks. First, we identified that we would utilize VXLAN to achieve the desired efficiencies: eliminating VLAN sprawl, segregating trusted traffic at the logical, software layer, and simplifying Disaster Recovery designs by allowing the same IP address space to be reused. Once networks and routing were implemented, we were able to test connectivity from various sites while meeting all of the security team's requirements.

The next item was implementing NSX security, which required new ways of thinking for most teams. With VMware NSX, customers have the ability to manage security based on vCenter objects, which provides more flexibility. We walked through what the contents of each application were, what types of communications were necessary and what types of policies were required, and, in identifying these items, we were able to build dynamic and static Security Groups. We then built Security Policies (some basic ones that could apply to a majority of similar applications, some application specific) and were able to re-use these policies against various Security Groups, speeding the deployment of application security. We applied weights to these policies to ensure application-specific policies took precedence over the generic ones.

In addition to NetFlow, we enabled Flow Monitoring as a means for the networking and security teams to monitor traffic patterns within the NSX environment.
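The grouping and precedence logic described above can be sketched as a toy model: dynamic Security Groups match on vCenter-style attributes, and weighted policies decide which rules take effect. All names, fields and weights below are invented for illustration; this is not the NSX API.

```python
# Toy inventory of VMs with vCenter-style attributes (names and tags).
vms = [
    {"name": "pci-web-01", "tags": {"pci"}},
    {"name": "pci-db-01",  "tags": {"pci", "db"}},
    {"name": "dev-web-01", "tags": set()},
]

def dynamic_group(vms, name_prefix=None, tag=None):
    """Evaluate a dynamic Security Group: membership is computed from
    object attributes rather than maintained as a static list."""
    return {v["name"] for v in vms
            if (name_prefix is None or v["name"].startswith(name_prefix))
            and (tag is None or tag in v["tags"])}

pci_group = dynamic_group(vms, tag="pci")

# Higher weight wins, mirroring application-specific policies taking
# precedence over generic baselines.
policies = [
    {"name": "generic-baseline", "weight": 100},
    {"name": "pci-db-specific",  "weight": 500},
]

def effective_policy(policies):
    return max(policies, key=lambda p: p["weight"])["name"]

print(sorted(pci_group))           # ['pci-db-01', 'pci-web-01']
print(effective_policy(policies))  # pci-db-specific
```

The payoff of the dynamic approach is that a newly deployed VM with the right name or tag inherits its security posture automatically, with no firewall change ticket.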

All in all, this was a very successful project. Our client can now better secure their internal applications as well as better secure sensitive customer data.

Remember: NSX is often mislabeled as a server team product, but the network team and security team need to know how it works and need to be able to implement it.

Are you interested in learning more about how GreenPages can help with similar projects? Email us at socialmedia@greenpages.com

 

By Drew Kimmelman, Consultant

HP buys ConteXtream in SDN, NFV play

HP is acquiring SDN specialist ConteXtream


HP has acquired software-defined networking (SDN) specialist ConteXtream to strengthen its service provider business and network function virtualisation (NFV) offerings.

Founded in 2007, ConteXtream provides an OpenDaylight-based, carrier-grade SDN fabric controller that works on most hypervisors and commodity server infrastructure. It's based on the IETF network virtualisation overlay (NVO3) architecture, which includes virtualised network edge nodes that aggregate flows and map them to specific functions, a mapping subsystem based on the Locator/Identifier Separation Protocol (LISP), a set of application-specific flow handlers for service chaining, and a high-performance software flow switch.
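The mapping subsystem mentioned above can be pictured as a simple lookup. Here is a minimal Python sketch of the LISP-style idea of resolving an endpoint identifier (EID) to a routing locator (RLOC) so an edge node knows where to tunnel a flow. The addresses and entries are invented examples, not ConteXtream's implementation.

```python
# Illustrative EID -> RLOC map: which edge node currently "hosts" each
# endpoint identifier. In LISP terms this is the mapping system's job.
mapping_system = {
    "10.1.1.10": "192.0.2.1",  # subscriber EID -> edge node locator
    "10.1.1.20": "192.0.2.2",
}

def resolve(eid):
    """Return the locator to tunnel toward, or None if the EID is unmapped."""
    return mapping_system.get(eid)

print(resolve("10.1.1.10"))  # 192.0.2.1
```

Separating identity from location this way is what lets endpoints (and service functions) move without renumbering the underlying network.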

The company also offers analytics that help monitor traffic and detect anomalies.

“We’re moving away from being tied to dedicated machines to having a resource pool with automated, self-service mechanisms. In the networking world, there are countless functions – firewall, caching, optimization, filtering etc. – and a bunch of inflexible hardware to do those things. NFV is about saying, ‘Why can’t we put these various functions in the cloud? Why does each function need to be on specialized and dedicated hardware?’,” explained HP’s telco business lead Saar Gillai.

“ConteXtream’s scalable and open and standards-based technology delivers innovative capabilities like advanced service function chaining, and is deployed at a number of major carrier networks across the globe. ConteXtream’s technology connects subscribers to services, enabling carriers to leverage their existing standard server hardware to virtualize functions and services.”

Gillai said the acquisition will accelerate its leadership in NFV, and that HP also plans to increase its involvement with OpenDaylight, an open source collaboration between many of the industry’s major networking incumbents on the core architectures enabling SDN and NFV.

The past year has seen HP slowly scale up its involvement with SDN and NFV initiatives.

In September last year the company announced the launch of an app store for HP customers to download SDN-enabled and virtual networking applications and tools – network monitoring tools, virtual firewalls, virtual load balancers and the like – developed by HP as well as by third parties and open source communities. It also partnered with Wind River to integrate its NFV technologies with HP Helion OpenStack.

Ericsson cloud lab to focus on NFV, SDN

Ericsson is opening up a lab to help coordinate SDN and NFV research among telcos


Ericsson has opened a lab in Italy which will coordinate research among telecoms operators on deploying software-defined networking (SDN) and network function virtualisation (NFV) in their datacentres.

The company said the lab, which will be based in Rome but will also have an associated cloud platform for data sharing and collaboration, will help develop multi-vendor SDN and NFV solutions that primarily address the needs of telcos.

Participating organisations will be able to link up to the cloud platform and share their results.

Nunzio Mirtillo, head of Ericsson in the Mediterranean region said: “Cloud will enable the biggest evolution of the telecom business and this new lab is an example of Ericsson’s passion for driving innovations in Italy.”

“As great ideas come from collaboration, operators can turn cloud-based approaches to their advantage and implement new architectures that provide network efficiency and shorter time to market for innovative services,” Mirtillo added.

The company said the lab is intended to help operators experiment with integrating SDN and NFV technologies into their existing infrastructure estate, which can be quite a challenge for most that aren't refreshing their hardware quickly enough for SDN or NFV compliance. As a result, many have been forced to take the overlay approach.

Ericsson is already working with a number of operators on SDN and NFV. Last year the company was tapped up by Telstra and AT&T to help virtualise key aspects of their networks.

Comcast, Lenovo join OpenDaylight SDN effort

Comcast and Lenovo have thrown their weight behind the OpenDaylight Project


Comcast and Lenovo have thrown their hats into the OpenDaylight Project, an open source collaboration between many of the industry’s major networking incumbents on the core architectures enabling software defined networking (SDN) and network function virtualisation (NFV).

The recent additions bring the OpenDaylight Project, a Linux Foundation Collaborative Project, to just over the fifty-member mark. The community is developing an open source SDN architecture and software (Helium) that supports a wide range of protocols including OpenFlow, the southbound protocol around which most vendors have consolidated.

“We’re seeing more end users starting to adopt OpenDaylight and participate in its development as the community sharpens its focus on stability, scalability, security and performance,” said Neela Jacques, executive director, OpenDaylight.

“Comcast has been testing ODL and working with our community since launch and the team at Lenovo were heavily involved in ODL’s foundation through their roots at IBM. Our members see the long-term value of creating a rich ecosystem around open systems and OpenDaylight,” Jacques said.

Igor Marty, chief technology officer, Lenovo Worldwide SDN and NFV said: “We believe that the open approach is the faster way to deploy solutions, and what we’ve seen OpenDaylight achieve in just two years has been impressive. The OpenDaylight community is truly leading the path toward interoperability by integrating legacy and emerging southbound protocols and defining northbound APIs for orchestration.”

The move will no doubt give the project more credibility in both carrier and enterprise segments.

Since Lenovo’s acquisition of IBM’s low-end x86 server unit it has been pushing heavily to establish itself as a serious player among global enterprises, where open standards continue to gain favour when it comes to pretty much every layer of the technology stack.

Comcast is also placing SDN at the core of its long-term network strategy and has already partnered with CableLabs, a non-profit R&D outfit investigating technology innovation and jointly owned by operators globally, on developing southbound plugins for OpenDaylight’s architecture.

“Like many service providers, Comcast is motivated to reduce the operational complexity of our networks. In the near-term this involves significant improvements to network automation under what we call our Programmable Network Platform. This framework outlines a stack of behaviors and abstraction layers that software uses to interact with the network,” explained Chris Luke, senior principal engineer, Comcast and OpenDaylight Advisory Group member.

“Some of our key objectives are to simplify the handoffs from the OSS/BSS systems, empower engineers to rapidly develop and deploy new services and to improve the operational support model. It is our hope that by harmonizing on a common framework and useful abstractions, more application groups within the company will be able to make use of better intelligence and more easily interact with the network.”

Luke said the company already has several proof-of-concepts in place, including an app that provides network intelligence abstraction in a way that allows it to treat its internal network like a highly elastic CDN, and mechanisms to integrate overlay edge services with legacy network architectures like MPLS.

“When ODL was launched we were excited to see that the industry was moving to a supportable open source model for SDN. There were a growing number of proprietary SDN controllers at the time and that had service providers like us questioning the direction of the market and whether it made sense to us. We were pleased to see an open source platform come forward aiming to provide a neutral playing field with support for more than just OpenFlow.”

VMware NSX vs. Cisco ACI: Which SDN solution is right for me?

In a video I did recently, I discussed steps organizations need to take to prepare their environments to be able to adopt software defined technologies when the time comes. In this video, I talk about VMware NSX and Cisco ACI.

VMware NSX and Cisco ACI are both really hot technologies that are generating a lot of conversation. Both are API driven SDN solutions. NSX and ACI are really good in their unique areas and each come at it from a unique perspective. While they are both very different solutions, they do have overlapping functionality.

//www.youtube.com/watch?v=xtdfHGnCovA

 

Are you interested in talking with Nick about VMware NSX or Cisco ACI? Let’s set up some time!

 

By Nick Phelps, Principal Architect

Verizon confirms SDN overhaul plans

Verizon is revamping its network


Verizon has confirmed publicly its plans to develop and implement a software defined networking infrastructure, working alongside Alcatel-Lucent, Cisco, Ericsson and Nokia Networks, among others, reports Telecoms.com.

The US telco claims its SDN project will enable a transformation of its existing network, introduce new operational efficiencies and accelerate rapid and flexible service delivery to its customers. In outlining its intended overhaul, Verizon has worked with its aforementioned technology partners to create an SDN network architecture overview document.

The document, the telco claims, has included all interface specifications, reference architectures, plus requirements for both the control layer and forwarding box functions. It appears, as a consequence, Verizon is giving its suppliers very specific requirements for the upgrade, and that each partner is expected to deliver unique and bespoke elements to allow it to achieve the business and technical benefits of an SDN-enabled network.

The business case for implementing SDN has been well documented, such as elastic and scalable network-wide service creation, as well as dynamic resource allocation and network automation. Speaking of the announcement, Verizon’s chief information and technology architect, Roger Gurnani, reckons harnessing SDN will enable Verizon to more agilely deliver new services to its customers.

“Verizon and our key technology partners have always focussed on providing high-performance networks for our customers, and with this SDN architecture we will continue to ensure our network and services meet the needs of our customers, today and in the future,” he said.

Cisco’s chairman and CEO John Chambers, meanwhile, has targeted IoT as the next big growth opportunity for telcos, and says SDN will help enable its monetisation.

“This will become the foundation for innovative, new Verizon services and applications,” he said. “Both companies share a vision to transform the entirety of the network architecture to achieve the speed and operational efficiency required to meet the needs of today, as well as capture the growth opportunities to monetize with the Internet of Everything over the next decade and beyond.”

IBM opens SDN, NFV labs in Dallas, Paris

IBM is moving to bolster its service provider business


IBM has announced the launch of two Network Innovation Centres, where the company’s clients can experiment with software-defined networking and network function virtualisation technologies. The move seems aimed at bolstering its service provider business.

The centres, one in Paris, France and the other in Dallas, Texas, will focus primarily on experimenting with solutions for large enterprise networking systems and telecoms operators, and feature technologies from a range of IBM partner companies including Brocade, Cisco, Citrix, Juniper Networks, Riverbed, and VMware.

IBM said facilitating automation and orchestration innovation will be the main thrust of the centres.

“Effectively applying cloud technologies to the network could allow a company to reduce its overall network capacity while increasing utilization by dynamically providing resources during the day in Beijing while it’s nighttime in New York, and vice versa,” said Pete Lorenzen, general manager, Networking Services, IBM Global Technology Services.

“A telecom company could better manage periodic, localized spikes in smartphone usage caused by major sporting events or daily urban commutes, dynamically provisioning capacity when and where it’s needed,” Lorenzen added.

IBM has pushed farther into the networking space in recent years, having scored a number of patents in the area of networking automation and dynamic network resource allocation. A significant driver of this is its service provider business, where some of the company’s competitors – like HP – are attempting to make inroads.

Open Networking Foundation wary of ‘big vendor’ influence on SDN

Pitt said networking has remained too proprietary for too long


Dan Pitt, executive director of the Open Networking Foundation (ONF), has warned of the dangers of allowing the big networking vendors to have too much influence over the development of SDN, arguing they have a strong interest in maintaining the proprietary status quo.

In an exclusive interview with Telecoms.com, Pitt recalled the non-profit ONF was born of frustration at the proprietary nature of the networking industry. “We came out of research that was done at Stanford University and UC Berkeley that was trying to figure out why networking equipment isn’t programmable,” he said.

“The networking industry is still in the mainframe days; you buy a piece of equipment from one company and its hardware, chips and operating system are all proprietary. The computing industry got over that a long time ago – basically when the PC came out – but the networking industry hasn’t.

“So out of frustration at not being able to programme the switches and with faculties wanting to experiment with protocols beyond IP, they decided to break open the switching equipment and have a central place that sees the whole network, figures out how the traffic should be routed and tells the switches what to do.”

Disruptive change, by definition, is bound to threaten a lot of incumbents, and Pitt identifies this as a major reason why networking stayed in the proprietary era for so long. “Originally we were a bunch of people that had been meeting on Tuesday afternoons to work out this OpenFlow protocol and we said we should make it an industrial strength standard,” said Pitt. “But if we give it to the IETF they’re dominated by a small number of very large switching and routing companies and they will kill it.”

“This is very disruptive to some of the traditional vendors that have liked to maintain a proprietary system and lock in their customers to end-to-end solutions you have to buy from them. Some have jumped on it, but some of the big guys have held back. They’ve opened their own interfaces but they still define the interface and can make it so you still need their equipment. We’re very much the advocates of open SDN, where you don’t have a single party or little cabal that owns and controls something to disadvantage their competitors.”

Ultimately it’s hard to argue against open standards as they increase the size of the industry for everyone. But equally it’s not necessarily in the short term interest of companies already in a strong position in a sector to encourage its evolution. What is becoming increasingly clear, however, is that the software genie is out of the bottle in the networking space and the signs are that it’s a positive trend for all concerned.

Gartner Data Center Conference: Success in the Cloud & Software Defined Technologies

I just returned from the Gartner Data Center conference in Vegas and wanted to convey some of the highlights of the event. This was my first time attending a Gartner conference, and I found it pretty refreshing, as they take an agnostic approach to all of their sessions, unlike a typical vendor-sponsored event like VMworld, EMC World, Cisco Live, etc. Most of the sessions I attended were around cloud and software defined technologies. Below, I'll bullet out what I consider to be highlights from a few of the sessions.

Building Successful Private/Hybrid Clouds –

 

  • Gartner sees the majority of private cloud deployments being unsuccessful. Here are some common reasons for that…
    • Focusing on the wrong benefits. It’s not all about cost in $$. In cloud, true ROI is measured in agility vs dollars and cents
    • Doing too little. A virtualized environment does not equal a private cloud. You must have automation, self-service, monitoring/management, and metering in place at a minimum.
    • Doing too much. Putting applications/workloads in the private cloud that don’t make sense to live there. Not everything is a fit nor can take full advantage of what cloud offers.
    • Failure to change operational models. It’s like being trained to drive an 18 wheeler then getting behind the wheel of a Ferrari and wondering why you ran into that tree.
    • Failure to change funding model. You must, at a minimum, have a showback mechanism so the business will understand the costs; otherwise they'll just throw the kitchen sink into the cloud.
    • Using the wrong technologies. Make sure you understand the requirements of your cloud and choose the proper vendors/technologies. Incumbents may not necessarily be the right choice in all situations.
  • Three common use cases for building out a private cloud include outsourcing commodity functions, renovating infrastructure and operations, and innovation/experimentation…but you have to have a good understanding of each of these to be successful (see above).
  • There is a big difference between doing cloud to drive bottom line (cost) savings vs top line (innovation) revenue expansion. Know ‘why’ you are doing cloud!
  • On the hybrid front, it is very rare today to see fully automated environments that span private and public as the technology still has some catching up to do. That said, it will be reality within 24 months without a doubt.
  • In most situations, only 20-50% of all applications/workloads will (or should) live in the cloud infrastructure (private or public) with the remaining living in traditional frameworks. Again, not everything can benefit from the goodness that cloud can bring.
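The showback point in the funding-model bullet above can be made concrete with a small sketch: meter each team's consumption and report what it costs, so the business sees the price of what it puts in the cloud. The rates and usage figures below are invented for illustration.

```python
# Hypothetical unit rates for metered resources.
RATES = {"vcpu_hours": 0.03, "gb_ram_hours": 0.01, "gb_storage": 0.10}

# Hypothetical metered consumption per business unit for the period.
usage = {
    "finance":   {"vcpu_hours": 2000, "gb_ram_hours": 4000, "gb_storage": 500},
    "marketing": {"vcpu_hours": 500,  "gb_ram_hours": 1000, "gb_storage": 200},
}

def showback(usage, rates):
    """Compute each team's cost from its metered usage."""
    return {team: round(sum(qty * rates[metric] for metric, qty in meters.items()), 2)
            for team, meters in usage.items()}

print(showback(usage, RATES))  # {'finance': 150.0, 'marketing': 45.0}
```

Even without actual chargeback, publishing numbers like these is usually enough to change consumption behavior.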

Open Source Management Tools (Free or Flee) –

 

  • Organizations with fewer than 2500 employees typically look at open source tools to save on cost while larger organizations are interested in competitive advantage and improved security.
  • Largest adoption is in the areas of monitoring and server configuration, while cloud management platforms (e.g. OpenStack), networking (e.g. OpenDaylight), and containers (e.g. Docker) are gaining momentum.
  • When considering one of these tools, it's very important to look at how active the community is to ensure the tool remains relevant.
  • Where is open source being used in the enterprise today? Almost half (46%) of deployments are departmental while only about 12% of deployments are considered strategic to the overall organization.
  • Best slide I saw at the event, which pretty much sums up open source…

 

Gartner Data Center Conference

 

If this makes you excited, then maybe open source is for you.  If not, then perhaps you should run away!

3 Questions to Ask Your SDN Vendor –

  • First, a statistic: organizations that fail to properly integrate their virtualization and networking teams will see a 3x longer MTTR (mean time to resolution) of issues vs. those that do properly integrate the teams.
  • There are approximately 500 true production SDN deployments in the world today
  • The questions to ask…
    • How to prevent network congestion caused by dynamic workload placement
    • How to connect to bare metal (non-virtualized) servers
    • How to integrate management and visibility between the underlay/overlay
  • There are numerous vendors in this space, it’s not just VMware and Cisco.
  • Like private cloud, you really have to do SDN for the right reasons to be successful.
  • Last year at this conference, there were 0 attendees who indicated they had investigated or deployed SDN. This year, 14% of attendees responded positively.

 

If you’re interested in a deeper discussion around what I heard at the conference, let me know and I’ll be happy to continue the dialogue.

 

By Chris Ward, CTO. Follow Chris on Twitter @ChrisWardTech. You can also download his latest whitepaper on data center transformation.