
Huh? What’s the Network Have to Do with It?

By Nate Schnable, Sr. Solutions Architect

After 17 years in this field, it still amazes me that people tend to forget about the network. Everything a user accesses on their device that isn't installed or stored locally depends on the network more than on any other element of the environment. The network is responsible for the quick and reliable transport of data, which means the user experience with remote files and applications depends almost completely on it.

However, this isn't always obvious to everyone, so people rarely ask for network-related services; they simply aren't aware the network is the cause of their problems. Whether it's a storage, compute, virtualization, or IP telephony initiative, all of these projects rely heavily on the network to function properly. In fact, the network is the only element of a customer's environment that touches every other component. Its stability can make or break a project's success and the all-important user experience.

In a VoIP initiative we have to consider, among many other things, that proper QoS policies be set up (so let's hope you aren't running on dumb hubs). Power over Ethernet (PoE) should be available for the phones unless you want to use power bricks or some type of mid-span device (yuck). I used to work for a Fortune 50 insurance company, and one day an employee decided to plug both ports on their phone into the network, figuring it would make the experience even better. Not so much: the resulting switching loop brought down the whole environment. We made some changes after that to keep it from happening again!
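To make the PoE point concrete, here is a quick back-of-the-envelope check, written as a Python sketch, of whether a switch's power budget can actually feed a closet full of phones. The per-phone draw is the 802.3af Class 2 figure; the switch budget and phone count are made-up examples, so substitute your own datasheet numbers.

```python
# Back-of-the-envelope PoE budget check. The switch budget and phone
# count below are hypothetical; check your hardware's datasheet.

SWITCH_POE_BUDGET_W = 370.0   # example budget for a 48-port PoE switch
PHONE_DRAW_W = 7.0            # 802.3af Class 2, typical of many IP phones
PHONE_COUNT = 48

demand = PHONE_COUNT * PHONE_DRAW_W
print(f"Worst-case demand: {demand:.0f} W of a {SWITCH_POE_BUDGET_W:.0f} W budget")
if demand > SWITCH_POE_BUDGET_W:
    print("Budget exceeded: not every port can power a phone at full draw.")
else:
    print(f"Headroom remaining: {SWITCH_POE_BUDGET_W - demand:.0f} W")
```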

In a disaster recovery project we have to look at the distances, and the resulting latencies, between locations. What is the bandwidth, and how much data do you need to back up? Do we have Layer 2 handoffs between sites, or is it a more traditional Layer 3 site-to-site connection?
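Both of those questions reduce to quick arithmetic, and a rough Python sketch like the one below is often enough to frame the conversation. Light in fiber covers roughly 200 km per millisecond, and the backup window is just data volume divided by usable bandwidth; the distance, data size, link speed, and efficiency factor here are invented examples.

```python
# Rough DR math: propagation latency between sites, plus the time needed
# to move a nightly backup. All inputs are hypothetical examples.

FIBER_KM_PER_MS = 200.0        # light travels ~200 km per ms in fiber
distance_km = 800              # primary site to DR site
rtt_ms = 2 * distance_km / FIBER_KM_PER_MS
print(f"Best-case round-trip latency: {rtt_ms:.1f} ms (before equipment delay)")

backup_tb = 5.0                # data to replicate each night
link_gbps = 1.0                # WAN bandwidth dedicated to replication
efficiency = 0.7               # protocol overhead, contention, etc.
hours = backup_tb * 8_000 / (link_gbps * efficiency) / 3600
print(f"Moving {backup_tb} TB takes about {hours:.1f} hours")  # ~15.9 here
```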

If we are implementing a new iSCSI SAN, do we need 1 Gb or 10 Gb? Do your switches support jumbo frames and flow control? Hope your iSCSI switches are truly stackable, because otherwise spanning tree could leave some of those paths redundant but not active.
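One way to frame the jumbo frames question is payload efficiency per Ethernet frame. The sketch below uses the standard per-frame overheads (38 bytes of Ethernet framing on the wire plus 40 bytes of IP and TCP headers); everything else about it is illustrative.

```python
# Payload efficiency of standard vs. jumbo frames for iSCSI traffic.
# Wire overhead: 38 B of Ethernet framing (header, FCS, preamble,
# inter-frame gap); packet overhead: 40 B of IP + TCP headers.

WIRE_OVERHEAD = 38
IP_TCP_HEADERS = 40

def efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} of line rate carries data")
# MTU 1500: ~94.9%; MTU 9000: ~99.1%, plus far fewer frames to process.
```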

I was reading the other day that sales of smartphones and tablets are expected to reach approximately 1.2 billion in 2013. Some of these will most certainly end up on your wireless networks. How to manage that is definitely a topic for another day.

In the end, it just makes sense to consider the network implications before jumping into almost any type of IT initiative. Just because those green lights are flickering doesn't mean it's all good.

 

To learn more about how GreenPages Networking Practice can help your organization, fill out this form and someone will be in touch with you shortly.

Optimizing Controller-Based Wireless LANs with Good Old-Fashioned Autonomous Concepts. Well – Kinda.

Get your attention yet? This isn't fresh news by any stretch, but there are some good concepts to observe when deploying today's controller-based WLANs. We have known for years the benefits of a typical controller-based wireless network. The controller's visibility into the access points (APs), and its ability to dynamically change their channels and power output, is fantastic. What it's called depends on the manufacturer (Radio Resource Management, Adaptive Radio Management, and so on), but either way it's one of the main reasons to go with a controller-based solution.
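As a toy illustration of what that coordination buys you (and emphatically not any vendor's actual RRM or ARM algorithm), here is a greedy channel picker in Python: it walks a set of hypothetical APs and gives each one the 2.4 GHz channel least used by its already-assigned neighbors.

```python
# Toy channel assignment in the spirit of RRM/ARM: each AP greedily takes
# the non-overlapping 2.4 GHz channel (1, 6, or 11) least used among its
# neighbors. Real controllers also weigh interference, power, and load.

CHANNELS = (1, 6, 11)

# Hypothetical campus: which APs can hear which.
neighbors = {
    "ap1": ["ap2", "ap3"],
    "ap2": ["ap1", "ap3", "ap4"],
    "ap3": ["ap1", "ap2"],
    "ap4": ["ap2"],
}

assigned = {}
for ap in neighbors:
    taken = [assigned[n] for n in neighbors[ap] if n in assigned]
    # Pick the channel with the fewest already-assigned neighbors on it.
    assigned[ap] = min(CHANNELS, key=taken.count)

print(assigned)  # {'ap1': 1, 'ap2': 6, 'ap3': 11, 'ap4': 1}
```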

On top of this, we also get Layer 3 roaming capabilities. In a typical controller-based solution, each access point builds a tunnel back to the controller, which in many cases gives you the ability to roam between Layer 3 subnets. Consider a corporate campus: there could very well be a voice and a data VLAN per closet. If the access points didn't tunnel back to a controller, we could drop sessions when a user moved from an AP on one subnet to an AP on another (unless we extended a wireless VLAN across the campus, which has its own implications). That may not matter for some standard TCP applications, but for time-sensitive traffic such as voice it could be disastrous. With a tunnel from each AP to the controller, we get Layer 3 roaming without having to create a sprawling wireless VLAN. Voice connections stay up and all is good, right?
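A minimal sketch of why those tunnels preserve sessions: because every AP forwards the client's traffic to the controller, the client's IP address is anchored there and survives a move between APs on different wired subnets. The names and subnets below are invented for illustration.

```python
# Minimal model of controller-anchored Layer 3 roaming. Each AP sits on
# its own wired subnet, but the client keeps the controller-anchored IP,
# so sessions survive an AP-to-AP (and subnet-to-subnet) move.

aps = {
    "ap-bldg1": "10.1.10.0/24",   # hypothetical closet subnets
    "ap-bldg2": "10.1.20.0/24",
}

client = {"ip": "10.50.0.25", "ap": "ap-bldg1"}  # IP lives controller-side

def roam(client: dict, new_ap: str) -> None:
    # Only the tunnel endpoint changes; the client's IP does not.
    client["ap"] = new_ap

roam(client, "ap-bldg2")
print(client)  # {'ip': '10.50.0.25', 'ap': 'ap-bldg2'}; the call stays up
```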

So what happens in a situation, say a remote office without a controller, where you want to keep local traffic local but tunnel the rest back to a controller at the data center? If we stick to the newer model, all traffic gets directed to the controller via the tunnel. It seems pointless for a print job at a remote branch to travel to the data center and back just to reach the printer next to me, right? Many manufacturers now allow their controller-based solutions to "hairpin" local traffic to keep it local rather than waste valuable bandwidth. This hybrid, or HREAP-style, approach gives us the benefits of the controller's centralized intelligence while minimizing the bandwidth burden those tunnels impose, as the sketch below shows. If traffic is meant for the data center, so be it; if it's meant to remain local, we can do that too. If the remote office is large enough to warrant its own local controller, that's a different conversation. Until next time.
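The forwarding decision itself is simple: if the destination sits on a subnet the branch owns, bridge it locally; otherwise, send it down the tunnel. A minimal sketch, with placeholder subnets:

```python
# Sketch of the local-vs-tunnel decision a hybrid (HREAP-style) AP makes.
# The branch subnets below are hypothetical placeholders.
import ipaddress

LOCAL_SUBNETS = [
    ipaddress.ip_network("192.168.10.0/24"),  # branch users
    ipaddress.ip_network("192.168.20.0/24"),  # branch printers
]

def forward(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    if any(addr in net for net in LOCAL_SUBNETS):
        return "bridge locally"               # e.g. the printer next to you
    return "tunnel to the data center controller"

print(forward("192.168.20.9"))   # bridge locally
print(forward("10.0.5.40"))      # tunnel to the data center controller
```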

We’ll be holding educational events in Boston, NYC, and Atlanta over the month of November. Should be a lot of good info and a great networking opportunity. Click for registration details.

Virtual Appliances and the Networking Team

Over the last few years, a lot of progress has been made toward virtualizing many of the traditional network-centric appliances that used to be hardware only. Why are some companies still resistant to this software-based approach? Is it because that's the way it has always been, or is it inherent to networking geeks who may be less virtualization-savvy than some of their cohorts in the other technology silos? It reminds me of the days when VoIP was first introduced and the lack of acceptance fueled by some of the old-school, traditional telephony engineers. Some of them accepted it; others retired. The point is that this shift makes sense, and those who accept it will be much the better for it.

With the industry moving toward private and public cloud offerings, the virtual appliance marketplace will most certainly continue to grow and mature. There are many reasons this makes a lot of sense.

Consider the time it takes to implement a physical network appliance. Let's use an application delivery controller, or load balancer if you prefer that term. How long does it take to put a physical box into an existing environment? Between ordering the units (they usually come in pairs), shipping, and installing, it takes some time. Cables need to be run, and the box must be racked, stacked, powered on, and provisioned. We have been doing this for years, and it used to be standard operating procedure. That works well enough in your own data center, but what about a public cloud offering? Sorry, you don't own that infrastructure. How about downloading a virtual appliance and spinning up a VM instead, and you are off to the races? You still have to provision the unit, but there are far fewer moving parts going that route. Cloud or not, it makes sense either way, and the infrastructure requirements shrink: power, rack space, cabling, and so on.
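To see the contrast, here is what the virtual route can look like in practice. This assumes VMware's ovftool CLI is installed and reachable; the appliance image, VM name, datastore, and vCenter path below are all placeholders, not a specific vendor's procedure.

```python
# Hypothetical sketch: deploying a virtual appliance (an OVA image) with
# VMware's ovftool CLI. Every name, path, and credential is a placeholder.
import subprocess

OVA = "vadc-appliance.ova"  # a virtual load balancer image (hypothetical)
TARGET = "vi://admin@vcenter.example.com/DC1/host/Cluster1"  # placeholder

subprocess.run(
    [
        "ovftool",
        "--acceptAllEulas",
        "--name=vadc-01",          # VM name to create
        "--datastore=datastore1",  # placeholder datastore
        "--powerOn",               # boot the appliance once deployed
        OVA,
        TARGET,
    ],
    check=True,  # raise if ovftool reports a failure
)
```

Minutes of work instead of weeks of logistics, and nothing to rack or cable.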

There are some other tangible benefits as well. From a refresh perspective, it makes more sense to upgrade a virtual appliance with a newer image, or to add memory, than to do a hardware forklift upgrade every five years (with potentially more downtime required). The ability to shrink or grow a virtual appliance is one of the things that sets it apart. We don't have to repurchase anything other than license keys and annual service contracts; regrettably, those won't go away. Couple all of that with the flexibility to move your virtual appliances along with your data from one environment to another, and the case is compelling. We will see more and more network-centric appliances become virtualized. There will most assuredly always be some physical boxes for the network folks to get their hands on, but those will be for access purposes only.

The companies, manufacturers, and network engineers who don't embrace this trend could quickly find themselves behind the eight ball. Analog phones, anyone?