Category archive: Featured

Tech News Recap for the Week of 11/24/2014

With a short week due to Thanksgiving, here’s a quick tech news recap of articles you may have missed.

Tech News Recap

New malware has been used for surveillance in 10 countries, including Russia and Mexico. A study found that Internet of Things adoption in the enterprise is up three-fold since 2012. VMware is currently offering a 25% discount on vSOM. Computerworld provided its 2015 predictions around IT spending. The adoption of cloud computing continues to accelerate in the enterprise space. eBay pulled its app from the App Store, and InformationWeek provided a list of 10 Windows tablets and laptops under $200 to keep in mind as the holidays approach.

 

If you’d like to get more information on the 25% discount VMware is currently offering on vSOM, click here & a GreenPages Rep will reach out to you.

25% vSOM Discount Ends December 31st!

Did you know VMware’s offering a 25% vSOM discount? That’s right, VMware has been providing a 25% discount to upgrade from naked vSphere to vSOM since Labor Day weekend. The standard upgrade price is $825 MSRP, but the promo price drops it down to $620 MSRP. That’s over $200 in savings per CPU. There are some serious savings to be had here, so I wanted to quickly bring you up to speed so you can assess the solution and see if it makes sense for your organization.
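To put the promo in concrete dollars, here’s a quick back-of-the-envelope calculation (the host and socket counts below are made-up examples, not part of the promotion):

savings_per_cpu = 825 - 620            # standard MSRP vs. promo price, per CPU
hosts = 8                              # hypothetical environment: 8 hosts...
sockets_per_host = 2                   # ...with 2 CPU sockets each
cpus = hosts * sockets_per_host
print(f"Savings per CPU: ${savings_per_cpu}")
print(f"Promo savings across {cpus} CPUs: ${savings_per_cpu * cpus}")
# -> Savings per CPU: $205
# -> Promo savings across 16 CPUs: $3280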

25% vSOM discount ENDS ON DECEMBER 31!

 

So what is vSOM? It’s a bundle of vSphere and vRealize Operations (formerly known as vCOPS). When reviewing monitoring and management toolsets in broad strokes, it’s easy to say they’re nice to have but not absolutely necessary. Yet if you dive deeper, there are many features and functions that make the investment worthwhile for the long-term growth and planning of your virtualization environment.

vRealize Operations enables IT to see not just immediate issues but also potential future problems, which can have a dramatic impact on reducing unplanned outages. With Predictive Analytics and Smart Alerts, it proactively identifies and remedies system issues, while dynamic thresholds automatically adapt to your environment to produce fewer, more specific alerts, resulting in a 30 percent decrease in time to diagnose and resolve performance issues. That’s three hours of your day you get back, allowing you to work on improving and evolving your environment rather than chasing constant alert noise and notifications.
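To get a feel for what a dynamic threshold does (a toy sketch for illustration only, not VMware’s actual analytics), compare alerting against each metric’s own recent history instead of against one static number:

from statistics import mean, stdev
def dynamic_alerts(samples, window=12, k=3.0):
    # Toy "dynamic threshold": the alert level comes from each metric's
    # own recent behavior instead of a single fixed limit.
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        baseline, spread = mean(history), stdev(history)
        if abs(samples[i] - baseline) > k * spread:
            alerts.append((i, samples[i]))
    return alerts
cpu_pct = [38, 41, 40, 39, 42, 40, 41, 39, 40, 42, 41, 40, 39, 41, 95]  # one real spike
print(dynamic_alerts(cpu_pct))   # -> [(14, 95)]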

The old saying that it’s better to be safe than sorry speaks volumes in a virtual environment, especially when it comes to over-provisioning. Research has shown that 9 out of 10 virtual machines are over-provisioned. While this may not seem like a bad thing on the surface, it leads to diminishing efficiency and optimization within the virtual infrastructure and, more importantly, increased infrastructure costs. By managing your VMs more closely and effectively with vRealize Operations, you can fine-tune each VM, allocate only the resources that are really necessary, and as a result save up to 30% in potential hardware costs. The solution provides a holistic overview of your virtualized environment and deep insight into the health of your infrastructure that would otherwise be invisible. Capacity planning is another key feature of the vRealize Operations toolset, allowing you to model future resource needs and alert on constraints before they result in unexpected system downtime.
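As a rough illustration of the over-provisioning math (the VM names, figures, and 25% headroom factor below are invented for the example), compare what’s allocated to what’s actually used at peak:

# Hypothetical VMs: (name, vCPUs allocated, peak vCPUs used, GB RAM allocated, peak GB used)
vms = [
    ("app01",  8, 2, 32,  9),
    ("db01",  16, 9, 64, 40),
    ("web01",  4, 1, 16,  3),
]
headroom = 1.25  # keep ~25% above the observed peak as a safety margin
for name, cpu_alloc, cpu_peak, ram_alloc, ram_peak in vms:
    cpu_target = max(1, round(cpu_peak * headroom))
    ram_target = max(1, round(ram_peak * headroom))
    print(f"{name}: vCPU {cpu_alloc} -> {cpu_target}, RAM {ram_alloc} GB -> {ram_target} GB")
# e.g. app01: vCPU 8 -> 2, RAM 32 GB -> 11 GB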

Years ago, many wondered what the ROI was for ESX. It was nice to be able to put several VMs on one server, but was it needed? When we moved from physical to virtual it was a big step, an unknown “pie in the sky” concept that made sense on paper, but would it work and would it be a worthy investment? Well, now we know that moving to a VM environment made sense, and for some it was easy to manage. However, it inevitably led to resource and VM sprawl and a lack of visibility into overall infrastructure health. vRealize Operations is a comprehensive tool that provides predictive analytics, capacity planning, and performance and health management. Hence, it is very much a “have to have” vs. a “nice to have.”

If you’re looking for more information on vRealize, I would suggest downloading this whitepaper.

Now is the time to take advantage of a good deal on a great product! As always, GreenPages can help. If you would like to learn more, get a demo or make the purchase, fill out this form and we’ll be in touch!

 

By Rob O’Shaughnessy, Director of Software Sales & Renewals

Balancing Control and Agility in Today’s IT Operational Reality

How can IT departments balance control and agility in today’s IT operational reality? For decades, IT Operations has viewed itself as the controlling influence on the “wild west” of business demands. We have had to create our own culture of control in order to extend our influence beyond the four hardened walls of the datacenter, and now the diaphanous boundaries of the cloud. Control was synonymous with good IT hygiene, and we prided ourselves on this. It’s not by accident that outside of IT circles we were viewed as gatekeepers and traffic cops, regulating the use (and hopefully preventing the abuse) of valuable IT resources and critical data sets. Many of us built our careers on a foundation of saying “no,” or, for those of us with less tact, “are you crazy?”

That was then, when we were the all-seeing, god-like nocturnal creatures operating in the dark of server rooms and wiring closets. Our IT worlds have changed dramatically since those heady days of power and ultimate dominion over our domain(s). I mean, really, we actually created something called Domains so the non-IT peasant class could work in our world more easily, and we even have our own Internet Hall of Fame!

Now, life is a little different. IT awareness has become more mainstream, and innovation is actually happening at a faster pace in the consumer market.  We are continually being challenged by the business, but in a different and more informed manner than in our old glory days. We need to adapt our approach, and adjust our perspective in order to stay valued by the business. My colleague John Dixon has a quality ebook around the evolution of the corporate IT department that I would highly recommend taking a look at.

This is where agility comes into play. Think of what it takes to become agile. It takes both a measure of control and a measure of flexibility. They seem like odd roommates, but in actuality they feed off each other and balance one another. Control is how you keep chaos out of agility, and agility is how you keep control from becoming too restraining.

Mario Andretti has a great quote about control: “If everything seems under control, you’re just not going fast enough.” And this is where the rub is in today’s business climate. We are operating at faster speeds and shorter times-to-market than ever before. Competition is global and not always above-board or out in the open. The raw number of influences on our customer base has increased exponentially. We have less “control” over our markets now, and by nature have to become more “agile” as a result.

IT operations must become more agile to support this new reality. Gone are the days of saying “not on my platform”, or calling the CIO the CI-NO. To become more agile, we need to enable our teams to spend more time on innovation than on maintenance.

So what needs to change? Well, first, we need to give our teams back some of the time and energy they are spending on maintenance and management functions. To do this, we need to drive innovation in that space and think about the lowest cost of delivery for routine IT functions. To some this means outsourcing; to others it’s about better automation and collaboration. If we can offload 50-70% of the current maintenance workload from our teams, they can turn their attention away from the rear-view mirror and start looking for the next strategic challenge. A few months back I did a webinar on how IT departments can modernize their IT operations by killing the transactional treadmill.

Once we have accomplished this, we then need to refocus their attention on innovating for the business. This could be in the form of completing strategic projects or enhancing applications and services that drive revenue. Beyond the obvious benefits for the business, this renewed focus on innovation will create a more valuable IT organization and, generally, more invested team members.

With more time and energy focused on innovation, we now need to create a new culture within IT around sharing and educating. IT teams can no longer operate effectively in silos if they are truly to innovate. We have to remove the boundaries between the IT layers and share the knowledge our teams gather with the business overall. Only then can the business truly see and appreciate the advances IT is making in supporting its initiatives.

To keep this going long term, you need to adjust your alignment toward shared success, both within IT and between IT and the rest of the organization. And don’t forget your partners, those who are now assisting with your foundational operations and management functions. By tying all of them together to a single set of success criteria and metrics, you will enforce good behavior and focus on the ultimate objective – delivery of world-class IT applications and services that enable business growth and profitability.

Or, you could just stay in your proverbial server room, scanning error logs and scheduling patch updates.  You probably will survive.  But is survival all you want?

 

By Geoff Smith, Senior Manager, Managed Services Business Development

What To Move To the Cloud: A More Mature Model for SMBs

Many SMBs struggle with deciding if, and what, to move to the cloud. Whether it’s security concerns, cost, or lack of expertise, it’s oftentimes difficult to map out the best possible solution. Here are 8 applications and services to consider when your organization is looking to move to the cloud and reduce its server footprint.

 

What to move to the cloud

1. Email

Obviously, in this day and age email is a requirement in virtually every business, yet a lot of businesses continue to run Exchange locally. If you are thinking about moving portions of your business out to the cloud, email is a good place to start. Why should you move it to the cloud? Simple: it’s pretty easy to do, and at this point it’s well documented that mail runs very well in the cloud. It takes a special skill set to run Exchange beyond just adding and managing users. If something goes wrong and you have an issue, it can oftentimes be very complicated to fix, and it can also be pretty complicated to install. A lot of companies do not have access to high-quality Exchange skills; moving to the cloud solves those issues. Having Exchange in the cloud also gets your company off the 3-5 year hardware refresh cycle for Exchange, as well as the upfront cost of the software.

Quick Tip – Most cloud email providers offer anti-spam/anti-virus functionality as part of their offering. You can also take advantage of cloud-based AS/AV providers like McAfee’s MXLogic.

2. File Shares

Small to medium-sized businesses have to deal with sharing files securely and easily among their users. Typically, that means a file server running locally in your office or at multiple offices, which presents the challenge of making sure everyone has the correct access and that there is enough storage available. Why should you move to the cloud? There are easy alternatives in the cloud that avoid those challenges, including Microsoft OneDrive, Google Drive, or a file server running in Microsoft Azure. In most cases you can use Active Directory as the central repository of rights, managing passwords and permissions in one place.

Quick Tip – OneDrive is included with most Office 365 subscriptions, and you can use Active Directory authentication to provide access to it.
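If the Azure file server route interests you, here’s a minimal sketch of dropping a file onto an Azure file share with Microsoft’s Python storage SDK (the package name, storage account, share name, and file name are placeholders/assumptions to verify against current Azure documentation):

# pip install azure-storage-file-share   (assumed package name; verify against Azure docs)
from azure.storage.fileshare import ShareClient
conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
share = ShareClient.from_connection_string(conn_str, share_name="team-files")
# share.create_share()   # uncomment on first run to create the share
with open("q3-budget.xlsx", "rb") as data:                  # placeholder local file
    share.get_file_client("q3-budget.xlsx").upload_file(data)
print("Uploaded q3-budget.xlsx to the team-files share")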

3. Instant Messaging/Online Meetings

This one is pretty self-explanatory. Instant messaging can oftentimes be a quicker and more efficient form of communication than email. There are many platforms out there that can be used, including Microsoft Lync, Skype, and Cisco Jabber, and many of them handle online meetings and screen sharing as well. Your users are looking for these tools, and there are corporate options. With a corporate tool like Lync or Jabber, you stay in control: you can make sure conversations are logged, secure, and trackable. Microsoft Lync is included in Office 365.

Quick Tip – If you have the option, you might as well take advantage of it!

4. Active Directory

It is still a best practice to keep an Active Directory domain controller locally at each physical location to speed up the login and authentication process, even when some or most of your applications or services are based in the cloud. This still leaves most companies with an issue if a site is down for any reason. Microsoft now provides the ability to run a domain controller in its cloud with Azure Active Directory, offering redundancy that many SMBs do not currently have.

Quick Tip – Azure Active Directory is pre-integrated with Salesforce, Office 365, and many other applications. Additionally, you can set up and use multi-factor authentication if needed.
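For a sense of what Azure AD sign-in looks like from an application’s point of view, here’s a minimal sketch using Microsoft’s MSAL library for Python (the client ID, tenant, and scope are placeholders; any MFA prompt is enforced by the directory during sign-in, not by this code):

# pip install msal   (Microsoft Authentication Library for Python)
import msal
app = msal.PublicClientApplication(
    client_id="<app-registration-client-id>",                   # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",  # placeholder
)
# Opens a browser sign-in; if the tenant requires multi-factor authentication,
# the MFA challenge happens here as part of the Azure AD flow.
result = app.acquire_token_interactive(scopes=["User.Read"])
if "access_token" in result:
    print("Signed in as:", result.get("id_token_claims", {}).get("preferred_username"))
else:
    print("Sign-in failed:", result.get("error_description"))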

5. Web servers

Web servers are another very easy workload to move to the cloud, whether it’s Rackspace, Amazon, Azure, VMware, etc. The information is not highly confidential, so there is much lower risk than putting extremely sensitive data up there. By moving your web servers to the cloud, you also keep your website’s traffic off your local Internet connection; it all goes to the cloud instead.

Quick Tip – Most cloud providers offer SQL Server back ends as part of their offerings, which makes it easy to tie the web server to a back-end database. Make sure you ask your provider about this.
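As a minimal sketch of tying a cloud-hosted web server to a provider-managed SQL back end (the environment variable name, ODBC driver version, and table name are assumptions; your provider’s connection details will differ):

# pip install pyodbc   (plus the Microsoft ODBC driver on the web server)
import os
import pyodbc
# Keep the managed database's connection string out of your code; read it from the
# environment or your provider's app-settings mechanism. DB_CONNECTION_STRING is a
# placeholder name, e.g.:
# "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<host>;DATABASE=<db>;UID=<user>;PWD=<password>"
conn = pyodbc.connect(os.environ["DB_CONNECTION_STRING"])
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM pages")        # 'pages' is a hypothetical table
print("Rows in the web app's pages table:", cursor.fetchone()[0])
conn.close()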

6. Backup

A lot of companies are looking for alternate locations to store backup files. It’s easy to back up locally to disk or tape and then move copies offsite, but it’s often cheaper to store backups in the cloud, and doing so helps eliminate the headache of rotating tapes.

Quick Tip – Account for bandwidth needs when you start moving backups to the cloud. This can be a major factor.

7. Disaster Recovery

Now that you have your backups offsite, it’s possible to have capacity to run virtual machines or servers in the cloud in the event of a disaster. Instead of replicating data to another physical location, you pay to run your important apps in the cloud only if disaster strikes, which will usually cost you less.

Quick Tip – Make sure you look at your bandwidth closely when backing up to the cloud. Measure how much data you need to back up, and then calculate the bandwidth that you will need. Most enterprise-class backup applications allow you to throttle the backups so they do not impact the business.
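For the bandwidth sizing the tip above describes, a quick back-of-the-envelope calculation goes a long way (the nightly change rate, backup window, and overhead factor here are made-up examples):

nightly_change_gb = 150       # data that changes each day and must go offsite (example)
backup_window_hours = 8       # overnight window in which the backup must finish (example)
overhead = 1.3                # ~30% cushion for protocol overhead and retries (assumption)
required_mbps = nightly_change_gb * 8 * overhead * 1000 / (backup_window_hours * 3600)
print(f"Sustained upload needed: {required_mbps:.0f} Mbps")   # -> about 54 Mbps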

8. Applications

A lot of common applications that SMBs use are offered as a cloud service; Salesforce and Microsoft Dynamics are two examples. These companies build and host the product so that you don’t have to run it onsite, and you get the application for a fraction of the cost and headache.

In conclusion, don’t be afraid to move different portions of your environment to the cloud. For the most part, it’s less expensive and easier than you may think. If I were starting a business today, the only thing I would have running locally would be an AD controller or file server. The business can be faster and leaner without the IT infrastructure overhead that was needed to run a business ten years ago.

Looking for more tips? Download this whitepaper written by our CTO Chris Ward, “8 Point Checklist for a Successful Data Center Move.”

 

By Chris Chesley, Solutions Architect

Tech News Recap for the Week of 11/10/2014

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 11/10/2014.

Tech News Recap 11/10/2014

This week, a massive data breach hit Postal Service employees. Google’s cloud will be storing petabytes of genome data for health researchers, while a new Microsoft data center is being powered by fuel cells. Also at Microsoft, Bill Gates sold $925M in stock…but still owns $13.6B worth. There were articles about both the Army’s cloud strategy and its virtual desktop strategy. Facebook and Twitter will most likely be speaking with Russian officials about data storage regulations next month. AWS built the first customized marketplace for the CIA’s private cloud. Samsung is readying Proximity, its challenger to Apple’s iBeacon. There was also an interview with Motus CTO Rick Blaisdell, along with good articles around IT spending, project management offices driving business growth, and HP’s BYOD services.

What top tech news did we miss? Leave a comment with links to any quality articles from last week that other readers may enjoy!

Download this new whitepaper to get an 8-point checklist for a successful data center move

How the Project Management Office Can Drive Business Growth with Excellence in Customer Service

Project managers today don’t just manage projects; they are key contributors to managing the business. So, is there a way the Project Management Office can help the business gain competitive positioning and better business results? I say yes. We can do this through delivering excellence in customer service.

Aristotle (384-322 BCE) said it best: “We are what we repeatedly do. Excellence, then, is not an act, but a habit.” To create a culture of service excellence, the PMO must first define for itself what excellence in customer service is. Involve the members of the Project Management Office in this activity (after all, we know from our experience managing projects that stakeholder involvement facilitates buy-in). Ask each member to describe their best customer service experience. From the cumulative experiences, collaboratively define what service excellence is for your team in your business. This definition should become the mission statement of the PMO.

Next, have the Project Management Office members recommend the values they will use to guide themselves toward service excellence. Below are a few general principles to build on. Many may seem obvious or cliché, but you will find that they work:

  • Be available
  • Treat your customer the way you would like to be treated
  • Provide a personal and individual level of attention to each client
  • Be an expert in your role, discipline or practice
  • Be empowered to make decisions
  • Ask, listen and learn
  • Analyze risk to identify potential problems and implement corrective and preventive measures
  • Communicate early and often
  • Request feedback and use it to evolve service excellence
  • Be humble, honest, frank and prepared

Once the Project Management Office defines and outlines its values, PMO management should create a formal, documented Customer Service policy and roll it out to the team. The upkeep of the Customer Service policy should be treated as an iterative process: the needs of the customer and feedback from stakeholders are regularly analyzed, and constant improvements are made to the program.

Review the policy with the PM team regularly, especially when there are updates or new hires added to the team. Perform team-building exercises in support of the program, and share lessons learned at regular team meetings to foster continued support. We want to ensure everyone adopts this behavior. After all, service excellence must become the new norm.

Great service can be used as an effective acquisition strategy, as well as a retention strategy for happy customers. Roll out a Customer Service Excellence program in your Project Management Office and you will find that the customer service approach will lead to growth and profitability.

Are you interested in learning how effective project management strategies can help your business excel? Email us at socialmedia@greenpages.com

 

By Erin Marandola, Business Analyst, PMP

CTO Focus Interview: Rick Blaisdell, Motus

In the second installment of our CTO Focus Interview series (view part I with Stuart Appley here), I got the chance to sit down with Rick Blaisdell, CTO at Motus, at a coffee shop in Portsmouth, NH. Rick is an experienced IT pro with more than 20 years in the industry. Some of his specialties include SaaS, cloud computing, virtualization, software development, and business process improvement. Rick is a top thought leader in the industry: he has a very successful blog, Rick’s Cloud, and his opinions and insights are very well respected on Twitter. Definitely an interesting guy to talk to – enjoy!

Ben: Fill us in on what you do and your IT experience.

Rick: I’m currently the CTO for Motus, a mobile technology company that builds solutions for mobile employees. I also advise technology companies on becoming more efficient and scalable, moving physical workloads to the cloud, and streamlining their development processes. Small technology companies need direction with best practices; most of these companies have not yet invested heavily in physical infrastructure, so it is easier for them to move to, and embrace, cloud technologies. Mid-size and larger companies have more complex issues when moving to the cloud, as most of their infrastructure is physical, with complex workloads that require assessments, partners, and advanced planning before moving to the cloud. I help these organizations with these types of decisions.

 

Ben: What are your main goals when heading into a new company?

Rick: My main objective is to migrate companies from CapEx to OpEx. I spend a lot of time discussing the advantages of moving to the cloud. The ultimate goal is to move internal employee workloads to SaaS and external, customer-facing production workloads to IaaS and PaaS. Internally, it is critical to get your data out of your closet. An enterprise private or hybrid managed cloud solution, with process and controls around the data, is often the answer for most companies. Migrating 100% of workloads to the cloud may not be feasible for all companies, although if you are a small business or new technology startup, that should be the goal.

Ben: Do you often get pushback from C-level executives about utilizing the cloud?

Rick: The C-suite understands the high-level benefits and business value of utilizing the cloud. One of the hidden barriers to moving to the cloud often comes from a company’s own internal IT fears. The IT team has been focused on keeping physical environments running: security, patching, and maintenance. Shifting to the cloud turns physical resources into virtual resources, and with managed solutions, shared experts are now responsible for making sure your environment has the highest level of SLAs and security. The internal IT team sees this as a threat and in some cases will find ways to slow cloud adoption. The C-suite wants to be efficient and grow by concentrating on what’s core to the business, not spending energy hiring employees to maintain hard drives. The challenge is educating and retraining these employees to take on new tasks that accelerate the core mission of the company.

{Download our latest whitepaper – 8 Point Checklist for a Successful Data Center Move }

Ben: Which area of technology interests you the most?

Rick: The Internet of Things fascinates me, where there are billions of objects like toothbrushes, wearable technology, home appliances and tracking devices collecting data. Most technology companies in the near future will have some sort of IoT device. I recently wrote a blog post highlighting some cool internet of things startups.

Ben: What’s your view on the concept of Anything-as-a-Service?

Rick: Things are shifting to the “as a service” mentality. People like the pay-as-you-go model, and everything is being added to the XaaS stack. Overall, I embrace the financial advantages of this model. All the companies I have worked with end up being more secure and efficient while saving money after migrating to SaaS, PaaS, DaaS, and MCaaS services.

Ben: Throughout your career, what concept or technology would you say has had the most drastic impact on IT?

Rick: Virtualization would be my first answer. High density cloud environments cost less, are more efficient, are easier to scale and can be as secure as any physical environment. This has had a major impact on IT.

Ben: Where do you see IT in the next 5-7 years?

Rick: That is a long way out. My guess is that workloads will be fully commoditized. Think of Priceline when you search for hotels today; this is where the world is headed: providers with services large and small, available at discounts, that meet your SLAs, security, and time frame, all being managed and seamlessly migrated between one another. If this doesn’t happen, I will buy you dinner ;)

Are you a CIO/CTO interested in participating in our Focus Interview series? Email me at bstephenson@greenpages.com

 

 

By Ben Stephenson, Emerging Media Specialist

Tech News Recap for the Week of 11/3/2014

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 11/3/2014.

Tech News Recap

Microsoft is eliminating the fee to use most functions of its mobile apps for Office 365. Google’s cloud cuts prices yet again. Splunk is playing a major part in the Internet of Things. IDC is predicting the public cloud will be a $127 billion industry by 2018. ZDNet provided a review of smartwatches for work and for play. Drones could end up taking off in Europe before the US. There were also some good articles around converged/hyper-converged infrastructure, shadow IT, Microsoft Azure, mobile development, and secure storage infrastructure.

What top tech news did we miss? Leave a comment with links to any quality articles from last week that other readers may enjoy!

Download this new whitepaper to get an 8-point checklist for a successful data center move

 

By Ben Stephenson, Emerging Media Specialist, GreenPages Technology Solutions

Reference Architecture, Converged, & Hyper-Converged Infrastructure: A Pizza Analogy

This morning, our CTO Chris Ward delivered an internal training that did a great job breaking down reference architecture, converged infrastructure, and hyper-converged infrastructure. To get his point across, Chris used the analogy of eating a pizza. He also discussed the major players and when it makes sense for organizations to use each approach. Below is a recap of what Chris covered in the training. You can hear more from Chris in his brand new whitepaper – an 8 Point Checklist for a Successful Data Center Move. You can also follow him on Twitter.

Reference Architecture

According to Chris, reference architecture is like getting a detailed recipe and making your own pizza. You need to go out and buy the ingredients, make the dough, add the toppings, and bake it at the perfect temperature. With reference architecture, you essentially get an instruction book. If you’re highly technical, following the recipe is manageable. However, if you are more of a technology generalist, or if you’re newer to the field, it may be difficult to follow and the chances of getting lost in the recipe are fairly high. The benefit here is that you have the flexibility to make the pizza the way you want it. The downside is that it doesn’t save you much time: you still need to order the equipment, wait for the order to arrive, and then put it together.

The Players

  • EMC’s VSPEX – EMC storage, Cisco UCS compute, Cisco networking
  • Nimble – Nimble storage, Cisco UCS compute, Cisco networking (a newer offering)
  • FlexPod – NetApp Storage, Cisco UCS compute, Cisco networking

There are several use cases when it makes sense to utilize reference architecture. These include when an organization:

  • Has disparate vendors where converged or hyper-converged infrastructure may not be an alternative and the organization is not open to a vendor switch
  • Requires more flexibility in components than converged infrastructure provides (e.g., you can add some extra garlic to your pizza and not have it be a big deal).
  • Doesn’t have a hardware refresh cycle between storage, compute and networking that is in alignment (i.e. you do not want to double up on servers you just bought last year)

Converged Infrastructure

Converged infrastructure is like a take-home pizza you buy at a grocery store (it’s not delivery, it’s DiGiorno!). Converged infrastructure is more prepackaged than reference architecture: the dough has been made and the toppings have been added, but you still have to put it in the oven and bake it. Vendors do the physical rack, stack, and cabling at the factory and ship the unit directly to the customer. Customers can typically expect delivery within 30-45 days of placing the order, so you don’t have to wait months to get all the parts shipped and then assemble everything yourself. However, the configuration is largely set in stone; if your shop runs different components than what comes in the converged infrastructure bundle, you can’t mix and match. There is also still some integration work involved with converged infrastructure.

The players

There are several use cases when it makes sense to utilize converged infrastructure. These include when an organization:

  • Requires fast time to market (typically 30-45 days from order to constructed delivery. Keep in mind there is additional time on the front end before the order when planning the solution out).
  • Is building out application PODs or a private cloud. This is typically more of a use case in the enterprise space. For example, rolling out a new SAP environment and having, say, a Vblock solely dedicated to that one app. Another example is a larger VDI project.
  • Requires known, guaranteed, and predictable performance out of the infrastructure. With Vblock, VCE guarantees performance in a way that you do not get with reference architecture.
  • Requires large scalability – you can add to it over time. Keep in mind you need to have a clear direction of where you are headed before you start.
  • Is stuck in the mud with operations and/or maintenance validation tasks. Again, this is a more relevant use case in the enterprise space. Say an IT department needs to upgrade from vSphere 5.1 to 5.5 in a cloud environment. This could take 3-4 months to do all the testing, and by the time they get everything together there could be a new update on its way out. This IT department is always 2-3 upgrades behind because of all the manual work. With converged infrastructure, vendors do that work for you.

Hyper-Converged Infrastructure

A hyper-converged infrastructure is the equivalent of a fine-dining pizza experience: you can sit back and have a glass of wine while your meal is served to you on a silver platter. Hyper-converged infrastructure is an in-a-box offering. It’s one physical unit – no cabling or wiring necessary. The only integration is uplinking it into your existing infrastructure. If you choose to go this route, you can place the order, have it shipped overnight, and expect to have it on your floor within 48 hours, which is obviously a very fast time to market. As the newest space of the three, it’s a little less mature in terms of scalability. Hyper-converged infrastructure often makes the most sense for midmarket companies. Keep in mind, hyper-converged infrastructure is a take-it-or-leave-it, all-or-nothing deal.

The Players

It makes the most sense to utilize hyper-converged infrastructure when companies:

  • Have storage and compute refresh cycles that are roughly in sync
  • Are looking for out-of-the-box data protection (Simplivity)
  • Require known/guaranteed/predictable performance
  • Are looking for rack space and power consolidation savings
  • Require a small amount of scalability
  • Want a plug-and-play approach to infrastructure.

Which way makes the most sense for you to eat your pizza?

You can hear more from Chris in his brand new whitepaper – an 8 Point Checklist for a Successful Data Center Move. You can also follow him on Twitter.

 

Photo credit: http://www.sciencephoto.com/

By Ben Stephenson, Emerging Media Specialist

SDN Technologies: No Need to Pick the Winner, Just Get in the Game

With SDN, there are a lot of complementary technologies. Will the future be Change or Die? Or will it be Adopt & Co-mingle? In this short two-minute video, GreenPages Solutions Architect Dan Allen discusses software-defined networking. You can hear more from Dan in this video blog about Cisco ASA updates and this video blog discussing wireless strategy.

 

SDN Technologies

http://www.youtube.com/watch?v=p6qgBY10SyY

 

Would you like to speak with Dan about SDN strategy or implementation? Email us at socialmedia@greenpages.com!