Category Archives: Networking

Nokia eyes the cloud infrastructure market with OpenStack, VMware-based servers

Nokia is offering up its own blade servers to the telco world

Nokia Networks revealed its AirFrame datacentre solutions this week, high-density blade servers running a combination of OpenStack and VMware software and designed to support Nokia’s virtualised network services for telcos.

“We are taking on the IT-telco convergence with a new solution to challenge the traditional IT approach of the datacentre,” said Marc Rouanne, executive vice president, Mobile Broadband at Nokia Networks.

“This newest solution brings telcos carrier-grade high availability, security-focused reliability as well as low latency, while leveraging the company’s deep networks expertise and strong business with operators to address an increasingly cloud-focused market valued in the tens of billions of euros.”

The servers, which come pre-integrated with Nokia’s own switches, are based on Intel’s x86 chips and run OpenStack as well as VMware, and can be managed using Nokia’s purpose-built cloud management solution. The platforms are ETSI NFV / OPNFV-certified, so they can run Nokia’s own VNFs as well as those developed by certified third parties.

The company's orchestration software can also manage the split between virtualised and legacy network functions in either centralised or distributed network architectures.

Phil Twist, vice president of Portfolio Marketing at Nokia Networks, told BCN the company designed the servers specifically for the telco world, adding components like iNICs and accelerators to handle the security, encryption, virtual routing and digital signal processing (acceleration for radio) that would otherwise tie up processor capacity in a telco network.

But he also said the servers could be used to stand up Nokia's own cloud services, or offered to the wider scale-out market.

“Our immediate ambition is clear: to offer a better alternative for the build-out of telco clouds optimized for that world.  But of course operators have other in-house IT requirements which could be hosted on this same cloud, and indeed they could then offer cloud services to their enterprise customers on this same cloud,” he explained.

“We could potentially build our own cloud to host SaaS propositions to our customers, or in theory potentially offer the servers for enterprise applications but that’s not our initial focus,” he added.

Though Twist didn’t confirm whether this was indeed Nokia’s first big move towards the broader IT infrastructure market outside networking, the announcement does mean the company will be brought into much closer competition with both familiar (Ericsson, Cisco) and less familiar (HP) incumbents offering their own OpenStack-integrated cloud kit.

Updating Your Network Infrastructure for Modern Devices

The world of IT infrastructure is changing. The way companies communicate and the way they send and receive data within their networks are evolving, and the development of cloud computing and virtualised servers has re-shaped the way we share information with one another.

Cloud computing provides a scalable and reliable environment which utilises remote servers to host and store information. Just some of the benefits of cloud computing include improved accessibility, reduced spending on maintaining localised servers, streamlined processes and much more flexibility for businesses and organisations. (To find out more about how cloud computing works and how it can benefit your business, visit PC Mag online.)

Networking and Secure Infrastructures

With the increased accessibility of servers in the cloud, network security has never been more important. A greater number of people, on an increasing number of new devices, including mobile devices, will request access to modern business networks. From laptops and tablets, BlackBerrys and smartphones, to desktop computers and other digital devices, a single business has a lot of different data handlers to consider.

New devices bring increased complexity in traffic patterns and, as expected, more security threats as more devices request access to your network. With this in mind, today's IT infrastructure needs to be updated to cope with the increasing amount of data flowing over the network. (For more information on networking, visit Logicalis, an international IT solutions provider.)

The Importance of Accessibility

What's most important is to welcome such changes to your IT network. Virtualisation can improve the way businesses send and receive information, both internally and externally, and can help organisations of all sizes cut costs in the long run. Cloud servers can also provide added security through data backup, and virtualised computing can reduce planned downtime by up to 90%.

With the growth of modern devices, it's now more important than ever to ensure accessibility across all business devices. Finding the right IT solutions provider can help you support next-generation technology whilst encouraging better communication between key people in your company.

Read more about how virtualisation and cloud servers are redefining the role of IT within a business on the Logicalis blog.

What’s Your Wireless Strategy?

Video with Dan Allen, Solutions Architect

There are many different factors that go into wireless deployments, and before you start you need a well-thought-out wireless strategy. For example, IT departments need to look into whether they have specific power restrictions. Will it be cheaper to run new cabling? Do you have the right switching infrastructure to support your initiative? Is it PoE or UPoE? How will you address security concerns?
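To make the PoE question concrete, here is a minimal power-budget check, a sketch assuming a hypothetical AP count and shared switch budget; the wattages are common figures for 802.3af PoE and Cisco UPoE, but verify them against your actual hardware:

```python
# Minimal PoE budget check -- illustrative only. The wattages are common
# figures for 802.3af PoE and Cisco UPoE; confirm against your switch specs.
AF_POE_DEVICE_WATTS = 12.95   # max power delivered to a device over 802.3af
UPOE_PORT_WATTS = 60.0        # max per-port power for Cisco UPoE

def switch_budget_ok(ap_count: int, watts_per_ap: float,
                     switch_poe_budget_watts: float) -> bool:
    """Return True if the switch's shared PoE budget covers all access points."""
    return ap_count * watts_per_ap <= switch_poe_budget_watts

# Example: 24 access points drawing 12.95 W each against a 370 W budget.
print(switch_budget_ok(24, AF_POE_DEVICE_WATTS, 370.0))  # True (310.8 W needed)
```

A failed check here is exactly the kind of finding that should feed back into the cabling and switching questions above.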

http://www.youtube.com/watch?v=JvVpot9_1kE

Are you interested in speaking more about your wireless strategy? Email us at socialmedia@greenpages.com

The 2013 Tech Industry – A Year in Review

By Chris Ward, CTO, LogicsOne

As 2013 comes to a close and we begin to look forward to what 2014 will bring, I wanted to take a few minutes to reflect on the past year. We've been talking a lot about that evil word 'cloud' for the past 3 to 4 years, but this year put a couple of other terms up in lights, including Software Defined X (Datacenter, Networking, Storage, etc.) and Big Data. Like 'cloud,' these two newer terms can easily mean different things to different people, but put in simple terms, in my opinion, there are some generic definitions which apply in almost all cases. Software Defined X is essentially the concept of taking any ties to specific vendor hardware out of the equation and providing a central, vendor-agnostic point for configuration, except of course for the vendor providing the Software Defined solution :). I define Big Data simply as the ability to find a very specific and small needle of data in an incredibly large haystack within a reasonably short amount of time. I see both of these technologies becoming more widely adopted in short order, with Big Data technologies already well on the way.

As for our friend 'the cloud,' 2013 did see a good amount of growth in consumption of cloud services, specifically in the areas of Software as a Service (SaaS) and Infrastructure as a Service (IaaS). IT has adopted a 'virtualization first' strategy over the past 3 to 4 years when it comes to bringing any new workloads into the datacenter, and I anticipate we'll see a 'SaaS first' approach adopted in short order if it is not out there already. However, I can't say the same for 'IaaS first.' While IaaS is a great solution for elastic computing, I still see most usage confined to application development or very large scale-out applications (Netflix being the classic example). The mass adoption of IaaS for simply forklifting existing workloads out of the private datacenter and into the public cloud hasn't happened. Why? My opinion is that for traditional applications, neither the cost nor the operational model makes sense, yet.

In relation to 'cloud,' I did see a lot of adoption of advanced automation, orchestration, and management tools, and thus an uptick in 'private clouds.' There are some fantastic tools now available, both commercial and open source, and I absolutely expect this adoption trend to continue, especially in the Enterprise space. Datacenters with a vast amount of change occurring, whether in production or test/dev, can greatly benefit from these solutions. However, this comes with a word of caution: just because you can doesn't mean you should. I say this because I have seen several instances where customers wanted to automate literally everything in their environments. While that may sound good on the surface, I don't believe it's always the right thing to do. There are times when a human touch remains the best way to go.

As always, there were some big-time announcements from major players in the industry. Here are some posts we did with news and updates summaries from VMworld, VMware Partner Exchange, EMC World, Cisco Live and Citrix Synergy. Here's an additional video from September where Lou Rossi, our VP, Technical Services, explains some new Cisco product announcements. We also hosted a webinar (which you can download here) about VMware's Horizon Suite, as well as a webinar on our own Cloud Management as a Service offering.

The past few years have seen various predictions about the unsustainability of Moore's Law, which holds that transistor density (and with it, roughly, computing power) doubles every 18-24 months, and 2013 was no exception. The latest prediction is that by 2020 we'll reach the 7nm mark and Moore's Law will cease to hold. The interesting part is that this prediction is based not on technical limitations but on economic ones: getting below the 7nm mark will be extremely expensive from a manufacturing perspective, and, hey, 640K of RAM is all anyone will ever need, right? :)
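As a rough sanity check of that timeline, here is a back-of-the-envelope sketch in Python. The 22 nm starting point for 2013 and the two-year doubling cadence are assumptions for illustration, with each density doubling taken to shrink feature size by a factor of √2:

```python
# Back-of-the-envelope Moore's Law projection. Assumes a 22 nm process in
# 2013 and that each density doubling (~2 years) shrinks feature size by
# a factor of sqrt(2). Illustrative assumptions, not industry data.
import math

node_nm, year = 22.0, 2013
while node_nm > 7.0:
    node_nm /= math.sqrt(2)
    year += 2
    print(f"{year}: ~{node_nm:.1f} nm")
# Prints: 2015: ~15.6 nm, 2017: ~11.0 nm, 2019: ~7.8 nm, 2021: ~5.5 nm
```

Under those assumptions the curve crosses 7 nm right around the turn of the decade, which lands in the same neighbourhood as the prediction above.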

Probably the biggest news of 2013 was the revelation that the National Security Agency (NSA) had undertaken a massive program and seemed to be capturing every packet of data coming in or out of the US across the Internet. I won't get into any political discussion here, but suffice it to say this is probably the largest example of 'big data' that currently exists. It also has large potential ramifications for public cloud adoption: security and data integrity have been two of the major roadblocks to adoption, so it certainly doesn't help that customers may now be concerned about the NSA eavesdropping on everything going on within public datacenters. It is estimated that public cloud providers may lose as much as $22-35B over the next 3 years as a result of customers slowing adoption because of this. The only good news, at least for now, is that it's very doubtful the NSA or anyone else on the planet has the means to actually mine anywhere close to 100% of the data they are capturing. However, like anything else, it's probably only a matter of time.

What do you think the biggest news/advancements of 2013 were?  I would be interested in your thoughts as well.

Register for our upcoming webinar on December 19th to learn how you can free up your IT team to be working on more strategic projects (while cutting costs!).

Rapid Fire Summary of Carl Eschenbach’s General Session at VMworld 2013

By Chris Ward, CTO, LogicsOne

I wrote a blog on Monday summarizing the opening keynote at VMworld 2013. Checking in again quickly to summarize Tuesday's General Session. VMware's COO Carl Eschenbach took the stage and informed the audience that there were 22,500 people in attendance, a new record for VMware, making it the single largest IT infrastructure event of the year. Thirty-three of these attendees have been to all 10 VMworlds, and Carl is one of them.

Carl started the session by providing a recap of Monday's announcements around vSphere/vCloud Suite 5.5, NSX, vSAN, vCHS, and Cloud Foundry. The overall mantra of the session revolved around IT as a Service. The following points were key:

  • Virtualization extends to ALL of IT
  • IT management gives way to automation
  • Compatible hybrid cloud will be ubiquitous
  • Foundation is SDDC

After this came a plethora of product demos. If you would like to check out the demos, you can watch the full sessions here: http://www.vmworld.com/community/conference/us/learn/generalsessions

vCAC Demo

  • Started with the service catalogue, showing options to deploy an app to a private or public cloud, along with the cost of each option
    • I'm assuming this is showing integration between vCAC & ITBM, although that was not directly mentioned
  • Next they displayed the database options as part of the app – assuming this is vFabric Data Director (DB as a Service)
  • Showed the auto-scale option
  • Showed the health of the application after deployment…this appears to be integration with vCOPS (again, not mentioned)
  • The demo showed how the product provides self-service, transparent pricing, governance, and automation

NSX Demo

  • Started with a conversation about why networking is the ball and chain of the VM. After that, Carl discussed the features and functions that NSX can provide. Some key ones were:
    • Route, switch, load balance, VPN, firewall, etc.
  • Displayed the vSphere web client & looked at the automated actions that happened via vCAC and NSX during the app provisioning
  • What was needed to deploy this demo, you may ask? An L2 switch, L3 router, firewall, & load balancer. All of it was automated and deployed with no human intervention
  • Carl then went through the difference between physical provisioning and logical provisioning with NSX, abstracting the network off the physical devices
  • WestJet has deployed NSX, and we got to hear a little about their experiences
  • There was also a demo showing how you can take an existing VMware infrastructure and convert/migrate to an NSX virtual network. In addition, it showed how vMotion can make the network switch with zero downtime

The conversation then turned to storage. They covered the following:

  • Requirements of SLAs, policies, management, etc. for mission critical apps in the storage realm
  • vSAN discussion and demo
  • Storage policy can be attached at the VM layer so it is mobile with the VM
  • Showcased adding another host to the cluster and the local storage is auto-added to the vSAN instance
  • Resiliency – can choose how many copies of the data are required

IT Operations:

  • Traditional management silos have to change
  • Workloads are going to scale to massive numbers and be spread across numerous environments (public and private)
  • Conventional approach is scripting and rules, which tend to be rigid and complex -> the answer is policy-based automation via vCAC
  • Showed example in vCOPS of a performance issue and drilled into the problem…then showed performance improve automatically due to automated proactive response to detected issues.  (autoscaling in this case)
  • Discussing hybrid and seamless movement of workloads to/from private/public cloud
  • Displayed vCHS plugin to the vSphere web client
  • Showed template synchronization between a private on-premises vSphere environment and vCHS
  • Provisioned an app from vCAC to public cloud (vCHS)  (it shows up inside of vSphere Web client)

Let me know if there are questions on any of these demos.

Day 5 at Cisco Live – Video Recap

By Nick Phelps, Consulting Architect, LogicsOne

http://www.youtube.com/watch?v=we5PRDAH_p0

Here's the recap of the final day of Cisco Live. All in all, a great event with a ton of useful information. I got to sit in on some great sessions and get hands-on experience with some cutting-edge technologies. You can watch the recaps of days 1-4 here if you missed them:

Day 1

Day 2

Day 3 & 4

The Buzz Around Software Defined Networking

By Nick Phelps, Consulting Architect, LogicsOne

http://www.youtube.com/watch?v=p51KAxPOrt4

One of the emerging trends in our industry that is stirring up some buzz right now is software defined networking. In this short video I answer the following questions about SDN:

  1. What is Software Defined Networking or SDN?
  2. Who has this technology deployed and how are they using it?
  3. What does SDN mean to the small to mid-market?
  4. When will the mid-market realize the benefits from SDN based offerings?
  5. When will we hear more? When should we expect the next update?

What are your thoughts on SDN? I'd love to hear your comments on the video and my take on the topic!

Talari Networks Upgrades Adaptive Private Networking for Mercury WAN Appliances

Talari Networks today announced APN 3.0, an upgrade to its APN (Adaptive Private Networking) operating software to support its family of Mercury WAN appliances. Talari is demonstrating APN 3.0 for the first time at Interop Las Vegas this week, Booth #2450.

With the introduction of APN 3.0, Talari's WAN solution dynamically builds fully meshed connections in reaction to application demand across an aggregated virtual WAN consisting of broadband, leased-line and other links. This allows enterprises to have a network that automatically adapts to changing traffic patterns and bandwidth demands to ensure that mission-critical applications receive priority and real-time applications are provided the QoS levels they require to perform optimally. In the past, companies were only able to achieve this dynamic network architecture through a combination of disparate technologies or through an expensive fully meshed MPLS network.

Key new features and benefits of the APN 3.0 software release include:

  • Dynamic Conduits – Allows the automated build-up and tear-down of a fully meshed network that reacts to changing traffic demands by creating best-path, multi-link tunnels across private or public Internet access links. As traffic from location to location exceeds bandwidth policy reservation thresholds, or failures are detected, APN 3.0 builds a dynamic tunnel between those locations in real time, allowing traffic to bypass intermediate hops to decrease latency. All paths/links, including new ones, are monitored on a sub-second basis for quality to ensure latency has decreased. With dynamic conduits, network managers don't have to anticipate traffic patterns, and they can ensure adequate bandwidth exists for critical traffic and that no sessions fail. (A rough sketch of this trigger logic follows the list.)
  • Single Point Configuration – Talari's WAN appliances communicate with one another to build an image of the network, the possible paths through the network, and the latency, loss and jitter of each path. This alleviates the need to configure each device within the network or to manually anticipate changing network demands.
  • Complete Network Visibility – A new off-board Network Management System (NMS) provides full visibility of the Talari network, devices and all links, allowing network managers to easily visualize traffic patterns, quality issues and network outages.
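Talari hasn't published the algorithm behind dynamic conduits, but the behaviour described above amounts to threshold-triggered tunnel creation over continuously measured paths. The Python sketch below illustrates that idea only; the class names, thresholds and units are all invented, not Talari's:

```python
# Illustrative sketch of threshold-triggered dynamic conduits as described
# above. All names, thresholds and units are invented; this is not Talari code.
from dataclasses import dataclass, field

@dataclass
class Path:
    latency_ms: float
    loss_pct: float

@dataclass
class Site:
    name: str
    conduits: set = field(default_factory=set)  # direct tunnels to other sites

def update_conduits(src: Site, dst: Site, traffic_mbps: float,
                    reserved_mbps: float, path: Path,
                    max_latency_ms: float = 100.0,
                    max_loss_pct: float = 1.0) -> None:
    """Build a direct tunnel when demand exceeds the bandwidth reservation or
    the current path degrades; tear it down once demand subsides."""
    degraded = path.latency_ms > max_latency_ms or path.loss_pct > max_loss_pct
    if traffic_mbps > reserved_mbps or degraded:
        src.conduits.add(dst.name)        # bypass intermediate hops
    elif traffic_mbps < 0.5 * reserved_mbps:
        src.conduits.discard(dst.name)    # reclaim the direct tunnel

branch, hq = Site("branch"), Site("hq")
update_conduits(branch, hq, traffic_mbps=80.0, reserved_mbps=50.0,
                path=Path(latency_ms=35.0, loss_pct=0.2))
print(branch.conduits)  # {'hq'} -- demand exceeded the reservation
```

In the real product this decision would run against the sub-second path measurements mentioned above rather than a single snapshot.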

APN 3.0 is fully compatible with leading network management and reporting tools. The operating software will be generally available in July 2013 and accessible via the Talari Customer Support Portal as a free upgrade for existing customers, and the NMS will be sold as an optional add-on product.

Wired Profiles a New Breed of Internet Hero, the Data Center Guru

The whole idea of cloud computing is that mere mortals can stop worrying about hardware and focus on delivering applications. But cloud services like Amazon's AWS, and the amazingly complex hardware and software that underpin all that power and flexibility, do not happen by chance. This Wired article about James Hamilton paints a picture of a new breed of folks the Internet has come to rely on:

…with this enormous success comes a whole new set of computing problems, and James Hamilton is one of the key thinkers charged with solving such problems, striving to rethink the data center for the age of cloud computing. Much like two other cloud computing giants — Google and Microsoft — Amazon says very little about the particulars of its data center work, viewing this as the most important of trade secrets, but Hamilton is held in such high regard, he's one of the few Amazon employees permitted to blog about his big ideas, and the fifty-something Canadian has developed a reputation across the industry as a guru of distributed systems — the kind of massive online operations that Amazon builds to support thousands of companies across the globe.

Read the article.

Le Nouvel Observateur Digital Uses Openmix Hybrid CDN Strategy for News Cycle Load Balancing

Sudden bursts of traffic following the news are business as usual for an online news provider. When the news is hot, interruptions and heavy slow-downs are common across websites, which can mean loss of audience and consequently loss of revenue, forcing technical teams to constantly anticipate any possible incident.

The third most-read news & politics publication in France, per a September 2012 Nielsen study, with almost 8 million unique visitors every month, Le Nouvel Observateur recently chose the Cedexis "Openmix" load-balancing solution to roll out and manage its own hybrid CDN: a mix of its origin infrastructure and its own cache servers spread among various webhosting providers, alongside third-party CDNs.

Rolled out during the second quarter of 2012, Le Nouvel Observateur's hybrid CDN now includes two content origins, backed by Varnish cache servers, located at two French hosting providers whose performance Cedexis Radar had previously measured as optimal for serving the French audience. A global CDN is also used, mostly to deliver content to the international audience.
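Cedexis doesn't spell out Le Nouvel Observateur's decision logic in the announcement, but the general shape of an Openmix-style decision is to pick the best-performing platform from recent per-user measurements. Here is an illustrative sketch; real Openmix apps are written in JavaScript against Cedexis's own API, and the platform names and numbers below are invented (Python is used for consistency with the other examples on this page):

```python
# Illustrative pick-the-fastest-platform logic in the spirit of an Openmix
# decision app. Platform names and latencies are invented examples.
def choose_platform(radar_latency_ms: dict, penalty_ms: dict = None) -> str:
    """Return the platform with the lowest (optionally penalised) latency."""
    penalty_ms = penalty_ms or {}
    return min(radar_latency_ms,
               key=lambda p: radar_latency_ms[p] + penalty_ms.get(p, 0.0))

measurements = {          # hypothetical Radar measurements for one user, in ms
    "origin-paris": 42.0,
    "cache-host-a": 31.0,
    "cache-host-b": 58.0,
    "global-cdn": 47.0,
}
print(choose_platform(measurements))  # cache-host-a
```

A penalty map is one simple way to bias traffic toward origins during quiet periods and toward caches or the global CDN during news-cycle bursts.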

Details can be found in a case study published by Cedexis.