Cloud-based collaboration: What the private sector can learn from the public sector


The Cloud World Forum, which opens tomorrow in London with this publication in attendance, will more than likely cover business cloud strategies, security fears, and the emergence of the Internet of Things (IoT) and DevOps concepts.

Among the expected attendees is a sizeable contingent of security professionals from the private sector, as well as cloud-based enterprise collaboration provider Kahootz. The company’s customer testimonial base and marketing push centre on public sector clients, yet John Glover, Kahootz sales and marketing director, notes an interesting paradigm shift afoot in the private sector.

“People are now starting to put the reins in there, and put some government structures in there,” he tells CloudTech. “I see them having their own app store within the enterprise, on pre-approved suppliers, rather than just relying on people going and buying stuff off the Internet hoping it’s going to be okay.”

The truth is, organisations are increasingly putting sensitive data on cloud-based systems, and it’s not done haphazardly either. Sure, companies can still get it wrong – for UK businesses, a data leakage may earn them a visit from the Information Commissioner’s Office (ICO). But most private sector enterprises will have data assurance officers in place to mop up potential spillages, and Glover argues the industry is getting “smarter”.

“The reason we reference the public sector is because it does have credentials that can be used in the private sector,” he adds. “We certainly get contacted by private sector companies now.”

As Glover explains, the thinking from private sector companies is simple: if it’s good enough for government, then it’s good enough for us. Yet he admits that while the private sector is becoming smarter, it is still “nervous”. CESG (the Communications-Electronics Security Group) issued a 14-point cloud security handbook for the public sector back in December 2013 – a privilege the private sector does not enjoy.

“I’ll give you the classic mistake that everyone makes,” Glover says. “[Businesses will] get a software supplier that says ‘our data centre is ISO 27001 certified.’ I don’t think people realise how naive that statement is.”

He adds: “The data centre is secure, but what about the applications, what about the staff who run that application, what about the business processes that back up that information? If your data centre is ISO 27001, [it] sounds good, but the average private sector [firm] doesn’t understand what that means and how bad a statement that is.”

Kahootz offers a secure online workspace with a variety of options, from local business intranets to conference rooms and tender management software. Glover explains that for Kahootz’s clients – particularly the larger ones – there is a ‘land and expand’ strategy: start small first, deploy more widely later. For him, the key feature organisations look for is governance – mainly so businesses don’t “orchestrate chaos.”

When Glover attends meetings and organisations say their collaboration setup isn’t working out, he gives four points to consider when putting together a workspace: purpose; scope; the activities they perform; and governance. “The worst thing to do is just set up a site which says ‘here’s a great place to share documents and ideas,’” he says. “That doesn’t mean anything, it doesn’t lead anywhere, and because of that nobody’s listening, and nobody’s taken the management over the site to take on that information and do something with it.”
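To make those four points concrete, here is a minimal sketch of how a team might record them as a workspace “charter” before launching a site. The structure and field names are my own illustration, not anything from Kahootz’s product.

```python
from dataclasses import dataclass, field

@dataclass
class WorkspaceCharter:
    """The four points Glover recommends agreeing before launch.
    Field names are illustrative only, not Kahootz's data model."""
    purpose: str                 # why the workspace exists
    scope: str                   # who and what it covers
    activities: list[str] = field(default_factory=list)  # what members actually do
    owner: str = ""              # governance: someone accountable for the site

    def is_governed(self) -> bool:
        # A workspace with no named owner drifts into the chaos Glover warns about.
        return bool(self.owner)

charter = WorkspaceCharter(
    purpose="Coordinate responses to the Q3 tender",
    scope="Bid team plus two external partners",
    activities=["draft documents", "review pricing", "track actions"],
    owner="bid-manager@example.com",
)
assert charter.is_governed()
```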

So even if you are nervous about taking the leap into cloud collaboration tools, remember that a journey of a thousand miles always begins with the first step.

Should IT Model Construction? By @DMacVittie | @DevOpsSummit #DevOps

I’ve worked in just about every job in tech. For this blog, I’ll mention the two roles likely to come up again later – Manager of Enterprise Systems, and Enterprise Architect. Both pretty cool in their own realm, but vastly different roles.
Before we go any further, let’s take a moment to talk construction. Much of my family is or was in construction, and construction sites are well-oiled machines. They put together houses where plumbing depends upon wall alignment and electric depends upon design decisions, where the roof needs a square set of walls to sit correctly, and where the doors count on a level floor. It’s complex, with circuit design, plumbing and more all crammed into one project. And it’s manual – a lot of labor. But guess what? They put buildings together at an amazing rate and it all comes together. Know why? Because over the years they have structured their crews to meet the needs of the job, and learned when complex systems did not interoperate. It’s not amusing to the plumbers when they find their pipe should be in a cement wall that has already been poured, for example.


Parallels 10 for 10: I Might Be in Love with My iPad Mini 3

I was handed an iPad Mini 3 a month ago when I started working at Parallels, and I’ll admit that I looked at it with some skepticism for a week before it made its way into my regular tech rotation. It was only a few days later that I wanted to carve my initials in […]


Elastica Partners with Telstra to Expand into Australian Cloud Security Market

Cloud security firm Elastica has recently partnered with Cisco and Telstra to expand into the Australia and New Zealand region, in response to the growing threat of “shadow IT” that has stemmed from increased cloud use.

Elastica’s APAC managing director John Cunningham explains that problems arise from the struggle to monitor the activities of the many apps operating on an organisation’s network, as well as from the data left unmonitored in the system; both can pose a threat. Elastica’s aim is to secure the cloud.


Because of Australia’s demand for cloud-based solutions, it is an ideal market for companies like Elastica: wherever cloud networks are adopted, cloud security is needed as well. Cunningham says: “Typically with technology, it starts in the US and then it would expand globally, maybe to Japan, maybe to Europe, and then Australia. But this time, it’s a little bit different. Cloud is going out simultaneously around the world, so our investment in Australia is going to be there to support that rapid adoption of cloud applications within Australia.” He then stresses the importance of cloud security: “For every use of a cloud application, there are millions of events being generated … that becomes a data science problem. As humans, and with the scale of activities happening on cloud applications, data science is required to help organizations get visibility of what is important.”
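Cunningham’s point about event volumes is easier to see with a toy example. The sketch below aggregates a handful of (user, app) events and flags any app missing from a sanctioned list; it is a minimal illustration of the shadow IT visibility problem, not Elastica’s actual data science pipeline.

```python
from collections import Counter

# Toy event log of (user, app) pairs, standing in for millions of real events.
events = [
    ("alice", "office365"), ("bob", "dropbox"), ("alice", "dropbox"),
    ("carol", "salesforce"), ("bob", "pastebin"), ("alice", "office365"),
]

SANCTIONED = {"office365", "salesforce"}  # apps IT has approved

usage = Counter(app for _, app in events)

# Anything seeing traffic but absent from the sanctioned list is shadow IT.
shadow = {app: count for app, count in usage.items() if app not in SANCTIONED}
print(shadow)  # {'dropbox': 2, 'pastebin': 1}
```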

Telstra director of security practice John Ieraci said Telstra was impressed by Elastica’s ability to handle issues arising from shadow IT: “When Elastica appeared in mid-2014, we were impressed with the ability to monitor, track, and block sensitive data in real time and quickly identify shadow IT and shadow data for cloud applications, both SaaS and IaaS, using a data science approach and with zero deployment.”


IT Infrastructure as a Managed Service By @Kevin_Jackson | @CloudExpo #Cloud

ViON solves complex enterprise problems by combining passion and agility to deliver effective, innovative solutions; commitment to mission success is in their DNA. One of the ways they deliver success is through ViON on Demand™, which provides highly secure compute, network and storage capabilities through on-premise private clouds. ViON on Demand supports customers whose business strategy is to consume IT infrastructure as a managed service. Through ViON on Demand, ViON’s customers can procure and consume a range of IT hardware and software suited to their specific needs (compute, storage, data center networking). This strategy helps them:

  • Use technology on-premise, like a private cloud;
  • Customize technology, vendor and configuration based on specific needs;
  • Scale up and down to meet demand without penalty or minimums;
  • Pay with operations dollars rather than capital expenditure (a toy sketch of this consumption model follows the list);
  • Achieve best-practice, customized service-level agreements (SLAs); and
  • Enjoy 24/7 live, secure support when needed.
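
To illustrate the consumption model, here is a toy billing calculation in which cost tracks usage directly, with no minimum commitment. The rates and figures are invented for illustration and are not ViON’s pricing.

```python
# Hypothetical unit rates; real managed-service pricing would be contract-specific.
RATE_PER_TB_MONTH = 40.0    # storage
RATE_PER_VCPU_MONTH = 25.0  # compute

def monthly_bill(tb_used: float, vcpus_used: int) -> float:
    """Bill scales directly with consumption; scaling down immediately cuts cost."""
    return tb_used * RATE_PER_TB_MONTH + vcpus_used * RATE_PER_VCPU_MONTH

print(monthly_bill(tb_used=120, vcpus_used=64))  # busy month  -> 6400.0
print(monthly_bill(tb_used=80, vcpus_used=16))   # quiet month -> 3600.0
```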

The executive responsible for managing this business is Rob Davies, Vice President, ViON on Demand. I had the opportunity to meet him at the ViON headquarters in Herndon, Virginia for a discussion on government cloud computing.

Kevin: Thank you very much for the opportunity to speak with you about cloud in the US government. To start off, what is your position here at ViON?
Rob: Thank you Kevin for coming out to visit us.  I am the Executive Vice President of Operations here at ViON and also have the responsibility of managing our On Demand cloud solutions.
Kevin: Being responsible for ViON’s cloud computing solutions seems like a pretty demanding task. How is that going?
Rob: Cloud computing in the US Government marketplace holds great promise, but yes, it also presents a demanding challenge. As you know, the US federal marketplace has been a budget-constricted environment for quite a few years, but that environment is actually good for cloud computing because it has forced agencies to look for better ways to do information technology. Here at ViON, we’ve actually benefitted from that.
Kevin: That sounds pretty interesting.  Can you please elaborate on that a bit?
Rob: Sure. Agencies looking for better and more efficient ways to do information technology have really needed to figure out how to use cloud within their existing organizational structure. This is more difficult than it appears on the surface because government IT organizations are typically structured around a horizontal view of the IT infrastructure. That means all their processes and decisions are aligned with IT operational layers: the server team makes decisions on servers, the storage team makes decisions on storage, the application team makes decisions on applications, and so forth. This organization also drives budget allocations and decisions along those same operational layers. This horizontal viewpoint doesn’t work well with cloud computing because budget decisions need to be more aligned with mission, workload and application characteristics. To do this properly the organization needs to adopt a more vertical view of the IT infrastructure.
Kevin: How have ViON’s cloud computing customers dealt with this problem?
Rob: Through our professional services support, ViON has been able to help its customers elevate their organizational viewpoint. This has enabled them to figure out how to use cloud effectively without changing their existing organization. In a way, we have collaborated with our customers and now know how to do cloud within this traditional, componentized organizational structure.
Kevin: How is that done? Many have said that cloud computing is nearly impossible without changing existing policies or getting FAR (Federal Acquisition Regulation) waivers.
Rob: The first step in the transition is to get legacy infrastructure people more familiar with cloud consumption models. You also need to move them away from a focus on the technical specification of the infrastructure. In my experience, the expertise of government IT professionals is very high. The only issue is that organizationally, they are forced to see cloud as an extension of the infrastructure component that lies within their responsibility. Storage people can deal with storage-as-a-service but they have no authority to link a server or application with that storage. Once the infrastructure teams collaborate with a vertical viewpoint, they can then build a common lexicon for the solution that’s being designed. This, in turn, will drive organizational changes that are friendlier to more efficient consumption-based IT service models.
Kevin: What about the budgeting models? Aren’t they still based on IT components?
Rob: Yes, and most federal agencies are way behind in that area. It is, however, a bit easier in the DoD because of the use of working capital funds. This budgeting construct was designed as a means of dealing with the wide variability of the DoD mission, and that budgeting flexibility can equally be used for cloud services. There is no widespread corollary on the civilian side. Civilian agencies have a willingness to adopt cloud, but the acquisition challenges and the lack of a working capital construct make it more difficult.
Kevin: So how can ViON help agencies get over this hurdle?
Rob: ViON has experience in helping agencies learn how to manage a traditional fixed budget in an environment that has variable purchase requirements. Options include ordering agreements and blanket purchase agreements, which have more funding flexibility than direct-award contracts. We can also determine appropriate workloads for cloud migration, help in analyzing the budget process around those specific workloads, and assist with documenting and forecasting capacity needs. Although peak capacity requirements will certainly be in the budget, that money may come back if the capacity is not actually needed.
http://www.vion.com/Agile-Cloud-Solution/Agile-Cloud-Platform.aspx

Kevin: Are you arguing for changes in government procurement rules?

Rob: Not really. Procurement rules don’t need to be changed but more flexibility needs to be allowed.  COTRs and Contracting Officers just need better tools for purchasing cloud. For example, an ability to pool funds across infrastructure or multiple mission areas would go a long way.
Kevin: You’re really arguing then for a more holistic view and increased visibility of IT within the government. Neither one of those are part of government culture. How do you see this happening?
Rob: Change is hard and cloud computing defines a hard change. To be successful in this, government agencies need to tap the knowledge of government IT infrastructure professionals and make them an integral part of the process. Those professionals know their agency’s mission and how best to manage this change. Unfortunately, in the past, they have been the last to know that an application or system was being funded and built. The government can absolutely do it, but very strict restrictions on how money can be spent may need to be changed. Property and use tax payments are a case in point. Current tax payment rules are driven by ownership: when the government uses cloud services, the CSP (Cloud Service Provider) still owns the equipment, and the FAR is silent on this type of situation. Restrictions on the use of different colors of money may also need to be addressed. Today the CIO doesn’t have any budget authority. FITARA (the Federal Information Technology Acquisition Reform Act) was designed to help in this area, and we can only hope that Congress can see a way forward in helping the CIO move away from management through influence towards being able to manage with authority.
Some of the new vehicles are more structured for cloud with dedicated acquisition shops. This will help the rest of the acquisition community come along.
Kevin: Any advice for those CIOs trying to tackle the challenge of transitioning to the cloud?
Rob: We’ve coached our customers to look at the total acquisition process. When initiating a consumption-based IT contract, allow for time to transition from one contractor to another. Since the vendor needs to be able to make and recoup their investments, contracts tend to be longer, and the government needs to be able to scale up with a new vendor slowly. This approach maximizes the value to all parties. A total acquisition process view also reduces contract churn, contract-related technical evaluations and overall acquisition cost.
Kevin: In wrapping up, what is the health of cloud in the government? What is your prognosis with respect to the future?
Rob: I am really optimistic. It will take a lot more time but we will get there. Mainframe won’t go away, and neither will cloud. We will get there because there are more offerings in the market, more variety, more flexibility, better acquisition models and cross-pollination across the government.
Kevin: Thanks Rob.


Rob Davies explains ViON On Demand


Hybrid cloud implementations: The sooner the better for the CIO


Adopting cloud can often raise as many questions as it answers. The business questions focus on how cloud will deliver increased agility, efficiency and productivity; but predicting the answers in an ever-changing landscape is no easy task. As a result, many CIOs are grappling with complex 10-year roadmaps and missing out on the ‘quick wins’ a hybrid approach could deliver in the interim. With an incremental approach, small shifts to cloud can be made that deliver immediate benefits and a much more manageable roadmap.

The future of how businesses operate will be immeasurably impacted by the introduction of SaaS, as it creates a long tail of services that will realise the consumerisation of the enterprise. The biggest change will be in the IT department: its role will change from builder of systems to broker of services that empowers the rest of the business. This disruption will usher in a new model of IT, the full impact of which will be felt within the next five years.

The only real certainty is that the architecture of the future will look very different from how it appears today. In the meantime, the best first step an organisation can take to prepare for this change is to begin adopting hybrid cloud, which will help it optimise and cope with its legacy investments whilst enabling agility for new digital projects – the much talked-about ‘bimodal IT’.

Remember that cloud is outsourcing

At a time when enterprise IT environments are already becoming more difficult to manage, cloud has the potential to add to the burden. To avoid a cloud infrastructure project becoming a costly mistake, CIOs must approach it with the knowledge that ultimately, public or hosted cloud is just another outsourced IT service. It therefore needs to be assessed, benchmarked and implemented with the same level of detail.

As with any form of outsourcing, a business that can’t manage a service effectively in-house will not solve its problems by simply placing it in the hands of an external supplier. IT departments must look closely at exactly how well they are controlling their own IT resources before deciding what to place in the cloud.

What goes first?

Deciding which workloads should move to the cloud can only be done by benchmarking existing IT services. Similarly, IT departments need to know that functions such as IT service management and capacity planning are fully under control before any migration. For example, if a business does not need the benefit of elasticity for a static, predictable workload, then in the long run cloud could be more expensive.

If, however, the business does need elasticity for a fluctuating workload, then cloud may be the most cost-effective option. This careful benchmarking will help identify where small cloud steps can be introduced for quick wins. A step-by-step hybrid cloud infrastructure helps businesses transform their IT environment to enable and empower employees, with the best possible chance of success.
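A back-of-the-envelope comparison makes the distinction concrete: fixed capacity must be sized for the peak whether or not it is used, while elastic capacity bills only for what each hour consumes. All rates and workload shapes below are invented for illustration.

```python
# Compare fixed (peak-sized) vs elastic (pay-per-use) cost for two workload shapes.
HOURS = 24
FIXED_RATE = 1.0    # cost per unit-hour of owned capacity, used or not
ELASTIC_RATE = 1.5  # elastic capacity typically costs more per unit-hour

static_load = [10] * HOURS         # flat, predictable demand
bursty_load = [2] * 20 + [10] * 4  # quiet most of the day, short peak

def fixed_cost(load):
    return max(load) * HOURS * FIXED_RATE  # must provision for the peak

def elastic_cost(load):
    return sum(load) * ELASTIC_RATE        # pay only for units consumed

for name, load in [("static", static_load), ("bursty", bursty_load)]:
    print(name, fixed_cost(load), elastic_cost(load))
# static 240.0 360.0 -> cloud is dearer for the flat workload
# bursty 240.0 120.0 -> elasticity wins for the spiky workload
```

On these toy numbers the flat workload is cheaper on fixed capacity, while the bursty one costs half as much on elastic capacity – exactly the benchmarking distinction made above.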

Small steps result in big wins

With this renewed focus on enablement and simplification, adopting cloud in incremental stages helps businesses move away from the challenges of traditional infrastructure and deliver agile, cost-effective services. Through this staggered approach, businesses can realise big benefits from a hybrid cloud pathway in as little as 12-18 months – rather than predicting gains that will only be realised 5-10 years in the future.

Ultimately, the business might decide that in 10 years’ time it will have fully adopted public cloud and on-premise services will be a thing of the past. But in the meantime, there’s little point worrying about the long-term headache of managing infrastructure when businesses can start enjoying the benefits of cloud now.

Enabling rather than restricting

As the role of the CIO and the IT department continues to change, their focus now must be on simplifying the challenges that users face; letting users consume services in a low-risk way that enables them to be productive. Whilst refocusing, CIOs must at the same time manage the existing infrastructure and IT legacy that runs the business today, and re-shape it over time.

Taking an incremental approach to hybrid cloud gives CIOs a platform to lead real business change from the centre, and avoid being bypassed or replaced. By driving the strategy and promoting the positive benefits of cloud, CIOs will reduce risks and maximise investments; rather than simply ignoring cloud and falling behind.

Cloud Security Myths By @TierPoint | @CloudExpo [#Cloud]

Malicious agents are moving faster than the speed of business. Even more worrisome, most companies are relying on legacy approaches to security that are no longer capable of meeting current threats. In the modern cloud, threat diversity is rapidly expanding, necessitating more sophisticated security protocols than those used in the past or in desktop environments. Yet companies are falling for cloud security myths that were truths at one time but have evolved out of existence.


Tech News Recap for the Week of 6/15/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 6/15/2015.

EMC eyes containers with Docker storage drivers, Google and Microsoft have both joined the government’s disaster response program, and many interesting announcements came out of Cisco Live 2015.

Tech News Recap

 The corporate IT department has evolved. Make sure you have kept pace.

 

By Ben Stephenson, Emerging Media Specialist

Cloud migration: From monolith to microservices


A cloud migration project can be a relatively simple exercise, where applications are migrated ‘as is’, to gain benefits such as elastic capacity and utility pricing, but without making any changes to the application architecture, software development methods or business processes it is used for.

There may be a clear business case for doing so, such as the hardware platform becoming obsolete; however, the organisation overall won’t realise any additional benefits – there is no business transformation as part of this move.

With senior executives potentially expecting broader strategic capabilities as a result of a move to the cloud, it’s therefore important that clarifying this scope is the very first step in planning a cloud migration, and the OMG’s Architecture Driven Modernisation (ADM) methodology is ideal for this purpose.

As the ADM ‘horseshoe’ model articulates, and this Carnegie Mellon article shows, a migration project can be scoped at three distinct tiers, with the size and length of the project increasing alongside the level of associated business benefit.

This begins at a technical migration, meaning the application is migrated ‘as is’ to a new hardware infrastructure service without modification.


Higher levels then include application and data architecture and business architecture, meaning that as well as shifting platforms, the application itself is also transformed and, furthermore, so is the business model it enables.

As the horseshoe describes, these increases in scope mean a larger project that takes longer, because each is delivering a larger scope of business benefits, impacting a larger group of stakeholders and requiring a larger business transformation exercise.

Exploring the nature of these benefits can help specify exactly what business executives are hoping to gain by moving to the cloud, and this can be headlined by a theme of “breaking innovation gridlock”, described in this whitepaper from HP.

In short, this describes how most large enterprise organisations have a legacy application estate, made up of ageing technologies like mainframes running COBOL, which performs the organisation’s core business processes and is thus central to the business value it provides, but which has become rigid, inflexible and fragile with age.

Due to the complexity of these environments and the lack of staff skilled in these technologies, they essentially become untouchable black boxes: the CIO can’t take the risk of downtime by trying to make changes, and because of their age their maintenance is very costly, consuming the majority of IT budgets, as HP describes.

Thus they have become trapped in a state of innovation gridlock, unable to afford investment in new digital-enabling platforms and unable to adapt legacy systems to offer new customer-centric processes.

Enterprise DevOps

Although moving to IaaS can deliver benefits such as elastic capacity and utility pricing for infrastructure level components, this isn’t really of strategic value to most large organisations as they aren’t constrained in these areas.

Instead where the major business value will come from is modernising this legacy environment, transforming the core enterprise applications to new cloud-centric approaches so that innovation gridlock is broken and a faster cycle of development throughput is achieved.

A variety of tools can automate the process of transforming legacy code like COBOL into modern equivalents in Java and .NET, meaning applications can be re-deployed to private or public cloud services and, most importantly, then much more easily modified by software developers – setting the scene for an agile enterprise DevOps culture and a faster change cycle achieved through continuous deployment practices.

Furthermore, leading-edge cloud architecture principles such as ‘microservices’ can also be utilised. This means breaking up large monolithic software, like mainframe systems, into an array of small self-contained services, making it even easier to implement change at a faster pace. As described in our Microservices section, pioneering organisations like Nike have adopted this approach.
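To make “small self-contained service” concrete, the sketch below shows one such service in Python using Flask: it owns its own data and exposes a single narrow HTTP interface, so it can be changed and redeployed without touching the rest of the estate. The service name, routes and fields are invented for illustration and are not taken from any particular migration.

```python
# A minimal self-contained 'orders' microservice. Illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real service this would be the service's own database, not shared state.
_orders = {"1001": {"item": "widget", "qty": 3}}

@app.get("/orders/<order_id>")
def get_order(order_id):
    order = _orders.get(order_id)
    return (jsonify(order), 200) if order else (jsonify(error="not found"), 404)

@app.post("/orders")
def create_order():
    order_id = str(1001 + len(_orders))
    _orders[order_id] = request.get_json()
    return jsonify(id=order_id), 201

if __name__ == "__main__":
    app.run(port=5001)  # each service runs, scales and deploys independently
```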

Conclusion

An especially powerful aspect of these legacy transformation solutions is that they can also automatically generate the new code required for key features such as a web front-end and mobile client.

This would provide the foundations for the enhanced functionality that senior executives are likely hoping for from their cloud investments. As they seek to pioneer digital strategies enabling ‘omnichannel’ access across web and mobile interfaces, the IT team would previously have faced a considerable challenge achieving this goal when working with aged application environments.

By employing the full scope of architecture-driven modernisation, they can quickly deliver this important capability while also transforming the environment so that additional innovative enhancements can be engineered more easily on an ongoing basis.

With web and mobile platforms established, the full three-tier scope of ADM transformations becomes possible. Business executives can more easily build social communities around their core business processes, explore dynamic new mobile commerce scenarios, and so on. In short, the only limit on the innovative business models they might pioneer would be their imagination, not the IT estate.


Has software defined networking finally come of age?


Can VMware’s NSX finally fulfil the promise of SDN?

The first time I heard the acronym SDN was more than 10 years ago. At that time SDN was the cool way of saying “once the networking guys have racked, stacked and wired the switching and routing components we can install tools on a server and configure them through those tools from the comfort of the NOC”.

This was much more than just standard monitoring and management; we could set up whole networks and change them around, in software. But alas, this wasn’t really what we were hoping for, and we would have to wait a number of years for the next advance.

The next evolutionary step was realized when VMware came out with the ability to create a virtual switch on top of their ESX hypervisor (Virtual Distributed Switching, or VDS). Now we were talking slick: the ability to hang a switch without having to roll out physical hardware – amazing! Just provide some basic networking hardware at the perimeter and enough CPU to support your networking needs, and start building networks. Great, but not quite good enough. Some of the questions left unanswered were:

  • Modern networks consist of much more than basic switches and routers: what happened to the other components?
  • How do I as a provider of cloud services keep the multiple tenant networks from affecting each other?
  • How can I provide overlay networking capabilities to decouple the logical networking world from the physical?

In other words: “You promised me real virtual networking, so where is it?”

NSX-v

Enter VMware’s NSX for vSphere (NSX-v). Does it do everything? No, but it gets the cloud world a whole lot closer to the promise of SDN. The main functions NSX-v provides are outlined below.

The Virtual Distributed Switch (VDS) is the basic building block of the overall NSX-v architecture. VDS, as previously mentioned, is the original SDN function, but in NSX-v it has taken on a much more complete set of switching capabilities, including multi-layer switching.

Routing within NSX-v is described in VMware’s NSX Network Virtualization Design Guide: “The Logical Routing capability in the NSX platform provides customers the ability to interconnect endpoints (virtual and physical) deployed in different logical L2 networks. Once again, this is possible due to the decoupling between network infrastructure and logical networks provided by the deployment of network virtualization.” Bottom line: much of what you previously needed a physical router for can now be done virtually.

The NSX Distributed Firewall (DFW) provides, as expected, full firewall capabilities in a virtual appliance. An even more interesting feature of DFW is micro-segmentation: the ability to place a set of servers (one or more VMs) in their own security zone and logically isolate them from other logical/virtual environments.

Again quoting the VMware NSX Design Guide: “In legacy environments, to provide security services to a server or set of servers, traffic from/to these servers must be redirected to a firewall using VLAN stitching method or L3 routing operations: traffic must go through this dedicated firewall in order to protect network traffic. With DFW, this is no longer needed as the firewall function is brought directly to the VM. Any traffic sent or received by this VM is systematically processed by the DFW.

“As a result, traffic protection between VMs (workload to workload) can be enforced if VMs are located on same Logical Switch (or VDS VLAN-backed port-group) or on different logical switches.”
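Conceptually, micro-segmentation amounts to evaluating a default-deny rule set at each VM’s own vNIC rather than at a central chokepoint. The sketch below models that evaluation in plain Python; it illustrates the idea only and is not the NSX API – the groups, rules and VM names are invented.

```python
# Each VM belongs to a security group; traffic passes only if a rule permits
# the (source group, destination group, port) tuple. Illustrative only.
VM_GROUPS = {"web-01": "web", "web-02": "web", "db-01": "db", "dev-01": "dev"}

ALLOW_RULES = {
    ("web", "db", 3306),  # web tier may reach the database on MySQL
}  # default deny: anything not listed is dropped

def allowed(src_vm: str, dst_vm: str, port: int) -> bool:
    """Enforced at each VM's vNIC, so workload-to-workload traffic is checked
    even when both VMs sit on the same logical switch."""
    return (VM_GROUPS[src_vm], VM_GROUPS[dst_vm], port) in ALLOW_RULES

assert allowed("web-01", "db-01", 3306)      # permitted flow
assert not allowed("dev-01", "db-01", 3306)  # dev is isolated from the database
```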

NSX also provides a fairly impressive network load balancing service based on the NSX Edge device. Important supported features of the NSX LB include a design suited to cloud applications, full programmability via API, management through the same stack as all other NSX services, support for TCP and UDP applications, connection throttling, L7 manipulation, and integration with third-party LB solutions.
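Two of those features – round-robin distribution and connection throttling – are simple enough to sketch. The toy below models the behaviour only; it is not NSX Edge code, and the pool members and limits are invented.

```python
import itertools

POOL = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical pool members
MAX_CONN_PER_MEMBER = 2                          # throttling limit

_rr = itertools.cycle(POOL)               # round-robin iterator over the pool
_active = {member: 0 for member in POOL}  # open connections per member

def pick_member():
    """Return the next member with spare capacity, or None if all are throttled."""
    for _ in range(len(POOL)):
        member = next(_rr)
        if _active[member] < MAX_CONN_PER_MEMBER:
            _active[member] += 1
            return member
    return None  # every member is at its limit: the connection is throttled

print([pick_member() for _ in range(7)])
# ['10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.11', '10.0.0.12', '10.0.0.13', None]
```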

NSX L2 VPN service allows extending L2 connectivity across two separate data centre locations. Some of the use cases for NSX VPN Services, delivered through the NSX Edge device, include enterprise workload migration/DC consolidation, service provider tenant on-boarding, cloud bursting, and stretched application tiers.

Connectivity to the physical environment via NSX-v allows the physical network to be used as a backbone, while allowing highly flexible, highly customised networking as an overlay. The use of the Virtual Extensible LAN protocol (VXLAN) enables the building of logical networks that provide L2 adjacency between workloads, without the issues and scalability concerns found with traditional layer 2 technologies.
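VXLAN achieves that decoupling by encapsulating L2 frames in UDP (destination port 4789) with a 24-bit VXLAN Network Identifier (VNI), which lifts the segment limit from 4,096 VLANs to roughly 16 million logical networks. A minimal sketch of the 8-byte VXLAN header defined in RFC 7348:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: a flags byte (0x08 = VNI
    valid), three reserved bytes, the 24-bit VNI, and one reserved byte."""
    assert 0 <= vni < 2**24  # the 24-bit VNI is what allows ~16M segments
    return struct.pack("!B3xI", 0x08, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900 -> VNI 5001 == 0x001389
```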

NSX Spine and Leaf Use Case

This consolidated use case covers many of the different possibilities when using NSX-v. The architecture is a modern spine-and-leaf construct of the type that I have recently been involved in building, utilising the following NSX-v features:

  • Connectivity to the physical environment and the use of VXLAN
  • NSX L2 VPN Services
  • L2/L3 switching, routing, gateways
  • NSX DFW and Micro-segmentation
  • NSX Network Load Balancing

Conclusion

While VMware’s NSX-v represents a tremendous leap forward in software defined networking, there is still plenty to do. During a recent project, a few minor challenges arose around how to integrate NSX into an existing physical infrastructure. The particulars involved my company’s security standards, which required a physical firewall between the environment’s control plane and the user-definable areas. VMware engineering was extremely helpful and the problem was resolved.

As the product set matures and usage increases, these issues will inevitably decrease. What we have gained is much more cloud-centric, feature-rich capability for creating, controlling and changing networks in software. Sure, there is still work to do to reach true SDN, but now we are that much closer.