All posts by SagarNangare

How is Kubernetes leading the game in enabling NFV for cloud-native?

The impact of cloud-native readiness on applications, most of which are orchestrated using Kubernetes, was visible in VMware’s announcements at the recent VMworld 2019. Those announcements made it clear to the IT world that the focus of IT infrastructure has shifted from virtualisation to containerisation. Going cloud-native and shifting workloads onto Kubernetes clusters is a key trend across the industry.

CNCF (Cloud Native Computing Foundation) has pushed its projects aggressively to enterprise IT infrastructure and telecom service providers, so that the core of data centres can be built using new containerised, microservices-based methods.

NFV and telecom use cases have also started shifting to a cloud-native landscape in the last two years. NFV techniques have helped CXOs move to software-defined, software-centric data centres with virtual network functions (VNFs) as the core elements, orchestrated using VNF managers (VNFMs). VNF orchestration can be done using commercial VNFM platforms offered by Nokia, Cisco, Ericsson, Huawei and NEC, or an open-source platform like OpenStack Tacker. Now, with the cloud-native movement in the IT domain, VNFs are becoming cloud-native network functions (CNFs).

Cloud-native development of network functions:

  • Makes applications or code portable and reusable – in other words, usable repeatedly, independent of the underlying infrastructure
  • Allows the application to scale up and down as demand changes
  • Can be deployed in a microservices style, though this is not mandatory
  • Is suitable for elastic and distributed computing environments

Cloud-native development also enables NFV to embrace DevOps and agile techniques and, more importantly, allows container orchestration engines like Kubernetes to handle workloads – which means more dynamism comes into the picture at the core of the NFV stack.
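
To make this concrete, here is a minimal sketch of a CNF deployed through the official Kubernetes Python client. It is illustrative only: it assumes a reachable cluster and a local kubeconfig, and the Deployment name and image "example/cnf-firewall:1.0" are hypothetical placeholders rather than a real network function.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes a reachable cluster).
config.load_kube_config()

# Describe a containerised network function as an ordinary Deployment.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="cnf-firewall"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # Kubernetes keeps two instances running, rescheduling on failure
        selector=client.V1LabelSelector(match_labels={"app": "cnf-firewall"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "cnf-firewall"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="firewall", image="example/cnf-firewall:1.0"),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Once a network function is packaged this way, scaling, rescheduling and rolling upgrades are handled by the cluster itself rather than by bespoke VNFM workflows.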

Earlier, CNFs were in an evaluation phase, with several vendors and service providers checking their readiness for NFV use cases. In 2018, I wrote about the benefits of deploying network functions in containers, architected using microservices. I also wrote on why cloud-native VNFs are important to NFV success.

The image below shows how VNFs were managed in the past, how they are managed currently alongside CNFs, and how Kubernetes can become the de facto framework for handling network functions and applications packaged as CNFs and VNFs.

Kubernetes in the picture

We can now see how far Kubernetes has evolved: it is present in data centres of every size, handling every workload type, and it is becoming the orchestration choice at the edge as well. We have seen several collaborations on new 5G solutions focused specifically on handling containers using Kubernetes and legacy virtual machines using OpenStack.

There are several ways Kubernetes can be useful in NFV use cases for handling network functions and applications, starting with hosting the entire cloud-native software stack in its clusters.

If you are a software or solution provider, Kubernetes can help you orchestrate all workload types: VNFs, CNFs, VMs, containers, and functions. With Kubernetes, it has become possible for all these workloads to co-exist in one architecture. ONAP is a leading service orchestrator and NFV MANO platform for handling services deployed in NFV, and a Kubernetes plugin developed specifically for ONAP makes it possible to orchestrate different services and workloads catered through multiple sites.

ONAP has challenges in terms of installation and maintenance, and concerns have also been noted about its heavy consumption of resources like storage and memory. To work along with Kubernetes, ONAP released a lightweight version, called ONAP4K8S, which will fit many NFV architectures. Requirements and package contents are published on its profile page.

There can be cases where it is not possible to get away from virtual machines completely: some existing functions need to reside in virtual machines and cannot be containerised. For such cases, the Kubernetes community’s KubeVirt and Mirantis’s Virtlet frameworks can be integrated to dynamically manage virtual machines alongside containers. Kubernetes is also becoming a choice for enabling orchestration at the edge of the network: a Kubernetes-based control plane uses few resources, which makes it suitable for edge nodes, even those with a single server.
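
As an illustration of the KubeVirt route, the hedged sketch below submits a KubeVirt VirtualMachine object through the Kubernetes Python dynamic client, so a legacy VM is scheduled alongside container pods. It assumes the KubeVirt CRDs are installed; the resource names and demo disk image are placeholders, and older KubeVirt releases expose the API as kubevirt.io/v1alpha3 rather than kubevirt.io/v1.

```python
from kubernetes import config, dynamic
from kubernetes.client import api_client

# Connect using the local kubeconfig and look up the KubeVirt CRD.
client = dynamic.DynamicClient(
    api_client.ApiClient(configuration=config.load_kube_config())
)
vm_api = client.resources.get(api_version="kubevirt.io/v1", kind="VirtualMachine")

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-vnf-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {"spec": {
            "domain": {
                "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                "resources": {"requests": {"memory": "1Gi"}},
            },
            # containerDisk ships the VM image inside a container image.
            "volumes": [{"name": "rootdisk", "containerDisk": {
                "image": "quay.io/kubevirt/cirros-container-disk-demo"}}],
        }},
    },
}
vm_api.create(body=vm, namespace="default")
```

Virtlet takes a different route, running VMs as if they were pods, but the operational idea is the same: one scheduler for both workload types.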

Cloud-native NFV stack

The Akraino edge stack hosts a blueprint, the Integrated Cloud Native (ICN) NFV Stack, under which the work of making the NFV core cloud-native is in progress. The current progress of integrating open-source cloud-native projects into the NFV stack is shown below:

Srinivasa Rao Addepalli (senior principal engineer and chief architect at Intel) and Ravi Chunduru (associate fellow, Verizon) will present a session at the upcoming Open Networking Summit Europe 2019 on how Kubernetes can be used at the core of NFV, and how Linux Foundation communities (ONAP, OPNFV, CNCF, LFE) are working to make the NFV core cloud-native.

Editor's note: Download Calsoft’s eBook – A Deep-Dive On Kubernetes For Edge – which focuses on current scenarios of adoption of Kubernetes for edge use cases, latest Kubernetes and edge case studies, deployment approaches, commercial solutions and efforts by open communities.

Image sources: https://events.linuxfoundation.org/wp-content/uploads/2018/07/ONS2019_Cloud_Native_NFV.pdf



An analysis of Kubernetes and OpenStack combinations for modern data centres

Editor's note: This article was originally published on OpenStack Superuser. CloudTech has the author's permission to re-publish here.

For many telecom service providers and enterprises transforming their data centres to modern infrastructure, moving to containerised workloads has become a priority. However, they often do not choose to shift completely to a containerised model.

Data centres also have to support virtual machines (VMs) to keep up with legacy workloads. Therefore, a model of managing virtual machines with OpenStack and containers with Kubernetes has become popular. An OpenStack survey conducted in 2018 found that 61% of OpenStack deployments also work with Kubernetes.

Apart from this, some of the recent tie-ups and releases of platforms clearly show this trend. For example:

  • AT&T’s three-year deal with Mirantis to develop a 5G core backed by Kubernetes and OpenStack
  • Platform9’s managed OpenStack and Kubernetes, providing the required feature sets bundled in a solution stack for service providers as well as developers, with support for Kubernetes on the VMware platform too
  • Nokia’s CloudBand release, containing Kubernetes and OpenStack for workload orchestration
  • The OpenStack Foundation’s recently announced Airship project, aiming to bring the power of OpenStack and Kubernetes into one framework

The core part of a telecom network or any virtualised core of a data centre has undergone a revolution, shifting from physical network functions to virtual network functions (VNFs). Organisations are now adopting cloud-native network functions (CNFs) to help bring CI/CD-driven agility into the picture.

This journey is shown in one of the slides from the Telecom User Group session at KubeCon Barcelona in May, delivered by Dan Kohn, executive director of CNCF, and Cheryl Hung, director of ecosystem at CNCF. (Image source).

 

According to the slide, application workloads deployed in virtual machines (VNFs) and containers (CNFs) can presently be managed with OpenStack and Kubernetes, respectively, on top of bare metal or any cloud. The optional part, ONAP, is a containerised MANO framework which is itself managed with Kubernetes.

As discussed in the birds-of-a-feather (BoF) telecom user group session delivered by Kohn, with the progress of the cloud-native movement it is expected that CNFs will become a key workload type. Kubernetes will be used to orchestrate CNFs as well as VNFs, with VNFs segregated using KubeVirt, Virtlet or OpenStack on top of Kubernetes.

Approaches for managing workloads using Kubernetes and OpenStack

Let’s understand the approaches to integrating Kubernetes with OpenStack for managing containers and VMs.

The first approach is a basic one wherein Kubernetes co-exists with OpenStack and manages the containers. It gives good performance, but you cannot manage unified infrastructure resources through a single pane. This causes problems in planning and devising policies across workloads, and it can make it difficult to diagnose problems affecting the performance of resources in operations.

The second approach is running Kubernetes clusters in VMs managed by OpenStack. This enables OpenStack-based infrastructure to leverage the benefits of Kubernetes within a centrally managed OpenStack control system, and it allows full multi-tenancy and security benefits for containers in an OpenStack environment. However, it introduces performance lag and necessitates additional workflows to manage the VMs that host Kubernetes.
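
As a hedged sketch of this second approach, the snippet below uses the openstacksdk Python library to boot the OpenStack VMs that a tool such as kubeadm would then join into a Kubernetes cluster. The cloud, image, flavor, network and keypair names are placeholders for whatever exists in your deployment.

```python
import openstack

# Credentials are read from clouds.yaml; "mycloud" is a placeholder entry.
conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("ubuntu-18.04")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("k8s-net")

# Boot one master and two workers; kubeadm (or similar) would run afterwards.
for name in ("k8s-master", "k8s-worker-0", "k8s-worker-1"):
    server = conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        key_name="ops-key",  # assumed pre-existing keypair
    )
    conn.compute.wait_for_server(server)
```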

The third approach is an innovative one, leaning towards a completely cloud-native environment: OpenStack is replaced with Kubernetes, which manages containers and VMs alike. Workloads can take complete advantage of hardware accelerators and SmartNICs, among others. With this, it is possible to offer integrated VNF solutions with container workloads for any data centre, but it demands improved networking capabilities like those found in OpenStack (SFC, provider networks, segmentation).

Kubernetes versus OpenStack – is the rivalry real?

Looking at the recent VMworld 2019 US event, it was clear that Kubernetes will be everywhere: there were 66 sessions and plenty of hands-on training focused solely on Kubernetes integration into every aspect of IT infrastructure.

But is that the end of OpenStack? No. As we have already seen, the combination of the two systems is the better bet for any organisation that wants to stick with traditional workloads while gradually moving to a new container-based environment.

How Kubernetes and OpenStack are going to combine

I came across a very decent LinkedIn post by Michiel Manten. He stated that both containers and VMs have their drawbacks, their own use cases and their own orchestration tools. OpenStack and Kubernetes will complement each other if properly combined, running some workloads in VMs to get isolation benefits within a server and some in containers. One way to achieve this combination is to run Kubernetes clusters within VMs in OpenStack, which mitigates the security pitfalls of containers while leveraging the reliability and resiliency of VMs.

What are the benefits?

  • Combining the systems immediately benefits all current workloads, so enterprises can start their modernisation while maintaining high speed at a much lower cost than commercial solutions
  • Kubernetes and OpenStack can be an ideal, flexible solution for any form of cloud, or for new far-edge clouds where automated deployment, orchestration and latency are the concerns
  • All workloads sit in a single network in a single IT ecosystem, which makes it easier to apply high-level network and security policies
  • OpenStack supports most enterprise storage and networking systems in use today. Running Kubernetes with, and on top of, OpenStack enables seamless integration of containers into your IT infrastructure. Whether you want to run containerised applications on bare metal or in VMs, OpenStack lets you run containers the way that is best for your business
  • Kubernetes brings self-healing capabilities to infrastructure. Integrated into OpenStack, it can enable easy management of, and resiliency to failure of, core services and compute nodes
  • A recent release of OpenStack (Stein) has several enhancements to support Kubernetes in the stack. The team behind the OpenStack certified Kubernetes installer made it possible to deploy all the containers in a cluster within five minutes regardless of the number of nodes, down from the previous 10-12 minutes. With this, a very large-scale Kubernetes environment can be launched in five minutes

Telecom service providers who have taken steps towards 5G agree that a cloud-native core is imperative for a 5G network. OpenStack and Kubernetes are mature open-source operating and orchestration frameworks today: agility is the key capability Kubernetes brings to data centres, while OpenStack has several successful projects focused on the storage and networking of workloads, with support for myriad applications.

Editor's note: Download the Calsoft eBook – A Deep-Dive On Kubernetes For Edge –  focusing on current scenarios of adoption of Kubernetes for edge use cases, latest Kubernetes and edge case studies, deployment approaches, commercial solutions and efforts by open communities.



Addressing the concerns of data management and sovereignty in multi-cloud and edge scenarios

MWC Barcelona last month focused heavily on two fast-emerging technology trends: 5G and edge computing. Together, they will significantly impact businesses by enabling massive volumes of digital data to move between cloud servers located in multiple regions around the world, as well as between IoT devices and edge nodes. This is due to the hyper-fast speed of 5G networks and to edge computing architectures that place micro-clouds and data centres closer to data-generating IoT devices.

To seize new opportunities and stay ahead of competitors, businesses are in the process of transforming their operational models to take advantage of 5G and edge computing.

Currently, the data generated by these devices is stored in the cloud; this could be on-premises, in a public cloud like Amazon Web Services (AWS), Azure or Google, hybrid, or multi-cloud. The edge can also be seen as a ‘mini-cloud’ where some data will surely reside to support endpoint applications, and with the edge, an increasing number of data storage servers are emerging to host data. In a few years, large amounts of data will be scattered across clouds and edges located in different countries and continents.

However, growing amounts of digital data are bound by the regulations of many countries and regions, which aim at data sovereignty: protecting both general and sensitive information from external access and misuse. Last year, for example, the European Union implemented GDPR; similarly, India, China and Brazil, among other nations, established their own data protection bills. The varied and growing number of regulations creates concerns for businesses in the midst of transformation driven by 5G and edge benefits. Businesses, including technology infrastructure vendors and service providers, will want ownership of the data generated by consumers, whether that occurs locally or across borders.

The key question therefore is: how can data in multi-cloud and multi-node environments be managed? Will data sovereignty be a roadblock to latency-sensitive 5G use cases?

I came across one company, Kmesh, and found it was working on compelling solutions for data mobility in edge and multi-cloud scenarios. I got in touch with Jeff Kim, CEO of Kmesh, to learn about the core of their technology.

Kmesh, founded only in 2018, today has several solution offerings to address challenges with data used across multiple clouds, countries and edges. The offerings are SaaS solutions for data sovereignty, edge data and multi-cloud; each provides a centralised software portal where users can set up policies for how they wish to distribute data. These SaaS offerings allow organisations to transform centralised data into distributed data, operating over multiple clouds, countries and edges as a single global namespace.

Kmesh enables businesses to take full control of their data generated at various data centres and residing in different geographies. Businesses can also move or synchronise the data in real time. So how do their SaaS offerings work? “Using our SaaS, you install a Kmesh software agent on-premises and another Kmesh software agent on any cloud or clouds,” said Kim. “Then, using our SaaS, you control which data gets moved where. Push a button, and the data gets moved/synced in real time, with no effort by the customer.”

With this approach, Kmesh aims to deliver significant efficiency improvements in operations involving data by providing the ability to orchestrate where data generated by end devices will reside and be accessed across edge, multi-cloud and on-prem.

Kmesh also aims to offer agility and flexibility in application deployment when used with Kubernetes, the de facto technology for orchestrating where applications reside. Businesses gain the flexibility to deploy applications anywhere and can leverage data ponds placed at different locations. Like Kubernetes, Kmesh follows cloud-native design principles targeted at cloud, hybrid cloud, and multi-cloud use cases.

Leading public clouds are known to have excellent artificial intelligence (AI) and machine learning (ML) capabilities for data provided to them. Kim explained how Kmesh can focus on data mobility in the age of AI and ML. “Enterprise customers still have their data predominantly on-premises,” he said. “Cloud providers have great AI/ML applications, such as TensorFlow and Watson, but moving data to the cloud and back again remains a challenge. Kmesh makes that data movement easy and eliminates those challenges, allowing customers to focus on what they want – the AI/ML application logic.”

Kmesh offerings reduce the burden on network resources by eliminating the need to transfer huge amounts of data between cloud and digital devices. In addition, businesses can substantially lower their storage costs by eliminating the need for data replication on different clouds.

I also asked if Kmesh could benefit telecom service providers in any way. “We can help in two ways, with them as partners and as customers,” said Kim. “As customers, telcos have massive amounts of data, and we can help them move it faster and more intelligently. As partners, if they offer cloud compute solutions, then they can resell Kmesh-based services to their enterprise customers.

“One early sales entry point to enterprises is by supporting data sovereignty in countries where the big clouds – AWS, Azure, Google – have little or no presence,” added Kim. “Many countries, particularly those with high GDPs, now have regulations that mandate citizen data remains in-country. Telcos in countries like Vietnam, Indonesia, Switzerland, Germany [and] Brazil can use Kmesh to offer data localisation compliance.”

The technology world is looking for flexible IT infrastructure that will easily evolve to meet changing data and performance requirements in support of the onslaught of upcoming and lucrative use cases. Kmesh is one company which aims to address data management and data sovereignty concerns while decreasing costs associated with storage and network resources.



How are faster networks advancing the next generation of data centres?

We are witnessing a significant uplift in the data transmission speeds offered by network connectivity providers. Service providers now promise speeds from hundreds of megabits up to gigabits per second, with which, for instance, we can live-stream Blu-ray-quality movies without any buffering.

Such network speeds are set to trigger many new technology possibilities, and businesses cannot afford to stay behind: they have to take into account new technologies that are widely adopted across a competitive market landscape. The focus of businesses has therefore become clear and narrow: constantly satisfy customer demands with lucrative digital offerings and push ahead to gain competitive advantage.

To align with this trend, businesses have already started to optimise and redesign their data centres to handle the vast amount of data generated by a growing number of consumer devices. Transforming the data centre is the obvious way to address this need to upgrade. A transition would involve the use of:

  • Virtual network functions (VNFs), which replace server hardware with software-based packages doing specific work – network function virtualisation (NFV)
  • Software-defined networking, to gain central control of the network using a core framework which allows admins to define network operations and security policies
  • Seamless orchestration among several network components using ONAP, ETSI OSM, or Cloudify, among others
  • Workload (VM and container) and data centre management by implementing OpenStack, Azure Stack, Amazon S3, CloudStack, and Kubernetes. Containers are being widely adopted thanks to faster instantiation, integration, scaling, security, and ease of management

The next thing to disrupt the data centre is the adoption of edge architecture. Edge computing will bring a mini data centre closer to where data is generated, by devices like smartphones, industrial instruments and other IoT devices. This adds more endpoints before data is gathered by the central data centre, but the advantage is that most computing is done at the edge, which helps reduce the load on network transmission resources. In addition, hyperconvergence can be used at edge nodes to simplify the required mini data centre.

Mobile edge computing (MEC), a core project maintained by ETSI, has emerged as an edge computing model for telecom operators to follow. ETSI maintains and works on innovations to improve the delivery of core network functionalities using MEC, as well as guiding vendors and service providers.

Aside from edge computing, network slicing is a new architecture introduced in 5G that will have an impact on how data centres are designed for particular premises, and dedicated for specific cases such as Industrial IoT, transportation, and sports stadia.

Data centre performance for high speed networks

In this transforming age, a large amount of data will move between devices and the data centre, as well as between data centres. As new use cases require low latency and high bandwidth, it is important to obtain a higher level of performance from the data centre – performance that cannot be achieved with legacy techniques or by simply adding more capacity.

With the ‘data tsunami’ of recent years, data centre technology vendors have come up with new inventions, and communities have formed, to address the performance issues raised by different types of workloads. One technique significantly utilised in new-age data centres is offloading some CPU tasks to the switches and routers that interconnect networks and servers. Take the example of the network interface card (NIC): used to connect servers to the network components of the data centre, it has become the SmartNIC, offloading processing tasks that the system CPU would normally handle. SmartNICs can perform network-intensive functions such as encryption/decryption, firewalling, and TCP/IP and HTTP processing.

Analyst firm Futuriom conducted a data centre network efficiency survey of IT professionals about their perceptions and views on data centres and networks. Apart from virtualising network resources and workloads, SmartNIC usage and processing-offload techniques emerged as the top interest among IT professionals for the efficient processing of data on high-speed networks. This reveals how businesses are relying more on smart techniques which can save costs while delivering notable data centre performance improvements for faster networks.

Workload accelerators like GPUs, FPGAs and SmartNICs are widely used in current enterprise and hyperscale data centres to improve data processing performance. These accelerators interconnect with CPUs to speed up data processing and require very low latency when transmitting data back and forth to the CPU.

Most recently, to address the high-speed, low-latency requirements between workload accelerators and CPUs, Intel, along with companies including Alibaba, Dell EMC, Cisco, Facebook, Google, HPE and Huawei, has introduced an interconnect technology called Compute Express Link (CXL) that aims to improve performance and remove bottlenecks in computation-intensive workloads for CPUs and purpose-built accelerators. CXL is focused on creating a high-speed, low-latency interconnect between the CPU and workload accelerators, and on maintaining memory coherency between the CPU memory space and memory on attached devices. This allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost.

NVMe is another interface, introduced by the NVM Express community. It is a storage interface protocol used to speed up access to SSDs in a server. NVMe minimises the CPU cycles applications spend on I/O and handles enormous workloads with a smaller infrastructure footprint. It has emerged as a key storage technology and has had a great impact on businesses dealing with vast amounts of fast data, particularly data generated by real-time analytics and emerging applications.

Automation and AI

Agile 5G networks will result in the growth of edge compute nodes in the network architecture, processing data closer to endpoints. These edge nodes, or mini data centres, will sync with a central data centre as well as being interconnected with each other.

For operators, manually setting up several edge nodes is a daunting task: edge nodes regularly need initial deployment, configuration, software maintenance and upgrades. In the case of network slicing, there could be a need to install or update VNFs for particular tasks for the devices in a slice. Doing this manually is not feasible. This is where automation comes into the picture: operators need a central dashboard at the data centre from which to design and deploy configurations for edge nodes.

Technology businesses are demonstrating or implementing AI and machine learning at the application level to enable auto-responsiveness – for instance, chatbots on a website. Much of this AI is applied to data lakes, generating insights from self-learning AI-based systems. The same types of autonomous capability will be required by the data centre.

AI systems will be used to monitor server operations, tracking activity for self-scaling on sudden demand for compute or storage capacity, self-healing from breakdowns, and end-to-end testing of operations. Tech businesses have already started offering solutions for each of these use cases; for example, the joint AI-based integrated infrastructure offering from Dell EMC Isilon and NVIDIA DGX-1 for self-scaling at the data centre level.

Conclusion

New architectures and technologies are being introduced along with the revolution in the network. Most of this infrastructure has turned software-centric in response to the growing number of devices and higher bandwidth. Providing lower latency – down to 10 microseconds – is a new challenge for operators in enabling new technologies in the market. For this to happen, data centres need to complement the higher-bandwidth network; together they will form the base for further digital innovation.

Editor’s note: Download the eBook ‘5G Architecture: Convergence of NFV & SDN Networking Technologies’ to learn more about the technologies behind 5G and the status of adoption, along with key insights into the market



Is the cloud the next thing for long-term data retention? Looking at the key vendors in the space

For any organisation in this era, there is a realisation of how critical data is for business needs and operations.

An enormous amount of data has already been produced since cloud computing disrupted organisations of various types, be it in education, finance, healthcare or manufacturing. Today, organisations are most concerned about the data built up over the last 15 to 20 years amid the surge of IT infrastructure.

This data, and these applications, are probably not actively used, but they matter to organisations because the data contains critical information with compliance requirements around it. Securing old data (unstructured content, applications, virtual machines) is becoming crucial. There has to be a cost-effective and reliable archiving solution to store and secure data while allowing rapid access when needed.

In the past, IT management used to save data on tape drives or in on-premises data centres without any filtering. But data demands have drastically changed.

Even more data will be produced in the next five to seven years as more digitally connected devices become part of business operations. Data will be the fuel for any business, as companies extract analytical information from it to get ahead of the competition or stay aligned with consumer demands. This digital transformation is not just about acquiring new technology, but about saving CAPEX and OPEX every time the data centre moves ahead in innovation.

As data grows, edge computing architecture will enable data centre systems to get closer to digital devices for processing information (machine learning/analysis), with only a small set of information pushed to the cloud or private data centre.

How will organisations deal with past data when real-time data also needs to be archived for reference? How will organisations deal with data in hybrid-cloud or multi-cloud models, where private and public clouds are utilised for different data processing purposes? Will there be automation for constantly syncing data based on the archival methods integrated into an archival strategy? And what about security from external breaches or physical damage to archival systems?

Various vendors have developed solutions to address these needs, so organisations can select a solution which fits their requirements and can be customised to their budget. In this post, I take a look at data archival solutions from leading vendors Rubrik, Cohesity and Zerto. Let’s evaluate their solutions.

Cohesity: Enterprise-grade long-term retention and archival

Cohesity’s solution allows you to leverage both cloud and tape to archive data according to the organisation’s requirements. Cohesity calls the solution cloud-native: apart from tape, archival is possible on public clouds, private clouds, Amazon S3-compatible devices and QStar-managed tape libraries. The solution enables IT management to define workflow policies for automated backup and archival. It consists of two Cohesity products: Cloud Archive and Data Protect.

Cloud Archive lets you leverage the public cloud for long-term data retention, while Data Protect helps reduce long-term retention and archival costs with its pay-as-you-go cost model.

Rubrik: Data archival

Rubrik’s solution supports organisations’ data management in hybrid cloud environments. Organisations can choose their storage and architecture, covering:

  • Archive to Google Cloud Storage
  • VMware vSphere, Nutanix AHV, and Microsoft Hyper-V
  • Microsoft SQL Server
  • Oracle, Linux, Windows, UNIX, and NAS
  • Remote and branch offices

The client uses real-time predictive global search to access archived data: you see files directly from the archive as you type in the search box, which drastically reduces access time. It is also possible to instantiate VMs in the cloud itself with Rubrik’s solution.

Data deduplication is applied to the archived data, which further reduces transfer and storage costs. With this solution, all data is encrypted before being sent from physical devices to the target storage infrastructure. The user gets a simple, responsive HTML5 interface to set up policy-driven automation and targets for archival.

Zerto: Zerto virtual replication

Zerto offers a different take on data archival compared with Rubrik and Cohesity: it archives data through an ad-hoc feature of its main software, Zerto Virtual Replication. With this feature, it is possible to take daily, weekly and monthly backups of the data to be archived. The archival target can be tape, a network share in a third location, a dedicated disk-based backup device, or even cheap S3 or Blob storage in AWS or Azure.

The latest release supports continuous data protection (CDP), replication, automated orchestration and long-term retention with offsite backup. A journal file-level recovery mechanism is used to restore backed-up data quickly.

Conclusion

Apart from Rubrik, Cohesity and Zerto, more vendors offer different types of solutions for different workloads and diverse requirements. But these three can serve most new-age workloads, like data generated by IoT devices, machine learning analysis data and unstructured big data lakes.

As organisations evaluate new technologies to deal with data, a proper archival or long-term retention solution will help them get the most out of past data and let them focus on newly generated data. From this evaluation, it is clear that most vendors are focused on utilising public or hybrid cloud environments to archive long-term data. Use of the hybrid cloud means a private cloud can store the data bound by compliance and security norms critical to the organisation. But it is completely up to each organisation which solution to go with, as there are good options available.


Editor's note: Download the eBook NVMe: Optimizing Storage for Low Latency and High Throughput.


Exploring the evolution of Kubernetes to manage diverse IT workloads

Kubernetes started in 2014. For the next two years, adoption of Kubernetes as a container orchestration engine was slow but steady compared to its counterparts – Amazon ECS, Apache Mesos, Docker Swarm, GCE et al. After 2016, Kubernetes started creeping into many IT systems that have a wide variety of container workloads and demand higher performance for scheduling, scaling and automation.

This is all in aid of a cloud-native approach, with a microservices architecture in application deployments. Leading tech giants (AWS, Alibaba, Microsoft Azure, Red Hat) have launched new solutions based on Kubernetes and, in 2018, have been consolidating towards a de facto Kubernetes solution that can cover every use case involving dynamic hyperscale workloads.

Two very recent acquisitions depict the huge impact Kubernetes has created in the IT ecosystem: IBM’s acquisition of Red Hat and VMware’s acquisition of Heptio. IBM did not show direct interest in container orchestration, but it had its eyes on Red Hat’s Kubernetes-based OpenShift.

At VMworld Europe 2018, VMware’s acquisition of the Kubernetes solution firm Heptio generated a lot of heat. The acquisition is said to have a significant impact on the data centre ecosystem, where Red Hat (IBM) and Google are among the top players. Heptio’s solution will be integrated with VMware’s Pivotal Container Service (PKS) to make it a de facto Kubernetes standard covering maximum data centre use cases across private, multi-cloud and public cloud.

Heptio was formed by ex-Google engineers Joe Beda and Craig McLuckie back in 2016. In its two years, Heptio caught the eye of industry giants with its offerings and contributions to cloud-native technologies based on Kubernetes. Heptio had also raised $33.5 million across two funding rounds.

So, the question is: why, and for which kinds of use cases, is Kubernetes being used or tested?

Enabling automation and agility in networking with Kubernetes

Leading communication service providers (CSPs) are demonstrating 5G in selected cities. 5G networks will support a wide range of use cases with the lowest possible latency and a high-bandwidth network. CSPs will need to deploy network services at the edge of the network, where data is generated by numerous digitally connected devices.

To deploy services at the edge of the network and keep control of each point of the network, CSPs will need automated orchestration of every part. What’s more, as CSPs adopt software containers to deploy virtual network functions, they will leverage the cloud-native approach by employing microservices-based network functions and real-time operations via CI/CD methodologies. In this scenario, Kubernetes has emerged as an enterprise-level container management and orchestration tool, and it brings a number of advantages to this environment.

Jason Hunt wrote in a LinkedIn post that “Kubernetes allows service providers to provision, manage, and scale applications across a cluster. It also allows them to abstract away the infrastructure resources needed by applications. In ONAP’s experience, running on top of Kubernetes, rather than virtual machines, can reduce installation time from hours or weeks to just 20 minutes.” He added that CSPs are utilising a mix of public and private clouds for running network workloads; Kubernetes works well across all types of clouds and handles workloads of any scale.
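
As a hedged illustration of the ‘manage and scale’ point, the snippet below uses the Kubernetes Python client to scale out a network-function Deployment; the name "cnf-firewall" and the target replica count are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()

# Patch only the scale subresource: grow the Deployment to five replicas.
client.AppsV1Api().patch_namespaced_deployment_scale(
    name="cnf-firewall",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```

In practice an orchestrator such as ONAP, or a HorizontalPodAutoscaler, would issue this kind of call automatically in response to traffic.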

Another example of Kubernetes utilisation in telecom is the recent release of Nokia’s CloudBand software for NFV. The CBIS 19 release adds support for edge network deployments along with containerised workloads, integrating Kubernetes for container management alongside OpenStack, which continues to handle virtual machines. The use of containers within NFV architecture has been discussed for the last few years, but this release is one of the first examples of employing containers and container management to handle network functions in NFV infrastructure.

Kubernetes and AI/machine learning

KubeFlow – managing machine learning stacks: moving beyond managing containers, Kubernetes has evolved to the extent that it is used to manage complex workloads for machine learning applications.

Machine learning applications or systems contain several software components, tools and libraries from different vendors, all integrated to process information and generate output. Connecting and deploying all the components and tools requires tedious manual effort and takes a fair amount of time. In most cases, the hardest part is that machine learning models are immobile, requiring re-architecture when transferred from the development environment to a highly scalable cloud cluster.

To address this concern, the Kubernetes community introduced the open framework KubeFlow, which pre-integrates the common machine learning stacks into Kubernetes so that any project can be instantiated easily, quickly and extensively.

KubeFlow Architecture for ML Stacks

Image source: https://www.kubeflow.org/blog/why_kubeflow/ 
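
To give a feel for the developer experience, here is a hedged sketch using the Kubeflow Pipelines SDK (kfp, v1-era API): a one-step pipeline is defined and submitted for a run. It assumes a reachable Kubeflow Pipelines endpoint; the pipeline name, container image and command are illustrative placeholders.

```python
import kfp
from kfp import dsl

@dsl.pipeline(name="train-demo", description="Toy single-step training pipeline")
def train_pipeline():
    # Each step is just a container; Kubeflow schedules it on Kubernetes.
    dsl.ContainerOp(
        name="train",
        image="tensorflow/tensorflow:1.13.1",
        command=["python", "-c", "print('training step')"],
    )

# Assumes the Kubeflow Pipelines API is reachable from this environment.
kfp.Client().create_run_from_pipeline_func(train_pipeline, arguments={})
```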

Kubernetes for ecommerce retailer JD.com: besides the launch of KubeFlow, one interesting application of Kubernetes for AI is at JD.com, the Chinese ecommerce retailer, which manages the world’s largest Kubernetes clusters, with more than 20,000 bare metal servers in several clusters across data centres in multiple regions.

In an interview with CNCF, Liu Haifeng, chief architect at JD.com, was asked how Kubernetes is helping JD with AI and big data analytics. He explained: “JDOS, our customised and optimised Kubernetes supports a wide range of workloads and applications, including big data and AI. JDOS provides a unified platform for managing both physical servers and virtual machines, including containerised GPUs and delivering big data and deep learning frameworks such as Flink, Spark, Storm, and TensorFlow as services. By co-scheduling online services and big data and AI computing tasks, we significantly improve resource utilisation and reduce IT costs.”

JD.com was declared the winner of CNCF’s top end-user award for its contribution to the cloud-native ecosystem.

Managing hardware resources using Kubernetes

Kubernetes can also be used to manage hardware resources like graphics processing units (GPUs) for public cloud deployments. In one of the presentations at KubeCon China this year, Hui Luo, a software engineer at VMware, demonstrated how Kubernetes can be used to handle machine learning workloads in private clouds as well.
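
A hedged sketch of what GPU management looks like from the API side: the pod below requests one GPU through the extended resource name exposed by NVIDIA’s device plugin ("nvidia.com/gpu"); the pod name and image are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-trainer"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="trainer",
            image="example/ml-trainer:latest",
            # The scheduler places this pod only on a node with a free GPU.
            resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```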

Summary

As enterprises embrace open-source technologies in a considerable manner to reduce costs, Kubernetes has evolved from just a container orchestration framework into a platform handling ever more complex workloads of different types.

As most of the software industry leans towards cloud-native – dividing monolithic applications into small services which scale, are managed independently and communicate among themselves through APIs – Kubernetes has become the de facto standard for taking complete care of all the services residing in containers. The same Kubernetes mechanisms have been adopted to handle NFV, machine learning, and hardware resource workloads.

Download our eBook to learn more about Kubernetes technology and industry/market insights.



Gartner’s strategic tech trends show the need for an empowered edge and network for a smarter world

Opinion Earlier this week, Gartner released its top 10 strategic technology trends for 2019, and looking at the list, I was not surprised to see edge and blockchain technologies continuing into 2019 from last year.

In 2018, Gartner described the trend as ‘cloud to the edge’, predicting a shift from centralised cloud platforms to edge-based infrastructure to address challenges related to bandwidth constraints, connectivity and latency. This year, Gartner is emphasising the empowerment of edge-focused infrastructure due to the ongoing substantial growth in digital devices – especially those which require an analysis response from the data centre in no time.

Technologies that build intelligence into operations or devices are something we have been hearing about for a long time – intelligent or autonomous things, quantum computing, AI-driven development. In fact, we have seen such imagined capabilities in movies and TV ads as well.

But actually enabling accurate, reliable delivery of services to end users needs a higher-capacity network, more computational power, and lower latency. Intelligent technologies will be useful only when the response is real time – otherwise they won’t be much use to people engaging with AI-based autonomous robots that think first and respond after ‘some’ time. That would be disastrous.

Take the example of an autonomous car, whose makers are evangelising its usage across the world to reduce mishaps and allow luxurious long rides. What if the network bringing the ‘intelligent’ instruction to the car fails to respond in time for the car to take the corresponding action?

All such digital innovation will be impossible without two things: a communication network with lightning speed, giving an almost real-time experience, and, most importantly, an agile response from the computing resources that process the ‘intelligent’ data.

This is impossible with the cloud alone, but it can be enabled using edge computing. How? The cloud is a centralised data centre equipped with computing infrastructure of high enough capacity to support multiple types of digital communications. Many current cloud-based applications are not affected by bandwidth and latency constraints – a SaaS application that merely stores data, for example, may not need a rapid response. But enabling cognitive technologies for autonomous ‘things’ is time and latency sensitive. Such possibilities arise by bringing edge computing into the current communication network.

Looking at a future where all digital devices serve end users, there was a need for an edge topology that can give cloud-like performance closer to devices and reduce the burden on network usage. The upcoming 5G, with its multi-access edge computing feature, will address exactly this.

Take any of the leading tech vendors – they are all engaged with the technology trends Gartner has listed, actively optimising or innovating existing solutions and offering revolutionary new products to support digital growth. But I believe all this innovation will be consumed by end users to its fullest only when there is continuous upward innovation in IT infrastructure and communication networks. Smart devices can only be smart if they can communicate in real time.

Editor’s note: You can find out more about the basics of edge computing architectures, types, use cases, as well as the market ecosystem in 2018, with this eBook which can be downloaded here.


OpenStack and NVMe-over-Fabrics: Getting higher performance for network-connected SSDs

What is NVMe over Fabrics (NVMe-oF)?

The evolution of the NVMe interface protocol has been a boon to SSD-based storage arrays, further empowering SSDs (solid state drives) to deliver high performance and reduced latency for accessing data. The NVMe over Fabrics network protocol extends these benefits, retaining NVMe’s features over a network fabric when a storage array is accessed remotely. Let’s understand how.

When the NVMe protocol was leveraged with storage arrays built from high-speed NAND and SSDs, latency appeared as soon as those NVMe-based arrays were accessed through shared storage or storage area networks (SANs). In a SAN, data must be transferred between the host (initiator) and the NVMe-enabled storage array (target) over Ethernet, RDMA technologies (iWARP/RoCE), or Fibre Channel. The latency was caused by the translation of SCSI commands into NVMe commands during data transport.

To address this bottleneck, NVM Express introduced the NVMe over Fabrics protocol to replace iSCSI as the storage networking protocol. With this, the benefits of NVMe are carried onto network fabrics in a SAN-like architecture, giving a complete end-to-end NVMe-based storage model that is highly efficient for modern workloads. NVMe-oF supports all the available network fabric technologies, such as RDMA (RoCE, iWARP), Fibre Channel (FC-NVMe), InfiniBand, future fabrics, and the Intel Omni-Path architecture.
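
From an initiator host, the flow looks roughly like the hedged sketch below, which shells out to the standard nvme-cli tool to discover and connect to an NVMe-oF target over RDMA. The address, port and subsystem NQN are hypothetical placeholders.

```python
import subprocess

target = {
    "addr": "192.168.1.10",               # target's fabric IP (placeholder)
    "port": "4420",                       # conventional NVMe-oF service port
    "nqn": "nqn.2016-06.io.spdk:cnode1",  # subsystem NQN (placeholder)
}

# Ask the target which subsystems it exposes.
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", target["addr"], "-s", target["port"]], check=True)

# Connect; the remote namespace then appears as a local /dev/nvmeXnY device.
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", target["nqn"],
                "-a", target["addr"], "-s", target["port"]], check=True)
```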

NVMe over Fabrics and OpenStack

As we know, OpenStack consists of a library of open-source projects for the centralised management of data centre operations, and it provides an ideal environment in which to implement an efficient NVMe-based storage model for high throughput. OpenStack Nova and Cinder are the components used in the proposed NVMe-oF-with-OpenStack solution, which consists of creating a Cinder NVMe-oF target driver and integrating it with OpenStack Nova.

OpenStack Cinder is a block storage service project for OpenStack deployments mainly used to create services which provide persistent storage to cloud-based applications. It provides APIs to users to access storage resources without disclosing storage location information.

OpenStack Nova is the component of OpenStack that provides on-demand access to compute resources like virtual machines, containers, and bare metal servers. In the NVMe-oF with OpenStack solution, Nova attaches NVMe volumes to VMs.

Support for NVMe-oF in OpenStack is available from the ‘Rocky’ release onwards. The proposed solution requires RDMA NICs and supports a kernel initiator and kernel target.
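
End to end, the workflow looks like this hedged openstacksdk sketch: create a Cinder volume on a backend wired to the NVMe-oF target driver, then attach it to a Nova server. The cloud entry, the volume type "nvmeof" and the server name are placeholders for whatever is configured in your cinder.conf and deployment.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials from clouds.yaml

# The volume type would map to the NVMe-oF backend configured in Cinder.
volume = conn.create_volume(size=10, name="nvme-vol", volume_type="nvmeof")

# Nova attaches the volume; on the hypervisor this triggers the
# initiator-side NVMe-oF connection to the target.
server = conn.get_server("app-server-0")
conn.attach_volume(server, volume)
```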

NVMe-oF targets supported

Based on the proposed solution above, there are two choices for implementing NVMe-oF with OpenStack: first, a kernel NVMe-oF target driver, supported as of the OpenStack ‘Rocky’ release; and second, Intel’s SPDK (Storage Performance Development Kit)-based NVMe-oF implementation, containing the SPDK NVMe-oF target driver and the SPDK LVOL (logical volume manager) backend, which is anticipated in the OpenStack ‘Stein’ release.

Kernel NVMe-oF target: this implementation supports a kernel target and a kernel initiator. But the kernel-based NVMe-oF target is limited in the number of IOPS it can deliver per CPU core, and it suffers latency issues due to CPU interrupts, the many system calls needed to read data, and the time taken to transfer data between threads.

Fig – Kernel Based NVMe-oF + OpenStack Implementation

SPDK NVMe-oF target: why SPDK? The SPDK architecture achieves high performance for NVMe-oF with OpenStack by moving all the necessary drivers into userspace (out of the kernel), operating in polled mode instead of interrupt mode, and processing locklessly (avoiding CPU cycles spent synchronising data between threads).

Let’s understand what it means.

In the SPDK implementation, the storage drivers utilised for operations like storing, updating and deleting data are isolated from the kernel space where general-purpose computing processes run. This isolation saves the time otherwise spent on kernel processing and lets CPU cycles go towards executing storage drivers in userspace, avoiding interruption by, and lock contention with, the general-purpose drivers in kernel space.

In a typical I/O model, an application requests a read or write and waits for the I/O cycle to complete. In polled mode, once the application places a request for data access, it carries on with other execution and comes back after a defined interval to check whether the earlier request has completed. This reduces latency and processing overheads, and further improves the efficiency of I/O operations.
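
The difference is easy to caricature in a few lines of Python. This is a conceptual sketch only: the ‘device’ is faked with a thread pool, where real SPDK busy-polls an NVMe completion queue in userspace.

```python
import concurrent.futures
import time

pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def fake_io(lba):
    time.sleep(0.0001)            # pretend the I/O takes ~100 microseconds
    return f"data@{lba}"

def blocking_read(lba):
    req = pool.submit(fake_io, lba)
    return req.result()           # thread parks until woken: the interrupt model

def polled_read(lba):
    req = pool.submit(fake_io, lba)
    while not req.done():         # busy-poll the completion; no context switch
        pass                      # real code would service other queues here
    return req.result()

print(blocking_read(0), polled_read(8))
```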

To summarise: SPDK is specially designed to extract performance from non-volatile media. It contains tools and libraries for scalable, efficient storage applications that run in userspace and use polled-mode components to achieve millions of I/Os per core. The SPDK architecture is a set of open-source, BSD-licensed building blocks optimised for extracting high throughput from the latest generation of CPUs and SSDs.

Fig – SPDK Architecture

Why SPDK NVMe-oF target?

As per the performance benchmarking report of NVMe-oF using SPDK, it has been seen that:

  • Throughput scales up, and latency decreases almost linearly, with the scaling of SPDK NVMe-oF target and initiator I/O cores
  • The SPDK NVMe-oF target performed up to 7.3x better in IOPS per core than the Linux kernel NVMe-oF target when running a 4K 100% random write workload with an increasing number of connections (16) per NVMe-oF subsystem
  • The SPDK NVMe-oF initiator is 3x faster than the kernel NVMe-oF initiator with a null bdev-based backend
  • SPDK reduces NVMe-oF software overheads by up to 10x
  • SPDK saturates 8 NVMe SSDs with a single CPU core

Fig – SPDK vs. Kernel NVMe-oF I/O Efficiency

SPDK NVMe-oF implementation

This is the first implementation of NVMe-oF integrated with OpenStack (Cinder and Nova), leveraging the SPDK NVMe-oF target driver and the SPDK LVOL (logical volume manager)-based software-defined storage backend. It provides a high-performance alternative to the kernel LVM and the kernel NVMe-oF target.

Fig – SPDK Based NVMe-oF + OpenStack Implementation

The implementation was demonstrated at OpenStack Summit 2018 Vancouver. You can watch the demonstration video here.

Compared with the kernel-based implementation, SPDK reduces NVMe-oF software overheads and yields higher throughput and performance. Let’s see how this lands in the upcoming OpenStack ‘Stein’ release.

This article is based on a session at OpenStack Summit 2018 Vancouver – OpenStack and NVMe-over-Fabrics – Network connected SSDs with local performance. The session was presented by Tushar Gohad (Intel), Moshe Levi (Mellanox) and Ivan Kolodyazhny (Mirantis).


Evaluating container-based VNF deployment for cloud-native NFV

The requirements of cloud-native VNFs (virtual network functions) for telecom are different from those of IT applications – and VNF deployment using microservices and containers can help realise cloud-native NFV implementation success.

The real test for NFV is how it will be integrated, architected and matured to strengthen 5G implementations for telecom service providers. Given the current pitfalls in VNF deployment and orchestration, making VNFs cloud-native is the only solution in front of service providers today.

Yet telecom applications’ requirements of VNFs differ from those of any cloud-native IT application. Telecom VNF applications are built for data plane/packet processing functions, along with control, signalling and media processing. An error in, or harm to, a VNF may bring down the network and impact numbers of subscribers. Because of such critical processing requirements, telecom VNFs must be resilient and offer ultra-high performance, low latency, scalability and capacity. Telecom VNFs need to be real-time, latency-sensitive applications in order to fulfil network data, control and signalling processing requirements.

Decomposition of cloud-native VNFs into microservices

VNFs are network functions extracted from dedicated network appliances and hosted on virtual machines as software applications. Any update to a VNF entails time-consuming manual effort which hampers overall NFV infrastructure operations. To get ready for cloud native, bundled VNF software needs to become microservices-based: monolithic VNFs are decomposed into smaller sets of collaborating services which have diverse but related functionalities, maintain their own state, have different infrastructure resource consumption requirements, and communicate, scale automatically and are orchestrated through well-defined APIs.
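As a deliberately minimal illustration (not a real VNF), the sketch below decomposes a hypothetical monolithic VNF into two sub-services, signalling and DPI. Each runs as an independent service with its own state and a small, well-defined HTTP API that an orchestrator or monitoring system could consume; all names and ports are assumptions.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import threading


class MicroserviceHandler(BaseHTTPRequestHandler):
    """Tiny API shared by every decomposed sub-service: /healthz lets an
    orchestrator probe liveness, /state exposes the service's own state
    for monitoring and scaling decisions."""
    state = {}

    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
        elif self.path == "/state":
            body = json.dumps(self.state).encode()
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)


# Hypothetical sub-services of a decomposed VNF: each owns its state and
# port, so each can be scaled, upgraded and scheduled independently.
class SignallingHandler(MicroserviceHandler):
    state = {"service": "signalling", "sessions": 0}


class DpiHandler(MicroserviceHandler):
    state = {"service": "dpi", "flows_inspected": 0}


for port, handler in [(8081, SignallingHandler), (8082, DpiHandler)]:
    server = HTTPServer(("0.0.0.0", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

input("Sub-services running on :8081 and :8082; press Enter to stop.\n")
```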

There are various benefits of microservice-based VNF decomposition:

  • Decomposed VNF sub-services are deployed on the hardware best suited to run and manage them efficiently, and each can scale as needed
  • Any error or glitch in a microservice causes failure of only that specific function, which eases troubleshooting and enables high availability
  • Decomposition allows services to be reused across the VNF lifecycle in an NFV environment, and lets some services roll out quickly
  • The whole VNF becomes lightweight, as functions like load balancing and deep packet inspection (DPI) are stripped out of the core application

As VNFs are divided into microservices, service providers may face operational complexity as the number of services grows. To manage all microservices well in a production environment, a high level of automation needs to be implemented in the NFV MANO layer and the cloud orchestrator.

Evaluating VNF deployment methods using virtual machines and containers

Containers are a form of virtualisation at the operating system level. A container encapsulates an application’s dependencies, required libraries and configuration in a package that is isolated from other containers on the same operating system. Containers allow applications to run independently and make them easily portable.

As a move towards cloud native, VNF microservices can be deployed in containers, which enables continuous delivery/deployment of large, complex applications. But this approach is still at an early stage for cloud-native NFV.

Concerns with using containers for VNF

There are certain concerns about using container technology in NFV:

  • The ecosystem is still evolving and is immature compared with virtual machines
  • Containers carry security risks – all containers on a host share a single kernel, so any breach of that kernel compromises every container depending on it
  • Isolating a fault is not easy with containers, and a fault can propagate to other containers

Service providers who want to use containers in an NFV environment may face challenges with multi-tenancy support, multi-network-plane support, forwarding throughput, and limited orchestration capabilities. Containers can still be used in mobile edge computing (MEC) environments, which will co-exist with NFV in 5G. MEC will take the user plane function to the edge of the network, closer to user applications, to provide very low latency and agility and to enable real-time use cases like IoT, augmented reality, or virtual reality.

Containers can also be used alongside virtual machines in an NFV environment. VNFs can be deployed virtual-machine only, container only, hybrid – where containers run inside virtual machines to gain their security and isolation features – or heterogeneous, where some VNFs run in VMs and some in containers.

Service providers can evaluate these deployment methods against their requirements at the NFV infrastructure level.

Benefits of containers for cloud-native NFV path

Hosting microservices in containers allows active scheduling and management to optimise resource utilisation. Container orchestration engines provision host resources to containers, assign containers to hosts, and instantiate and reschedule containers. With containers, service providers can implement DevOps methodologies successfully, easing automation tasks like scaling, upgrading and healing, and making services more resilient.

A major benefit of containerised microservices is the ability to orchestrate the containers so that a separate lifecycle management process can be applied to each service. This allows each service to be versioned and upgraded individually, as opposed to upgrading the entire VNF in a virtual machine. When a whole application or VNF is upgraded, the container scheduler determines which individual services have changed and deploys only those services.
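For example, with Kubernetes as the orchestrator, upgrading a single microservice of a containerised VNF reduces to patching the image of that service’s Deployment. The sketch below uses the official Kubernetes Python client; the deployment name, namespace and image are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Upgrade only the hypothetical 'dpi' sub-service of the VNF by changing
# its container image; every other sub-service is left untouched.
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "dpi", "image": "registry.example.com/vnf/dpi:2.1.0"}
]}}}}

# Kubernetes performs a rolling update: only pods of the 'dpi' service are
# replaced, and the change can be rolled back independently of the VNF.
apps.patch_namespaced_deployment(name="vnf-dpi", namespace="nfv", body=patch)
```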

Containers bring cloud-native capability to NFV infrastructure, with added performance, portability and agility benefits for telecom-specific application deployment and orchestration. To build fully featured cloud-native 5G networks, it is imperative that service providers deploy containers in preference to virtual machines. But service providers will seek further research and development from open-source communities like ONAP and OPNFV.

How containers impact NFV at application, infrastructure, and process levels

Applications (VNFs):
– Containers package each microservice along with its dependencies, libraries and configuration, and keep it isolated
– Containers can be built quickly from existing microservice images
– They enable faster time to market through highly automated deployment
– A programmable API enables a complete DevOps approach to VNF development, deployment and lifecycle management

Infrastructure (VNF orchestration):
– Containers are portable packages which can move from one environment to another
– They can scale in/out as required by the NFV infrastructure
– They enable higher density
– They enable multi-tenancy to serve multiple requests
– They ease upgrades and rollbacks, as containers allow versioning

Process (VNF deployment):
– Containers can be immutable and can be pushed to any platform
– They allow a smooth transition from dev to test to ops
– They enable highly efficient automation
– With containers, service providers can drive continuous integration/deployment for VNF onboarding and lifecycle management

Containers play a vital role on the path to a complete 5G network built on highly automated, cloud-native NFV. Successful 5G deployment will depend on how service providers build a strategy around the use of containers in NFV infrastructure. Aside from the security risks involved in using containers, there may be use-case challenges for containers in telecom applications that demand much higher performance. Containerisation can be implemented in mobile edge computing to deliver its benefits, but service providers will expect full integration to enable cloud-native NFV.


Why cloud-native virtual network functions are important for NFV

Virtual network functions (VNFs) are software implementations of network function equipment, packaged in virtual machines on top of commercial off-the-shelf hardware in NFV infrastructure. VNFs are a core part of NFV: the premise of NFV was to virtualise network functions in software to reduce cost and gain full control over network operations, with added agility and flexibility. The majority of NFV operations focus on how VNFs can be served in NFV infrastructure to introduce new services for consumers, and we can expect major future developments to centre on VNFs.

VNFs are distinct from NFV in that VNFs are supplied by external vendors or open-source communities to the service providers transitioning their infrastructure to NFV. Several VNFs may combine to form a single service in NFV. This adds complexity to NFV’s overall goal of agility, as VNFs from different vendors, each with a different operational model, need to be deployed in the same NFV infrastructure.

VNFs developed by different vendors follow different methodologies for deployment in existing NFV environments. Onboarding VNFs remains a challenge due to the lack of a standard process covering their management from development through deployment and monitoring.

At a basic level, traditional VNFs come with limitations such as:

  • VNFs consume huge amounts of hardware in order to be highly available
  • VNFs are developed, configured and tested to run on specific NFV hardware infrastructure
  • They need manual installation, configuration and deployment on NFVi
  • No API is provided to enable automated scaling or reconfiguration to serve sudden spikes in demand
  • Without multi-tenancy support, VNFs cannot easily be shared in the infrastructure for reuse

Building cloud-native VNFs is the solution for vendors, and it is a step change in software development that brings all cloud-native characteristics to VNFs. We can expect cloud-native VNFs to be containerised, microservices-based, dynamically managed and specifically designed for orchestration. The major differentiators of cloud-native VNFs from traditional VNFs are self-management capability and scalability.

Building cloud-native VNFs overcomes the limitations of traditional VNFs and brings the following benefits. Cloud-native VNFs expose APIs which enable:

  • Automated installation and configuration
  • Automated scaling on dynamic demand from the network (see the sketch after this list)
  • Self-healing and fault tolerance
  • Automated monitoring and analysis of VNFs for errors, capacity management and performance
  • Automated upgrading and updating of VNFs to apply new releases and patches
  • Standard, simplified management, enabling lower power consumption and a reduction in unnecessarily allocated resources
  • Reusability and sharing of processes within VNFs; VNFs can be easily shared within an NFV environment
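As a concrete illustration of the automated-scaling point above, the hedged sketch below uses the Kubernetes Python client to attach a HorizontalPodAutoscaler to one hypothetical cloud-native VNF microservice, so the platform scales it with CPU demand; all names and thresholds are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Scale the hypothetical 'vnf-dpi' Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilisation across its pods.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="vnf-dpi-hpa", namespace="nfv"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="vnf-dpi"),
        min_replicas=2,   # keep capacity for baseline traffic
        max_replicas=10,  # cap growth during demand spikes
        target_cpu_utilization_percentage=70))

autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="nfv", body=hpa)
```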

NFV is a key technology in the development of 5G networks, but it is going through a maturation stage in which NFV solution providers are resolving challenges like automated deployment and VNF onboarding. Developing a VNF and deploying it into NFV infrastructure sounds simple, but it raises various questions when it comes to scaling, configuring or updating VNFs. Any task related to VNFs needs manual intervention, which means more time spent launching or updating services.

Delivering on NFV’s promise of agility in 5G requires exceptional automation at every level of NFV deployment. Building cloud-native VNFs seems to be the solution, but it is still at a very early stage.
