All posts by SagarNangare

Realising the impact of unsecured container deployments: A guide

A recently published StackRox report on the state of container and Kubernetes security has revealed statistics on security concerns in data centres running containerised workloads. Of the 540 IT and security professionals surveyed, 94% had experienced a security incident in the last 12 months, with misconfigurations and human error emerging as the primary causes.

As a result, enterprises that have already deployed containers, or are in the process of deploying them, are exposed by weak security around the applications those containers host. This weighs on the broader adoption of containers within the data centre modernisation strategies of many enterprises.

Impact on deployments

A recent CNCF survey found that security is already one of the top roadblocks in using/deploying containers.

Further, the StackRox survey shows that 44% of respondents have slowed down application deployment into production because of container or Kubernetes security concerns. This data shows that container adoption and deployments have already been affected, and that fresh security issues will stall progress further.

Investment in security strategies

Security incidents and vulnerabilities found in Kubernetes have made enterprises rethink their container deployment strategy. Earlier, while adopting and implementing containers, enterprises placed less emphasis on security, which kept CAPEX lower. Now, with the insights from the StackRox and CNCF surveys, the importance of integrating security has been recognised.

Given the wide range of use cases in which containers drive digital innovation, enterprises will take concrete steps to harden containerised workloads. One will be to adopt container or Kubernetes security platforms, or to use managed container solutions and services, which help automate the management of containers and Kubernetes clusters and keep them secure and up to date.

Security skills

Kubernetes and containers are open source and comparatively new technologies that are still evolving. But the broad acceptance of containers has exposed security lapses caused by a lack of the knowledge and skills needed to follow security practices.

The main highlight of the StackRox report is that most security lapses happen due to misconfiguration. To tackle this, enterprises will look to hire highly skilled engineers, train their existing staff and mandate container security best practices. Kubernetes is the leading orchestration platform and is widely expected to be the way containers are managed; engineers with Kubernetes expertise in secure cluster deployment and management will therefore be at the top of hiring lists.

DevSecOps

Puppet’s 2019 State of DevOps Report highlighted the importance of integrating security into the software delivery lifecycle. The report suggests that organisations adopting DevOps should prioritise security throughout the delivery cycle of software services. It also finds that container environments suffer less when security practices are followed during application development and deployment, and when tools are integrated to handle testing and security incidents.

As more automation is applied to the configuration and management of containers, there will be fewer chances for misconfiguration and human error. Enterprises will look to bring DevOps methodologies, security teams and developers together to ensure containers do not suffer security breaches.

Zero Trust in container networks

Authorising access for different levels of users is key to securing any data centre environment. For containers, orchestration platforms like Kubernetes offer mechanisms such as Role-Based Access Control (RBAC), PodSecurityPolicy and authentication to strengthen cluster and pod access. Going further, Zero Trust network overlays will begin to be implemented within Kubernetes clusters that host large numbers of microservices.
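
To make the RBAC idea concrete, here is a minimal sketch using the official Kubernetes Python client to create a read-only role and bind it to a single user. The namespace, role and user names are hypothetical, and since the model class for binding subjects varies between client versions, the subject is passed as a plain dict.

```python
# Minimal RBAC sketch using the official Kubernetes Python client.
# Namespace ("dev"), role ("pod-reader") and user ("jane") are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
rbac = client.RbacAuthorizationV1Api()

# A role that only allows reading pods in the "dev" namespace.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="dev"),
    rules=[client.V1PolicyRule(api_groups=[""], resources=["pods"],
                               verbs=["get", "list", "watch"])],
)
rbac.create_namespaced_role(namespace="dev", body=role)

# Bind the role to a single user instead of granting cluster-wide rights.
binding = client.V1RoleBinding(
    metadata=client.V1ObjectMeta(name="pod-reader-binding", namespace="dev"),
    # Plain dict for the subject, since its model class name differs across client versions.
    subjects=[{"kind": "User", "name": "jane", "apiGroup": "rbac.authorization.k8s.io"}],
    role_ref=client.V1RoleRef(kind="Role", name="pod-reader",
                              api_group="rbac.authorization.k8s.io"),
)
rbac.create_namespaced_role_binding(namespace="dev", body=binding)
```

The point of the pattern is least privilege: the user can list and watch pods in one namespace, and nothing else.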

The use of service mesh technologies like Istio and Linkerd is one route to a Zero Trust network overlay. Service mesh usage will increase as teams look for better visibility, control over networking and encryption of data between microservices.

Conclusion

The adoption of containers and Kubernetes has brought agility to digital transformation efforts. Security concerns are a proven roadblock; however, container and Kubernetes security can be strengthened with existing mechanisms, best practices and managed solutions.

Editor’s note: Find out more about container security and Kubernetes security best practice here.


A guide to computational storage: Boosting performance for SSD storage arrays

With the proliferation of IoT devices and fast 5G wireless on the horizon, enterprises are moving towards edge-based infrastructure. Computational storage is a new storage technology powering the storage and compute layers of that edge infrastructure, making it possible for enterprises and telecom service providers to support massive amounts of data processing, mainly at edge nodes.

A few companies have started offering computational storage solutions to businesses and organisations. Let’s understand the computational storage concept in-depth and how it is backed by communities and tech vendors.

The need for computational storage

Currently, most developments in the technology domain focus on delivering an intelligent, real-time digital user experience. This calls for the data centre or infrastructure stack to operate at the highest performance level, equipped with the latest hardware resources and computational processing techniques. Artificial intelligence, machine learning and analytics techniques are being moved into the data centre to make digital devices intelligent.

As a result, we have seen many new data centre technologies evolve to boost performance: legacy HDDs replaced by flash-based SSD arrays; NVMe and FPGAs used to speed up data access in storage devices; GPUs used in hyperscale data centres; and so on. Overall, we are witnessing the emergence of high-performance computing (HPC) systems that support the processing of huge amounts of data.

This leads to two growing demands as digital transformation moves forward. First, AI/ML and analytics applications need faster access to data than traditional storage systems currently provide.

Second, data processing demands will keep rising with the growth of IoT and edge computing, and the enormous volumes of data carried over 5G networks will further accelerate IoT and edge use cases.

Although most data centres are now equipped with all-flash storage arrays, organisations still face bottlenecks in supporting the ever-growing processing demands of AI/ML and big data applications.

This is where computational storage comes in.

What is computational storage and why do we need it?

Computational storage is a technique of moving at least some processing closer to, or into, the storage devices themselves. It is also referred to as ‘in-situ’ or ‘in-storage’ processing.

Generally, data has to move between the CPU and the storage layer, which delays response times for incoming queries. Applying computational storage is critical to meeting the real-time processing requirements of AI/ML and analytics applications: such high-performance computing workloads can be hosted within the storage itself, reducing resource consumption and cost while achieving higher throughput for latency-sensitive applications. Computational storage also reduces the power consumed by data centre resources.

The core reason computational storage is advantageous to data centres is the mismatch between the aggregate bandwidth of the storage devices and the bandwidth of the host's PCIe links to the CPU. To understand how this mismatch arises in a hyperscale data centre, consider the server architecture proposed by Azure and Facebook at OpenCompute.

In this proposed server, 64 SSDs are attached to a single CPU host over a 16-lane PCIe link. Each SSD has 16 flash channels for data access, giving it an internal flash bandwidth of 8.5 GB/s; across the 64 SSDs that adds up to roughly 544 GB/s of aggregate internal bandwidth. The PCIe link to the host, by contrast, is limited to 16 GB/s. This is a huge mismatch on the data path to the host CPU, and it is exactly where in-situ processing can be applied, so that the most critical high-performance applications move into the SSDs themselves.
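
A quick back-of-the-envelope check of that mismatch, using only the figures quoted above:

```python
# Back-of-the-envelope check of the bandwidth mismatch described above.
ssd_count = 64                 # SSDs attached to one CPU host
per_ssd_bandwidth_gbps = 8.5   # internal flash bandwidth per SSD (GB/s)
pcie_link_bandwidth_gbps = 16  # host PCIe x16 link bandwidth (GB/s)

aggregate_flash_gbps = ssd_count * per_ssd_bandwidth_gbps
print(f"Aggregate internal flash bandwidth: {aggregate_flash_gbps} GB/s")        # 544.0 GB/s
print(f"Host link bandwidth:                {pcie_link_bandwidth_gbps} GB/s")
print(f"Mismatch factor: ~{aggregate_flash_gbps / pcie_link_bandwidth_gbps:.0f}x")  # ~34x
```

In other words, the flash behind the link can deliver roughly 34 times more data than the link to the CPU can carry, which is the headroom in-situ processing tries to exploit.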

SNIA standards and market development

A global storage community, SNIA, has formed a Computational Storage Technical Work Group (TWG) to promote the interoperability of computational storage devices, and to define interface standards for system deployment, provisioning, management, and security. The TWG includes storage product companies such as Arm, Eideticom, Inspur, Lenovo, Micron Technology, NetApp, NGD Systems, Nyriad, Samsung Electronics, ScaleFlux, SK Hynix, Western Digital Corporation, and Xilinx. 

SNIA has defined the following three standards to implement computational storage in any type of server, whether it’s a small-medium scale enterprise data centre or a hyperscale data centre.

Computational Storage Drive (CSD): A component that provides persistent data storage and computational services

Computational Storage Processor (CSP): A component that provides computational services to a storage system without providing persistent storage

Computational Storage Array (CSA): A collection of computational storage drives, computational storage processors and/or storage devices, combined with a body of control software

Several R&D efforts are under way, with researchers developing proofs of concept to test the SNIA-defined standards on high-performance computing applications; the CSD concept, for example, has been demonstrated with project Catalina.

What's more, some of the core members of SNIA’s computational storage TWG have already started offering solutions, among them NGD Systems, Samsung, ScaleFlux, Eideticom, and Nyriad.

Conclusion

Computational storage standards will be a welcome addition given the growing demand to process data through high-performance computing applications. This type of in-storage embedded processing will arrive in different forms and approaches, and can be offered over NVMe-based architectures to boost servers stacked with SSDs.


Kubernetes as a service: What is it – and do you really need it?

With the acquisition of Heptio, we have seen how well Kubernetes has been integrated into VMware's product stack and how it has spawned new commercial and open source solutions.

VMware’s motive is to shift to container-based infrastructure powered by Kubernetes and compete in the data centre market. Kubernetes has also been well received by public cloud providers and other leading tech vendors, which offer full-stack support for managing containers on bare metal or in the cloud.

We are now in an era where every technology backend, infrastructure or platform is sold 'as a service', and Kubernetes is no exception: more than 30 solution providers offer bundled, managed and customised Kubernetes as a service (KaaS).

But investing in, deploying and then managing Kubernetes can raise risks and challenges for organisations that want a rapid transformation to modern infrastructure in order to support dynamic consumer demands. KaaS providers are responding with end-to-end solutions that spare organisations dead investment and wasted time, delivered in the most secure way. Let's understand what KaaS is and what its benefits and features are.

What is Kubernetes as a service (KaaS)?

Kubernetes as a service is expertise offered by solution or product engineering companies to help customers shift to a cloud-native, Kubernetes-based platform and manage the lifecycle of their K8s clusters.

This can include migrating workloads to Kubernetes clusters, and deploying, managing and sustaining those clusters in the customer's data centre. KaaS mainly covers day-one and day-two operations in the move to Kubernetes-native infrastructure, along with features like self-service, zero-touch provisioning, scaling and multi-cloud portability.

Why do organisations need KaaS?

On the digital transformation roadmap to a competitive edge, companies are shifting their workloads to containers and integrating container orchestration platforms to manage them. These workloads might be applications decomposed into microservices (hosted in containers), backends, API servers, storage units and so on. Accomplishing this transition takes expert resources and time; later on, the operations team has to deal with recurring issues like scaling, K8s stack upgrades, policy changes and more.

Organisations cannot afford to spend both time and money on this transformation when the pace of innovation is so rapid. This is where Kubernetes as a service comes to the rescue, offering customised solutions based on an organisation's existing requirements and the scale of its data centre, with budget constraints in mind. Some of the benefits of KaaS are:

  • Security: Deploying a Kubernetes cluster can be easy once the service delivery ecosystem and data centre configuration are understood, but a naive deployment can leave openings for external attacks. With KaaS, policy-based user management ensures infrastructure users get only the permissions their business needs require. KaaS providers also enforce security policies that, much like a network firewall, block most attacks.

    A typical Kubernetes implementation exposes the API server to the internet, inviting attackers to break into servers. With KaaS, some vendors provide VPN options to hide the Kubernetes API server
     

  • Savings on resource investment: Customised KaaS allows organisations to defer investment in resources, whether that is a team to run KaaS consoles or physical resources to handle the storage and networking components of the infrastructure. Organisations also get a better overview of their estate while KaaS is in place
     
  • Scaling of infrastructure: With KaaS in place, IT infrastructure can scale rapidly thanks to the high level of automation KaaS provides. This saves the admin team a great deal of time and bandwidth

What do you get exactly?

Effective day-two operations: This includes patching, upgrading, security hardening, scaling, and public cloud IaaS integration, all of which matter once container-based workload management enters the picture. Out of the box, Kubernetes may still not fit every organisation's data centre use cases, as most best practices are still evolving to keep up with the pace of innovation.

Additionally, applying containers to infrastructure should yield positive results rather than force a strategic retreat. KaaS offerings have predefined policies and procedures that can be customised so organisations can meet their ever-changing demands with Kubernetes.

Multi-cloud portability: Multi-cloud is a trend that emerged in 2019, in which containerised applications are portable across different public and private clouds and access to existing applications is shared across the multi-cloud environment. Here KaaS is useful because developers can focus on building applications without worrying about the underlying infrastructure; management and portability rest with the KaaS provider.

Central management: KaaS lets admins create and manage Kubernetes clusters from a single UI. Admins get better visibility of all components across their clusters and can perform continuous health monitoring using tools like Prometheus and Grafana, as well as upgrade the Kubernetes stack along with the other frameworks used in the setup.

It is also possible to monitor Kubernetes clusters remotely, check for configuration glitches, and send alerts. The KaaS admin can apply patches to clusters if a security vulnerability is found in the technology stack deployed within them, and can reach any pod or container across the different clusters through the single pane of glass KaaS provides.
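
As a small, hypothetical illustration of the health monitoring described above, the sketch below queries a Prometheus HTTP API (assumed to be scraping kube-state-metrics) for pods that are not in the Running phase. The endpoint URL and metric name are assumptions for the example, not part of any specific KaaS product.

```python
# Hypothetical health check against a Prometheus endpoint scraping kube-state-metrics.
# The URL and query are assumptions; adjust them for a real deployment.
import requests

PROMETHEUS_URL = "http://prometheus.example.local:9090"  # placeholder address
QUERY = 'sum by (namespace, pod) (kube_pod_status_phase{phase!="Running"} == 1)'

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

# Each result is a pod currently stuck outside the Running phase.
for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(f"Pod not running: {labels.get('namespace')}/{labels.get('pod')}")
```

A KaaS dashboard would typically run checks like this continuously and turn the results into alerts rather than console output.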

Conclusion

Implementing Kubernetes is not a solution in itself; done poorly, it can create security issues and drive up resource consumption. Kubernetes as a service offerings are a breather for enterprises and organisations, large and small, that have already shifted workloads to a containerised model or are planning to do so.

KaaS can speed up Kubernetes cluster deployment while raising the performance of containerised infrastructure. With KaaS, organisations get a single point of support for their infrastructure, allowing them to focus on the services layer.


How is Kubernetes leading the game in enabling NFV for cloud-native?

The impact of cloud-native readiness on applications, most of which are orchestrated using Kubernetes, could be seen in VMware’s announcements at the recent VMworld 2019. They made it clear to the IT world that the focus of IT infrastructure has shifted from virtualisation to containerisation. Going cloud-native and shifting workloads onto Kubernetes clusters is a key trend across the industry.

The CNCF (Cloud Native Computing Foundation) has pushed its projects aggressively towards enterprise IT and telecom service providers, encouraging them to build the core of their data centres using containerised, microservices-based methods.

NFV and telecom use cases have also started shifting to a cloud-native landscape over the last two years. NFV techniques have helped CXOs move to software-defined, software-centric data centres with virtual network functions (VNFs) as the core elements, orchestrated by VNF managers (VNFMs). VNF orchestration can be done with commercial VNFM platforms from Nokia, Cisco, Ericsson, Huawei and NEC, or with an open-source platform like OpenStack Tacker. Now, with the cloud-native movement in the IT domain, VNFs are becoming cloud-native network functions (CNFs).

Cloud-native development of network functions:

  • Makes applications or code portable and reusable – in other words, they can be used repeatedly, independent of the underlying infrastructure
  • Allows the application to scale up and down as demand requires
  • Can be deployed as microservices, though this is not mandatory
  • Is suitable for elastic and distributed computing environments

Cloud-native development also enables NFV to embrace DevOps and agile techniques and, more importantly, allows container orchestration engines like Kubernetes to handle workloads – which means more dynamism at the core of the NFV stack.

Earlier, CNFs were in an evaluation phase, with several vendors and service providers checking their readiness for NFV use cases. In 2018, I wrote about the benefits of deploying network functions in containers architected as microservices, and about why cloud-native VNFs are important to NFV success.

The image below shows how VNFs were managed in the past, how they are currently managed alongside CNFs, and how Kubernetes can become the de facto framework for handling network functions and applications packaged as CNFs and VNFs.

Kubernetes in the picture

We can now see how far Kubernetes has evolved in data centres of every size, handling every workload type. Kubernetes is also becoming a choice for orchestrating workloads at the edge. We have seen several collaborations on new 5G solutions that specifically focus on handling containers with Kubernetes and legacy virtual machines with OpenStack.

There are several ways Kubernetes can serve NFV use cases in handling network functions and applications, starting with hosting the entire cloud-native software stack inside its clusters.

If you are a software or solution provider, Kubernetes can help you orchestrate all workload types – VNFs, CNFs, VMs, containers, and functions – making it possible for these workloads to co-exist in one architecture. ONAP is the leading service orchestrator and NFV MANO platform for services deployed in NFV, and a Kubernetes plugin developed specifically for ONAP makes it possible to orchestrate different services and workloads served across multiple sites.

ONAP has challenges around installation and maintenance, and concerns have also been noted about its heavy consumption of resources such as storage and memory. To work alongside Kubernetes, ONAP released a lightweight version, called ONAP4K8S, that fits many NFV architectures; its requirements and package contents are published on its profile page.

There will be cases where it is not possible to move away from virtual machines entirely: some existing functions must stay in VMs and cannot be containerised. For such cases, the Kubernetes community's KubeVirt and Mirantis's Virtlet frameworks can be integrated to manage virtual machines dynamically alongside containers. Kubernetes is also becoming a choice for enabling orchestration at the edge of the network, since a Kubernetes-based control plane uses few resources, making it suitable for edge nodes of even a single server.
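
As a rough sketch of the KubeVirt pattern mentioned above – driving a VM through the same Kubernetes API used for containers – the snippet below creates a KubeVirt VirtualMachine custom resource with the official Kubernetes Python client. It assumes KubeVirt is already installed in the cluster; the namespace, VM name and disk image are hypothetical.

```python
# Sketch: create a KubeVirt VirtualMachine through the Kubernetes API.
# Assumes KubeVirt is installed in the cluster; names and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-vnf-vm", "namespace": "nfv"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {"image": "registry.example.com/legacy-vnf:latest"},
                }],
            }
        },
    },
}

# VirtualMachine objects are custom resources, so they go through the CustomObjectsApi.
custom_api.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="nfv",
    plural="virtualmachines", body=vm_manifest,
)
```

The appeal for NFV is that the non-containerisable VNF ends up scheduled, monitored and networked through the same control plane as the CNFs around it.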

Cloud-native NFV stack

The Akraino edge stack hosts a blueprint, the Integrated Cloud Native (ICN) NFV Stack, under which the work of making the NFV core cloud-native is in progress. The current progress of integrating open-source cloud-native projects into the NFV stack is shown below:

Srinivasa Rao Addepalli (senior principal engineer and chief architect at Intel) and Ravi Chunduru (associate fellow, Verizon) will present a session at the upcoming Open Networking Summit Europe 2019 on how Kubernetes can be used at the core of NFV and how Linux Foundation communities (ONAP, OPNFV, CNCF, LFE) are working to make the NFV core cloud-native.

Editor's note: Download Calsoft’s eBook – A Deep-Dive On Kubernetes For Edge – which focuses on current scenarios of adoption of Kubernetes for edge use cases, latest Kubernetes and edge case studies, deployment approaches, commercial solutions and efforts by open communities.

Image sources: https://events.linuxfoundation.org/wp-content/uploads/2018/07/ONS2019_Cloud_Native_NFV.pdf



An analysis of Kubernetes and OpenStack combinations for modern data centres

Editor's note: This article was originally published on OpenStack Superuser. CloudTech has the author's permission to re-publish here.

For many telecom service providers and enterprises that are transforming their data centres to modern infrastructure, moving to containerised workloads has become a priority. However, they often do not choose to shift completely to a containerised model.

Data centres still have to support virtual machines (VMs) to keep legacy workloads running. Therefore, a model of managing virtual machines with OpenStack and containers with Kubernetes has become popular: an OpenStack survey conducted in 2018 found that 61% of OpenStack deployments also work with Kubernetes.

Apart from this, some of the recent tie-ups and releases of platforms clearly show this trend. For example:

  • AT&T’s three-year deal with Mirantis to develop a 5G core backed by Kubernetes and OpenStack
  • Platform9’s Managed OpenStack and Kubernetes – providing the required feature sets bundled in a solution stack for service providers as well as developers, with support for Kubernetes on the VMware platform too
  • Nokia’s CloudBand release – containing Kubernetes and OpenStack for workload orchestration
  • The OpenStack Foundation’s recently announced Airship project, aiming to bring the power of OpenStack and Kubernetes into one framework

The core part of a telecom network or any virtualised core of a data centre has undergone a revolution, shifting from physical network functions to virtual network functions (VNFs). Organisations are now adopting cloud-native network functions (CNFs) to help bring CI/CD-driven agility into the picture.

This journey is shown in one of the slides from the Telecom User Group session at KubeCon Barcelona in May, delivered by Dan Kohn, executive director of the CNCF, and Cheryl Hung, director of ecosystem at the CNCF. (Image source).

According to the slide, application workloads deployed in virtual machines (VNFs) and containers (CNFs) can currently be managed with OpenStack and Kubernetes, respectively, on top of bare metal or any cloud. The optional part, ONAP, is a containerised MANO framework that is itself managed with Kubernetes.

As discussed in the birds-of-a-feather (BoF) telecom user group session delivered by Kohn, as Kubernetes progresses with the cloud-native movement, CNFs are expected to become a key workload type. Kubernetes will be used to orchestrate CNFs as well as VNFs, with VNFs isolated using KubeVirt, Virtlet or OpenStack running on top of Kubernetes.

Approaches for managing workloads using Kubernetes and OpenStack

Let’s understand the approaches of integrating Kubernetes with OpenStack for managing containers and VMs.

The first is a basic approach in which Kubernetes co-exists alongside OpenStack to manage containers. It gives good performance, but infrastructure resources cannot be managed through a single pane, which causes problems when planning and devising policies across workloads. It can also be difficult to diagnose problems affecting resource performance in operations.

The second approach is to run Kubernetes clusters in VMs managed by OpenStack. This lets OpenStack-based infrastructure leverage the benefits of Kubernetes within a centrally managed OpenStack control system, and gives containers full-featured multi-tenancy and security benefits in an OpenStack environment. However, it introduces performance lag and requires additional workflows to manage the VMs hosting Kubernetes.

The third approach is an innovative one, leaning towards a completely cloud-native environment: OpenStack is replaced with Kubernetes, which manages containers and VMs alike. Workloads can take full advantage of hardware accelerators, SmartNICs and the like. This makes it possible to offer integrated VNF solutions with container workloads for any data centre, but it demands improved networking capabilities of the kind found in OpenStack (SFC, provider networks, segmentation).

Kubernetes versus OpenStack –  is it true?

Looking at the recent VMworld 2019 US event, it was clear that Kubernetes will be everywhere. There were 66 sessions and plenty of hands-on training focused solely on Kubernetes integration across every aspect of IT infrastructure.

But is that the end of OpenStack? No. As we have already seen, the combination of both systems will be a better bet for any organisation that wants to stick with traditional workloads while gradually moving to a new container-based environment.

How Kubernetes and OpenStack are going to combine

I came across a very decent LinkedIn post by Michiel Manten. He noted that both containers and VMs have downsides, and each has its own use cases and orchestration tools. OpenStack and Kubernetes complement each other when properly combined, running some workloads in VMs for isolation within a server and others in containers. One way to achieve this is to run Kubernetes clusters inside VMs on OpenStack, which mitigates the security pitfalls of containers while leveraging the reliability and resiliency of VMs.
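
A minimal sketch of that combination, assuming the openstacksdk Python library and a pre-configured cloud entry; the cloud name, image, flavor and network are placeholders, and in practice a tool such as OpenStack Magnum, Airship or a managed offering would automate these steps before Kubernetes is bootstrapped onto the VMs.

```python
# Sketch: provision OpenStack VMs to host Kubernetes nodes using openstacksdk.
# "mycloud", the image/flavor/network names and the node count are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # reads credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("k8s-net")

nodes = []
for i in range(3):  # three VMs to act as Kubernetes nodes
    server = conn.compute.create_server(
        name=f"k8s-node-{i}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    nodes.append(conn.compute.wait_for_server(server))

for node in nodes:
    print(node.name, node.status)
# A bootstrapper such as kubeadm (or a managed KaaS offering) would then
# install Kubernetes across these VMs.
```

The design point is that the VMs provide the isolation and resiliency layer, while Kubernetes running inside them handles container orchestration.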

What are the benefits?

  • Combining the two systems immediately benefits all current workloads, so enterprises can begin their modernisation at speed and at a much lower cost than with commercial solutions
  • Kubernetes and OpenStack can be an ideal, flexible solution for any form of cloud, including new far-edge clouds where automated deployment, orchestration and latency are the main concerns
  • All workloads sit in a single network within a single IT ecosystem, which makes it easier to apply high-level network and security policies
  • OpenStack supports most enterprise storage and networking systems in use today. Running Kubernetes with, and on top of, OpenStack enables seamless integration of containers into your IT infrastructure. Whether you want to run containerised applications on bare metal or in VMs, OpenStack lets you run containers in the way that best suits your business
  • Kubernetes has self-healing capabilities for infrastructure. Integrated with OpenStack, it enables easy management and resilience to failures of core services and compute nodes
  • A recent release of OpenStack software (OpenStack Stein) has several enhancements to support Kubernetes in the stack. The team behind the OpenStack Certified Kubernetes installer made it possible to deploy all the containers in a cluster within five minutes regardless of the number of nodes, down from 10-12 minutes previously – so a very large-scale Kubernetes environment can now be launched in around five minutes

Telecom service providers taking steps towards 5G agree that a cloud-native core is imperative for a 5G network. OpenStack and Kubernetes are both mature, open-source operating and orchestration frameworks today: agility is Kubernetes' key capability for data centres, while OpenStack has several successful projects focused on workload storage and networking, with support for myriad applications.

Editor's note: Download the Calsoft eBook – A Deep-Dive On Kubernetes For Edge –  focusing on current scenarios of adoption of Kubernetes for edge use cases, latest Kubernetes and edge case studies, deployment approaches, commercial solutions and efforts by open communities.



Addressing the concerns of data management and sovereignty in multi-cloud and edge scenarios

MWC Barcelona last month focused heavily on two fast-emerging technology trends: 5G and edge computing. Together, they will significantly impact businesses by enabling massive volumes of digital data to move between cloud servers located in multiple regions around the world, as well as between IoT devices and edge nodes. This is thanks to the hyper-fast speed of 5G networks and to edge computing architectures that place micro-clouds and data centres closer to data-generating IoT devices.

To seize new opportunities and stay ahead of competitors, businesses are in the process of transforming their operational models to take advantage of 5G and edge computing.

Currently, this data generated by multiple devices is stored in the cloud; this could either be on-premises, in a public cloud like Amazon Web Services (AWS), Azure or Google, hybrid, or multi-cloud. Additionally, the edge can also be seen as a ‘mini-cloud’ where some data will surely reside to support endpoint applications. With the edge, an increasing number of data storage servers are emerging to host data. In a few years, large amounts of data will be scattered across clouds and edges located in different countries and continents.

However, growing amounts of digital data are bound by the regulations of many countries and regions, which enforce data sovereignty and protect both general and sensitive information from external access and misuse. Last year, for example, the European Union implemented GDPR; similarly, India, China and Brazil, among other nations, established their own data protection bills. The varied and growing number of regulations creates concerns for businesses in the midst of a transformation driven by 5G and edge benefits. Businesses, including technology infrastructure vendors and service providers, will want ownership of data generated by consumers, whether that occurs locally or across borders.

The key question therefore is: how can data in multi-cloud and multi-node environments be managed? Will data sovereignty be a roadblock to latency-sensitive 5G use cases?

I came across one company, Kmesh, and found it was working on compelling solutions for data mobility in edge and multi-cloud scenarios. I got in touch with Jeff Kim, CEO of Kmesh, to learn about the core of their technology.

Kmesh, founded only in 2018, today has several solution offerings to address challenges with data used in multi-cloud environments, different countries, and edges. The offerings are SaaS solutions for data sovereignty, edge data and multi-cloud, and each provides a centralised software portal where users can set up policies for the ways they wish to distribute data. These SaaS offerings allow organisations to transform centralised data into distributed data, operating over multiple clouds, countries and edges as a single global namespace.

Kmesh enables businesses to take full control of their data generated at various data centres and residing in different geographies. Businesses can also move or synchronise the data in real time. So how do their SaaS offerings work? “Using our SaaS, you install a Kmesh software agent on-premises and another Kmesh software agent on any cloud or clouds,” said Kim. “Then, using our SaaS, you control which data gets moved where. Push a button, and the data gets moved/synced in real time, with no effort by the customer.”

With this approach, Kmesh aims to deliver significant efficiency improvements in operations involving data by providing the ability to orchestrate where data generated by end devices will reside and be accessed across edge, multi-cloud and on-prem.

Kmesh also aims to offer agility and flexibility in application deployment when used with Kubernetes, the de facto technology for orchestrating where applications reside. Businesses gain the flexibility to deploy applications anywhere and can leverage data ponds, which are placed at different locations. Like Kubernetes, Kmesh follows the native design principles targeted at cloud, hybrid cloud, and multi-cloud use cases.

Leading public clouds are known to have excellent artificial intelligence (AI) and machine learning (ML) capabilities for data provided to them. Kim explained how Kmesh can focus on data mobility in the age of AI and ML. “Enterprise customers still have their data predominantly on-premises,” he said. “Cloud providers have great AI/ML applications, such as TensorFlow and Watson, but moving data to the cloud and back again remains a challenge. Kmesh makes that data movement easy and eliminates those challenges, allowing customers to focus on what they want – the AI/ML application logic.”

Kmesh offerings reduce the burden on network resources by eliminating the need to transfer huge amounts of data between cloud and digital devices. In addition, businesses can substantially lower their storage costs by eliminating the need for data replication on different clouds.

I also asked if Kmesh could benefit telecom service providers in any way. “We can help in two ways, with them as partners and as customers,” said Kim. “As customers, telcos have massive amounts of data, and we can help them move it faster and more intelligently. As partners, if they offer cloud compute solutions, then they can resell Kmesh-based services to their enterprise customers.

“One early sales entry point to enterprises is by supporting data sovereignty in countries where the big clouds – AWS, Azure, Google – have little or no presence,” added Kim. “Many countries, particularly those with high GDPs, now have regulations that mandate citizen data remains in-country. Telcos in countries like Vietnam, Indonesia, Switzerland, Germany [and] Brazil can use Kmesh to offer data localisation compliance.”

The technology world is looking for flexible IT infrastructure that will easily evolve to meet changing data and performance requirements in support of the onslaught of upcoming and lucrative use cases. Kmesh is one company which aims to address data management and data sovereignty concerns while decreasing costs associated with storage and network resources.



How are faster networks advancing the next generation of data centres?

We are witnessing a significant uplift in the data transmission speeds offered by network connectivity providers. Service providers now promise speeds from hundreds of megabits to gigabits per second – enough, for instance, to live-stream Blu-ray quality video without any buffering.

Such network speeds are set to open up many new technology possibilities. Businesses cannot afford to fall behind, as they must account for new technologies already widely adopted across a competitive market landscape. The focus of businesses has therefore become clear and narrow: to constantly satisfy customer demand with compelling digital offerings and push ahead to gain competitive advantage.

To align with this trend, businesses have already started to optimise and redesign their data centres to handle the vast amounts of data generated by a growing number of consumer devices. Transforming the data centre to address this need for an upgrade is the obvious step. The transition involves the use of:

  • Virtual network functions (VNFs), which replace dedicated server hardware with software-based packages for specific tasks – network function virtualisation (NFV)
  • Software-defined networking, to gain central control of the network through a core framework that lets admins define network operations and security policies
  • Seamless orchestration among the various network components using ONAP, ETSI OSM and Cloudify, among others
  • Workload (VM and container) and data centre management by implementing OpenStack, Azure Stack, Amazon S3, CloudStack and Kubernetes. Containers are being widely adopted thanks to faster instantiation, integration, scaling, security and ease of management

The next thing to disrupt the data centre is the adoption of edge architecture. Edge computing brings a mini data centre closer to where data is generated by smartphones, industrial instruments and other IoT devices. This adds more endpoints before data reaches the central data centre, but the advantage is that most computing is done at the edge, reducing the load on network transmission resources. On top of this, hyperconvergence can be used at edge nodes to simplify the mini data centre they require.

Mobile edge computing (MEC), a core project maintained by ETSI, has emerged as an edge computing model for telecom operators to follow. ETSI maintains and works on innovations to improve the delivery of core network functionality using MEC, and guides vendors and service providers.

Aside from edge computing, network slicing is a new architecture introduced in 5G that will have an impact on how data centres are designed for particular premises, and dedicated for specific cases such as Industrial IoT, transportation, and sports stadia.

Data centre performance for high speed networks

In this transformative age, large amounts of data will move between devices and the data centre, as well as between data centres. As new use cases require low latency and high bandwidth, a higher level of performance must be obtained from the data centre, and such performance cannot be achieved with legacy techniques or simply by adding more capacity.

With the ‘data tsunami’ of recent years, data centre technology vendors have come up with new inventions, and communities have formed, to address the performance issues raised by different workload types. One technique used heavily in new-age data centres is offloading some CPU tasks to the switches and routers that interconnect networks and servers. Take the network interface card (NIC): used to connect servers to the data centre network, it has evolved into the SmartNIC, offloading processing tasks the system CPU would normally handle. SmartNICs can perform network-intensive functions such as encryption/decryption, firewalling, TCP/IP, and HTTP processing.

Analyst firm Futuriom conducted a Data Centre Network Efficiency survey of IT professionals on their perceptions and views of data centres and networks. Alongside virtualising network resources and workloads, SmartNIC usage and processing offload techniques emerged as the top interest among IT professionals for processing data efficiently on high-speed networks. This shows how businesses are relying more on smart techniques that save costs while delivering notable data centre performance improvements for faster networks.

Workload accelerators like GPUs, FPGAs and SmartNICs are widely used in current enterprise and hyperscale data centres to improve data processing performance. These accelerators interconnect with CPUs to process data faster and require much lower latency when transmitting data back and forth to the CPU.

Most recently, to address the high-speed, low-latency requirements between workload accelerators and CPUs, Intel, along with companies including Alibaba, Dell EMC, Cisco, Facebook, Google, HPE and Huawei, has formed an interconnect technology called Compute Express Link (CXL), which aims to improve performance and remove bottlenecks in computation-intensive workloads for CPUs and purpose-built accelerators. CXL focuses on creating a high-speed, low-latency interconnect between the CPU and workload accelerators, as well as maintaining memory coherency between the CPU memory space and the memory on attached devices. This allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost.

NVMe is another interface, introduced by the NVM Express community. It is a storage interface protocol used to speed up access to SSDs in a server. NVMe minimises the CPU cycles applications spend on I/O and handles enormous workloads with a smaller infrastructure footprint. It has emerged as a key storage technology and has had a great impact on businesses dealing with vast amounts of fast data, particularly data generated by real-time analytics and emerging applications.

Automation and AI

Agile 5G networks will result in the growth of edge compute nodes in network architecture to process data closer to endpoints. These edge nodes, or mini data centres, will sync up with a central data centre as well as be interconnected to each other.

For operators, manually setting up numerous edge nodes is a daunting task. Edge nodes regularly need initial deployment, configuration, software maintenance and upgrades; in the case of network slicing, VNFs may need to be installed or updated for particular tasks for the devices in a slice. It is not possible to do all of this manually. This is where automation comes into the picture: operators need a central dashboard at the data centre to design and deploy configurations for edge nodes.

Technology businesses are demonstrating or implementing AI and machine learning at the application level to enable auto-responsiveness – for instance, chatbots on a website. Much of this AI is applied to data lakes to generate insights from self-learning systems. The data centre will require these kinds of autonomous capabilities too.

AI systems will be used to monitor server operations, tracking activity to enable self-scaling in response to sudden demand for compute or storage capacity, self-healing after breakdowns, and end-to-end testing of operations. Tech businesses have already started offering solutions for each of these use cases; for example, the joint AI-based integrated infrastructure offering from Dell EMC Isilon and NVIDIA DGX-1 for self-scaling at the data centre level.

Conclusion

New architectures and technologies are being introduced along with the revolution in the network. Most of this infrastructure has turned software-centric in response to the growing number of devices and higher bandwidth. Providing lower latency – down to 10 microseconds – is a new challenge operators must meet to enable new technologies in the market. For this to happen, data centres need to complement the higher-bandwidth network; that will form the base for further digital innovation.

Editor’s note: Download the eBook ‘5G Architecture: Convergence of NFV & SDN Networking Technologies’ to learn more about the technologies behind 5G and the status of adoption, along with key insights into the market



Is the cloud the next thing for long-term data retention? Looking at the key vendors in the space

For any organisation in this era, there is a realisation of how critical data is to business needs and operations.

An enormous amount of data has already been produced since cloud computing disrupted organisations of every kind, be it in education, finance, healthcare or manufacturing. Today, organisations are particularly concerned about the data accumulated over the last 15 to 20 years during the surge in IT infrastructure.

These data and applications are probably not in active use, but they matter to organisations because they contain critical information with compliance requirements around it. Securing old data (unstructured content, applications, virtual machines) is becoming crucial, so there has to be a cost-effective, reliable archiving solution that stores and secures data while allowing rapid access when needed.

In the past, IT teams saved data to tape drives or on-premises data centres without any filtering. But data demands have changed drastically.

Even more data will be produced in the next five to seven years as more digitally connected devices become part of business operations. Data will be the fuel of any business, which will extract analytical insight from it to get ahead of the competition or align with consumer demand. This digital transformation is not just about acquiring new technology; it is about saving CAPEX and OPEX each time the data centre moves ahead with innovation.

As data grows, edge computing architecture will let data centre systems move closer to digital devices for information processing (machine learning and analysis), with only a small subset of information pushed to the cloud or a private data centre.

How will organisations deal with past data when real-time data also needs to be archived for reference? How will they deal with data in a hybrid or multi-cloud model where private and public clouds are used for different data processing purposes? Will automation be available to constantly sync data according to the archival methods built into an archival strategy? And what about security against external breaches or physical damage to archival systems?

Various vendors have developed solutions to address these needs, and organisations can choose one that fits their requirements and can be customised to their budget. In this post, I take a look at data archival solutions from leading vendors Rubrik, Cohesity and Zerto. Let's evaluate their solutions.

Cohesity: Enterprise-grade long-term retention and archival

Cohesity’s solutions let you leverage both cloud and tape to archive data according to the organisation's requirements. In the solution Cohesity calls cloud-native, archival is possible not only to tape but also to public clouds, private clouds, Amazon S3-compatible devices and QStar-managed tape libraries. It lets IT management define workflow policies for automated backup and archival, and consists of two Cohesity products: Cloud Archive and Data Protect.

Cloud Archive allows organisations to leverage the public cloud for long-term data retention, while Data Protect helps reduce long-term retention and archival costs with its pay-as-you-go cost model.

Rubrik: Data archival

Rubrik’s solution supports organisations with data management in hybrid cloud environments. Organisations can choose their storage and architecture from options including:

  • Archive to Google Cloud Storage
  • VMware vSphere, Nutanix AHV, and Microsoft Hyper-V
  • Microsoft SQL Server
  • Oracle, Linux, Windows, UNIX, and NAS
  • Remote and branch offices

The client uses real-time predictive global search to access archived data: files from the archive appear as you type in the search box, drastically reducing access time. It is also possible to instantiate VMs in the cloud itself with Rubrik's solution.

Data deduplication is applied when data is accessed, which further reduces transfer and storage costs. All data is encrypted before being sent from physical devices to the target storage infrastructure, and users get a simple, responsive HTML5 interface to set up policy-driven automation and archival targets.
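
As a generic illustration of the client-side encryption idea mentioned above – not Rubrik's actual mechanism – the sketch below uses Python's cryptography library to encrypt a backup file before it leaves the premises for an archive target; the file names are placeholders.

```python
# Generic sketch of encrypt-before-archive; not any vendor's implementation.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key would live in a KMS/HSM
cipher = Fernet(key)

plaintext = Path("backup-2019-06.tar").read_bytes()       # local backup artefact
ciphertext = cipher.encrypt(plaintext)                     # encrypted client-side
Path("backup-2019-06.tar.enc").write_bytes(ciphertext)     # only this blob is uploaded

# Restoring later requires the original key: cipher.decrypt(ciphertext)
```

The design point is that the archive target – public cloud, tape library or disk appliance – only ever sees ciphertext, so a breach of the target does not expose the archived data.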

Zerto: Zerto virtual replication

Zerto approaches data archival differently from Rubrik and Cohesity, handling it through a feature of its main software, Zerto Virtual Replication. With this feature, daily, weekly and monthly backups of the data to be archived can be taken, and the archival target can be tape, a network share in a third location, a dedicated disk-based backup device, or even low-cost S3 or Blob storage in AWS or Azure.

The latest release supports continuous data protection (CDP), replication, automated orchestration and long-term retention with offsite backup. A journal file-level recovery mechanism is used to restore backed-up data quickly.

Conclusion

Apart from Rubrik, Cohesity and Zerto, other vendors offer different types of solutions for different workloads and diverse requirements. But these three are useful for most new-age workloads, such as data generated by IoT devices, machine learning analysis data and unstructured big data lakes.

As organisations evaluate new technologies to deal with data, a proper archival or long-term retention solution will help them get the most from past data and let them focus on newly generated data. From this evaluation, it is clear that most vendors are focused on utilising public or hybrid cloud environments to archive long-term data. Use of the hybrid cloud means a private cloud can store data bound by the compliance and security norms critical to organisations. Ultimately, though, it is entirely up to each organisation which solution to go with, as good options are available.


Editor's note: Download the eBook NVMe: Optimizing Storage for Low Latency and High Throughput.


Exploring the evolution of Kubernetes to manage diverse IT workloads

Kubernetes started in 2014. For the next two years, its adoption as a container orchestration engine was slow but steady compared with counterparts such as Amazon ECS, Apache Mesos, Docker Swarm and GCE. After 2016, Kubernetes started creeping into many IT systems that run a wide variety of container workloads and demand higher performance for scheduling, scaling and automation.

The aim is to enable a cloud-native approach with a microservices architecture in application deployments. Leading tech giants (AWS, Alibaba, Microsoft Azure, Red Hat) have launched new Kubernetes-based solutions, and in 2018 they consolidated around building a de facto Kubernetes solution that can cover every use case handling dynamic, hyperscale workloads.

Two very recent acquisitions show the huge impact Kubernetes has had on the IT ecosystem: IBM's acquisition of Red Hat and VMware's acquisition of Heptio. IBM did not show a direct interest in container orchestration as such, but had its eyes on Red Hat's Kubernetes-based OpenShift.

At VMworld Europe 2018, VMware's acquisition of Kubernetes solution firm Heptio generated a lot of heat. The acquisition is expected to have a significant impact on the data centre ecosystem, where Red Hat (IBM) and Google are among the top players. Heptio's solution will be integrated with VMware's Pivotal Container Service (PKS) to create a de facto Kubernetes standard covering the widest range of data centre use cases across private, multi-cloud and public cloud.

Heptio was founded by ex-Google engineers Joe Beda and Craig McLuckie back in 2016. In just two years it captured the attention of industry giants with its offerings and its contributions to Kubernetes-based cloud-native technologies, and raised $33.5 million over two funding rounds.

So the question is: why, and for which kinds of use cases, is Kubernetes being used or evaluated?

Enabling automation and agility in networking with Kubernetes

Leading communication service providers (CSPs) are demonstrating 5G in selected cities. 5G networks will support a wide range of use cases with the lowest possible latency and high bandwidth. CSPs will need to deploy network services at the edge of the network, where data is generated by a growing number of digitally connected devices.

To deploy services at the edge of the network and retain control over each point of the network, CSPs will need automated orchestration at every layer. What's more, as CSPs adopt software containers to deploy virtual network functions, they will embrace a cloud native approach, employing microservices-based network functions and CI/CD methodologies for real-time operations. In this scenario, Kubernetes has emerged as an enterprise-grade container management and orchestration tool, and it brings a number of advantages to this environment.

Jason Hunt wrote in a LinkedIn post that “Kubernetes allows service providers to provision, manage, and scale applications across a cluster. It also allows them to abstract away the infrastructure resources needed by applications. In ONAP’s experience, running on top of Kubernetes, rather than virtual machines, can reduce installation time from hours or weeks to just 20 minutes.” He added that CSPs are using a mix of public and private clouds to run network workloads, and Kubernetes works well across all types of clouds to handle workloads of any scale.
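As a rough illustration of what this orchestration looks like in practice, the sketch below uses the official Kubernetes Python client to provision a containerised network function as a Deployment and later scale it out. The deployment name, namespace and container image are hypothetical placeholders, and this is a generic example rather than any specific CSP's or ONAP's setup.

    # Illustrative sketch: provisioning and scaling a containerised network function
    # with the official Kubernetes Python client (pip install kubernetes).
    # Names and image below are hypothetical placeholders.
    from kubernetes import client, config

    config.load_kube_config()                  # or config.load_incluster_config() inside a cluster
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="vnf-firewall"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "vnf-firewall"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "vnf-firewall"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="firewall", image="example.com/cnf/firewall:1.0")
                ]),
            ),
        ),
    )

    # Provision the network function across the cluster...
    apps.create_namespaced_deployment(namespace="edge-site-1", body=deployment)

    # ...and later scale it out as traffic at the edge site grows.
    apps.patch_namespaced_deployment_scale(
        name="vnf-firewall",
        namespace="edge-site-1",
        body={"spec": {"replicas": 5}},
    )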

Another example of Kubernetes adoption in telecoms is the recent release of Nokia's CloudBand software for NFV. The CBIS 19 release adds support for edge network deployments and containerised workloads, integrating Kubernetes for container management alongside OpenStack, which continues to handle virtual machines. The use of containers within NFV architecture has been discussed for the last few years, but this release is one of the first examples of employing containers and container management to handle network functions in NFV infrastructure.

Kubernetes and AI/machine learning

KubeFlow – Managing machine learning stacks: Moving beyond plain container management, Kubernetes has evolved to the point where it is used to manage the complex workloads of machine learning applications.

Machine learning applications or systems contain several software components, tools and libraries from different vendors, all integrated to process information and generate output. Connecting and deploying all of these components and tools requires manual effort that is tedious and time-consuming. In most cases, the hardest part is that machine learning models are not portable: they require re-architecting when moved from the development environment to a highly scalable cloud cluster.

To address this concern, the Kubernetes community introduced KubeFlow, an open source framework that pre-integrates machine learning stacks into Kubernetes so that any project can be instantiated easily, quickly and at scale.

KubeFlow Architecture for ML Stacks

Image source: https://www.kubeflow.org/blog/why_kubeflow/ 
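For a flavour of how a machine learning workload lands on Kubernetes through KubeFlow, the sketch below submits a distributed TensorFlow training job as a TFJob custom resource using the Kubernetes Python client. It assumes the KubeFlow training operator is installed in the cluster; the job name, namespace and container image are hypothetical placeholders.

    # Minimal sketch: submitting a distributed training job as a KubeFlow TFJob
    # custom resource. Assumes the KubeFlow training operator is installed; the
    # job name and container image are hypothetical placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    tfjob = {
        "apiVersion": "kubeflow.org/v1",
        "kind": "TFJob",
        "metadata": {"name": "mnist-train", "namespace": "kubeflow"},
        "spec": {
            "tfReplicaSpecs": {
                "Worker": {
                    "replicas": 2,
                    "restartPolicy": "OnFailure",
                    "template": {
                        "spec": {
                            "containers": [{
                                "name": "tensorflow",
                                "image": "example.com/ml/mnist-train:latest",
                            }]
                        }
                    },
                }
            }
        },
    }

    # TFJob is a custom resource, so it is created through the CustomObjectsApi.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubeflow.org",
        version="v1",
        namespace="kubeflow",
        plural="tfjobs",
        body=tfjob,
    )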

Kubernetes for eCommerce retailer JD.com: Besides the launch of KubeFlow, one interesting application of Kubernetes for AI is at JD.com, a Chinese eCommerce retailer that manages the world’s largest Kubernetes clusters, with more than 20,000 bare metal servers spread across several clusters in data centres in multiple regions.

In an interview with CNCF, Liu Haifeng, chief architect at JD.com, was asked how Kubernetes is helping JD.com with AI and big data analytics. He explained: “JDOS, our customised and optimised Kubernetes supports a wide range of workloads and applications, including big data and AI. JDOS provides a unified platform for managing both physical servers and virtual machines, including containerised GPUs and delivering big data and deep learning frameworks such as Flink, Spark, Storm, and TensorFlow as services. By co-scheduling online services and big data and AI computing tasks, we significantly improve resource utilisation and reduce IT costs.”

JD.com was named the winner of CNCF's top end user award for its contribution to the cloud native ecosystem.

Managing hardware resources using Kubernetes

Kubernetes can also be used to manage hardware resources such as graphics processing units (GPUs) in public cloud deployments. In a presentation at KubeCon China this year, Hui Luo, a software engineer at VMware, demonstrated how Kubernetes can be used to handle machine learning workloads in private clouds as well.
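The sketch below shows the basic mechanism: Kubernetes exposes GPUs as extended resources (nvidia.com/gpu when the NVIDIA device plugin is deployed), and a pod simply requests them in its resource limits so the scheduler places it on a node with a free GPU. The pod name and container image are hypothetical placeholders.

    # Minimal sketch: requesting a GPU for a machine learning pod. Kubernetes
    # treats GPUs as extended resources advertised by the device plugin, and the
    # scheduler places the pod on a node with an available GPU.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="training-gpu-pod"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="example.com/ml/trainer:latest",      # hypothetical image
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}          # request one whole GPU
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)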

Summary

As enterprises have started embracing open source technologies in a considerable way to reduce costs, Kubernetes has evolved from being just a container orchestration framework into a platform for handling increasingly complex workloads of different types.

As most of the software industry has leaned towards cloud native, dividing monolithic applications into small services that scale, are managed independently and communicate with one another through APIs, Kubernetes has become the de facto standard for managing all of the services residing in containers. The same Kubernetes machinery is now being adopted to handle NFV, machine learning and hardware resource workloads.

Download our eBook to learn more about Kubernetes technology and for industry/market insights.



Gartner’s strategic tech trends show the need for an empowered edge and network for a smarter world

Opinion: Earlier this week, Gartner released its top 10 strategic technology trends for 2019, and looking at the list, I was not surprised to see edge and blockchain technologies carrying over from last year.

In 2018, Gartner described the trend as ‘cloud to the edge’, predicting a shift from centralised cloud platforms to edge-based infrastructure to address challenges related to bandwidth constraints, connectivity and latency. This year, Gartner is emphasising the empowerment of edge-focused infrastructure, driven by the continued substantial growth in digital devices, especially devices that need analysis results back from the data centre almost instantly.

We have been hearing about technologies that bring intelligence into operations or devices for a long time: intelligent or autonomous things, quantum computing, and AI-driven development. In fact, we have seen such seemingly imaginary capabilities in films and TV ads as well.

But actually delivering such services to end users with full accuracy requires a higher-capacity network, more computational power and lower latency. Intelligent technologies will be useful only when responses arrive in real time; otherwise they will not be much use to people engaging with AI-based autonomous robots that think first and respond after ‘some’ time. That would be disastrous.

Take the example of an autonomous car, whose usage manufacturers are evangelising across the world to reduce mishaps and enable comfortable long rides. What if the network carrying the ‘intelligent’ instruction to the car fails to respond in time for the car to take the corresponding action?

None of this digital innovation will be possible without two things: a communication network with lightning speed that delivers a near real-time experience and, most importantly, an agile response from the computing resources that process the ‘intelligent’ data.

This is not feasible with cloud alone, but it can be enabled using edge computing. How? The cloud is a centralised data centre equipped with high-capacity computing infrastructure to support multiple types of digital communication. Many current cloud-based applications are not affected by bandwidth and latency constraints; a SaaS application that simply stores data, for example, may not need a rapid response. But enabling cognitive technologies for autonomous ‘things’ will be time and latency sensitive. Such possibilities arise when edge computing is brought into the current communication network.

Looking at a future in which all manner of digital devices serve end users, there was a need for an edge topology that can deliver cloud-like performance closer to the devices and reduce the burden on network usage. The upcoming 5G standard, with its multi-access edge computing feature, will address exactly this.

Take any leading tech vendor: they are all engaged with the technology trends Gartner has listed, actively optimising or innovating existing solutions and offering revolutionary new products to support digital growth. But I believe all this innovation will be consumed by end users to its fullest only when there is continuous, matching innovation in IT infrastructure and communication networks. Smart devices can only be smart if they have the capability to communicate in real time.

Editor’s note: You can find out more about the basics of edge computing architectures, types and use cases, as well as the market ecosystem in 2018, with this eBook, which can be downloaded here.
