All posts by bhanusingh

Edge computing and ITOps: Analysing the opportunities and challenges ahead

It’s true that edge computing is hard to define and is running high on the hype scale. But research and surveys continue to indicate that this trend of processing data where it’s collected for better latency, cost savings and real-time analysis is an innovation with legs. There will be 75 billion IoT devices by 2025, according to Statista.

According to Spiceworks’ “2019 State of IT” report, 32% of large enterprises with more than 5,000 employees are using edge computing, and an additional 33% plan to adopt it by 2020. Tied to the growth of edge computing is the advent of 5G wireless: 51 operators globally will start 5G services by 2020, according to Deloitte Global research from 2019.

The major cloud companies are also investing in the edge. The AWS Local Zones service allows single-digit millisecond latency when connecting to computing resources in a metro environment, while Microsoft offers the Azure Stack Edge appliance and Google Cloud IoT is a “complete set of tools to connect, process, store, and analyse data both at the edge and in the cloud.” It’s safe to say that edge computing is becoming mainstream, and CIOs and their IT operations leaders should plan appropriately for it in 2020 and beyond.

Benefits of the edge for ITOps

We’ve read plenty about the business benefits of edge computing: oil rig operators need to see critical sensor data immediately to prevent a disaster; marketers want to push instant coupons to shoppers while they are in the store; video security monitoring can catch a thief in the act; and medical device alerts can ensure patient safety. These are just a few solid use cases for edge-based processing. Edge computing may also save IT money on cloud and network bandwidth costs as data volumes keep exploding and the need to store every data point becomes harder to justify.

There are also implications for IT management and operations. Local processing of high volume data could provide faster insights to manage local devices and maintain high-quality business services when seconds make a difference – such as in the event of a critical server performance issue threatening the ecommerce site.

Today, IT operations teams are inundated with data from thousands of on-premise and cloud infrastructure components and an increasingly distributed device footprint. The truth is, only an estimated 1% of monitoring data is useful, meaning that it provides indications of anomalous behaviour or predictions about forthcoming change events.

With edge monitoring, we can potentially program edge-based systems to process and send only that small sliver of actionable data to the central IT operations management system (ITOM), rather than transmitting terabytes of irrelevant data daily to the cloud or an on-premise server where it consumes storage and compute power.
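As an illustration of that filtering idea, a minimal edge-side sketch might keep a rolling baseline per metric and forward only statistically anomalous readings to the central ITOM system. This is a hypothetical example: the window size, z-score threshold and any actual forwarding call are assumptions that would depend on your own monitoring platform.

```python
# Hypothetical edge-side filter: keep a rolling baseline of recent readings
# and flag only readings that deviate sharply from it as "actionable".
# Everything else is dropped at the edge instead of being shipped upstream.
from collections import deque
from statistics import mean, stdev

class EdgeFilter:
    def __init__(self, window=100, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold           # z-score above which a reading is actionable

    def process(self, reading):
        """Return True if this reading should be forwarded centrally."""
        actionable = False
        if len(self.history) >= 10:          # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                actionable = True
        self.history.append(reading)
        return actionable

f = EdgeFilter()
# Steady telemetry (50..99) stays local; only the outlier is sent upstream.
forwarded = [r for r in list(range(50, 100)) + [500] if f.process(r)]
```

In practice the `actionable` readings would be handed to whatever transport your ITOM platform exposes, while the rest never leave the device.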

The job of filtering for the highly contextual data at the edge, where business occurs, can support real-time decisions for running IT operations successfully at speed and scale, regardless of what combination of on-premise, public cloud or private cloud infrastructure is in place. At the same time, ITOps will need to lead in minimising the performance, security and privacy risks of edge technology. However, as detailed below, we are in the early stages of determining how to make this work in practice.

These are the ITOps realities for edge computing:

Edge-specific security needs are still unknown

Edge devices are often small and rarely designed with security in mind. More than 70 percent of edge devices don’t mandate authentication for third-party APIs, and more than 60 percent don’t encrypt data natively. The attack surface in IoT and edge environments is therefore both larger and less secure. This is particularly worrisome when considering edge devices that collect personally identifiable information such as email addresses, phone numbers, health data or financial information like credit card numbers. IT operations will need to work closely with security and legal teams to map out the company-specific risk, governance and compliance requirements around managing edge data.

Edge monitoring tools are immature

Companies need platforms that can instantly monitor and analyse edge-generated data. In the connectivity landscape of tomorrow, billions of connected devices will communicate machine-to-machine, and devices will be added or removed at an unprecedented scale. In this environment, the ability to manage large volumes of connected devices and the information exchanged between them will be critical, with 5G acting as the unifying technology that enables this flow of information at such density and scale. We will see an influx of innovation in edge monitoring in the coming years.

New environments call for new rules

As organisations move more data and application assets to edge computing environments, IT will need to devise new policies and thresholds for centrally processing and alerting on all this data. Applying AI-based automation is essential here, as manual efforts have no chance of keeping up with the volume of data filtering, analysis and response. We are also entering the age of nanosatellites, led by companies such as SpaceX and OneWeb. These edge devices will transform the future of agriculture, energy, mining, transportation and finance by sending insightful data in real time to customers, wherever they are at any moment. IT operations will have its work cut out to understand and properly manage this evolving edge infrastructure.

DevOps processes will become even more paramount

If you haven’t already realised that DevOps is taking over software development and IT management, just wait until edge goes mainstream. There will be no practical way to manage change and deployment of edge technology without the agile, continuous integration and continuous delivery (CI/CD) practices of DevOps. It will be imperative for ITOps to adopt DevOps practices and tools to manage, monitor and deploy edge resources.

Conclusion

ITOps is at a crossroads, determining how much of the past is still relevant and how much it will need to change to adapt to a distributed, hybrid cloud world that will soon include the edge as a fundamental pillar of digital strategy. Security, machine intelligence and DevOps will be crucial areas of expertise for ITOps teams looking to drive better business value and customer experiences from the edge.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.

What automation can learn from DevOps – and why the future is automation with iteration

A recent survey from Capgemini revealed that while enterprise-scale automation is still in its infancy, IT automation projects are moving along (below). IT is starting to view automation less tactically, and more strategically.

Figure 1. IT leads automation implementation (respondents were asked to select all that apply: “In which of the following functions has your organisation implemented automation initiatives?”). Source: Capgemini Research Institute, Automation Use Case Survey; July 2018, N=705 organisations that are experimenting with or implementing automation initiatives.

The Capgemini survey also showed that IT automation can be responsible for several quick wins, including self-healing, event correlation, diagnostics, application releases, cybersecurity monitoring, and storage and server management tasks. These projects not only lead to massive IT cost savings but, more importantly, to an increase in reliability and responsiveness to customer demands and business services. That would indicate that, while automation is a great solution for manual work, it’s also a part of a high-level, strategic IT plan to innovate the business.

But as DevOps practices like agile methodology and continuous deployment and optimisation take hold within the modern enterprise, the question arises: can automation be agile as well? This is the promise of artificial intelligence for IT operations, or AIOps, but if that’s not a possibility for your IT organisation today, it’s important to make sure that your automation practices are continuously optimised to fit the task. Setting and forgetting was a practice of the server era; in a world of on-demand infrastructure, automation ought to be continuously evaluated and optimised for maximum benefit.

The new expectations of automation

IT automation projects can have serious ramifications if anything goes wrong, because when the machines execute a policy, they do it in a big way. This is perhaps the chief argument as to why it’s critical that progressive steps are used to define and evaluate both the process being automated and the automation itself – they mitigate the seriousness of any issue that can arise. This is why it’s important to consider the following:

  • Is this a good process, and is it worth automating?
  • How often does this process happen?
  • When it happens, how much time does it take?
  • Is there a human element that can't be replaced by automation?

Let’s break these steps down and see how they can provide the basis for an iterative approach to automation:

Is this a good process?

This may seem like a rudimentary question, but in fact, processes and policies are often set and forgotten, even as circumstances change dramatically. Proper continuous optimisation, or agile automation development, will force an IT team to revisit existing policies and determine whether each is still right for the business service goals.

Some processes are delicate and automation may threaten their integrity, whereas others are high-level and automation neglects the routine tasks that underlie the eventual results. A good automation engineer understands what tasks are the best candidates for automation and sets policies accordingly.

How often does this process happen?

Patching, updating, load balancing and orchestration can follow an on-demand or time-series schedule. As workloads become more ephemeral, moving to serverless, cloud-native infrastructure, these process schedules will change as well. An automation schedule ought to be continuously adapted to workload needs, customer demand and infrastructure form. Particularly as the business continues its march toward digital transformation, the nature and schedule of particular work may become more dynamic.

When it happens, how much time does it take?

This also depends on the underlying infrastructure. Some legacy systems require updates that may take hours, and some orchestration of workloads will be continuous. Automation must be tested to be efficient and effective on the schedule and frequency of the manual task.
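The frequency and duration questions can be folded into a rough payback estimate before any automation is built. The sketch below is a back-of-envelope illustration; every figure in it (run counts, durations, the hourly rate) is a hypothetical assumption, not a benchmark.

```python
# Illustrative payback estimate for an automation candidate, combining the
# "how often does it happen" and "how long does it take" questions.
def payback_weeks(runs_per_week, minutes_per_run, build_hours, hourly_rate=75):
    """Weeks until the automation build cost is recouped by time saved."""
    weekly_saving = runs_per_week * (minutes_per_run / 60) * hourly_rate
    build_cost = build_hours * hourly_rate
    return build_cost / weekly_saving

# Hypothetical patch job: run 20 times a week at 15 minutes per run,
# with an estimated 40 hours of engineering effort to automate it.
# 20 runs * 0.25 h = 5 h saved per week, so 40 h / 5 h = 8 weeks to break even.
weeks = payback_weeks(runs_per_week=20, minutes_per_run=15, build_hours=40)
```

Note that the hourly rate cancels out of the ratio; it matters only if you want the saving expressed in currency rather than weeks.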

Is there a human element that’s irreplaceable?

As much as you may want it to, it’s difficult for automation to shift left (moving work toward less specialised teams and self-service) without the help of artificial intelligence or machine learning. There is often a human element involved in deriving insights, creating new workflows, program management or architecture. When building an iterative automation practice, make sure you identify where human interaction must occur to evaluate and optimise. In our lifetime, technology has advanced at lightning speed, with robots now completing jobs that were once held by people. However, there are times when a machine simply cannot deliver the same quality a human can.

Automation for all

Automation is perhaps one of the defining signatures of the future of IT operations management. It relieves teams of routine work and helps improve overall efficiency, all while driving quick wins that turn an IT team into heroes. But don’t let automation be the end goal. Instead, consider it a tool, like any other, that can drive action from data. And until AI is an everyday option, it’s incumbent on the IT professional to continuously optimise the data that drives that action.


Four reasons why your company might not be ready for DevOps just yet

Let’s get one thing straight: I’m a huge fan of DevOps. It has been shown to increase quality, reduce problems, and shorten development cycles. It’s often considered a panacea for large organisations looking to transform their development, production and operational lifecycles. But is it right for every business? Companies that do it successfully can reap the benefits of continuous deployment and testing, but companies that fail get trapped in endless loops of missed deadlines.

There are some criteria that any IT team should investigate before making the transformational shift to CI/CD. Doing so involves taking a hard look at the existing culture, processes, and even management style. There’s no shortage of ink spilled on articles trying to convince you why DevOps is the future. Instead, I want to focus on when and why DevOps doesn’t work, to help you identify whether it’s truly right for you.

Is the culture ready?

Because the transformation to DevOps is simultaneously a change in process, tools, and philosophy, it requires a cultural shift in collective mindset that’s fraught with potential failure. DevOps success relies on three Cs: communication, collaboration, and coordination between different teams (including software developers, quality, operations teams, and executive stakeholders). The first challenge is to understand and unpack how these groups are aligned and interrelated. Then, the executive leadership must develop a working model of communication between them with incremental milestones to gradually shift culture toward more openness and connectivity.

If your business is too siloed or relies on a legacy organisational structure, chances are good that this cultural shift will prove too difficult. I’ve often seen it fail in large organisations with entrenched leadership or processes. Some are hundreds of years old, while others are the latest and greatest modern organisations that are simply stuck in their existing ways. And if the culture won’t change, the actual DevOps process is ultimately doomed.

Can the structure handle it?

DevOps is highly culture-dependent, but it also requires a shift in how software is architected, built, tested and deployed. At first, this may seem like common sense, but in reality, it’s often not discussed during the transition.

Monolithic software architecture with complex dependencies between different layers and teams can cause a DevOps evolution to struggle and fail. Often, quality is sacrificed in the name of agility and speed. Organisations using a modern, cloud-native microservices architecture are typically more successful in adopting a DevOps practice. In these organisations, product or service teams can operate independently while staying aligned toward the ultimate business goals or customer experience objectives. The company is already broken into purpose-built sprint pipelines that can move with agility.

Where to begin? Identify the warning signs

Before building your DevOps roadmap, it’s critical to spend some quality time soul-searching and stress testing your organisational culture. If you see any of these, you might face some steep uphill challenges in your evolution:

  • Your company has a well-defined process: Companies that are already in love with their culture and software development process will cling to it with white knuckles. These companies resist change and might not fit the DevOps profile.
  • Your company wants to dive in head first: If your company just wants a DevOps process because it’s trendy and innovative, you might not be ready for it. Proper implementations require a true understanding of business outcomes, with pros and cons. Remember, it’s a mindset, not just a movement.
  • Your company wants a “department of DevOps”: Trying to create DevOps as a separate department, without bringing current Dev and Ops teams together, is a recipe for failure. DevOps isn’t a side hustle. It’s transformational.
  • Your company has fiefdoms: Organisations whose development and operations teams are highly distributed and isolated from each other will struggle to bring them together without providing each team with some common leadership.

In short, DevOps must become the culture of the organisation, driven by the CEO and his/her team of functional and organisation leaders with a clear understanding of the implications and outcomes. It’s a mindset that requires a transformation in process, organisation, technology, and information to drive a meaningful, sustainable change. The rewards are huge. But as with any potential upside, your company will have to work for it.
