IBM goes cloud-native with Red Hat OpenShift


Jane McCallion

1 Aug, 2019

IBM has wasted no time incorporating Red Hat into its portfolio, announcing today that its full software offering has been “transformed… to be cloud-native”.

This, the company claims, will allow customers to build mission-critical apps once and then run them on most public clouds, including AWS, Azure, Google Cloud Platform, Alibaba and, of course, its own IBM Cloud.

The move comes just three weeks after IBM was given regulatory approval to acquire open source stalwart Red Hat, and it’s no coincidence that this initiative is “optimised” to run on the OpenShift containerisation platform.

In its cloud-native form, IBM’s software will be offered as pre-integrated, containerised modules called IBM Cloud Paks.

The first five of these Paks – for Data, for Applications, for Integration, for Automation, and for Multicloud Management – are available today. More will be forthcoming, it seems, but no timeframe or number has yet been given.

In addition to Cloud Paks, IBM made three other Red Hat-centred announcements today.

The first is Red Hat OpenShift on IBM Cloud, a “flexible, fully-managed service” that the company claims will “help enterprises modernise and migrate to a hybrid cloud infrastructure”.

The second is the news that Red Hat OpenShift is now available for IBM Z and LinuxONE, having previously only been available on Power Systems and Storage.

Finally, there are new consultancy and technology services available from IBM for Red Hat.

Arvind Krishna, senior vice president of cloud and cognitive software at IBM, said: “This will further position IBM as an industry leader in the more than $1 trillion hybrid cloud opportunity.

“We are providing the essential tools we think enterprises need to make their multi-year journey to cloud on common, open standards that can reach across clouds, across applications and across vendors with Red Hat.”

IT operations in 2020: Five things to prepare for – from AIOps to multi-cloud and more

The threat of digital disruption has forced senior executives and technology leaders to rethink business models, data assets, and distribution channels, and to create more innovative products and services that will delight customers and outpace more nimble competitors. Over the last decade, enterprises have completely transformed the way they build, deploy, manage, and maintain mission-critical services in response to increasing digitisation.

Developers have responded to the enterprise transformation challenge by adopting innovative technologies and practices including the consumption of public cloud services, the embrace of agile and DevOps for rapid software delivery, the shift from monolithic development patterns to microservices development, and machine learning models for process innovation.

IT operations teams have historically ensured the availability and performance of enterprise workloads by minimising change and avoiding disruption. Given the demands of digital business, digital operations teams will need to take advantage of established and emerging technology trends to drive product momentum, deliver compelling customer experiences, and ensure long-term corporate survival.

In 2020, IT operations teams will need to embrace these five shifts to scale up innovation and respond effectively to digital disruption:  

How IT operations can stay relevant in a DevOps world

At the 2009 Velocity conference, a session titled “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr” by John Allspaw and Paul Hammond showed how enterprises could accelerate release velocity with automated infrastructure tooling, continuous integration and deployment processes, and shared metrics. This Velocity talk ignited the DevOps movement, calling for a new model of trust, collaboration, and accountability between Dev and Operations teams.

A decade later, DevOps has broad mainstream adoption, with site reliability engineers and DevOps specialists being the top earners in Stack Overflow’s 2019 Developer Survey. DevOps is key to enabling business agility and minimising friction, with Gartner predicting that 90 percent of the top 100 global companies will slash operational inefficiencies with DevOps practices by 2020. Meanwhile, a recent McKinsey study found that few business executives believe “their IT functions make meaningful contributions in areas that promote strong business performance.”

These trends suggest that DevOps teams will increasingly be the ones calling the shots, with their active participation in digital experience products leading to larger budgets and greater organisational clout. Does this mean that IT operations will have to stay content managing legacy application and infrastructure portfolios (aka ‘keeping the lights on’)?

Takeaway: IT operations will need to combine their traditional focus on reliability, resilience, security, and efficiency with greater attention to release velocity, continuous improvement, and customer-centricity. Innovations in IT operations can support digital transformation initiatives and ensure that the new speed of DevOps doesn’t put the business at risk.

AIOps: Not your old-school incident management workflow

A recent IDC study finds that IT operations teams are the biggest buyers of artificial intelligence tools for rapid pattern recognition, seamless incident collaboration, and faster issue resolution. In 2020, it is time to move away from siloed, reactive incident management towards a proactive, preventive approach powered by machine learning and data science. A modern AIOps solution can drastically reduce the human time spent identifying, logging, categorising, prioritising, responding to, and closing incidents by:

  • Analysing and processing a wide variety of events across different monitoring tools so that duplicate and noisy alerts are automatically suppressed (a minimal deduplication sketch follows this list)
  • Using machine data intelligence to get ahead of alert storms, speed up root cause analysis, and reduce service disruptions
  • Sending real-time, contextual alerts to on-call service delivery teams with bidirectional integrations for IT service management tools  
  • Addressing routine incidents at scale using automated remediation so that human operators can focus on high-value business projects
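
To make that first deduplication step concrete, here is a minimal Python sketch of fingerprint-based alert suppression. It is an illustration under assumed inputs: the event fields (host, check, severity, timestamp) and the five-minute suppression window are placeholders, not any particular AIOps product’s schema.

from collections import defaultdict
from datetime import timedelta

# Hypothetical suppression window: alerts with the same fingerprint
# arriving within this interval are treated as duplicates.
SUPPRESSION_WINDOW = timedelta(minutes=5)

def fingerprint(event):
    # The fields that identify "the same" alert; illustrative only.
    return (event["host"], event["check"], event["severity"])

def deduplicate(events):
    # Collapse duplicate alerts, keeping the first occurrence in each
    # window plus a count of how many raw events it absorbed.
    last_seen = {}
    counts = defaultdict(int)
    unique = []
    for event in sorted(events, key=lambda e: e["timestamp"]):
        key = fingerprint(event)
        counts[key] += 1
        previous = last_seen.get(key)
        if previous is None or event["timestamp"] - previous > SUPPRESSION_WINDOW:
            unique.append(event)
        last_seen[key] = event["timestamp"]
    return unique, counts

In a real pipeline a stage like this sits in front of the paging system, so on-call engineers see one enriched alert rather than an alert storm.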

Takeaway: Digital operations teams should start piloting AIOps initiatives to understand how machine learning-powered event management can reduce the human time spent on incident detection, first response, alert prioritisation, and root cause analysis.

New ways to control the chaos of multi-cloud management

Flexera’s 2019 State of the Cloud Report found that 84 percent of enterprises have a multi-cloud strategy, typically spanning around five different cloud providers. Given that AWS alone offers 170+ distinct services across 23 product categories, managing services across several leading cloud platforms is no easy task. So, what are the driving forces behind multi-cloud adoption?

Given the dominance of AWS, which holds around 35% of the cloud infrastructure services market, CIOs are looking to work with other cloud providers such as Microsoft and Google to allay fears of cloud lock-in. The other driver is what 451 Research calls a “best execution venue” strategy: picking the right cloud environment for each type of business workload so that IT teams can optimise for both performance and cost.

Here are three factors that cloud teams will need to consider carefully when executing a multi-cloud enterprise strategy:

  • Resource complexity: Cloud infrastructure teams will need to select the right instance type for their workload requirements from thousands of cloud instance SKUs. Picking and optimising right-sized instances is an ongoing task that requires difficult tradeoffs between architecture, demand, performance, resilience, and cost (a simple right-sizing sketch follows this list)
     
  • Multi-cloud monitoring: While there are plenty of native monitoring tools like Amazon CloudWatch, Azure Monitor, and Google Stackdriver, these solutions are best employed for cloud-provider-specific insights. Enterprises should invest either in open source tooling (Prometheus/Graphite, Grafana) or in third-party monitoring tools that can easily integrate, capture, and present insights from multi-cloud environments (see the query sketch below)
     
  • Embed FinOps thinking in your cloud centre of excellence: Optimising cloud costs across instance types and pricing models (on-demand, dedicated, spot, and reserved) is a complex exercise. The emerging discipline of FinOps helps enterprises better plan and predict cloud budgets by bringing together best practices for optimising cloud spending. FinOps offers a new procurement model that emphasises shared accountability for cloud financial management across technology, finance, and business teams so that enterprises get a better return on their cloud investments (a back-of-the-envelope cost comparison also follows)
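
To illustrate the right-sizing problem at its simplest, the sketch below picks the cheapest catalogue entry that satisfies a workload’s CPU and memory requirements. The catalogue rows and prices are invented; a real exercise would weigh architecture, resilience, and demand patterns too.

# Hypothetical instance catalogue: (name, vCPUs, memory in GiB, USD per hour).
CATALOGUE = [
    ("small", 2, 4, 0.05),
    ("medium", 4, 16, 0.12),
    ("large", 8, 32, 0.25),
    ("memory-heavy", 4, 64, 0.30),
]

def right_size(cpu_needed, mem_needed):
    # Cheapest instance type that meets both requirements, or None.
    candidates = [
        row for row in CATALOGUE
        if row[1] >= cpu_needed and row[2] >= mem_needed
    ]
    return min(candidates, key=lambda row: row[3], default=None)

# A 3-vCPU, 24 GiB workload lands on "large" here, even though a
# cheaper 4-vCPU SKU exists, because memory is the binding constraint.
print(right_size(cpu_needed=3, mem_needed=24))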
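
On the monitoring point, Prometheus exposes a standard HTTP query API, so a thin aggregation layer is straightforward to prototype. The Python sketch below runs one PromQL query against a Prometheus server per cloud; the endpoint URLs are placeholders, and the example metric assumes node_exporter is deployed in each environment.

import requests

# Placeholder endpoints: one Prometheus server scraping each environment.
PROMETHEUS_ENDPOINTS = {
    "aws": "http://prometheus-aws.example.com:9090",
    "azure": "http://prometheus-azure.example.com:9090",
    "gcp": "http://prometheus-gcp.example.com:9090",
}

def query_all(promql):
    # Run the same PromQL query against every endpoint via the
    # standard /api/v1/query HTTP API and collect the results.
    results = {}
    for cloud, base_url in PROMETHEUS_ENDPOINTS.items():
        response = requests.get(
            f"{base_url}/api/v1/query", params={"query": promql}, timeout=10
        )
        response.raise_for_status()
        results[cloud] = response.json()["data"]["result"]
    return results

# Example: per-instance CPU usage rate across all three clouds.
for cloud, series in query_all(
    "avg by (instance) (rate(node_cpu_seconds_total[5m]))"
).items():
    print(cloud, "returned", len(series), "series")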
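
And to make the pricing-model tradeoff tangible, a back-of-the-envelope comparison of on-demand against reserved capacity is often the first FinOps exercise a team runs. The rates below are invented for illustration; real prices vary by provider, region, instance type, and commitment term.

# Hypothetical hourly rates; check the provider's price list for real figures.
ON_DEMAND_RATE = 0.10  # USD per instance-hour
RESERVED_RATE = 0.06   # USD per instance-hour, one-year commitment
HOURS_PER_YEAR = 8760

def annual_cost(rate, instances, utilisation=1.0):
    # Reserved capacity is billed whether used or not, so pass
    # utilisation=1.0 for reserved and the measured figure for on-demand.
    return rate * instances * HOURS_PER_YEAR * utilisation

FLEET = 20
for utilisation in (1.0, 0.7, 0.4):
    on_demand = annual_cost(ON_DEMAND_RATE, FLEET, utilisation)
    reserved = annual_cost(RESERVED_RATE, FLEET)
    better = "reserved" if reserved < on_demand else "on-demand"
    print(f"utilisation {utilisation:.0%}: on-demand ${on_demand:,.0f} "
          f"vs reserved ${reserved:,.0f} -> {better}")

With these illustrative rates the break-even point falls at 60% utilisation, which is exactly the kind of threshold a FinOps practice makes visible to both finance and engineering teams.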

Takeaway: Enterprise IT teams should learn from FinOps pioneers how to make the right tradeoffs between cost, performance, and resilience for cloud services. Cloud architects should experiment with both open source and commercial monitoring tools to understand how they can drive real-time visibility and ensure faster incident response for multi-cloud operations.

Cloud transforms the enterprise data centre

Corporate data centres are increasingly taking on attributes of public cloud infrastructure with on-demand consumption and pay-per-use pricing models. Here are three trends that are a clear indication of how data centres are evolving in the cloud era:

  • Hybrid cloud models: For a long while, public cloud platforms refused to acknowledge that certain workloads could only operate on-prem due to latency, security, or compliance requirements. Cloud providers have now openly embraced the hybrid cloud value proposition, with Microsoft launching Azure Stack in 2017, followed by AWS Outposts in 2018, and Google Anthos in 2019. Hybrid cloud solutions allow enterprises to run workloads within their own data centres without worrying about day-to-day management, letting cloud providers breach the final frontier of data centre gravity
     
  • Consumption-based infrastructure models: Enterprises can leverage a host of innovative solutions (HPE GreenLake, Dell Flex on Demand, Lenovo TruScale Infrastructure Services, and Cisco Open Pay) that let them tap into flexible payment models for data centre resources. IT teams can defer capital expenditure, work with the latest hardware, track real-time usage, and outsource management to the OEM or a managed service provider, allowing them to focus purely on business outcomes
     
  • Write once, run anywhere with orchestration engines: Container orchestration engines like Kubernetes, Docker Swarm, and Apache Mesos have exploded in popularity because they allow IT teams to run cloud-native services anywhere and offer a consistent management framework for building and scaling distributed applications. Cloud-native services can be deployed across data centre and cloud environments using container orchestration engines, ensuring a high degree of portability, faster release velocity, and better operational control over abstracted infrastructure (see the sketch below)
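
As a sketch of that portability claim, the same Deployment object can be created on any conformant Kubernetes cluster, whether it runs on-prem, on OpenShift, or as a managed cloud service. The example below uses the official kubernetes Python client; the image name, labels, and replica count are placeholders.

from kubernetes import client, config

def build_deployment():
    # A minimal Deployment spec; image and labels are hypothetical.
    container = client.V1Container(
        name="web",
        image="registry.example.com/acme/web:1.0",
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

# Whichever cluster the active kubeconfig context points at receives
# an identical object, which is the portability argument in practice.
config.load_kube_config()
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=build_deployment()
)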

Takeaway: Data centres are ripe for disruption, and IT teams should outsource the heavy lifting involved in designing, deploying, monitoring, and maintaining mission-critical infrastructure. Data centre managers should work with both hyperscale and OEM providers to tap into the power and flexibility of hybrid cloud and consumption-based utility models.

How to tackle the looming skills crisis

Research firm IDC expects that 30% of IT roles involving emerging technology skills will remain unfilled through 2022. A recent survey found that 94% of IT decision-makers are finding it somewhat difficult, difficult, or very difficult to hire DevOps professionals, cloud native developers, and multi-cloud operators. Disruptive technology trends have ensured that IT operations teams have to constantly upgrade their skills to remain relevant.

  • The popularity of cloud-native infrastructure demands a new set of skills across lifecycle automation and configuration, observability and analysis, and security and compliance to deliver reliable and scalable applications
  • The adoption of AIOps solutions requires IT practitioners who are familiar with advanced statistical techniques and can combine data-driven insights with human intuition to reduce application downtime and ensure faster recovery

Takeaway: CIOs will need to invest heavily in skills development programs to attract and retain employees. IT leaders will use a mix of internally run programs, hands-on learning, and external providers to counteract the skills gap in a competitive job market.

Conclusion

In a world where change is the only constant, IT operations will need to become increasingly proactive and dynamic to meet the needs of the business. Technology operations management will emerge as a renewed discipline, where innovation is only limited by imagination.


G Suite now offers enhanced security for high-risk users


Keumars Afifi-Sabet

1 Aug, 2019

Google has extended its advanced security programme to enterprise customers using its G Suite, Google Cloud Platform (GCP) and Cloud Identity products, giving IT administrators the ability to set stronger internal controls.

Organisations can enrol senior executives and those employees at high risk of cyber attacks into Google’s Advanced Protection Program (APP), which will bring their level of security up to the standards of Google’s own employees.

Within the next few days, IT administrators will be able to select the members of their organisation who they assess as needing stronger protections, and Google will automatically apply a set of stricter cyber security policies to their activities.

There are several changes to how those enrolled in the programme can access Google’s products, including enforced use of FIDO security keys, automatic blocking of non-trusted third-party apps, and enhanced scanning of incoming emails.

These changes arrive alongside the news that Titan security keys, Google’s own FIDO keys, are now available for purchase in Japan, Canada, France and the UK, and that machine learning will be used to improve security alerts for IT administrators.

The use of such FIDO keys will be mandatory for those enrolled in the advanced security programme, meaning access to critical Google apps may be disrupted for users without them. Third-party apps will also be automatically blocked for APP users unless explicitly whitelisted.

The use of machine learning, meanwhile, will be directed towards analysing activity within the G Suite to detect unusual behaviour. In practical terms, IT administrators signed up to the service will receive a stream of anomalous activity alerts on a security dashboard.

This raft of added protections will bolster security across organisations signed up to Google’s enterprise products, both by demanding more of high-risk employees and by adding more robust provisions.

However, the majority of these practices can be seen as essential cyber security hygiene regardless, raising the question of why they haven’t been offered to customers until now. It’s especially pertinent given that Google employees have adhered to the APP regime since it launched two years ago.

Google, at the time of launch, restricted the APP to those at elevated risk of attack who are also “willing to trade off a bit of convenience for more protection”.

There is, however, nothing stopping IT administrators from enrolling their entire organisation in the programme should they deem it the best defence against cyber threats.