All posts by rebeccafitzhugh

How DevOps and a hybrid model can make the most out of legacy applications

Do you have an application marked with cryptic warning signs and a wealth of cobwebs that is running on legacy hardware hidden away in the back corner of your data centre? If you’re in enterprise IT, chances are high that you do. These old platforms are often considered a bane to IT. More importantly, legacy applications can present a real headache when attempting to uplift the people, processes and tooling needed to embrace a hybrid cloud model.

Self-service, orchestrated workflows and delivery pipelines are just a few signature attributes of a hybrid cloud. Imagine trying to apply these ideas to legacy technology, where the interface requires an archaic console and software that looks like it was written for Windows 3.1. It’s not fun, and it often derails any effort to deliver services living in an on-premises software-defined data centre (SDDC) or public cloud environment. It also splits up a team trying to work within a DevOps cultural model, because concepts such as infrastructure as code, stateless configuration and collaborative code check-ins crumble.

For most folks I speak with, legacy applications are left with a small skeleton crew that keeps the lights on and the old hardware humming.

Hybridity is a reality for most infrastructures, rather than being purely in the public cloud or purely on-premises. This is where the importance of DevOps and an API-first mentality is multiplied. An API-first approach isn’t necessarily about having a single code repository or a single application that does “all the things.” It is about leveraging APIs across multiple repositories and multiple applications to weave together a single, programmatic software fabric in which everything can communicate and integrate, regardless of whether a given component is legacy or cloud-native.
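One way to picture that software fabric is a common interface that both legacy and cloud-native components implement, so callers never care which world a record comes from. A minimal sketch, with entirely hypothetical backend names and in-memory stubs standing in for a real mainframe export and a real REST service:

```python
from abc import ABC, abstractmethod


class InventoryBackend(ABC):
    """Common contract that legacy and cloud-native backends both honour."""

    @abstractmethod
    def get_record(self, record_id: str) -> dict:
        ...


class LegacyMainframeBackend(InventoryBackend):
    """Wraps a legacy system; the fixed-width export is simulated in memory."""

    def get_record(self, record_id: str) -> dict:
        # In reality this might parse a flat-file export or screen-scrape a console.
        raw = {"1001": "WIDGET    0042"}[record_id]
        return {"id": record_id, "name": raw[:10].strip(), "qty": int(raw[10:])}


class CloudNativeBackend(InventoryBackend):
    """Would call a modern REST endpoint; stubbed with static data here."""

    def get_record(self, record_id: str) -> dict:
        return {"id": record_id, "name": "WIDGET", "qty": 42}


def fetch(backend: InventoryBackend, record_id: str) -> dict:
    # Callers see one shape of data regardless of the backend's vintage.
    return backend.get_record(record_id)
```

The point of the sketch is the seam: tooling built against `InventoryBackend` works unchanged as workloads migrate from the legacy implementation to the cloud-native one.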

The way to end the chaos induced by hybridity’s complexity is a progressive, policy-driven state, properly implemented.

Here are four tips on how to integrate your legacy crew with your DevOps team to create a high-functioning hybrid cloud model:

Find tools that play well with others

Legacy applications do not have to be the anchor holding you back from adopting a DevOps strategy. Find solutions that treat legacy, modern and cloud-native applications as first-class citizens. Legacy applications tend to run on old operating systems, some of which are no longer supported, and over time the configuration becomes increasingly fragile, requiring manual care and feeding. Adopting an automation strategy can reduce the risk associated with build, testing, deployment, remediation and monitoring.
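The remediation and monitoring half of that automation strategy often starts as simple drift detection: declare the state a fragile host should be in, compare it with what is observed, and emit corrective actions instead of hand-fixing. A minimal sketch, with hypothetical setting names and hard-coded observed values standing in for a real inventory query:

```python
def detect_drift(desired: dict, actual: dict) -> list:
    """Return remediation actions for settings that drifted from the desired state."""
    actions = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            actions.append(f"set {key}={want} (was {have})")
    return actions


# Hypothetical desired state for a legacy host, and what monitoring observed.
desired_state = {"service_running": True, "patch_level": "2018-09", "backup_enabled": True}
observed_state = {"service_running": True, "patch_level": "2016-03", "backup_enabled": False}
```

In practice the action list would feed an orchestration tool rather than being applied by hand, which is what makes the care and feeding repeatable instead of manual.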

Strive for simplicity – don’t create silos

Make sure the workflows that deliver services apply to the vast majority of use cases. Historically there has been little to no incentive to build mechanisms that are shared across the enterprise. Systems thinking reminds us that dependencies exist: improving only one application in a cluster yields no benefit; the entire cluster must be addressed for successful progress. The use of RESTful APIs is turning the tables, allowing a single set of tooling to span many applications, platforms and services. Share data across the organisation; leverage APIs to streamline how the team interacts with legacy workloads by extracting data from the legacy architecture.
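Extracting data from a legacy architecture frequently means translating an old export format into something API-friendly. A minimal sketch, assuming a hypothetical pipe-delimited order export; the field layout is invented for illustration:

```python
import json


def legacy_export_to_json(raw_export: str) -> str:
    """Convert a pipe-delimited legacy export into JSON that modern tooling can consume."""
    records = []
    for line in raw_export.strip().splitlines():
        fields = [f.strip() for f in line.split("|")]
        records.append({"order_id": fields[0], "customer": fields[1], "total": float(fields[2])})
    return json.dumps(records)
```

Served from behind a small REST endpoint, a translation layer like this lets the shared tooling treat the legacy workload like any other service, without touching the legacy system itself.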

Embrace a one-team mentality

Don’t form multiple teams or tiers of teams. If the decision is made to adopt DevOps, it must be embraced by the entire organisation. As the old adage goes: “There are no legacy systems, just legacy thinking.” DevOps isn’t merely about developers and operations folks collaborating; it’s about two separate silos becoming one team. Focus on improving communication and set aside time for learning.

Avoid infrastructure-specific solutions

Abstract away storage and compute; work instead at the application layer, on how to deliver services for those applications. Legacy applications are often tightly coupled to the underlying hardware, making it challenging and risky to manage any application component individually. This means maintenance and upgrades are incredibly time-consuming, difficult and expensive. The adoption of infrastructure as code, whether on-premises or in the cloud, gives teams the permissions and tooling to provision infrastructure on demand. As you can imagine, not every legacy application can be easily migrated to this type of infrastructure, but many can.
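The core idea behind infrastructure as code is declarative: you describe the infrastructure you want, and tooling computes and applies the difference, idempotently. A minimal sketch of that reconcile loop, with hypothetical VM names and a plan structure invented for illustration:

```python
def reconcile(desired: dict, current: dict) -> dict:
    """Plan the changes needed to converge running infrastructure on the declared state.

    Running reconcile repeatedly against a converged state yields an empty plan,
    which is what makes the operation safe to automate.
    """
    plan = {"create": [], "update": [], "unchanged": []}
    for name, spec in desired.items():
        if name not in current:
            plan["create"].append(name)
        elif current[name] != spec:
            plan["update"].append(name)
        else:
            plan["unchanged"].append(name)
    return plan


# Hypothetical declared infrastructure versus what is currently running.
declared = {"web-vm": {"cpu": 2, "ram_gb": 8}, "db-vm": {"cpu": 4, "ram_gb": 32}}
running = {"db-vm": {"cpu": 2, "ram_gb": 32}}
```

Real tools such as Terraform or Ansible work on the same plan-then-apply principle; the value for legacy estates is that the declaration lives in version control, so provisioning stops depending on one person’s memory of the old hardware.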

Using automation increases the organisation’s agility by reducing human interaction and orchestrating dependencies across the organisation. Self-service consumption of infrastructure further eliminates silos. DevOps is not incompatible with legacy applications, but it requires an organisation to evaluate what this implementation actually means and how to embark upon the transformation.

Never fear: You can still depend on a legacy application and embrace DevOps. It requires additional design considerations, elbow grease and a little bit of creativity. The fundamental element is to apply these principles in a consistent and efficient manner in the context of all applications, both legacy and modern.

Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.