All posts by lorimacvittie

Moving from DevOps to modern ops: Why there is no room for silos when it comes to cloud security

It started with DevOps. Then there was NetOps. Now SecOps. Or is it DevSecOps? Or maybe SecDevOps?

Whatever you decide to call it, too often the end result is little more than the same old silos with shiny new names. We've become so focused on "what do we call these folks" that we sometimes forget "what is it we're trying to accomplish".

Shakespeare said that a rose would smell as sweet by any other name. Let's apply that today to the number of factions rising in the operations game. Changing your name does nothing if you don't change your core behaviours and practices.

Back when cloud first rose – pun intended – there were plenty of pundits who dismissed enterprise efforts to build private (on-premises) cloud because it didn't fit the precise definition they wanted to associate with cloud. They ignored that the outcome was the measure of success, not measuring up to someone else's pedantic definition. Those enterprises sought agility, efficiency, and speed by changing the way infrastructure was provisioned, configured, and managed. They changed behaviours and practices through the use of technology.

Today the terminology wars are focused on X-Ops and what we should call the latest arrival, security.

I know I've used the terms, and sometimes I use them all at the same time. But perhaps what we need is fewer distinctions. Perhaps I should just say you're either adopting "modern ops" in terms of behaviours and practices or you're remaining "traditional ops" and that's all there is to it.

Modern ops employ technology like cloud and automation to build pipelines that codify processes to speed delivery and deployment of applications.

And they do it by changing behaviours and practices. They are collaborative and communicative. They use technology to modernise and optimise decades-old processes that are impeding delivery and deployment. They work together, not in siloed X-Ops teams, to achieve their goal of faster, more frequent releases that deliver value to the business and delight consumers.

Focusing on what to call "security" as it gets on board with modern ops can be detrimental to the basic premise that delivery and deployment can only succeed at speed with a collaborative approach. Slapping a new label on a newly formed team just builds different silos under new names; it doesn't smash them and open up the lines of communication that are required to operate at speed and scale.

It also unintentionally gives permission to other, non-security ops to abdicate security responsibilities to the SecDevOps (or DevSecOps) team. Because it's in their name, right?

That's an increasingly bad idea given that application security is a full-stack concern and thus requires a full stack to implement the right protections. You need network security and transport security, and you definitely need application security. The attack surface for an app includes all seven layers and, increasingly, the stack comprising its operational environment. There is no room for silos when it comes to security.

The focus of IT as it moves through its digital transformation should be to modernise ops – from the technology to the teams that use it to innovate and deliver value to the business. Modern ops are not consumed by concern for titles; they are passionate about producing results. Modern ops work together, communicate freely, and collaborate across concerns to build out an efficient, adaptive delivery and deployment pipeline.

That will take network, security, infrastructure, storage, and development expertise working together.

In the network, we use labels to tag traffic and apply policies that control what devices can talk to which infrastructure and applications. In container clusters we use labels to isolate and restrict, to constrain and to disallow.
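For instance, in Kubernetes a NetworkPolicy uses label selectors to decide which pods may talk to which. The following is a minimal sketch using the official Kubernetes Python client; the namespace, names, and labels are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig with cluster access

    # Only pods labelled tier=frontend may reach pods labelled app=payments.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="frontend-to-payments", namespace="prod"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"tier": "frontend"})
                )]
            )],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(namespace="prod", body=policy)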

Labels in organisations can have the same effect.

So maybe it would be better if we just said you are either modern ops or traditional ops, and that some are in a transitional state between the two. Let's stop spending so many cycles on what to call each other before we miss the opportunity to create a collaborative environment in which to deliver and deploy apps faster, more frequently and, most of all, securely.

Practical cloud considerations: Security and the decryption conundrum

Compute in the cloud may be cheap, but it isn't free. Most of today's apps are delivered via secure HTTP. That means TLS or the increasingly frowned-upon SSL. It means cryptography, which has traditionally translated to performance problems.

Thanks to advances in technology, CPUs are now incredibly fast, and much client-side (and server-side) hardware natively integrates what was once specialised cryptographic hardware. This means that, on a per-connection basis, cryptographic speed is no longer the issue it once was.

But that doesn't mean cryptography is no longer a source of performance and operational expense.

Applications today are not composed of a single endpoint. There are multiple intermediaries and proxies through which a message must travel before that "single endpoint" is ever encountered: security and access control, load balancing, and routing endpoints. Each needs to inspect the message – in the clear – in order to execute its designated role in the complex dance that is the modern data path.

Here is where the argument that cryptography isn't as expensive starts to fall apart. On its own, a single endpoint introduces very little delay. However, when decryption is repeated at every endpoint in the data path, those individual delays add up to something more noticeable and, particularly in the case of public cloud, operationally expensive.

Cryptography is naturally a computationally expensive process. That means it takes a lot more CPU cycles to encrypt or decrypt a message than it does to execute business logic. In the cloud, CPU cycles are analogous to money being spent. In general, it's an accepted cost because the point is to shift capital costs to operational expense.

But the costs start to add up if you are decrypting and encrypting a message several times. You are effectively paying for the same cryptographic process multiple times. What might cost only a penny when executed once suddenly costs five pennies when executed five times. Do the math for the hundreds of thousands of transactions over the course of a day (or an hour) and the resulting costs are staggering.
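A back-of-the-envelope calculation makes the point. The per-operation cost below is an assumed, purely illustrative figure, not a benchmark:

    # Assumed illustrative cost per decrypt operation, in dollars.
    cost_per_decrypt = 0.00001
    requests_per_day = 500_000

    for hops in (1, 5):
        daily = cost_per_decrypt * hops * requests_per_day
        print(f"{hops} decrypt(s) per request: ${daily:,.2f}/day, ${daily * 365:,.2f}/year")

    # 1 decrypt(s) per request: $5.00/day, $1,825.00/year
    # 5 decrypt(s) per request: $25.00/day, $9,125.00/year

The same request, decrypted at five hops instead of one, quietly quintuples the crypto bill.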

Also remember that each CPU cycle consumed by cryptographic processing is a CPU cycle not spent on business logic. This means scaling out sooner than you might want to, which incurs even more costs as each additional instance is launched to handle the load.

Suffice it to say that "SSL everywhere" should not result in "decrypt everywhere" architectures in the cloud.

Decrypt once

To reduce costs and maximise the efficacy of the CPUs you're paying for, it is worth the time to design your cloud-based architecture on a "decrypt once" principle: minimise the number of endpoints in the data path that must decrypt and re-encrypt messages in transit.

Naturally, this requires forethought and careful consideration of the different application services you're using to secure and scale applications. If you aren't subject to regulations or requirements that demand end-to-end encryption, architect your data path so that messages are decrypted as early as possible, avoiding additional cycles wasted on decryption later. If you are required to maintain end-to-end encryption, combining services wherever possible will net you the most efficient use of compute resources.

Combining services – e.g. load balancing with a web application firewall – on a single platform reduces the number of times you need to decrypt messages in transit. It also has the added advantage of reducing the number of connections and time on the network, which translates into performance benefits for users and consumers. But the real savings are in CPU cycles that aren't spent on repeated decryption and re-encryption.
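To make the principle concrete, here is a toy sketch – not a production design – of a single Python process that terminates TLS exactly once at the edge, applies a trivial WAF-style check, and round-robins plaintext requests to backends. The certificate files, backend addresses, and blocklist are all assumptions for the sake of illustration:

    import itertools
    import socket
    import ssl

    BACKENDS = itertools.cycle([("10.0.0.11", 8080), ("10.0.0.12", 8080)])  # hypothetical app instances
    BLOCKED = (b"../", b"<script")  # toy inspection rules standing in for a real WAF

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")  # assumed certificate and key files

    with ctx.wrap_socket(socket.create_server(("0.0.0.0", 443)), server_side=True) as tls:
        while True:
            conn, _ = tls.accept()      # the one and only decrypt point
            request = conn.recv(65536)  # simplification: one read per request
            if any(sig in request for sig in BLOCKED):
                conn.sendall(b"HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\n\r\n")
            else:
                backend = socket.create_connection(next(BACKENDS))  # round-robin load balancing
                backend.sendall(request)  # plaintext inside the trusted zone; no re-encryption
                conn.sendall(backend.recv(65536))
                backend.close()
            conn.close()

Because inspection and load balancing live in the same process, the message is decrypted once and every service works on the same cleartext copy.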

It may seem a waste of time to consider the impact of encryption and decryption for an app that's lightly used today; the pennies saved certainly won't cover the cost of the effort. But as apps grow and scale and live on over time, those pennies are going to add up to amounts that are impactful. Like pennies, microseconds add up. By considering the impact of cryptography across the entire data path, you can net benefits in the long run for both users and the business.

Why standardisation is good for NetOps: Innovation instead of impediment

Standardisation is sometimes viewed as an assault on innovation. Being forced to abandon a polyglot buffet and adopt a more limited menu will always sound stifling. That may be because standardisation is often associated with regulatory compliance standards that carry official-sounding names like ISO 8076.905E and come with checklists, auditors and oversight committees.

The reality is that there are very few standards – in fact none that I can think of – governing enterprises' choice of languages, protocols and frameworks.

Enterprise standardisation is more driven by practical considerations such as talent availability, sustainability, and total cost of ownership over the (often considerable) lifetime of software and systems.

Studies have shown the average software lifespan over the past twenty years to be around six to eight years. Interestingly, longevity tends to increase for larger programs, as measured by lines of code (LOC): systems and software with over a million LOC appear to have lifespans of more than a decade, lasting 12 to 14 years. Before you dismiss this as irrelevant, realise that network automation systems are software and systems too, and they need the same care and maintenance as the software coming out of your development organisation. If you're going to treat your production pipeline as code, then you've got to accept that a significant percentage of that automated pipeline is going to be code.

Over the course of that software or system lifespan, it’s a certain bet that multiple sets of operators and developers will be responsible for updating, maintaining, operating, and deploying changes to that software or system. And this is exactly what gets at the heart of the push for standardisation – especially for NetOps taking the plunge into developing and maintaining systems to automate and orchestrate network deployment and operation, as well as application service infrastructure. 

Silos are for farms

If you or your team chooses Python while another chooses PowerShell, you are effectively building an operational silo that prevents skills sharing. This is a problem: the number one challenge facing NetOps, as reported in F5 and Red Hat's State of Network Automation 2018 report, was a lack of skills (cited by 49% of surveyed NetOps). It would therefore seem foolish to create additional friction by introducing multiple languages and/or toolsets.

It is similarly a bad idea to choose languages and toolsets for which there is no local source of talent. If other organisations and nearby universities are teaching Python and you choose to go with PowerShell, you're going to have a hard time finding staff with the skills required for that system.

It is rare for an organisation to standardise on a single language; most standardise on just a few. NetOps should take their cues from development and DevOps standards, as this will expand the talent pool even further.

Time to value is valuable

Many NetOps organisations already find themselves behind the curve when it comes to satisfying DevOps and business demands for continuous delivery. The unfortunate reality of NetOps and network automation is that it's a heterogeneous ecosystem with very little pre-packaged integration available. In the State of Network Automation survey, this lack of integration was the second most cited challenge to automation, with 47% of NetOps agreeing.

Standardising on toolsets, and on infrastructure where possible (like application services), provides an opportunity to reduce the burden of integration across the entire organisation. What one team develops, others can leverage to reduce the time to value of other automation projects. Reuse is a significant factor in improving time to value.

We see reuse in developer proclivity toward open source and the fact that 80-90% of applications today are composed of third-party/open source components. This accelerates development and reduces time to value. The same principle can be applied to network automation by leveraging existing integrations. Establish a culture of sharing and reuse across operational domains to reap the benefits of standardisation.
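A sketch of what that reuse looks like in practice: one team wraps a device integration once and every other team imports it. The endpoint and payload below are hypothetical, not a real product API:

    import requests

    def deploy_virtual_server(host: str, token: str, name: str, members: list) -> dict:
        """Create a virtual server via a hypothetical device REST API.

        Written and tested once, then shared; other teams import this
        instead of re-implementing (and re-debugging) the same call.
        """
        resp = requests.post(
            f"https://{host}/api/v1/virtual-servers",  # hypothetical endpoint
            headers={"Authorization": f"Bearer {token}"},
            json={"name": name, "poolMembers": members},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()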

Spurring innovation

Rather than impeding innovation, as some initially believe, standardisation can be a catalyst for innovation. By standardising and sharing software and systems across operational domains, you have a more robust set of minds and experiences able to collaborate on new requirements and systems. You're building a pool of talent within your organisation that can provide input, ideation, and implementation of new features and functionality – all without the sometimes-lengthy onboarding cycle.

Standardisation also speeds implementation. This is largely thanks to familiarity. The more you work with the same language and libraries and toolsets, the more capable you become. That means increased productivity that leads to more time considering how to differentiate and add value with new capabilities.

Standardisation is an opportunity

Standardisation can initially feel stifling, particularly if your pet language or toolset is cut from the team. Nevertheless, embracing standardisation as an opportunity to build out a strong foundation for automation systems and software can benefit the business. It also affords NetOps new opportunities to add value across the entire continuous deployment toolchain.

Even so, it is important not to standardise for the sake of it. Take into consideration existing skill sets and the availability of local talent. Survey universities and other businesses to understand the current state of automation and operations’ skill sets and talent to make sure you aren’t the only organisation adopting a given language or toolset.

For the best long-term results, don’t treat standardisation like security and leave it until after you’ve already completed an implementation. Embrace standardisation early in your automation endeavours to avoid being hit with operational and architectural debt that will weigh you down and make it difficult to standardise later.
