
Practical cloud considerations: Security and the decryption conundrum

Compute in the cloud may be cheap, but it isn't free. Most of today's apps are delivered via secure HTTP, which means TLS or the increasingly frowned-upon SSL. It means cryptography, which has traditionally translated into performance problems.

Thanks to advances in technology, CPUs are now incredibly fast, and much client (and server-side) hardware natively integrates what was once specialised cryptographic hardware. This means that, on a per-connection basis, cryptographic performance is not the issue it once was.

But that doesn't mean cryptography is no longer a source of performance and operational expense.

Applications today do not consist of a single endpoint. There are multiple intermediaries and proxies through which a message must travel before that "single endpoint" is ever reached: security and access control, load balancing and routing services. Each needs to inspect the message – in the clear – in order to execute its designated role in the complex dance that is the modern data path.

Here is where the argument that cryptography isn't expensive starts to fall apart. On its own, a single endpoint introduces very little delay. But when decryption is repeated at every endpoint in the data path, those individual delays add up to something more noticeable and, particularly in the public cloud, operationally expensive.

Cryptography is naturally a computationally expensive process. That means it takes a lot more CPU cycles to encrypt or decrypt a message than it does to execute business logic. In the cloud, CPU cycles are analogous to money being spent. In general, it's an accepted cost because the point is to shift capital costs to operational expense.

But the costs start to add up if you are decrypting and encrypting a message several times: you are effectively paying for the same cryptographic process over and over. What costs only a penny when executed once costs five pennies when executed five times. Do the math for the hundreds of thousands of transactions over the course of a day (or an hour) and the resulting costs are staggering.
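A quick back-of-the-envelope calculation makes the multiplication concrete. The figures below – a penny of compute per decryption, five decryption points, half a million transactions a day – are purely illustrative assumptions, sketched here in Python:

    # Back-of-the-envelope sketch; every figure is an illustrative assumption.
    cost_per_decrypt = 0.01        # compute cost (in dollars) of one decryption -- the "penny"
    decrypt_points = 5             # endpoints in the data path that decrypt the message
    transactions_per_day = 500_000

    decrypt_once = cost_per_decrypt * transactions_per_day
    decrypt_everywhere = cost_per_decrypt * decrypt_points * transactions_per_day

    print(f"Decrypt once:       ${decrypt_once:,.2f} per day")
    print(f"Decrypt everywhere: ${decrypt_everywhere:,.2f} per day")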

Also remember that each CPU cycle consumed by cryptographic processing is a CPU cycle not spent on business logic. This means scaling out sooner than you might want to, which incurs even more costs as each additional instance is launched to handle the load.

Suffice it to say that "SSL everywhere" should not result in "decrypt everywhere" architectures in the cloud.

Decrypt once

To reduce costs and maximise the efficacy of the CPUs you're paying for, it is worth taking the time to design your cloud-based architecture on a "decrypt once" principle: minimise the number of endpoints in the data path that must decrypt and re-encrypt messages in transit.

Naturally, this requires forethought and careful consideration of the different application services you're using to secure and scale applications. If you aren't subject to regulations or requirements that demand end-to-end encryption, architect your data path so that messages are decrypted as early as possible, avoiding additional cycles wasted on decryption later. If you are required to maintain end-to-end encryption, combining services wherever possible will net you the most efficient use of compute resources.

Combining services – e.g. load balancing with a web application firewall – on a single platform reduces the number of times you need to decrypt messages in transit. It also has the added advantage of reducing the number of connections and time spent on the network, which translates into performance benefits for users and consumers. But the real savings are in the CPU cycles that aren't spent on repeated decryption and re-encryption.
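The principle can be illustrated with a simple model of the data path. The hop names and counts below are assumptions made for the sake of the sketch; the point is only to show how consolidating services changes the number of cryptographic operations each request incurs:

    # Illustrative model only; hop names and counts are assumptions.
    # Each hop that inspects a message must decrypt it; if end-to-end encryption
    # is required, every hop except the last also re-encrypts before forwarding.

    def crypto_ops(hops, end_to_end=True):
        decrypts = len(hops)
        re_encrypts = len(hops) - 1 if end_to_end else 0
        return decrypts + re_encrypts

    separate_services = ["edge proxy", "WAF", "load balancer", "app server"]
    combined_platform = ["ADC (LB + WAF, TLS terminated here)", "app server"]

    print("Decrypt everywhere:", crypto_ops(separate_services), "crypto operations per request")
    print("Decrypt once:      ", crypto_ops(combined_platform), "crypto operations per request")

Even in this toy model, collapsing three separate decryption points into one platform cuts the per-request cryptographic work by more than half – and that saving is multiplied by every transaction.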

It may seem a waste of time to consider the impact of encryption and decryption for an app that's lightly used today; the pennies certainly don't cover the cost of the effort. But as apps grow and scale and live on over time, those pennies add up to amounts that matter. Like pennies, microseconds add up. By considering the impact of cryptography across the entire data path, you can net long-term benefits for both users and the business.


Why standardisation is good for NetOps: Innovation instead of impediment

Standardisation is sometimes viewed as an assault on innovation. Being forced to abandon a polyglot buffet for a more limited menu will always sound stifling. That may be because standardisation is often associated with regulatory compliance standards that have official-sounding names like ISO 8076.905E and come with checklists, auditors and oversight committees.

The reality is that there are very few standards – in fact none that I can think of – governing enterprises' choice of languages, protocols and frameworks.

Enterprise standardisation is driven more by practical considerations such as talent availability, sustainability, and total cost of ownership over the (often considerable) lifetime of software and systems.

Studies have shown the average software lifespan over the past twenty years is around six to eight years. Interestingly, longevity tends to increase for larger programs, as measured by lines of code (LOC). Systems and software with over a million LOC appear to have lifespans over a decade, lasting 12 to 14 years. While you may dismiss this as irrelevant, it is important to realise that at the end of the day, network automation systems are software and systems. They need the same care and maintenance as software coming out of your development organisation. If you're going to treat your production pipeline as code, then you've got to accept that a significant percentage of that automated pipeline is going to be code.

Over the course of that software or system lifespan, it’s a certain bet that multiple sets of operators and developers will be responsible for updating, maintaining, operating, and deploying changes to that software or system. And this is exactly what gets at the heart of the push for standardisation – especially for NetOps taking the plunge into developing and maintaining systems to automate and orchestrate network deployment and operation, as well as application service infrastructure. 

Silos are for farms

If you or your team chooses Python while another chooses PowerShell, you are effectively building an operational silo that prevents skills sharing. This is a problem: the number one challenge facing NetOps, as reported in F5 and Red Hat's State of Network Automation 2018 report, was a lack of skills (cited by 49% of surveyed NetOps). It would therefore seem foolish to create additional friction by introducing multiple languages and/or toolsets.

It is similarly a bad idea to choose languages and toolsets for which there is no local source of talent. If other organisations and nearby universities are teaching Python and you choose to go with PowerShell, you're going to have a hard time finding staff with the skills required for that system.

It is rare that an organisation standardises on a single language. However, they do tend to standardise on just a few. NetOps should take their cues from development and DevOps standards as this will expand the talent pool even further.

Time to value is valuable

Many NetOps organisations already find themselves behind the curve when it comes to satisfying DevOps and business demands for continuous delivery. The unfortunate reality of NetOps and network automation is that it's a heterogeneous ecosystem with very little pre-packaged integration available. In the State of Network Automation survey, this lack of integration was the second most common challenge to automation, cited by 47% of NetOps.

Standardising on toolsets, and on infrastructure where possible (like application services), provides an opportunity to reduce the burden of integration across the entire organisation. What one team develops, others can leverage to reduce the time to value of other automation projects. Reuse is a significant factor in improving time to value.

We see reuse in developer proclivity toward open source and the fact that 80-90% of applications today are composed of third-party/open source components. This accelerates development and reduces time to value. The same principle can be applied to network automation by leveraging existing integrations. Establish a culture of sharing and reuse across operational domains to reap the benefits of standardisation.
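As a concrete – and purely hypothetical – illustration of that kind of reuse: a small helper like the one below, written once against a standardised toolset, can be shared by every team that needs to add a server to a load-balancing pool, rather than each team building its own integration. The endpoint, payload and function name are assumptions for illustration, not any particular vendor's API:

    # Hypothetical shared NetOps integration; the REST endpoint and payload are
    # assumptions for illustration, not a real product API.
    import requests

    def add_pool_member(adc_host: str, pool: str, member: str, token: str) -> None:
        # Add a server to a load-balancing pool on an ADC via its REST API.
        response = requests.post(
            f"https://{adc_host}/api/pools/{pool}/members",
            json={"address": member},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        response.raise_for_status()

Once a helper like this lives in a shared repository, the next automation project starts from a working integration rather than from scratch – which is exactly where the time-to-value gains come from.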

Spurring innovation

Rather than impeding innovation, as some initially believe, standardisation can be a catalyst for innovation. By standardising and sharing software and systems across operational domains, you have a more robust set of minds and experiences able to collaborate on new requirements and systems. You're building a pool of talent within your organisation that can provide input, ideation, and implementation of new features and functionality – all without the sometimes-lengthy onboarding cycle.

Standardisation also speeds implementation. This is largely thanks to familiarity. The more you work with the same language and libraries and toolsets, the more capable you become. That means increased productivity that leads to more time considering how to differentiate and add value with new capabilities.

Standardisation is an opportunity

Standardisation can initially feel stifling, particularly if your pet language or toolset is cut from the team. Nevertheless, embracing standardisation as an opportunity to build a strong foundation for automation systems and software can benefit the business. It also affords NetOps new opportunities to add value across the entire continuous deployment toolchain.

Even so, it is important not to standardise for the sake of it. Take into consideration existing skill sets and the availability of local talent. Survey universities and other businesses to understand the current state of automation and operations’ skill sets and talent to make sure you aren’t the only organisation adopting a given language or toolset.

For the best long-term results, don’t treat standardisation like security and leave it until after you’ve already completed an implementation. Embrace standardisation early in your automation endeavours to avoid being hit with operational and architectural debt that will weigh you down and make it difficult to standardise later.
