
Securing Visibility into Open Source Code

The Internet runs on open source code. Linux, Apache Tomcat, OpenSSL, MySQL, Drupal and WordPress are built on open source. Everyone, every day, uses applications that are either open source or include open source code; commercial applications typically contain only 65 per cent custom code. Development teams can easily use 100 or more open source libraries, frameworks, tools and code snippets when building an application.

The widespread use of open source code to reduce development times and costs makes application security more challenging. That’s because the bulk of the code contained in any given application is often not written by the team that develops or maintains it. For example, the 10 million lines of code incorporated in the GM Volt’s control systems include open source components. Car manufacturers like GM are increasingly taking an open source approach because it gives them broader control of their software platforms and the ability to tailor features to suit their customers.

Whether for the Internet, the automotive industry, or for any software package, the need for secure open source code has never been greater, but CISOs and the teams they manage are losing visibility into the use of open source during the software development process.

Using open source code is not a problem in itself, but not knowing what open source is being used is dangerous, particularly when many components and libraries contain security flaws. The majority of companies exercise little control over the external code used within their software projects. Even those that do have some form of secure software development lifecycle tend to only apply it to the code they write themselves – 67 per cent of companies do not monitor their open source code for security vulnerabilities.

The Path to Better Code

Development frameworks and newer programming languages make it much easier for developers to avoid introducing common security vulnerabilities such as cross-site scripting and SQL injection. But developers still need to understand the different types of data an application handles and how to properly protect that data. For example, session IDs are just as sensitive as passwords, but are often not given the same level of attention. Access control is notoriously tricky to implement well, and most developers would benefit from additional training to avoid common mistakes.
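As an illustrative sketch of the SQL injection point (using Python’s built-in sqlite3 module and a hypothetical users table, neither of which comes from the article), the difference between concatenating user input into a query and binding it as a parameter looks like this:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is concatenated straight into the SQL text.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver passes the value separately from the SQL text,
    # so it can never be interpreted as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users (name, email) VALUES ('alice', 'alice@example.com')")
    payload = "' OR '1'='1"                  # classic injection payload
    print(find_user_unsafe(conn, payload))   # leaks every row
    print(find_user_safe(conn, payload))     # [] – payload is treated as plain data
```

The same principle applies whatever the framework: let the data access layer bind values rather than building query strings by hand.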

Mike Pittenger, VP of Product Strategy at Black Duck Software

Developers need to fully understand how the latest libraries and components work before using them, so that these elements are integrated and used correctly within their projects. One reason people feel safe using the OpenSSL library and take the quality of its code for granted is its FIPS 140-2 certificate. But in the case of the Heartbleed vulnerability, the flaw lay in OpenSSL’s implementation of the TLS Heartbeat extension, which sits outside the FIPS-validated boundary. Development teams may have read the documentation covering the secure use of OpenSSL functions and routines, but how many realised that the entire codebase was not certified?

Automated testing tools will certainly improve the overall quality of in-house developed code. But CISOs must also ensure the quality of an application’s code sourced from elsewhere, including proper control over the use of open source code.

Maintaining an inventory of third-party code through a spreadsheet simply doesn’t work, particularly with a large, distributed team. For example, the spreadsheet method can’t detect whether a developer has pulled in an old version of an approved component, or added new, unapproved ones. It doesn’t ensure that the relevant security mailing lists are monitored or that someone is checking for new releases, updates, and fixes. Worst of all, it makes it impossible for anyone to get a full sense of an application’s true level of exposure.
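A minimal sketch of what an automated alternative might look like, assuming a pip-style manifest and a hypothetical approved-components policy maintained by the security team (a real tool would also track transitive dependencies and monitor security advisories):

```python
"""Audit a dependency manifest against an approved-components policy.

Assumes a simple pip-style manifest (name==version per line) and a hypothetical
APPROVED policy; both are illustrative, not a real product's format.
"""

APPROVED = {
    # component name -> versions the security team has reviewed and approved
    "requests": {"2.31.0", "2.32.3"},
    "flask": {"3.0.3"},
}

def parse_manifest(path):
    deps = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            deps[name.lower()] = version
    return deps

def audit(deps):
    findings = []
    for name, version in deps.items():
        if name not in APPROVED:
            findings.append(f"{name}=={version}: component has not been approved")
        elif version not in APPROVED[name]:
            findings.append(
                f"{name}=={version}: not in the approved set {sorted(APPROVED[name])}"
            )
    return findings

if __name__ == "__main__":
    # 'requirements.txt' is a placeholder path for whatever manifest the project uses.
    for finding in audit(parse_manifest("requirements.txt")):
        print(finding)
```

Even a check this small catches the two failure modes above (an old version of an approved component, or a component nobody approved) every time the build runs, rather than whenever someone remembers to update the spreadsheet.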

Know Your Code

Developing secure software means knowing where the code within an application comes from, that it has been approved, and that the latest updates and fixes have been applied, not just before the application is released, but throughout its supported life.

While using open source code makes business sense for efficiency and cost reasons, open source can undermine security efforts if it isn’t well managed. Given the complexity of today’s applications, the management of the software development lifecycle needs to be automated wherever possible to allow developers to remain agile enough to keep pace, while reducing the introduction and occurrence of security vulnerabilities.

For agile development teams to mitigate security risks from open source software, they must have visibility into the open source components they use, select components without known vulnerabilities, and continually monitor those components throughout the application lifecycle.
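As a sketch of the monitoring step, a build pipeline could query a public vulnerability database for each pinned component. The example below uses the OSV.dev query endpoint; the URL, payload shape and response fields follow OSV’s published API and should be treated as assumptions that may change over time:

```python
"""Query the OSV.dev vulnerability database for a single component version.

The endpoint and JSON shapes reflect OSV's public API at the time of writing;
treat them as assumptions and check the current documentation before relying on this.
"""

import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name, version, ecosystem="PyPI"):
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    # A deliberately old component version, chosen only to illustrate the call.
    for vuln in known_vulnerabilities("pyopenssl", "0.13"):
        print(vuln.get("id"), vuln.get("summary", ""))
```

Run on every build, or on a schedule, a check like this surfaces vulnerabilities disclosed after a component was approved, which is exactly the gap a one-off review leaves open.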

Written by Mike Pittenger, VP of Product Strategy at Black Duck Software.

Why visibility and control are critical for container security

The steady flow of disclosed vulnerabilities in open source components, such as Heartbleed, Shellshock and POODLE, is pushing organisations to focus on making the software they build more secure. As organisations increasingly turn to containers to improve application delivery and agility, the security ramifications of the containers and their contents are coming under increased scrutiny.

An overview of today’s container security initiatives 

Container providers such as Docker and Red Hat are moving aggressively to reassure the marketplace about container security. Their focus has been on cryptographically signing images so that users can verify the code and software versions running in their infrastructure, protecting them from malicious backdoors included in shared application images and other potential security threats.

However, this approach is coming under scrutiny because it covers only one aspect of container security: it does not address whether the software stacks and application portfolios inside containers are free of known, exploitable versions of open source code.

Without open source hygiene, Docker Content Trust will only ever ensure that Docker images contain the exact same bits that developers originally put there, including any vulnerabilities present in the open source components. It is therefore only a partial solution.

A more holistic approach to container security

Knowing that a container is free of known vulnerabilities at the time of initial build and deployment is necessary, but far from sufficient. New vulnerabilities are constantly being discovered, and they often affect older versions of open source components. What’s needed is an informed approach to open source: choosing components carefully in the first place, and staying vigilant about newly disclosed vulnerabilities in the components already in use.

Moreover, the security risk posed by a container also depends on the sensitivity of the data accessed through it, as well as where the container is deployed. For example, whether the container sits on an internal network behind a firewall or is internet-facing will affect the level of risk.

In this context, an internet-facing container is exposed to a range of threats, including cross-site scripting, SQL injection and denial-of-service attacks, that containers deployed on an internal network behind a firewall wouldn’t face.

For this reason, having visibility into the code inside containers is a critical element of container security, even aside from the issue of security of the containers themselves.

It’s critical to develop robust processes for determining: what open source software resides in or is deployed along with an application; where this open source software is located in build trees and system architectures; whether the code exhibits security vulnerabilities; and whether an accurate open source risk profile exists.
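A minimal sketch of the first of those steps, locating where open source enters a build tree by looking for common dependency manifests (the file names below are conventional examples, not an exhaustive list):

```python
"""Walk a build tree and report every dependency manifest found.

MANIFEST_NAMES lists common conventions only; a real inventory would also
inspect binaries, vendored source and container image layers.
"""

import os

MANIFEST_NAMES = {
    "requirements.txt",  # Python
    "package.json",      # JavaScript / Node.js
    "pom.xml",           # Java / Maven
    "go.mod",            # Go
    "Gemfile",           # Ruby
}

def find_manifests(root):
    """Yield (directory, filename) pairs for every dependency manifest under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            if filename in MANIFEST_NAMES:
                yield dirpath, filename

if __name__ == "__main__":
    for directory, manifest in find_manifests("."):
        print(f"{manifest} found in {directory}")
```

The output of a walk like this is the starting point for the remaining questions: which versions those manifests pin, whether they carry known vulnerabilities, and how the risk profile changes as new advisories are published.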

Will security concerns slow container adoption? – The industry analysts’ perspective

Enterprise organisations today are embracing containers because of their proven benefits: improved application scalability, fewer deployment errors, faster time to market and simplified application management. However, just as organisations have moved over the years from viewing open source as a curiosity to understanding its business necessity, containers seem to have reached a similar tipping point. The question now seems to be shifting towards whether security concerns about containers will inhibit further adoption. Industry analysts differ in their assessment of this.

By drawing a parallel to the rapid adoption of virtualisation technologies even before security requirements were established, Dave Bartoletti, Principal Analyst at Forrester Research, believes security concerns won’t significantly slow container adoption. “With virtualization, people deployed anyway, even when security and compliance hadn’t caught up yet, and I think we’ll see a lot of the same with Docker,” according to Bartoletti.

Meanwhile, Adrian Sanabria, Senior Security Analyst at 451 Research, believes enterprises will give containers a wide berth until security standards are identified and established. “The reality is that security is still a barrier today, and some companies won’t go near containers until there are certain standards in place”, he explains.

To overcome these concerns, organisations are best served by taking advantage of the automated tools available to gain control over all the elements of their software infrastructure, including containers.

Ultimately, the presence of vulnerabilities in all types of software is inevitable, and open source is no exception. The detection and remediation of those vulnerabilities are increasingly seen as a security imperative and a key part of a strong application security strategy.

 

Written by Bill Ledingham, EVP of Engineering and Chief Technology Officer, Black Duck Software.