Category Archives: containers

Containers aren’t new, but ecosystem growth has driven development

Containers are getting a fair bit of hype at the moment, and February 2016 will see the first ever dedicated container conference take place in Silicon Valley in the US. Here, Business Cloud News talks to Kyle Anderson, lead developer at Yelp, to learn about the company’s use of containers, and whether containers will ultimately live up to all the hype.

Business Cloud News: “What special demands does Yelp’s business put on its internal computing?”

Kyle Anderson: “I wouldn’t say they are very special. In some sense our computing demands are boring. We need standard things like capacity, scaling, and speed. But boring doesn’t quite cut it, and if you can turn your boring compute needs into something that is a cut above the status quo, it can become a business advantage.”

BCN: “And what was the background to building your own container-based PaaS? What was the decision-making process there?”

KA: “Building our own container-based PaaS came from a vision that things could be better if they were in containers and could be scheduled on-demand.

“Ideas started bubbling internally until we decided to ‘just build it’ with manager support. We knew that containers, not VMs, were going to be the future. At the same time, we evaluated what was out there, wrote down what we wanted in a PaaS, and saw the gap. The decision-making process was internal to the team, as most engineers at Yelp are trusted to make their own technical decisions.”

BCN: “How did you come to make the decision to open-source it?”

KA: “Many engineers have the desire to open-source things, often simply because they are proud of their work and want to share it with their peers.

“At the same time, management likes open source because it increases brand awareness and serves as a recruiting tool. It was a natural progression for us. I tried to emphasise that it needed to work for Yelp first, and after one and a half years in production, we were confident that it was a good time to announce it.”

BCN: “There’s a lot of hype around containers, with some even suggesting this could be the biggest change in computing since client-server architecture. Where do you stand on its wider significance?”

KA: “Saying it’s the biggest change in computing since client-server architecture is a huge exaggeration. I am very anti-hype. Containers are not new; they just now have enough of an ecosystem built up around them to be a viable option for the community at large.”

Container World is taking place on 16 – 18 February 2016 at the Santa Clara Convention Center, CA, USA.

Rackspace launches Carina ‘instant container’

Rackspace has launched a new ‘instant container’ offering which it says will take the strain out of building infrastructure.

Carina by Rackspace, unveiled at the OpenStack Summit in Tokyo, is now available as a free beta. Carina makes containers portable and easy to use, claims Rackspace, which devised the system to make running containerized applications faster. The service combines bare-metal performance, the Docker Engine, native container tooling and Docker Swarm in order to maximise processing power without sacrificing control.

Typically a user might be a developer, data scientist or cloud operator who wants to outsource infrastructure management to Rackspace’s experts which, says the vendor, saves them time on building, managing and updating their container environment.

With container technology among the fastest-growing software development tools in computing, companies adopting this unfamiliar technology are likely to face unforeseen management challenges. Though containers consume a fraction of the computing resources of typical virtual machines, they can eat up a lot of management time, warns Scott Crenshaw, Rackspace’s SVP of strategy and product. The savings yielded by container technology’s instant availability, application scaling and high application density could be neutralised by the time and money spent learning new infrastructure management skills, he said.

The Carina service will save customers from that waste, said Crenshaw. “Our mission is to support OpenStack’s position as a leading choice for enterprise clouds,” he said. “Carina’s design makes containers fast, simple and accessible to developers using OpenStack.”

With no hypervisor overhead, an easy installation process and instant support, everything is designed to run faster, said Nick Stinemates, VP of business development at container maker Docker. “You can get started in under a minute. The Carina beta from Rackspace makes it fast and simple to start a Docker Swarm cluster. They have put the Docker experience front and centre without any abstraction,” said Stinemates.

Carina is now available as a free beta offering on the Rackspace Public Cloud for US customers.

Companies with unmonitored Docker containers could be dangerously exposed – study

While container technology is sweeping the board and being installed practically everywhere, its use is going largely unmonitored, says a study. According to the research figures, the majority of Docker adopters could be sleepwalking into chaos.

The report, The State of Containers and the Docker Ecosystem 2015, found that 93% of organisations plan to use containers, with 78% of them opting for Docker.

The primary reason for using Docker was its convenience and speed: a large majority (85%) of the survey group nominated ‘fast and easy deployment’ as their most important reason for using Docker. However, this haste could lead to mistakes, because over half (54%) told researchers that performance monitoring was not a major focus of attention as they rushed to adopt container technology.

The findings of the study shocked Bernd Greifeneder, CTO at performance manager Dynatrace, which commissioned the research.

“It’s crucial to monitor not just the containers themselves, but to understand how microservices and applications within the containers perform,” said Greifeneder, who works in Dynatrace’s Ruxit division. “Monitoring application performance and scalability are key factors to success with container technology.”

Half the companies planning a container deployment in the coming six months to a year will do so in production, according to Greifeneder. Without monitoring, it will be difficult to manage, he said.

While most companies (56%) seem to realise the benefits of having reliable and production-ready solutions, fewer (40%) seem to understand the flip side of the powers of automation, and the dangers inherent in using ‘extraordinarily dynamic’ technology without monitoring it.

Since Docker was launched in 2013, more than 800 million containers have been pulled from the public Docker Hub. While container use is skyrocketing there are barriers to success that need to be addressed, Greifeneder argued.

The report was conducted by O’Reilly Media in collaboration with Ruxit. Survey participants represented 138 companies of fewer than 500 people, from a variety of sectors including software, consulting, publishing and media, education, cloud services, hardware, retail and government.

Why visibility and control are critical for container security

The steady flow of reported security flaws in open source components, such as Heartbleed, Shellshock and Poodle, is making organisations focus increasingly on making the software they build more secure. As organisations increasingly turn to containers to improve application delivery and agility, the security ramifications of the containers and their contents are coming under the same scrutiny.

An overview of today’s container security initiatives 

Container providers such as Docker and Red Hat are moving aggressively to reassure the marketplace about container security. They are focusing on the use of encryption to secure the code and software versions running in Docker users’ software infrastructure, protecting users from malicious backdoors included in shared application images and other potential security threats.

However, this method is coming under scrutiny because it covers only one aspect of container security, leaving aside whether software stacks and application portfolios are free of known, exploitable versions of open source code.

Without open source hygiene, Docker Content Trust will only ever ensure that Docker images contain the exact same bits that developers originally put there, including any vulnerabilities present in the open source components. It therefore amounts to only a partial solution.
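
The distinction can be illustrated with a small, hypothetical sketch (written in Go; the image contents, component name and checks are all invented for illustration): an integrity check proves the bits match what was published, while a separate hygiene check is needed to flag known-vulnerable components hiding inside those same trusted bits.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "strings"
    )

    // Hypothetical illustration of integrity vs. hygiene. In real
    // content trust the published digest comes from signed metadata;
    // here it is simply computed up front for brevity.
    var knownVulnerable = []string{"openssl-1.0.1f"} // a Heartbleed-era version

    func main() {
        image := "...layers bundling openssl-1.0.1f..."
        published := sha256.Sum256([]byte(image))

        // Integrity: the bits we pulled are exactly the bits published.
        pulled := sha256.Sum256([]byte(image))
        fmt.Println("digest ok:", pulled == published)

        // Hygiene: the same trusted bits can still ship a known-vulnerable
        // component, which the integrity check alone never catches.
        for _, v := range knownVulnerable {
            if strings.Contains(image, v) {
                fmt.Println("vulnerable component present:", v)
            }
        }
    }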

A more holistic approach to container security

Knowing that the container is free of vulnerabilities at the time of initial build and deployment is necessary, but far from sufficient. New vulnerabilities are constantly being discovered, and they often affect older versions of open source components. What’s needed, therefore, is an informed approach to open source: careful selection of components up front, and ongoing vigilance for newly disclosed vulnerabilities afterwards.

Moreover, the security risk posed by a container also depends on the sensitivity of the data accessed through it, as well as on where the container is deployed. For example, whether the container sits on an internal network behind a firewall or is internet-facing will affect the level of risk.

An internet-facing container is subject to a range of threats, including cross-site scripting, SQL injection and denial-of-service attacks, that a container deployed on an internal network behind a firewall wouldn’t be exposed to.

For this reason, having visibility into the code inside containers is a critical element of container security, even aside from the issue of security of the containers themselves.

It’s critical to develop robust processes for determining: what open source software resides in or is deployed along with an application; where this open source software is located in build trees and system architectures; whether the code exhibits known security vulnerabilities; and whether an accurate open source risk profile exists.

Will security concerns slow container adoption? – The industry analysts’ perspective

Enterprise organisations today are embracing containers because of their proven benefits: improved application scalability, fewer deployment errors, faster time to market and simplified application management. However, just as organisations have moved over the years from viewing open source as a curiosity to understanding its business necessity, containers seem to have reached a similar tipping point. The question now is whether security concerns about containers will inhibit further adoption. Industry analysts differ in their assessments.

Drawing a parallel to the rapid adoption of virtualisation technologies before security requirements had been established, Dave Bartoletti, Principal Analyst at Forrester Research, believes security concerns won’t significantly slow container adoption. “With virtualization, people deployed anyway, even when security and compliance hadn’t caught up yet, and I think we’ll see a lot of the same with Docker,” according to Bartoletti.

Meanwhile, Adrian Sanabria, Senior Security Analyst at 451 Research, believes enterprises will give containers a wide berth until security standards are identified and established. “The reality is that security is still a barrier today, and some companies won’t go near containers until there are certain standards in place,” he explains.

To overcome these concerns, organisations are best served by taking advantage of the automated tools available to gain control over all the elements of their software infrastructure, including containers.

Ultimately, the presence of vulnerabilities in all types of software is inevitable, and open source is no exception. Detection and remediation of vulnerabilities are increasingly seen as a security imperative and a key part of a strong application security strategy.

 

Written by Bill Ledingham, EVP of Engineering and Chief Technology Officer, Black Duck Software.

OpenStack Liberty release features enhancements for SDN and containers

The twelfth release of OpenStack tackles the cloud software toolset’s scale limitations and offers new options for software defined networking, says the OpenStack Foundation.

The new version, Liberty, will help cloud software builders create more manageable and scalable enterprise services with ‘the broadest support for popular data centre technologies’, the foundation says.

The OpenStack Foundation says Liberty was designed in response to user requests for more detailed management controls. OpenStack has also been criticised for its inability to scale up to large installations. As a result, its operating core has been strengthened, and the production environment includes more powerful tools for managing new technologies, such as containers.

Improvements include the adoption of a new common library, better configuration management and new role-based access control (RBAC) for the Heat orchestration and Neutron networking projects. These control improvements, which were specifically requested by cloud operators, allow them to fine-tune security settings at all levels of network and orchestration functions and APIs.
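
For readers unfamiliar with the model, the sketch below shows what role-based access control means in general terms. It is not OpenStack’s actual policy engine, and the role and action names are invented: each role is granted a set of permitted actions, and every call is checked against the caller’s role before it runs.

    package main

    import "fmt"

    // A generic RBAC sketch: roles map to the set of actions they may
    // perform. Real systems load such rules from policy configuration.
    var grants = map[string]map[string]bool{
        "network-admin": {"create_network": true, "delete_network": true},
        "project-user":  {"create_network": true},
    }

    // allowed reports whether the given role may perform the action.
    // Unknown roles get an empty grant set and are denied everything.
    func allowed(role, action string) bool {
        return grants[role][action]
    }

    func main() {
        fmt.Println(allowed("project-user", "delete_network"))  // false
        fmt.Println(allowed("network-admin", "delete_network")) // true
    }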

OpenStack’s scalability challenges are tackled with an updated model to support very large and multi-location systems. The foundation also promised that Liberty users will see better scaling and performance in the Horizon dashboard, Neutron networking and Cinder block storage services, as well as during upgrades to Nova’s compute services.

Liberty also marks the first full OpenStack release to include the Magnum container management project. Magnum supports the popular container cluster management tools Kubernetes, Mesos and Docker Swarm, and aims to simplify the adoption of container technology by tying into existing OpenStack services such as Nova, Ironic and Neutron. Further improvements are planned with a new project, Kuryr, which integrates directly with native container networking components such as libnetwork.

The Heat orchestration project promises ‘dozens’ of new resources for managing, automating and orchestrating Liberty’s expanded capacity.

1,933 individuals across more than 164 organizations contributed to OpenStack Liberty through upstream code, reviews, documentation and internationalization efforts. The top code committers to the Liberty release were HP, Red Hat, Mirantis, IBM, Rackspace, Huawei, Intel, Cisco, VMware, and NEC.

Tech News Recap for the Week of 9/14/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 9/14/2015.

AT&T says malware secretly unlocked hundreds of thousands of phones. A survey indicates that companies will be moving to containers next year. MI5 chief says encryption is putting terrorists beyond the reach of the law. Big data projects have been increasing, but is it because of CIOs?

Tech News Recap

  • Survey Says That Companies Are Set To Move Big Into Containers Next Year
  • AT&T says malware secretly unlocked hundreds of thousands of phones
  • VMware NSX roadmap puts focus on SDDC and cloud security
  • Encryption puts terrorists beyond the reach of the law, says MI5 chief
  • DoD CIO plans to let contractors use commercial cloud services on DoD property
  • The Storage (R)Evolution or The Storage Superstorm?
  • Why (and how) VMware created a new type of virtualization just for containers
  • Big data projects gaining steam, but not due to the CIO
  • Is the Cloud Right for You?
  • How Sunny Delight juices up sales with cloud-based analytics
  • 10 ways automation may open up new IT job opportunities
  • Why IT Buyers Choose Hyperconverged Infrastructure
  • Dreamforce: Uber CEO Tells How the Cloud Made Ride-Sharing Possible
  • Should You Trust Your CEO With Cloud Computing Decisions?
  • Why the future of sports is in the cloud

There have been a lot of articles around containers and container management tools. If you would like to learn more, download our whitepaper, “10 Things to Know About Docker”.

 

By Ben Stephenson, Emerging Media Specialist

10 Things to Know About Docker

It’s possible that containers and container management tools like Docker will be the single most important thing to happen to the data center since the mainstream adoption of hardware virtualization in the 90s. In the past 12 months, the technology has matured beyond powering large-scale startups like Twitter and Yelp and found its way into the data centers of major banks, retailers and even NASA. When I first heard about Docker a couple of years ago, I started off as a skeptic. I blew it off as skillful marketing hype around an old concept of Linux containers. But after incorporating it successfully into several projects at Spantree, I am now a convert. It’s saved my team an enormous amount of time, money and headaches and has become the underpinning of our technical stack.

If you’re anything like me, you’re often time-crunched and may not have a chance to check out every shiny new toy that blows up on GitHub overnight. So this article is an attempt to quickly impart 10 nuggets of wisdom that will help you understand what Docker is and why it’s useful.

Docker is a container management tool.

Docker is an engine designed to help you build, ship and execute application stacks and services as lightweight, portable and isolated containers. The Docker engine sits directly on top of the host operating system. Its containers share the kernel and hardware of the host machine with roughly the same overhead as processes launched directly on the host.

But Docker itself isn’t a container system; it merely piggybacks off the container facilities baked into the OS, such as LXC on Linux. These facilities have been part of operating systems for many years, but Docker provides a much friendlier image management and deployment system on top of them.
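
To make that concrete, here is a minimal sketch, in Go, of the kind of kernel facility container runtimes build on. It is an illustration of the underlying primitive rather than how Docker itself is implemented, and it assumes a Linux host and root privileges:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    // Start a shell in fresh UTS, PID and mount namespaces. The shell
    // becomes PID 1 in its own namespace and can change its hostname
    // without affecting the host. Real runtimes add cgroups, a
    // remounted /proc, image layers and more on top of this.
    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }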

 

Docker is not a hardware virtualization engine.

When Docker was first released, many people compared it to virtual machine hypervisors like VMware, KVM and VirtualBox. While Docker solves a lot of the same problems and shares many of the same advantages as hypervisors, it takes a very different approach. Virtual machines emulate hardware. In other words, when you launch a VM and run a program that hits disk, it’s generally talking to a “virtual” disk. When you run a CPU-intensive task, those CPU commands need to be translated to something the host CPU understands. All these abstractions come at a cost: two disk layers, two network layers, two processor schedulers, even two whole operating systems loaded into memory. These limitations typically mean you can only run a few virtual machines on a given piece of hardware before you start to see an unpleasant amount of overhead and churn. On the other hand, you can theoretically run hundreds of Docker containers on the same host machine without issue.

All that being said, containers aren’t a wholesale replacement for virtual machines. Virtual machines provide a tremendous amount of flexibility in areas where containers generally can’t. For example, if you want to run a Linux guest operating system on top of a Windows host, that’s where virtual machines shine.

 

Download the whitepaper to read the rest of the 10 Things You Need to Know About Docker.


Whitepaper by Cedric Hurst, Principal at Spantree

YouTube brings Vitess MySQL scaling magic to Kubernetes

YouTube is working to integrate a beefed-up version of MySQL with Kubernetes

YouTube is working to integrate Vitess, which improves the ability of MySQL databases to scale in containerised environments, with Kubernetes, an open source container deployment and management tool.

Vitess, which is available as an open source project and pitched as a high-concurrency alternative to NoSQL and vanilla MySQL databases, uses a BSON-based protocol that creates very lightweight connections (around 32KB each). Its pooling feature uses Go’s concurrency support to map these lightweight connections onto a small pool of MySQL connections, which is how Vitess can handle thousands of connections.
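
As a rough illustration of that pooling idea, here is a minimal Go sketch (the types and sizes are invented; this is not Vitess’s actual code) that maps many concurrent callers onto a small, fixed set of backend connections using a buffered channel:

    package main

    import "fmt"

    // conn stands in for a real MySQL connection.
    type conn struct{ id int }

    // pool hands out backend connections via a buffered channel, the
    // idiomatic Go pooling primitive: get blocks until one is free, so
    // thousands of goroutines can share a handful of real connections.
    type pool struct{ conns chan *conn }

    func newPool(size int) *pool {
        p := &pool{conns: make(chan *conn, size)}
        for i := 0; i < size; i++ {
            p.conns <- &conn{id: i}
        }
        return p
    }

    func (p *pool) get() *conn  { return <-p.conns }
    func (p *pool) put(c *conn) { p.conns <- c }

    func main() {
        p := newPool(3)
        c := p.get()
        fmt.Println("using backend connection", c.id)
        p.put(c)
    }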

It also handles horizontal and vertical sharding, and can dynamically rewrite queries that would impede database performance.
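
Horizontal sharding, at its simplest, means routing each row to one of several databases by hashing a sharding key; the hypothetical Go sketch below shows the idea, though Vitess’s real range-based routing is considerably richer:

    package main

    import (
        "crypto/sha1"
        "encoding/binary"
        "fmt"
    )

    // shardFor hashes a sharding key and maps it to one of n shards.
    // Hash-based placement keeps each user's rows on a single shard
    // while spreading users evenly across shards.
    func shardFor(key string, n int) int {
        h := sha1.Sum([]byte(key))
        return int(binary.BigEndian.Uint64(h[:8]) % uint64(n))
    }

    func main() {
        for _, user := range []string{"alice", "bob", "carol"} {
            fmt.Printf("%s -> shard %d\n", user, shardFor(user, 4))
        }
    }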

Anthony Yeh, a software engineer at YouTube, said the company is currently using Vitess to handle metadata for its video service, which handles billions of daily video views and 300 hours of new video uploads per minute.

“Your new website is growing exponentially. After a few rounds of high fives, you start scaling to meet this unexpected demand. While you can always add more front-end servers, eventually your database becomes a bottleneck.”

“Vitess is available as an open source project and runs best in a containerized environment. With Kubernetes and Google Container Engine as your container cluster manager, it’s now a lot easier to get started. We’ve created a single deployment configuration for Vitess that works on any platform that Kubernetes supports,” he explained in a blog post on the Google Cloud Platform website. “In this environment, Vitess provides a MySQL storage layer with improved durability, scalability, and manageability.”

Yeh said the company is just getting started with the Kubernetes integration, but that users will soon be able to deploy Vitess in containers with Kubernetes on any cloud platform it supports.