Category archive: containers

Exponential Docker usage shows container popularity

Adoption of Docker’s containerisation technology has entered a period of explosive growth, with usage nearly doubling in the last three months, according to the company’s latest figures.

A post on the company blog reports that Docker has now served two billion ‘pulls’ of images. In November 2015 the figure stood at 1.2 billion pulls, and the Docker Hub from which these images are pulled was only launched in March 2013.

Docker’s invention – a software-defined, self-contained file system that encapsulates all the elements of a server in microcosm, such as code, runtime, system tools and system libraries – has whetted the appetite of developers in the age of the cloud.

In January 2016, Docker users pulled images nearly 7,000 times per minute, four times the run rate of a year earlier. In that single month Docker handled the equivalent of 15% of its total transactions from the previous three years.

The number of ‘pulls’ is significant because each of these transactions indicates that a Docker engine is downloading an image to create containers from it. Development teams use Docker Hub to publish and consume containerised software, and to automate its delivery. That two billion pulls have now taken place indicates the popularity of the technology, and the near-doubling in the last three months points to accelerating adoption of this variation on virtualisation.
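
For readers unfamiliar with the mechanics, a pull is simply a Docker client downloading an image’s layers from the registry so that containers can be started from it locally. A minimal sketch of what generates one pull, using the Python Docker SDK (an illustrative choice of client on our part – any Docker engine performing a pull is counted the same way):

    # Minimal sketch: a client asks the registry (Docker Hub by default) for an
    # image's layers and stores them locally so containers can be started from it.
    # Assumes the Python Docker SDK (pip install docker) and a running Docker engine.
    import docker

    client = docker.from_env()                         # connect to the local Docker engine
    client.images.pull("nginx", tag="latest")          # downloads the layers – one "pull" counted by Docker Hub
    container = client.containers.run("nginx:latest", detach=True)  # start a container from the pulled image
    print(container.short_id)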

There are currently over 400,000 registered users on Docker Hub. “Our users span from the largest corporations, to newly-launched startups, to the individual Docker enthusiast and their number is increasing every day,” wrote Docker spokesman and blog author Mario Ponticello.

Around a fifth of Docker’s two billion pulls come from its 93 ‘Official Repos’ – a curated set of images from Docker’s partners, including NGINX, Oracle, Node.js and CloudBees. Docker’s security-monitoring service Nautilus maintains the integrity of the Official Repos over time.

“As our ecosystem grows, we’ll be adding single-click deployment and security scanning to the Docker platform,” said Ponticello.

A Rightscale study in January 2016 found that 17% of enterprises now have more than 1,000 virtual machines in the public cloud (up 4% in a year) while private clouds are showing even stronger appetite for virtualisation techniques with 31% of enterprises running more than 1,000 VMs, up from 22% in 2015.

IBM: “The level of innovation is being accelerated”

Dr. Angel Diaz joined the research division of IBM in the late nineties, where he helped co-author many of the web standards we enjoy today. Nowadays, he is responsible for all of IBM’s cloud and mobile technology, as well as architecture for its ambient cloud. Here, ahead of his appearance at Container World (February 16 – 18, Santa Clara Convention Center, CA) later this month, BCN caught up with him to find out more about the tech giant’s evolving cloud strategy.

BCN: How would you compare your early days at IBM, working with the likes of Tim Berners-Lee, with the present?

Dr. Angel Diaz: Back then, the industry was focused on developing web standards for a very academic purpose, in particular the sharing of technical information. IBM had a strategy around accelerating adoption and increasing skill. This resulted in a democratization of technology, by getting developers to work together in open source and standards. If you fast-forward to where we are now with cloud, mobile, data, analytics and cognitive, you see a clear evolution of open source.

The aperture of open source development and ecosystems has grown to include users and is now grounded on solid open governance and meritocracy models. What we have built is an open cloud architecture, starting with an open IaaS based on OpenStack, an open PaaS with Cloud Foundry and an open container model with the Open Container Initiative and the Cloud Native Computing Foundation. When you combine an open cloud architecture with open APIs defined by the Open API Initiative, applications break free. I have always said that no application is an island – these technologies make it so.

What’s the ongoing strategy at IBM, and where do containers come into it?

It’s very much hybrid cloud. We’ve been leveraging containers to help deliver hybrid applications and accelerate development through DevOps, so that people can transform and improve their business processes. This is very similar to what we did in the early days of the web – better business processes mean better business. At the end of the day, the individual benefits. Applications can be tailored to the way we like to work and the way that we like to behave.

A lot of people in the container space say: wow, containers have been around a long time, why are we all interested in this now? Well, it’s gotten easier to use, open communities have rallied around it, and it provides a very nice way of marrying concepts of operations and service-oriented architecture, which the industry missed in the 2000s.

What does all this innovation ultimately mean for the ‘real world’?

It’s not an exact analogy, but if we remember the impact of HTML, JavaScript – they allowed almost anyone to become a webmaster. That led to the Internet explosion. If you look at where we are now, what we’re doing with cloud: that stack of books you need to go buy has been reduced, the concept count of things you need to know to develop an application, the level of sophistication of what you need to know in order to build an application, scale an application, secure an application, is being reduced.

So what does that do? It increases participation in the business process, in what you end up delivering. Whether it’s human facing or whether it’s an internal business process, it reduces that friction and it allows you to move faster. What’s starting to happen is the level of innovation is being accelerated.

And how do containers fit into this process? 

Previously there was this strict line: you develop software and then operate it and make tweaks, but you never really fundamentally changed the architecture of the application. Because of the ability to quickly stand up containers, to quickly iterate, etc., people are changing their architectures because of operations and getting better operations because of it. That’s where the microservices notion comes in.

And you’ll be talking at Container World. What message are you bringing to the event?

My goal is to help people take a step back and understand the moment we’re in, because sometimes we all forget that. Whether you’re struggling with security in a Linux kernel or trying to define a micro service, you can forget what it is you’re trying to accomplish.

We are in a very special moment. It’s about the digital disruption that’s occurring, and the container technology we’re building here allows much quicker iteration on the business process. That’s one dimension. The second is what IBM is doing, not just in our own implementation of containers but in the open source world, to help democratize the technology, so that the level of skill and the number of people who build on this grow.

AWS: “We view open source as a companion”

In one of the last installments of our series marking the upcoming Container World (February 16 – 18, Santa Clara Convention Center, CA, USA), BCN talks to Deepak Singh, General Manager of Amazon EC2 Container Service, AWS.

Business Cloud News: First of all – how much of the container hype is justified, would you say?

Deepak Singh: Over the last 2-3 years, starting with the launch of Docker in March 2013, we have seen a number of AWS customers adopt containers for their applications. While many customers are still early in their journey, we have seen AWS customers such as Linden Labs, Remind, Yelp, Segment, and Gilt Group all adopt Docker for production applications. In particular, we are seeing enterprise customers actively investigating Docker as they start re-architecting their applications to be less monolithic.

How is the evolution of containers influencing the cloud ecosystem?

Containers are helping people move faster towards architectures that are ideal for the AWS cloud. For example, one of the common patterns we have seen with customers using Docker is the adoption of a microservices architecture. This is especially true for our enterprise customers, who see Docker as a way to bring more applications onto AWS.

What opportunities does this open up to AWS?

For us, it all comes down to customer choice. When our customers ask us for a capability, we listen. They come to us because they want something done the Amazon way: easy to use, easy to scale, lower cost, and without having to worry about the infrastructure running behind it.

As mentioned, many of our customers are adopting containers and they expect AWS to support them. Over the past few years we have launched a number of services and features to make it easier for customers to run Docker-based applications. These include Docker support in AWS Elastic Beanstalk and the Amazon EC2 Container Service (ECS). We also have a variety of certified partners that support Docker and AWS and integrate with various AWS services, including ECS.

What does the phenomenon of open source mean to AWS? Is it a threat or a friend?

We view open source as a companion to AWS’s business model. We use open source and have built most AWS services on top of open source technology. AWS supports a number of open source applications, either directly or through partners. Examples of open source solutions available as AWS services include Amazon RDS (which supports MySQL, Postgres, and MariaDB), Amazon Elastic MapReduce (EMR), and Amazon EC2 Container Service (ECS). We are also an active member of the open source community. The Amazon ECS agent is available under an Apache 2.0 license, and we accept pull requests and allow our customers to fork our agent as well. AWS contributes code to Docker (e.g. the CloudWatch Logs driver), and was a founding member of the Open Container Initiative, which is a community effort to develop specifications for container runtimes.

As we see customers asking for services based on various open source technologies, we’ll keep adding those services.

You’ll be appearing at Container World this February. What do you think the biggest discussions will be about?

We expect customers will be interested in learning how they can run container-based applications in production, the most popular use cases, and hear about the latest innovations in this space.

Betting on the cloud

A long-time expert on enterprise IT and cloud platforms, Dan Scholnick (General Partner, Trinity Ventures) has the distinction of having been Docker’s first venture investor. BCN spoke to him to find out the secrets of being a top-level IT investor.

Know your stuff: Scholnick has a technical background, with a computer science degree from Dartmouth College. After graduating he worked at Wily Technology with the legendary Lew Cirne, who went on to be the founder and CEO of New Relic. At Wily, Scholnick built the first version of the company’s application performance management product.

All this gave Scholnick a natural appreciation for products and technologies that get used in the data centre as core infrastructure. It was partly this understanding that alerted him to the potential significance of Docker’s predecessor, dotCloud.

Know how to spot talent: The other factor was that he could recognise dotCloud founder Solomon Hykes as a technology visionary. “He had a better understanding and view of how infrastructure technology was changing than almost anyone we had met,” says Scholnick.

Of course, dotCloud didn’t turn out as expected. “It turns out we were wrong about PaaS, but we were right about the containers. Fortunately for all of us involved in the company, that container bet ended up working out.”

Know when the future is staring you in the face: When Scholnick invested in dotCloud, containers had been around for quite a long time. But they were very difficult to use. “What we learned through the dotCloud experience was how to make containers consumable. To make them easier to consume, easier to use, easier to manage, easier to operate. That’s really what Docker is all about, taking this technology that has actually been around, is great technology conceptually but has historically been very hard to use, and make it usable.”

The rest is IT history. Arguably no infrastructure technology has ever taken off and gained mass adoption as quickly as Docker.

“To me, the thing that’s really stunning is to see the breadth and depth of Docker usage throughout the ecosystem,” says Scholnick. “It’s truly remarkable.”

Know what’s next: When BCN asked Scholnick what he thought the next big thing would be in the cloud native movement, he pointed to an offshoot of Docker and containers: microservices. “I think we’re going to see massive adoption of microservices in the next 3-5 years and we’re likely going to see some big companies built around the microservices ecosystem,” he says. “Docker certainly has a role to play in this new market: Docker is really what’s enabling it.”

Keeping in touch with real-world uses of containers is one of the reasons Scholnick will be attending and speaking at Container World (February 16 – 18, 2016, Santa Clara Convention Center).

“As a board member at Docker and as an investor in the ecosystem, it’s always good to hear the anecdotal information about how people are using Docker – as well as what pieces they feel are missing that would help them use containers more effectively. That’s interesting to me because it points to problems that are opportunities for Docker to solve, or opportunities for new start-ups that we can fund.”


Docker buys Unikernel Systems to make micro containers

US-based container software pioneer Docker has announced the acquisition of Cambridge start-up Unikernel Systems, so that it can create even tinier self-contained virtual system instances.

Open source based Docker automates the running of applications in self-contained units of operating system software (containers). It has traditionally done this through operating-system-level virtualisation on Linux: the resource isolation allows multiple independent workloads to run within a single Linux instance, which obviates the need to spin up a new virtual machine. The technology provided by Unikernel Systems, according to Docker, takes this isolation to a new level, with independent entities running on a virtual server at an even smaller, more microcosmic scale.

The new expertise bought by Docker means that it can give every application its own virtual machine with a specialised unikernel, according to Docker community marketing manager Adam Herzog.

Unikernel Systems’ technology takes away the rigid distinction between operating system kernels and the applications that run on top of them, creating more fluidity and exchange between the two. When source code is compiled, a custom operating system is created for each application, which makes for a much more efficient way of working. The key to the efficiency of unikernels is their size and adaptability, according to the Docker blog. Being brought into the open source stable will make them more readily available to developers, it argued.

Unikernel Systems was founded by alumni of the Xen hypervisor project, including Anil Madhavapeddy, David Scott, Thomas Gazagnaire and Amir Chaudhry. Since unikernels can run on ‘bare metal’ (hardware without any operating system or hypervisor), they take the efficiency of virtual machines further, according to the Docker blog. Unikernels are an important part of the future of the container ecosystem since they effectively absorb the operating system into the containers, Scott says. Since an application takes on only the scraps of operating system code it needs, unikernels could eventually make the standalone operating system redundant, the blog claimed.

Cloud academy: Rudy Rigot and his new Holberton School

Business Cloud News talks to Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA) keynote Rudy Rigot about his new software college, which opens today.

Business Cloud News: Rudy, first of all – can you introduce yourself and tell us about your new Holberton School?

Rudy Rigot: Sure! I’ve been working in tech for the past 10 years, mostly in web-related stuff. Lately, I’ve worked at Apple as a full-stack software engineer for their localization department, which I left this year to found Holberton School.

Holberton School is a 2-year community-driven and project-oriented school, training software engineers for the real world. No classes, just real-world hands-on projects designed to optimize their learning, in close contact with volunteer mentors who all work for small companies or large ones like Google, Facebook, Apple, … One of the other two co-founders is Julien Barbier, formerly the Head of Community, Marketing and Growth at Docker.

Our first batch of students started last week!

What are some of the challenges you’ve had to anticipate?

Since we’re a project-oriented school, students are mostly graded on the code they turn in, which they push to GitHub. Some of this code is graded automatically, so we needed to be able to run each student’s code (or each team’s code) automatically, in a fair and equal way.

We needed to get information on the “what” (what is returned in the console), but also on the “how”: how long does the code take to run? How much resource is being consumed? What is the return code? Also, since Holberton students are trained on a wide variety of languages, how do you ensure you can grade a Ruby project, and later a C project, and later a JavaScript project, and so on, with the same host while minimising issues?

Finally, we had to make sure that a student can commit code that is as malicious as they want without a human needing to check it before it runs: it should only break their program, not the whole host.

So how on earth do you negotiate all these?

Our project-oriented training concept is new in the United States, but it’s been successful for decades in Europe, and we knew that the European schools, which built their programs before containers became mainstream, typically run the code directly on a host system that has all of the software they need installed, and then simply run a chroot before executing the student’s code. That didn’t solve all of the problems, while containers did, in a very elegant way; so we took the container road!

HolbertonCloud is the solution we built to that end. It fetches a student’s code on command, then runs it based on a Dockerfile and a series of tests, and finally returns information about how that went. The information is then used to compute a score.

What’s amazing about it is that by using Docker, building the infrastructure has been trivial; the hard part has been about writing the tests, the scoring algorithm … basically the things that we actively want to be focused on!
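
To make that concrete, here is a minimal sketch of the kind of sandboxed, measured run Rigot describes, using the Python Docker SDK; the image name, resource limits and timeout are our own illustrative assumptions, not HolbertonCloud’s actual configuration:

    # Illustrative only: run a submission in an isolated container, capped on
    # memory and CPU, with no network, capturing the exit code and console output.
    # Assumes a recent Python Docker SDK (pip install docker).
    import docker

    client = docker.from_env()

    def grade_submission(image_tag, command, timeout=30):
        container = client.containers.run(
            image_tag,               # image built beforehand from the student's Dockerfile
            command,                 # e.g. the test entry point
            detach=True,
            mem_limit="256m",        # a runaway process can't exhaust host memory
            nano_cpus=500_000_000,   # roughly half a CPU core
            network_disabled=True,   # malicious code cannot reach the network
        )
        result = container.wait(timeout=timeout)   # block until exit, or raise on timeout
        logs = container.logs().decode()           # the "what": console output
        container.remove(force=True)
        return result["StatusCode"], logs          # the "how": return code plus output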

So you’ve made use of containers. How much disruption do you expect their development to engender over the coming years?

Since I’m personally more on the “dev” end of DevOps, I see how striking it is that containers restore focus on actual development for my peers. So, I’m mostly excited by the innovation that software engineers will be focusing on instead of the issues that containers are taking care of for them.

Of course, it will be very hard to measure which of those innovations were able to exist because containers are involved; but it also means those innovations will touch virtually every corner of the tech industry, so that’s really exciting!

What effect do you think containers are going to have on the delivery of enterprise IT?

I think one takeaway from the very specific HolbertonCloud use case is that cases where code can be run trivially in production are getting rare, and one needs guarantees that only containers can bring efficiently.

Also, a lot of modern architectures fulfil needs with systems that are made of more and more microservices, since we now have enough hindsight to see the positive outcomes for their resilience. Each microservice may have different requirements and may therefore be best built with a different technology, so managing a growing set of different software configurations is becoming increasingly relevant. Considering the positive outcomes, this trend will only keep growing, making the need for containers keep growing as well.

You’re delivering a keynote at Container World. What’s the main motivation for attending?

I’m tremendously excited by the stellar line-up! We’re all going to get amazing insight from many different and relevant perspectives, that’s going to be very enlightening!

The very existence of Container World is exciting too: it’s crazy how far containers have come over the span of just a few years.


Containers: 3 big myths

Joe Schneider is DevOps Engineer at Bunchball, a company that offers gamification as a service to the likes of Applebee’s and Ford Canada.

This February Schneider is appearing at Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA), where he’ll be cutting through the cloudy abstractions to detail Bunchball’s real-world experience with containers. Here, exclusively for Business Cloud News, Schneider explodes three myths surrounding the container hype…

One: ‘Containers are contained.’

If you’re really concerned about security, or if you’re in a really security-conscious environment, you have to take a lot of extra steps. You can’t just throw containers into the mix and leave it at that: it’s not as secure as a VM.

When we adopted containers, at least, the tools weren’t there. Now Docker has made security tools available, but we haven’t transitioned from the stance of ‘OK, Docker is what it is, and recognise that’ to a more secure environment. What we have done instead is try to make sure the edges are secure: we put a lot of emphasis on that. At the container level we haven’t done much, because the tools weren’t there.
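
For illustration, the ‘extra steps’ typically amount to stripping privileges a container never needed. A hedged sketch using the Python Docker SDK – the flags and values here are generic examples, not Bunchball’s configuration:

    # Illustrative hardening options when starting a container; the specific
    # values are assumptions, not any particular production setup.
    import docker

    client = docker.from_env()
    container = client.containers.run(
        "myapp:latest",                      # hypothetical application image
        detach=True,
        user="1000:1000",                    # don't run as root inside the container
        read_only=True,                      # immutable root filesystem
        cap_drop=["ALL"],                    # drop every Linux capability...
        cap_add=["NET_BIND_SERVICE"],        # ...then add back only what is needed
        security_opt=["no-new-privileges"],  # block privilege escalation via setuid binaries
        pids_limit=100,                      # contain fork bombs
    )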

Two: The myth of the ten-thousand-container deployment

You’ll see the likes of Mesosphere, or Docker Swarm, say ‘we can deploy ten thousand containers in like thirty seconds’ – and similar claims. Well, that’s a really synthetic test: these kinds of numbers are 100% hype. In the real world such a capacity is pretty much useless. No one cares about deploying ten thousand little apps that do literally nothing, that just go ‘hello world.’

The tricky bit with containers is actually linking them together. When you start with static hosts, or even VMs, they don’t change very often, so you don’t realise how much interconnection there is between your different applications. When you destroy and recreate your applications in their entirety via containers, you discover that you actually have to recreate all that plumbing on the fly and automate that and make it more agile. That can catch you by surprise if you don’t know about it ahead of time.
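
One way to picture that plumbing, sketched with the Python Docker SDK (the service and image names are hypothetical): containers on a shared user-defined network can reach each other by name, and destroy-and-recreate deployments have to rebuild exactly this wiring every time.

    # Illustrative only: recreate the "plumbing" between two services by putting
    # them on a shared user-defined network so they resolve each other by name.
    import docker

    client = docker.from_env()
    client.networks.create("app-net", driver="bridge")

    cache = client.containers.run("redis:latest", name="cache",
                                  network="app-net", detach=True)
    web = client.containers.run("myapp:latest", name="web",          # hypothetical app image
                                network="app-net", detach=True,
                                environment={"CACHE_HOST": "cache"})  # reached by container name
    # Tear the containers down and this wiring must be rebuilt, ideally by automation.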

Three: ‘Deployment is straightforward’

We’ve been running containers in production for a year now. Before then we were playing around a little bit with some internal apps, but now we run everything except one application on containers in production. And that was a bit of a paradigm change for us. The line that Docker gives is that you can take your existing apps and put them in a container and it’s going to work in exactly the same way. Well, that’s not really true. You have to actually think about it a little bit differently, especially with the deployment process.

An example of a real ‘gotcha’ for us was that we presumed systemd and Docker would play nicely together, and they don’t. That really hit us in the deployment process – we had to delete the old container and start a new one using systemd, and that was always very flaky. Don’t try to home-grow your own; use something that is designed to work with Docker.
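
For illustration only, the home-grown replace-in-place step Schneider describes looks roughly like the sketch below (the names are hypothetical, and this is precisely the sort of logic he suggests delegating to tooling built for Docker rather than writing yourself):

    # The fragile "delete the old one, start a new one" pattern, sketched with the
    # Python Docker SDK. Shown to illustrate the gotcha, not as a recommendation.
    import docker
    from docker.errors import NotFound

    client = docker.from_env()

    def redeploy(name, image_tag):
        try:
            old = client.containers.get(name)
            old.stop()
            old.remove()                 # any failure here leaves a half-deployed state
        except NotFound:
            pass                         # first deployment: nothing to clean up
        return client.containers.run(image_tag, name=name, detach=True,
                                     restart_policy={"Name": "always"})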


AWS opens up EC2 Container Registry to all

Cloud giant Amazon Web Services (AWS) has opened up its technology for storing and managing application container images to public consumption.

The Amazon EC2 Container Registry (ECR) had been available only to industry insiders who attended its launch at the AWS re:Invent conference in Las Vegas in October. However, AWS has now decided to level the playing field, its Senior Product Manager Andrew Thomas revealed, guest writing on the blog of AWS chief technologist Jeff Barr. Thomas invited all interested cloud operators to apply for access.

As containers have become the de facto method for packaging application code, all cloud service providers are competing to fine-tune the process of running code within them, as an alternative to using virtual machines. But developers have fed back teething problems to AWS, Thomas reports in the blog.

ECR, explains Thomas, is a managed Docker container registry designed to simplify the management of Docker container images, which, developers have told him, has proved difficult. Running a self-hosted Docker image registry for a large-scale infrastructure project involves pulling hundreds of images at once, which makes self-hosting too difficult, especially with the added complexity of spanning two or more AWS regions. AWS clients wanted fine-grained access control to images without having to manage certificates or credentials, Thomas said.

Management aside, there is a security dividend too, according to Thomas. “This makes it easier for developers to evaluate potential security threats before pushing to Amazon ECR,” he said. “It also allows developers to monitor their containers running in production.”

There is no charge for transferring data into the Amazon EC2 Container Registry. While storage costs 10 cents per gigabyte per month, all new AWS customers will receive 500MB of storage a month for a year.

The Registry is integrated with Amazon ECS and the Docker CLI (command line interface), in order to simplify development and production workflows. “Users can push container images to Amazon ECR using the Docker CLI from the development machine and Amazon ECS can pull them directly for production,” said Thomas.
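
As a rough sketch of that workflow – the repository name and region are our own assumptions, and it relies on boto3 plus the Python Docker SDK rather than the bare Docker CLI mentioned above – pushing a locally built image to ECR from a development machine looks something like this:

    # Illustrative sketch: authenticate against Amazon ECR, retag a local image
    # and push it, after which ECS task definitions can pull it for production.
    import base64
    import boto3
    import docker

    region = "us-east-1"
    repo_name = "myapp"                    # hypothetical ECR repository, created beforehand

    ecr = boto3.client("ecr", region_name=region)
    auth = ecr.get_authorization_token()["authorizationData"][0]
    username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
    registry = auth["proxyEndpoint"]       # e.g. https://<account-id>.dkr.ecr.us-east-1.amazonaws.com

    docker_client = docker.from_env()
    docker_client.login(username=username, password=password, registry=registry)

    remote = registry.replace("https://", "") + "/" + repo_name
    docker_client.images.get("myapp:latest").tag(remote, tag="latest")   # retag the local build
    docker_client.images.push(remote, tag="latest")                      # ECS can now pull this image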

The service has been effective since December 21st in the US East (Northern Virginia) region, with more regions on the way soon.

ElasticHosts launches elastic containers – could cut some running costs by 50%

Cloud server company ElasticHosts has announced that its new model of container technology can adapt automatically to fit volatile shifts in demand for resources and bill clients accordingly. The new Linux containers are designed to make management easier for resellers, service providers, web developers and web hosting companies.

ElasticHosts’ new containers are now available with cPanel v11.52, from third-party control panel vendor cPanel. ElasticHosts claims it offers the first containers to integrate with cPanel v11.52, which creates the possibility of much more precise billing according to the usage of server resources such as memory, processing power and storage. It also gives service providers the option to adapt automatically to changing circumstances, so clients only ever pay for what they use, with no risk of hitting a performance ceiling in periods of intense activity.

The control panel from cPanel can streamline the process of creating and managing websites, claims its vendor. Prior to the new release, cPanel could only run on virtual machine servers, with licensing according to the virtual private server (VPS) model. The new ability to ‘autoscale’ and the capacity for exact billing will lower costs for clients, according to ElasticHosts. The usage-based billing offered by containers means website owners no longer have to pay for periods when server capacity is underutilised or the site is idle, typically saving up to 50% on hosting costs, it claims.

“We worked closely with cPanel integrating and testing the product to make this a reality, and believe our technologies complement each other well,” said ElasticHosts CEO Richard Davies. “Containers are gaining real momentum.”

“Linux containers are an exciting technology and we have recognized the groundswell behind them in the internet community right now,” said Aaron Phillips, Chief Business Officer at cPanel.

Containers at Christmas: wrapping, cloud and competition

As anyone who’s ever been disappointed by a Christmas present will tell you – shiny packaging can be very misleading. As we hear all the time, it’s what’s inside that counts…

What, then, are we to make of the Docker hype, centred precisely on shiny new packaging? (Docker is the vendor that two years ago found a way to containerise applications; other types of containers – operating system containers – have been around for a couple of decades.)

It is not all about the packaging, of course. Perhaps we should say that it is more about what the package is placed on, and how it is managed (amongst other things), that matters most?

Regardless, containers are one part of a changing cloud, data centre and enterprise IT landscape, with the ‘cloud native’ movement widely seen as driving a significant shift in enterprise infrastructure and application development.

What the industry is trying to figure out, and what could prove the most disruptive angle to watch as more and more enterprises roll out containers into production, is the developing competition within this whole container/cloud/data centre market.

The question of competition is a very hot topic in the container, DevOps and cloud space. Nobody could have thought the OCI co-operation between Docker and CoreOS meant they were suddenly BFFs. Indeed, the drive to become the enterprise container of choice now seems to be at the forefront of both companies’ plans. Is this, however, the most dynamic relationship in the space? What about the Google-Docker-Mesos orchestration game? It would seem that Google’s trusted container experience is already allowing it to gain favour with enterprises, with Kubernetes taking a lead. And with CoreOS in bed with Google’s open source Kubernetes, placing it at the heart of Tectonic, does this mean that CoreOS has a stronger play in the enterprise market than Docker? We will wait and see…

We will also wait and see how the Big Cloud Three come out of the expected container-driven market shift. Somebody described AWS to me as ‘a BT’… that is, the incumbent that will be affected most by the new disruptive changes brought by containers, since it makes a lot of money from an older model of infrastructure…

Microsoft’s container ambition is also being watched closely. There is a lot of interest from both the development and IT Ops communities in its play in the emerging ecosystem. At a recent meet-up, an Azure evangelist had to field a number of deeply technical questions regarding exactly how Microsoft’s containers fare next to Linux’s. The question is whether, when assessing who will win the largest piece of the enterprise pie, this will prove the crux of the matter.

Containers are not merely changing the enterprise cloud game (with third-placed Google seemingly getting it very right) but also driving the IT Ops DevOps dream towards reality; in fact, many are predicting that they could eventually prove a bit of a threat to Chef and Puppet’s future…

So, maybe kids at Christmas have got it right… it is all about the wrapping and boxes! We’ll have to wait a little longer than Christmas Day to find out.

Written by Lucy Ashton, Head of Content & Production, Container World