Category Archives: containers

AWS – we view open source as a companion

In one of the last installments of our series marking the upcoming Container World (February 16 – 18, Santa Clara Convention Center, CA, USA), BCN talks to Deepak Singh, General Manager of Amazon EC2 Container Service, AWS.

Business Cloud News: First of all – how much of the container hype is justified would you say?

Deepak Singh: Over the last 2-3 years, starting with the launch of Docker in March 2013, we have seen a number of AWS customers adopt containers for their applications. While many customers are still early in their journey, we have seen AWS customers such as Linden Labs, Remind, Yelp, Segment, and Gilt Group all adopt Docker for production applications. In particular, we are seeing enterprise customers actively investigating Docker as they start re-architecting their applications to be less monolithic.

How is the evolution of containers influencing the cloud ecosystem?

Containers are helping people move faster towards architectures that are ideal for the  AWS cloud. For example, one of the common patterns we have seen with customers using Docker is to adopt a microservices architecture. This is especially true for our enterprise customers who see Docker as a way to bring more applications onto AWS.

What opportunities does this open up to AWS?

For us, it all comes down to customer choice. When our customers ask us for a capability, we listen. They come to us because they want something done the Amazon way: easy to use, easy to scale, lower cost, and where they don’t have to worry about the infrastructure running behind it.

As mentioned, many of our customers are adopting containers and they expect AWS to support them. Over the past few years we have launched a number of services and features to make it easier for customers to run Docker-based applications. These include Docker support in AWS Elastic Beanstalk and the Amazon EC2 Container Service (ECS). We also have a variety of certified partners that support Docker and AWS and integrate with various AWS services, including ECS.

What does the phenomenon of open source mean to AWS? Is it a threat or a friend?

We view open source as a companion to AWS’s business model. We use open source and have built most AWS services on top of open source technology. AWS supports a number of open source applications, either directly or through partners. Examples of open source solutions available as AWS services include Amazon RDS (which supports MySQL, Postgres, and MariaDB), Amazon Elastic MapReduce (EMR), and Amazon EC2 Container Service (ECS). We are also an active member of the open source community. The Amazon ECS agent is available under an Apache 2.0 license, and we accept pull requests and allow our customers to fork our agent as well. AWS contributes code to Docker (e.g. CloudWatch logs driver), and was a founder member of the Open Container Initiative, which is a community effort to develop specifications for container runtimes.

As we see customers asking for services based on various open source technologies, we’ll keep adding those services.

You’ll be appearing at Container World this February. What do you think the biggest discussions will be about?

We expect customers will be interested in learning how they can run container-based applications in production, hearing about the most popular use cases, and catching up on the latest innovations in this space.

Betting on the cloud

A long-time expert on enterprise IT and cloud platforms, Dan Scholnick (General Partner, Trinity Ventures) has the distinction of having been Docker’s first venture investor. BCN spoke to him to find out the secrets to being a top-level IT investor.

Know your stuff: Scholnick has a technical background, with a computer science degree from Dartmouth College. After this, he worked at Wily Technology with the legendary Lew Cirne, who went on to be the founder and CEO of New Relic. At Wily, Scholnick built the first version of the company’s application performance management product.

All this gave Scholnick a natural appreciation for products and technologies that get used in the data centre as core infrastructure. It was partly this understanding that alerted him to the potential significance of Docker’s predecessor, dotCloud.

Know how to spot talent: The other factor was that he could recognise dotCloud founder Solomon Hykes as a technology visionary. “He had a better understanding and view of how infrastructure technology was changing than almost anyone we had met,” says Scholnick.

Of course, dotCloud didn’t turn out as expected. “It turns out we were wrong about PaaS, but we were right about the containers. Fortunately for all of us involved in the company, that container bet ended up working out.”

Know when the future is staring you in the face: When Scholnick invested in dotCloud, containers had been around for quite a long time. But they were very difficult to use. “What we learned through the dotCloud experience was how to make containers consumable. To make them easier to consume, easier to use, easier to manage, easier to operate. That’s really what Docker is all about: taking this technology that has actually been around, is great technology conceptually but has historically been very hard to use, and making it usable.”

The rest is IT history. Arguably no infrastructure technology in history has ever taken off and gained mass adoption as quickly as Docker.

“To me, the thing that’s really stunning is to see the breadth and depth of Docker usage throughout the ecosystem,” says Scholnick. “It’s truly remarkable.”

Know what’s next: When BCN asked Scholnick what he thought the next big thing would be in the cloud native movement, he points to an offshoot of Docker and containers: microservices. “I think we’re going to see massive adoption of microservices in the next 3-5 years and we’re likely going to see some big companies built around the microservices ecosystem,” he says. “Docker certainly has a role to play in this new market: Docker is really what’s enabling it.”

Keeping in touch with real-world uses of containers is one of the reasons Scholnick will be attending and speaking at Container World (February 16 – 18, 2016, Santa Clara Convention Center).

“As a board member at Docker and as an investor in the ecosystem, it’s always good to hear the anecdotal information about how people are using Docker – as well as what pieces they feel are missing that would help them use containers more effectively. That’s interesting to me because it points to problems that are opportunities for Docker to solve, or opportunities for new start-ups that we can fund.”

Click here to download the Container World programme

Docker buys Unikernel Systems to make micro containers

US-based container software pioneer Docker has announced the acquisition of Cambridge start-up Unikernel Systems, so it can create even tinier self-contained virtual system instances.

Open source based Docker automates the running of applications in self-contained units of operating system software (containers). It traditionally did this by creating a layer of abstraction on top of operating-system-level virtualization on Linux. This resource isolation allows multiple independent jobs to run within a single Linux instance, which obviates the need to spin up a new virtual machine. The technology provided by Unikernel Systems, according to Docker, takes this isolation a step further, with independent entities running on a virtual server at an even smaller scale.

The new expertise bought by Docker means that it can give every application its own Virtual Machine with a specialized unikernel, according to Docker community marketing manager Adam Herzog.

Unikernel Systems takes away the rigid distinction between operating system kernels and the applications that run over them, creating more fluidity and exchange between the two. When source code is compiled, a custom operating system is created for each application, which makes for a much more efficient and effective way of working. The key to the efficiency of unikernels is their size and adaptability, according to the Docker blog. Being brought into the open source stable will make them more readily available to developers, it argued.

Unikernel Systems was founded by alumni of the Xen hypervisor project, including Anil Madhavapeddy, David Scott, Thomas Gazagnaire and Amir Chaudhry. Since unikernels can run on ‘bare metal’ (hardware without any operating system or hypervisor), they take the efficiency of virtual machines further, according to the Docker blog. Unikernels are an important part of the future of the container ecosystem since they effectively absorb the operating system into the containers, Scott says. Since an application only needs to take on the scraps of operating system code that it needs, unikernels could eventually make the standalone operating system redundant, the blog claimed.

Cloud academy: Rudy Rigot and his new Holberton School

Business Cloud News talks to Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA) keynote Rudy Rigot about his new software college, which opens today.

Business Cloud News: Rudy, first of all – can you introduce yourself and tell us about your new Holberton School?

Rudy Rigot: Sure! I’ve been working in tech for the past 10 years, mostly in web-related stuff. Lately, I’ve worked at Apple as a full-stack software engineer for their localization department, which I left this year to found Holberton School.

Holberton School is a 2-year community-driven and project-oriented school, training software engineers for the real world. No classes, just real-world hands-on projects designed to optimize their learning, in close contact with volunteer mentors who all work for small companies or large ones like Google, Facebook, Apple, … One of the other two co-founders is Julien Barbier, formerly the Head of Community, Marketing and Growth at Docker.

Our first batch of students started last week!

What are some of the challenges you’ve had to anticipate?

Since we’re a project-oriented school, students are mostly graded on the code they turn in, which they push to GitHub. Some of this code is graded automatically, so we needed to be able to run each student’s code (or each team’s code) automatically in a fair and equal way.

We needed to get information on the “what” (what is returned in the console), but also on the “how”: how long does the code take to run? How many resources are consumed? What is the return code? Also, since Holberton students are trained on a wide variety of languages, how do you ensure you can grade a Ruby project, and later a C project, and later a JavaScript project, etc., with the same host while minimizing issues?

Finally, we had to make sure that a student can commit code that is as malicious as they want: we can’t have a human check it before running it, and it should only break their program, not the whole host.

So how on earth do you negotiate all these?

Our project-oriented training concept is new in the United States, but it’s been successful for decades in Europe. We knew that the European schools, which built their programs before containers became mainstream, typically run the code directly on a host system that has all of the software they need installed, and then simply run a chroot before running the student’s code. This didn’t solve all of the problems, while containers did in a very elegant way; so we took the container road!

HolbertonCloud is the solution we built to that end. It fetches a student’s code on command, then runs it based on a Dockerfile and a series of tests, and finally returns information about how that went. The information is then used to compute a score.
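The article doesn’t show HolbertonCloud’s internals, but a minimal sketch of that kind of flow, assuming the Docker SDK for Python (docker-py), might look like the following; the image tag, test command and resource limits are illustrative placeholders rather than Holberton’s actual values.

```python
# Minimal sketch of an automated grading run, assuming the Docker SDK for
# Python (docker-py). Image tag, test command and limits are illustrative
# placeholders, not HolbertonCloud's actual implementation.
import time
import docker

client = docker.from_env()

def grade(image_tag: str, test_cmd: str, timeout: int = 30) -> dict:
    """Run a student's (possibly malicious) code inside a container and
    report the 'what' and the 'how' of the run."""
    start = time.time()
    container = client.containers.run(
        image_tag,               # image built from the student's Dockerfile
        test_cmd,                # the test to execute, e.g. "make test"
        detach=True,
        network_disabled=True,   # malicious code can't reach the network
        mem_limit="256m",        # cap memory so one run can't starve the host
        pids_limit=100,          # cap processes to block fork bombs
    )
    result = container.wait(timeout=timeout)        # blocks until exit or timeout
    output = container.logs().decode(errors="replace")
    container.remove(force=True)                     # nothing lingers on the host
    return {
        "return_code": result["StatusCode"],         # the "what"
        "runtime_seconds": time.time() - start,      # the "how long"
        "console_output": output,
    }
```

Because each language’s toolchain lives in the image built from the student’s Dockerfile rather than on the grading host, the same host can grade a Ruby project, then a C project, then a JavaScript one.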

What’s amazing about it is that by using Docker, building the infrastructure has been trivial; the hard part has been about writing the tests, the scoring algorithm … basically the things that we actively want to be focused on!

So you’ve made use of containers. How much disruption do you expect their development to engender over the coming years?

Since I’m personally more on the “dev” end of devops, I see how striking it is that containers restore focus on actual development for my peers. So, I’m mostly excited by the innovation that software engineers will be focusing on instead of the issues that containers are taking care of for them.

Of course, it will be very hard to measure which of those innovations were able to exist because containers are involved; but they will touch virtually every corner of the tech industry, so that’s really exciting!

What effect do you think containers are going to have on the delivery of enterprise IT?

I think one takeaway from the very specific HolbertonCloud use case is that cases where code can be run trivially in production are getting rare, and one needs guarantees that only containers can bring efficiently.

Also, a lot of modern architectures fulfil their needs with systems made of more and more microservices, since we now have enough hindsight to see the positive outcomes for resilience. Each microservice may have different requirements, and may therefore be best built with different technologies, so managing a growing set of different software configurations is becoming increasingly relevant. Considering the positive outcomes, this trend will only keep growing, making the need for containers keep growing as well.

You’re delivering a keynote at Container World. What’s the main motivation for attending?

I’m tremendously excited by the stellar line-up! We’re all going to get amazing insight from many different and relevant perspectives, and that’s going to be very enlightening!

The very existence of Container World is exciting too: it’s crazy how far containers have come over the span of just a few years.

Click here to learn more about Container World (February 16 – 18, 2016 Santa Clara Convention Center, USA)

Containers: 3 big myths

Joe Schneider is a DevOps Engineer at Bunchball, a company that offers gamification as a service to the likes of Applebee’s and Ford Canada.

This February Schneider is appearing at Container World (February 16 – 18, 2016, Santa Clara Convention Center, USA), where he’ll be cutting through the cloudy abstractions to detail Bunchball’s real-world experience with containers. Here, exclusively for Business Cloud News, Schneider explodes three myths surrounding the container hype…

One: ‘Containers are contained.’

If you’re really concerned about security, or if you’re in a really security-conscious environment, you have to take a lot of extra steps. You can’t just throw containers into the mix and leave it at that: it’s not as secure as a VM.

When we first adopted containers, at least, the tools weren’t there. Now Docker has made security tools available, but we haven’t transitioned from the stance of ‘OK, Docker is what it is and recognise that’ to a more secure environment. What we have done instead is try to make sure the edges are secure: we put a lot of emphasis on that. At the container level we haven’t done much, because the tools weren’t there.

Two: The myth of the ten thousand container deployment

You’ll see the likes of Mesosphere, or Docker Swarm, say, ‘we can deploy ten thousand containers in like thirty seconds’ – and similar claims. Well, that’s a really synthetic test: these kinds of numbers are 100% hype. In the real world such a capacity is pretty much useless. No one cares about deploying ten thousand little apps that do literally nothing, that just go ‘hello world.’

The tricky bit with containers is actually linking them together. When you start with static hosts, or even VMs, they don’t change very often, so you don’t realise how much interconnection there is between your different applications. When you destroy and recreate your applications in their entirety via containers, you discover that you actually have to recreate all that plumbing on the fly and automate that and make it more agile. That can catch you by surprise if you don’t know about it ahead of time.
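To make that “plumbing” point concrete, here is a small, hypothetical sketch using the Docker SDK for Python: a user-defined network is the wiring that lets containers find each other by name, and it has to be recreated whenever the containers are. The image and container names are examples only.

```python
# Hypothetical sketch of container "plumbing" with the Docker SDK for Python;
# image and container names are examples only.
import docker

client = docker.from_env()

# The user-defined bridge network is the wiring between the services.
client.networks.create("app-net", driver="bridge")

# Containers on the same network can reach each other by container name.
client.containers.run("redis:alpine", name="db",
                      network="app-net", detach=True)
client.containers.run("mycompany/web:latest", name="web",
                      network="app-net", detach=True,
                      environment={"DB_HOST": "db"})  # 'db' resolves over the network

# Destroying and recreating these containers means redoing the network, name
# and environment wiring every time -- the work that orchestration tools automate.
```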

Three: ‘Deployment is straightforward’

We’ve been running containers in production for a year now. Before then we were playing around a little bit with some internal apps, but now we run everything except one application on containers in production. And that was a bit of a paradigm change for us. The line that Docker gives is that you can take your existing apps and put them in a container and it’s going to work in exactly the same way. Well, that’s not really true. You have to actually think about it a little bit differently, especially with the deployment process.

An example of a real ‘gotcha’ for us was that we presumed systemd and Docker would play nicely together, and they don’t. That really hit us in the deployment process – we had to delete the old container and start a new one using systemd, and that was always very flaky. Don’t try to home-grow your own deployment tooling; use something that is designed to work with Docker.

Click here to learn more about Container World (February 16 – 18, 2016 Santa Clara Convention Center, USA)

AWS opens up EC2 Container Registry to all

Cloud giant Amazon Web Services (AWS) has opened its technology for storing and managing application container images up to public consumption.

The Amazon EC2 Container Registry (ECR) had been exclusively for industry insiders who attended the launch at the AWS re:Invent conference in Las Vegas in October. However, AWS has now decided to level the playing field, its Senior Product Manager Andrew Thomas revealed, guest writing on the blog of AWS chief evangelist Jeff Barr. Thomas invited all interested cloud operators to apply for access.

As containers have become the de facto method for packaging application code, all cloud service providers are competing to fine-tune the process of running code within these constraints as an alternative to using virtual machines. But developers have fed back teething problems to AWS, Thomas reports in the blog.

ECR, explains Thomas, is a managed Docker container registry designed to simplify the management of Docker container images, which, developers have told Thomas, has proved difficult. Running a self-hosted Docker image registry for a large-scale infrastructure project can involve pulling hundreds of images at once, which makes self-hosting too difficult, especially with the added complexity of spanning two or more AWS regions. AWS clients also wanted fine-grained access control to images without having to manage certificates or credentials, Thomas said.

Management aside, there is a security dividend too, according to Thomas. “This makes it easier for developers to evaluate potential security threats before pushing to Amazon ECR,” he said. “It also allows developers to monitor their containers running in production.”

There is no charge for transferring data into the Amazon EC2 Container Registry. While storage costs 10 cents per gigabyte per month, all new AWS customers will receive 500MB of storage a month for a year.

The Registry is integrated with Amazon ECS and the Docker CLI (command line interface), in order to simplify development and production workflows. “Users can push container images to Amazon ECR using the Docker CLI from the development machine and Amazon ECS can pull them directly for production,” said Thomas.
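As a rough illustration of that push-from-development workflow, the following sketch assumes boto3 and the Docker SDK for Python are available; the repository name, region and tags are placeholders rather than anything from the article.

```python
# Rough sketch of the development-machine side of the ECR workflow described
# above, assuming boto3 and the Docker SDK for Python. Repository name,
# region and tags are placeholders.
import base64
import boto3
import docker

ecr = boto3.client("ecr", region_name="us-east-1")
dkr = docker.from_env()

# One-time setup: create a repository in the registry.
ecr.create_repository(repositoryName="my-app")

# Exchange AWS credentials for a temporary Docker login token, so no
# long-lived registry certificates or passwords are managed by hand.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"].replace("https://", "")

# Tag the locally built image and push it; an ECS task definition pointing
# at this repository URI can then pull the image directly in production.
dkr.login(username=user, password=password, registry=registry)
image = dkr.images.get("my-app:latest")
image.tag(f"{registry}/my-app", tag="latest")
dkr.images.push(f"{registry}/my-app", tag="latest")
```

The pull side in production is then handled by ECS itself once a task definition references the pushed image.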

The service has been effective since December 21st in the US East (Northern Virginia) region, with more regions on the way soon.

ElasticHosts launches elastic containers – could cut some running costs by 50%

Cloud server company ElasticHosts has announced that its new model of container technology can adapt automatically to fit volatile shifts in demand for resources and bill clients accordingly. The new Linux containers are designed to make management easier for resellers, service providers, web developers and web hosting companies.

ElasticHosts’ new containers are now available with cPanel v11.52, from third party control panel vendor cPanel. ElasticHosts claims it offers the first containers to integrate with cPanel v11.52, which now creates the possibility for much more precise billing according to the usage of server resources such as memory, processing power and storage. It also gives service providers the option to automatically adapt to changing circumstances, so clients only ever have to pay for what they use while there is no risk of hitting a performance barrier in periods of intense activity.

The control panel from cPanel can streamline the process of creating and managing websites, claims its vendor. Prior to the new release, cPanel could only run on virtual machine servers with licensing according to the virtual private server (VPS) model. The new ability to ‘autoscale’ and the capacity for exact billing will lower costs for clients, according to ElasticHosts. The usage-based billing offered by containers means website owners no longer have to pay for periods when server capacity is underutilised or the site is idle, typically saving up to 50% on hosting costs, it claims.

“We worked closely with cPanel, integrating and testing the product to make this a reality, and believe our technologies complement each other well,” said ElasticHosts CEO Richard Davies. “Containers are gaining real momentum.”

“Linux containers are an exciting technology and we have recognized the groundswell behind them in the internet community right now,” said Aaron Phillips, Chief Business Officer at cPanel.

Containers at Christmas: wrapping, cloud and competition

As anyone that’s ever been disappointed by a Christmas present will tell you – shiny packaging can be very misleading. As we hear all the time, it’s what’s inside that counts…

What, then, are we to make of the Docker hype, centred precisely on shiny, new packaging? (Docker is the vendor that two years ago found a way to containerise applications; other types of containers, operating system containers, have been around for a couple of decades.)

It is not all about the packaging, of course. Perhaps we should say that it is more about what the package is placed on, and how it is managed (amongst other things), that matters most?

Regardless, containers are one part of a changing cloud, data centre and enterprise IT landscape, with the ‘cloud native’ movement widely seen as driving a significant shift in enterprise infrastructure and application development.

What the industry is trying to figure out, and what could prove the most disruptive angle to watch as more and more enterprises roll out containers into production, is the developing competition within this whole container/cloud/data centre market.

The question of competition is a very hot topic in the container, devops and cloud space. Nobody could have thought the OCI co-operation between Docker and CoreOS meant they were suddenly BFFs. Indeed, the drive to become the enterprise container of choice now seems to be at the forefront of both companies’ plans. Is this, however, the most dynamic relationship in the space? What about the Google-Docker-Mesos orchestration game? It would seem that Google’s trusted container experience is already allowing it to gain favour with enterprises, with Kubernetes taking a lead. And with CoreOS in bed with Google’s open source Kubernetes, placing it at the heart of Tectonic, does this mean that CoreOS has a stronger play in the enterprise market than Docker? We will wait and see…

We will also wait and see how the Big Cloud Three will come out of the expected container-driven market shift. Somebody described AWS as ‘a BT’ to me…that is, the incumbent who will be affected most by the new disruptive changes brought by containers, since it makes a lot of money from an older model of infrastructure….

Microsoft’s container ambition is also being watched closely. There is a lot of interest from both the development and IT Ops communities in its play in the emerging ecosystem. At a recent meet-up, an Azure evangelist had to field a number of deeply technical questions regarding exactly how Microsoft’s containers fare next to Linux’s. The question is whether, when assessing who will win the largest piece of the enterprise pie, this will prove the crux of the matter.

Containers are not merely changing the enterprise cloud game (with third place Google seemingly getting it very right) but also driving the IT Ops’ DevOps dream to reality; in fact, many are predicting that it could eventually prove a bit of a threat to Chef and Puppet’s future….

So, maybe kids at Christmas have got it right….it is all about the wrapping and boxes! We’ll have to wait a little longer than Christmas Day to find out.

Written by Lucy Ashton, Head of Content & Production, Container World

Containers aren’t new, but ecosystem growth has driven development

Containers are getting a fair bit of hype at the moment, and February 2016 will see the first ever dedicated container-based conference take place in Silicon Valley in the US. Here, Business Cloud News talks to Kyle Anderson, who is the lead developer for Yelp, to learn about the company’s use of containers, and whether containers will ultimately live up to all the hype.

Business Cloud News: “What special demands does Yelp’s business put on its internal computing?”

Kyle Anderson: “I wouldn’t say they are very special. In some sense our computing demands are boring. We need standard things like capacity, scaling, and speed. But boring doesn’t quite cut it though, and if you can turn your boring compute needs into something that is a cut above the status quo, it can become a business advantage.”

BCN: “And what was the background to building your own container-based PaaS? What was the decision-making process there?”

KA: “Building our own container-based PaaS came from a vision that things could be better if they were in containers and could be scheduled on-demand.

“Ideas started bubbling internally until we decided to “just build it” with manager support. We knew that containers were going to be the future, not VMs. At the same time, we evaluated what was out there and wrote down what it was that we wanted in a PaaS, and saw the gap. The decision-making process there was just internal to the team, as most engineers at Yelp are trusted to make their own technical decisions.”

BCN: “How did you come to make the decision to open-source it?”

KA: “Many engineers have the desire to open-source things, often simply because they are proud of their work and want to share it with their peers.

“At the same time, management likes open-source because it increases brand awareness and serves as a recruiting tool. It was a natural progression for us. I tried to emphasise that it needed to work for Yelp first, and after one and a half years in production, we were confident that it was a good time to announce it.”

BCN: “There’s a lot of hype around containers, with some even suggesting this could be the biggest change in computing since client-server architecture. Where do you stand on its wider significance?”

KA: “Saying it’s the biggest change in computing since client-server architecture is very exaggerated. I am very anti-hype. Containers are not new, they just have enough ecosystem built up around them now, to the point where they become a viable option for the community at large.”

Container World is taking place on 16 – 18 February 2016 at the Santa Clara Convention Center, CA, USA.

Rackspace launches Carina ‘instant container’

Rackspace has launched a new ‘instant container’ offering which it says will take the strain out of building infrastructure.

Carina by Rackspace, unveiled at the OpenStack Summit in Tokyo, has now been made available as a free beta. Carina makes containers portable and easy to use, claims Rackspace, which devised the system to make running containerized applications faster. The service uses bare-metal performance, the Docker Engine, native container tooling and Docker Swarm in order to maximise processing power without sacrificing any control.

Typically a user might be a developer, data scientist or cloud operator who wants to outsource infrastructure management to Rackspace’s experts, which, says the vendor, saves them time on building, managing and updating their container environment.

With container technology being one of the fastest-growing software development tools in computing, companies adopting this unfamiliar technology are likely to face unforeseen management challenges. Though containers consume a fraction of the computing resources of typical virtual machines, they could eat up a lot of management time, warns Scott Crenshaw, Rackspace’s SVP of strategy and product. The savings yielded by container technology’s instant availability, application scaling and high application density could be neutralised by the time and money spent on learning new infrastructure management skills, he said.

The Carina service will save customers from that waste, Crenshaw said. “Our mission is to support OpenStack’s position as a leading choice for enterprise clouds,” he added. “Carina’s design makes containers fast, simple and accessible to developers using OpenStack.”

With no hypervisor overhead, an easy installation process and instant support, everything is designed to run faster, said Nick Stinemates, VP of business development at container maker Docker. “You can get started in under a minute. The Carina beta from Rackspace makes it fast and simple to start a Docker Swarm cluster. They have put the Docker experience front and centre without any abstraction,” said Stinemates.

Carina is now available as a free beta offering on the Rackspace Public Cloud for US customers.