Category Archives: Docker

Exclusive DockerCon17 Savings!

  DockerCon 2017 is the perfect conference for developers who distribute apps that are built with containers. Many of our amazing customers are attending this sold-out event! Are you one of the 4,000+ attendees of DockerCon or wishing you could be there? Don’t worry—here at Parallels we’ve put together a limited time offer of exclusive […]

The post Exclusive DockerCon17 Savings! appeared first on Parallels Blog.

Running Docker on CentOS on ESXi

The post below was written by GreenPages Enterprise Consultant Chris Williams and was published on his Mistwire blog.

Recently I’ve been playing with containers a little bit in my lab. Today I’m going to show you how to get a Docker engine running on a CentOS 7 VM running on an ESXi host. It’s surprisingly easy!
First, what is Docker? It’s an engine that sits on top of an existing host OS and basically removes the “Guest OS” abstraction layer from the mix. This is good because the Guest OS is a big resource hog when you start having several of them per host.

So what does this mean? Is this (potentially) bad news for VMware and Microsoft?

Short answer: yes.

Long answer: Yeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeesssssssssssssssssssssssssssssssssssssssssssss*

*VMware and MS are working on projects to get in on the containery goodness, so I won’t speak about that here. Instead I’m going to walk through how to set up your first Docker engine ON CentOS ON ESXi in your existing vSphere environment.
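Before you click through, here’s a rough sketch of the sort of commands the walkthrough involves, assuming a stock CentOS 7 VM with internet access (package names and exact steps may vary with your setup):

```
# Install Docker from the CentOS Extras repository
sudo yum install -y docker

# Start the daemon now and have it survive reboots
sudo systemctl start docker
sudo systemctl enable docker

# Sanity check: pull and run a throwaway container
sudo docker run --rm hello-world
```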


To read the rest of Chris’ post, click here!


Looking for more information around Docker? Download this whitepaper, “10 Things to Know About Docker.”


Cloud and software jobs surge over last 12 months

Rackspace has released the findings from its annual analysis of the IT job market, which highlighted that demand for positions in and around cloud computing is rising at a healthy rate.

Vacancies for AWS engineer roles increased by 125% over the last 12 months, while those advertised for Microsoft Azure competencies increased by 75% in the same period. The rise in jobs focused on tailoring cloud solutions for individual companies, and on migrating from legacy technologies, supports previous research and claims that cloud computing is penetrating the mainstream marketplace.

“Our industry moves so fast that we can’t rely entirely on traditional forms of education from schools and universities to fill skills gaps,” said Darren Norfolk, Managing Director for Rackspace in the UK. “Therefore, technology companies have a responsibility to address these shortages by growing and fostering talent through on-the-job training and experience.

“I expect the rise in demand for cloud-related jobs to continue as a growing number of businesses adopt a multi-cloud strategy, using platforms such as Microsoft Azure, OpenStack and AWS. The highly competitive recruitment market for skills in these areas means that managing the platforms in-house could become more costly than it has been in the past.”

Software development is another area which has demonstrated healthy growth: the number of vacancies for individuals with Docker expertise has risen by 341%, though this is down from the 991% increase reported in the 2015 findings. The accelerated rate at which new technologies are penetrating the market and being implemented by companies throughout the world is seemingly too fast for in-house resources to be trained on these competencies, leaving hiring new employees the only option for some. Docker expertise is now the second most sought-after job function in the IT world, according to the research.

DevOps as a practice would also appear to have been accepted in the business world, as the number of roles grew 53% over the last twelve months, following a 57% increase in last year’s findings. The rise in roles would appear to be an indicator that DevOps has now been integrated within the IT ecosystem, though it may still be considered too early to call it mainstream.

Docker bolsters security capabilities with Security Scanning launch

Docker has announced the general availability of its Security Scanning product, an offering formerly known as Project Nautilus.

The service, which is available as an add-on to Docker Cloud private repositories and for Official Repositories located on Docker Hub, streamlines software compliance procedures by providing customers with a security profile of all their Docker images. The offering sits alongside Docker Cloud and automatically triggers a series of events as soon as an image is pushed to a repository, producing a complete security profile of the image itself.

“Docker Security Scanning conducts binary level scanning of your images before they are deployed, provides a detailed bill of materials (BOM) that lists out all the layers and components, continuously monitors for new vulnerabilities, and provides notifications when new vulnerabilities are found,” said Docker’s Toli Kuznets on the company’s blog.

“The primary concerns of app dev teams are to build the best software and get it to their customer as fast as possible. However, the software supply chain does not stop with developers, it is a continuous loop of iterations, sharing code with teams and moving across environments. Docker Security Scanning delivers secure content by providing deep insights into Docker images along with a security profile of its components. This information is then available at every stage of the app lifecycle.”
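From the developer’s side there is nothing new to learn: a scan is triggered by an ordinary push to a scanning-enabled repository. A minimal sketch (the image and repository names here are placeholders):

```
# Tag a locally built image for a private Docker Cloud repository
docker tag my-app:latest myorg/my-app:1.0

# The push is what triggers the binary-level scan; results appear
# against the tag in the repository's web interface
docker push myorg/my-app:1.0
```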

The offering itself splits each Docker image into its respective layers and components, and evaluates the risk associated with each one. Risks are checked against the CVE databases, linked to the specific layer and/or component, and monitored on an ongoing basis.

When new vulnerabilities are reported to the CVE databases during the ongoing monitoring process, the service reassesses all other software associated with the affected component or package, improving software compliance across the board. Docker believes the offering enhances software compliance and general risk management not just at the point of deployment but throughout the lifecycle of the software itself.

“With this information, IT teams can proactively manage software compliance requirements by knowing what vulnerabilities impact what pieces of software, reviewing the severity of the vulnerability and making informed decisions on a course of action,” said Kuznets.

The offering is now available to all customers, and Docker is currently offering a three-month free trial.


Docker buys Conductant to catalyse coding development

Container technology pioneer Docker has bought start-up Conductant, best known for creating the Aurora strand of the Apache Mesos clustering system. Conductant’s software is used to catalyse faster development of large-scale code.

Announcing the acquisition on its website, Docker spokesman Solomon Hykes placed more emphasis on the talent, rather than the technology, being brought in with the takeover of an early-stage start-up. Welcoming the Conductant ‘team’ to the Docker ‘family’, Hykes outlined the contributions that founders Bill Farner, David Chung and John Sirois made to operating and scaling production systems at Google, Twitter and Zynga.

Farner, who created the Aurora project, will lead the process of integrating the clustering technology into the fabric of Docker’s container software. Docker’s expansion policy is to buy emerging software tool makers and integrate them into its container software core, according to Hykes. In January BCN reported how Docker had acquired Unikernel Systems in order to channel its hypervisor and unikernel experience into the development of Docker’s container systems. “We believe our job is integrating these technologies in tools that are easy to use and help people create new things. We did this for Linux containers, to help make applications more portable,” wrote Hykes.

Aurora, an extension of the Apache Mesos clustering system, is specifically designed for hyper-scale production environments. Hykes claimed it is recognized as the most scalable and operationally robust component of the Mesos stack, which in turn helps to create the conditions for operations-driven development (ODD). The experience of the Conductant team in operating global-scale clouds for Google, Twitter and Zynga forced them to develop new techniques for rapid development. Bill Farner’s team at Twitter built Aurora to automate massive server farms that could be managed by a handful of engineers.

Docker now plans to incorporate the best ideas from Aurora into Docker Swarm, which allows for any app to go on any infrastructure on any scale, and integrate Aurora as an optional component of the official Docker stack. One option is to integrate Aurora with Docker Swarm to form a powerful large-scale web operations stack.

While Swarm is designed to be the standard base layer to scale all kinds of applications, Aurora is optimized for large-scale consumer apps reaching hundreds of millions of users. “By making two of the most popular open-source infrastructure projects interoperate better, we believe both communities will benefit,” said Hykes.
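As a flavour of what Swarm brings to that stack, here is a minimal sketch of scheduling an app across a cluster using Docker’s swarm-mode CLI (which post-dates this announcement; the service name and image are illustrative):

```
# Turn this Docker engine into a swarm manager
docker swarm init

# Ask the scheduler for three replicas of a web app
docker service create --name web --replicas 3 -p 80:80 nginx

# Watch the tasks spread across whatever nodes have joined
docker service ps web
```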

Docker launches DDC to support ‘container as a service’ offering

Container company Docker has announced Docker Datacenter, along with the new concept of ‘containers as a service’, in a bid to extend its cloud-based technology to customer sites.

The Docker Datacenter (DDC) resides on the customer’s premises and gives them a self-service system for building and running applications across multiple production systems, all under operational control.

It has also announced the general availability of Docker Universal Control Plane, a service that had been in beta testing since November 2015 and which underpins the running of containers as a service (CaaS).

The advantage of the DDC is that it creates a native environment for the lifecycle management of Dockerized applications. Docker claims that 12 Fortune 500 companies have been beta testing the DDC, along with smaller companies in a range of industries.

Since every company has different systems, tools and processes, the DDC was designed to work with whatever clients have got and adjust to their infrastructure without making them recode their applications, explained Docker spokesman Banjot Chanana on the Docker website. Networking, for example, can be massively simplified if clients use Docker to define how app containers network together, choosing from any number of providers for the underlying network infrastructure rather than having to tackle the problem themselves. Similarly, connecting to an internal storage infrastructure is a lot easier. Application programming interfaces provided by the on-site ‘CaaS’ allow developers to move stats and logs in and out of logging and monitoring systems more easily.

“This model enables a vibrant ecosystem to grow with hundreds of partners,” said Chanana, who promised that Docker users will have much better options for their networking, storage, monitoring and workflow automation challenges.

Docker says its DDC is integrated with Docker’s commercial Universal Control Plane and Trusted Registry software. It achieved this with open source Docker projects Swarm (orchestration), Engine (container runtime), Content Trust (security) and Networking. Docker and its partner IBM provide dedicated support, product engineering teams and service level agreements.
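As a small illustration of the plugin model Chanana describes, the driver behind a Docker network can be swapped without the application noticing. A sketch (names are illustrative, and the built-in bridge driver stands in for any partner-provided one):

```
# Create an app-level network; -d names the driver, which could
# equally be an overlay or a partner's networking plugin
docker network create -d bridge app-net

# Containers join by network name and discover each other by
# container name, regardless of what the driver does underneath
docker run -d --name db --network app-net redis
docker run -d --name web --network app-net -p 80:80 nginx
```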

Exponential Docker usage shows container popularity

Adoption of Docker’s containerisation technology has entered a period of explosive growth, with usage nearly doubling in the last three months, according to the company’s latest figures.

An announcement on the company blog reports that Docker has now served 2 billion ‘pulls’ of images. In November 2015 the figure stood at 1.2 billion pulls, and Docker Hub, from which these images are pulled, was only launched in March 2013.

Docker’s invention of a software-defined, self-contained file system that encapsulates all the elements of a server in microcosm – such as code, runtime, system tools and system libraries – has whetted the appetite of developers in the age of the cloud.
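That encapsulation is easiest to see in a Dockerfile; a minimal, illustrative example (the file and image names are placeholders):

```
# A minimal Dockerfile: the image bundles code, runtime and
# system libraries into one shippable unit
cat > Dockerfile <<'EOF'
FROM python:2.7
COPY app.py /app.py
CMD ["python", "/app.py"]
EOF

# Build the image once, then run it anywhere a Docker engine exists
docker build -t my-app .
docker run --rm my-app
```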

In January 2016, Docker users pulled images nearly 7,000 times per minute, four times the run rate of a year earlier. In that single month Docker handled the equivalent of 15% of all its transactions from the past three years.

The number of ‘pulls’ is significant because each of these transactions indicates that a Docker engine is downloading an image to create containers from it. Development teams use Docker Hub to publish and use containerised software, and to automate their delivery. The fact that two billion pulls have now taken place indicates the popularity of the technology, and the near-doubling in the last three months shows that the popularity of this variation on virtualisation is still accelerating.
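Mechanically, each of those two billion transactions boils down to a single command (the image name here is just an example):

```
# One 'pull': the engine downloads the image's layers from Docker Hub
docker pull nginx:latest

# Containers are then created locally from the cached image
docker run -d -p 80:80 nginx:latest
```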

There are currently over 400,000 registered users on Docker Hub. “Our users span from the largest corporations, to newly-launched startups, to the individual Docker enthusiast and their number is increasing every day,” wrote Docker spokesman and blog author Mario Ponticello.

Around a fifth of Docker’s two billion pulls come from its 93 ‘Official Repos’ – a curated set of images from Docker’s partners, including NGINX, Oracle, Node.js and CloudBees. Docker’s security-monitoring service Nautilus maintains the integrity of the Official Repos over time.

“As our ecosystem grows, we’ll be adding single-click deployment and security scanning to the Docker platform,” said Ponticello.

A RightScale study in January 2016 found that 17% of enterprises now have more than 1,000 virtual machines in the public cloud (up four percentage points in a year), while private clouds are showing an even stronger appetite for virtualisation techniques, with 31% of enterprises running more than 1,000 VMs, up from 22% in 2015.

Betting on the cloud

A long-time expert on enterprise IT and cloud platforms, Dan Scholnick (General Partner, Trinity Ventures) has the distinction of having been Docker’s first venture investor. BCN spoke to him to find out the secrets of being a top-level IT investor.

Know your stuff: Scholnick has a technical background, with a computer science degree from Dartmouth College. After this he worked at Wily Technology with the legendary Lew Cirne, who went on to be the founder and CEO of New Relic. At Wily, Scholnick built the first version of the company’s application performance management product.

All this gave Scholnick a natural appreciation for products and technologies that get used in the data centre as core infrastructure. It was partly this understanding that alerted him to the potential significance of Docker’s predecessor, dotCloud.

Know how to spot talent: The other factor was that he could recognise dotCloud founder Solomon Hykes as a technology visionary. “He had a better understanding and view of how infrastructure technology was changing than almost anyone we had met,” says Scholnick.

Of course, dotCloud didn’t turn out as expected. “It turns out we were wrong about PaaS, but we were right about the containers. Fortunately for all of us involved in the company, that container bet ended up working out.”

Know when the future is staring you in the face: When Scholnick invested in dotCloud, containers had been around for quite a long time. But they were very difficult to use. “What we learned through the dotCloud experience was how to make containers consumable. To make them easier to consume, easier to use, easier to manage, easier to operate. That’s really what Docker is all about, taking this technology that has actually been around, is great technology conceptually but has historically been very hard to use, and make it usable.”

The rest is IT history. Arguably no infrastructure technology in history has ever taken off and gained mass adoption as quickly as Docker.

“To me, the thing that’s really stunning is to see the breadth and depth of Docker usage throughout the ecosystem,” says Scholnick. “It’s truly remarkable.”

Know what’s next: When BCN asked Scholnick what he thought the next big thing would be in the cloud native movement, he points to an offshoot of Docker and containers: microservices. “I think we’re going to see massive adoption of microservices in the next 3-5 years and we’re likely going to see some big companies built around the microservices ecosystem,” he says. “Docker certainly has a role to play in this new market: Docker is really what’s enabling it.”

Keeping in touch with real-world uses of containers is one of the reasons Scholnick will be attending and speaking at Container World (February 16 – 18, 2016, Santa Clara Convention Center).

“As a board member at Docker and as an investor in the ecosystem, it’s always good to hear the anecdotal information about how people are using Docker – as well as what pieces they feel are missing that would help them use containers more effectively. That’s interesting to me because it points to problems that are opportunities for Docker to solve, or opportunities for new start-ups that we can fund.”

Click here to download the Container World programme

Docker buys Unikernel Systems to make micro containers

US-based container software pioneer Docker has announced the acquisition of Cambridge start-up Unikernel Systems, enabling it to create even tinier self-contained virtual system instances.

Open source based Docker automates the running of applications in self-contained units of operating system software (containers). It has traditionally done this through operating-system-level virtualisation on Linux: this resource isolation allows multiple independent jobs to run within a single Linux instance, which obviates the need to spin up a new virtual machine for each one. The technology provided by Unikernel Systems, according to Docker, takes the autonomy of individual workloads to a new level, with independent entities running on a virtual server at an even smaller, more microcosmic level.
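That isolation is visible from the command line: several independent workloads share one Linux instance, each in its own container, with no per-workload guest OS (a simple sketch):

```
# Two independent workloads on the same kernel, no guest OS for each
docker run -d --name web nginx
docker run -d --name cache redis

# One Linux instance, two isolated containers
docker ps
```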

The new expertise bought by Docker means that it can give every application its own virtual machine with a specialized unikernel, according to Docker community marketing manager Adam Herzog.

Unikernel technology takes away the rigid distinction between operating system kernels and the applications that run on top of them, creating more fluidity and exchange between the two. When source code is compiled, a custom operating system is created for each application, which makes for a much more efficient way of working. The key to the efficiency of unikernels is their size and adaptability, according to the Docker blog. Bringing them into the open source stable will make them more readily available to developers, it argued.

Unikernel Systems was founded by alumni of the Xen hypervisor project, including Anil Madhavapeddy, David Scott, Thomas Gazagnaire and Amir Chaudhry. Since unikernels can run on ‘bare metal’ (hardware without any operating system or hypervisor), they take the efficiency of virtual machines further, according to the Docker blog. Unikernels are an important part of the future of the container ecosystem since they effectively absorb the operating system into the containers, Scott says. And since an application takes on only the scraps of operating system code that it needs, unikernels could eventually make the standalone operating system redundant, the blog claimed.

Containers at Christmas: wrapping, cloud and competition

As anyone that’s ever been disappointed by a Christmas present will tell you – shiny packaging can be very misleading. As we hear all the time, it’s what’s inside that counts…

What, then, are we to make of the Docker hype, centred precisely on shiny new packaging? (Docker is the vendor that two years ago found a way to containerise applications; other types of containers, operating system containers, have been around for a couple of decades.)

It is not all about the packaging, of course. Perhaps we should say that what matters most is what the package sits on, and how it is managed (amongst other things)?

Regardless, containers are one part of a changing cloud, data centre and enterprise IT landscape, with the ‘cloud native’ movement widely seen as driving a significant shift in enterprise infrastructure and application development.

What the industry is trying to figure out, and what could prove the most disruptive angle to watch as more and more enterprises roll out containers into production, is the developing competition within this whole container/cloud/data centre market.

The question of competition is a very hot topic in the container, DevOps and cloud space. Nobody could have thought the OCI co-operation between Docker and CoreOS meant they were suddenly BFFs. Indeed, the drive to become the enterprise container of choice now seems to be at the forefront of both companies’ plans. Is this, however, the most dynamic relationship in the space? What about the Google-Docker-Mesos orchestration game? It would seem that Google’s trusted container experience is already allowing it to gain favour with enterprises, with Kubernetes taking a lead. And with CoreOS in bed with Google’s open source Kubernetes, placing it at the heart of Tectonic, does this mean that CoreOS has a stronger play in the enterprise market than Docker? We will wait and see…

We will also wait and see how the Big Cloud Three come out of the expected container-driven market shift. Somebody described AWS to me as ‘a BT’ – that is, the incumbent that will be affected most by the new disruptive changes brought by containers, since it makes a lot of money from an older model of infrastructure…

Microsoft’s container ambition is also being watched closely. There is a lot of interest from both the development and IT Ops communities in its play in the emerging ecosystem. At a recent meet-up, an Azure evangelist had to field a number of deeply technical questions about exactly how Microsoft’s containers fare next to Linux’s. The question is whether, when assessing who will win the largest piece of the enterprise pie, this will prove the crux of the matter.

Containers are not merely changing the enterprise cloud game (with third-place Google seemingly getting it very right) but also driving IT Ops’ DevOps dream to reality; in fact, many are predicting that they could eventually prove a bit of a threat to Chef and Puppet’s future…

So, maybe kids at Christmas have got it right… it is all about the wrapping and boxes! We’ll have to wait a little longer than Christmas Day to find out.

Written by Lucy Ashton, Head of Content & Production, Container World