Category Archives: containers

2023 State of Tech in Biopharma report reveals tech strategies in era of data and AI 

Benchling has launched its inaugural 2023 State of Tech in Biopharma report, which sheds light on the obstacles biopharma companies encounter when striving to fully implement and embrace data and AI technologies. The report surveyed 300 R&D and IT experts from biopharma companies large and small in a first-ever investigation into biopharma’s use of an… Read more »

The post 2023 State of Tech in Biopharma report reveals tech strategies in era of data and AI  appeared first on Cloud Computing News.

Friction between finance and tech leaders prevents companies from controlling cloud spend

Vertice, an optimisation platform for SaaS and cloud spend, has unveiled the results of its global survey, ‘The State of Cloud Cost Optimisation’, which reveals that organisations are being held back from controlling their cloud spending and gaining ROI because of a lack of alignment between finance and tech leaders. Amidst cloud costs rising by… Read more »

The post Friction between finance and tech leaders prevents companies from controlling cloud spend appeared first on Cloud Computing News.

Running Docker on CentOS on ESXi

The post below was written by GreenPages Enterprise Consultant Chris Williams and was published on his Mistwire blog.

Recently I’ve been playing with containers a little bit in my lab. Today I’m going to show you how to get a Docker engine running on a CentOS 7 VM running on an ESXi host. It’s surprisingly easy!
First, what is Docker? It’s an engine that sits on top of an existing host OS and essentially removes the “Guest OS” abstraction layer from the mix. This is good because each guest OS is a big resource hog once you start running several of them per host.

So what does this mean? Is this (potentially) bad news for VMware and Microsoft?

Short answer: yes.

Long answer: Yeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeesssssssssssssssssssssssssssssssssssssssssssss*

*VMware and MS are working on projects to get in on the containery goodness, so I won’t speak about that here. Instead I’m going to walk through how to set up your first Docker engine ON CentOS ON ESXi in your existing vSphere environment.
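For reference, the core of the setup on the VM itself looks roughly like this — a sketch assuming a stock CentOS 7 guest with the default extras repository enabled (package names are era-specific, so see Chris’ full post for detail). Saving the steps as a script makes them easy to copy onto the VM in one go:

```shell
# Sketch of the Docker install steps for a stock CentOS 7 VM (run as root or via sudo).
cat > install-docker.sh <<'EOF'
#!/bin/bash
set -euo pipefail
yum install -y docker          # Docker package from the CentOS extras repo
systemctl enable docker        # start the engine on boot...
systemctl start docker         # ...and right now
docker run hello-world         # sanity check: pulls and runs a tiny test container
EOF
chmod +x install-docker.sh
```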


To read the rest of Chris’ post, click here!


Looking for more information around Docker? Download this whitepaper, “10 Things to Know About Docker.”


Mozilla Firefox launches container feature for multiple online personas

The Mozilla Firefox team has announced it will integrate a new container-driven feature to allow users to sign into multiple accounts on the same site simultaneously.

While the concept of using technology to manage multiple accounts and different personas is not a new idea, the practicalities have been out of reach. With the new feature, users will be able to sign into multiple accounts in different contexts for such uses as personal emails, work accounts, banking, and shopping. Twitter is one of the most relevant examples in the immediate future, as it is not uncommon for individuals to have separate Twitter accounts for work and personal life.

“We all portray different characteristics of ourselves in different situations,” said Tanvi Vyas, one of the security engineers working on the project, on the company blog. “The way I speak with my son is much different than the way I communicate with my coworkers. The things I tell my friends are different than what I tell my parents. I’m much more guarded when withdrawing money from the bank than I am when shopping at the grocery store. I have the ability to use multiple identities in multiple contexts. But when I use the web, I can’t do that very well.

“The Containers feature attempts to solve this problem: empowering Firefox to help segregate my online identities in the same way I can segregate my real life identities.”

The Mozilla Firefox team is one of the first to take a crack at the problem, though it admits there are a number of challenges to come. Questions the team now needs to answer include:

  • How will users know what context they are operating in?
  • What if the user makes a mistake and uses the wrong context; can the user recover?
  • Can the browser assist by automatically assigning websites to Containers so that users don’t have to manage their identities by themselves?
  • What heuristics would the browser use for such assignments?

“We don’t have the answers to all of these questions yet, but hope to start uncovering some of them with user research and feedback,” said Vyas. “The Containers implementation in Nightly Firefox is a basic implementation that allows the user to manage identities with a minimal user interface.”

Chef boosts application IQ with Habitat launch

Chef has launched a new open source project called Habitat, which it claims introduces a new approach for application automation.

The team claims Habitat is a unique piece of software that frees applications from dependency on a company’s infrastructure. When applications are wrapped in Habitat, the runtime environment is no longer the focus and does not constrain the application itself. Because of this, the company claims, applications can run across numerous environments such as containers, PaaS, cloud infrastructure and on-premise data centres, and also have the intelligence to self-organize and self-configure.

“We must free the application from its dependency on infrastructure to truly achieve the promise of DevOps,” said Adam Jacob, CTO at Chef. “There is so much open source software to be written in the world and we’re very excited to release Habitat into the wild. We believe application-centric automation can give modern development teams what they really want — to build new apps, not muck around in the plumbing.”

Chef, founded only in 2008, would generally be considered a challenger to the technology industry’s giants, though the company has made positive strides in recent years by specializing in DevOps and containers, two of the more prominent growth areas. Although both are prominent in marketing campaigns and conference presentations, real-world application has proven more difficult.

The Habitat product is built on the observation that infrastructure has traditionally dictated the design of an application. Chef claims that by making the application and its automation the unit of deployment, developers can focus on business value and on planning features that will make their products stand out, rather than on the constraints of infrastructure and particular runtime environments.
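Concretely, a Habitat application is described by a small plan file that names the package and its dependencies without saying anything about the machine it will run on. A minimal sketch following Habitat’s plan.sh conventions — the origin, application name and dependency below are made-up examples, not taken from the announcement:

```shell
# Illustrative Habitat plan.sh; pkg_* variables and do_* callbacks follow
# Habitat's plan syntax, but the names used here are hypothetical.
mkdir -p habitat
cat > habitat/plan.sh <<'EOF'
pkg_name=myapp                    # hypothetical application name
pkg_origin=example                # hypothetical origin
pkg_version=0.1.0
pkg_deps=(core/node)              # runtime dependency, resolved by Habitat
do_build() {
  npm install                     # build runs inside Habitat's clean-room environment
}
do_install() {
  cp -r . "${pkg_prefix}"         # install into the package's own prefix
}
EOF
```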

“The launch of Habitat is a significant moment for both Chef and the entire DevOps community in the UK and EMEA,” said Joe Pynadath, GM of EMEA at Chef Software. “It marks our next evolution and will provide an absolutely transformative paradigm shift in how our community and customers can approach application management and automation. An approach that puts the application first and makes it independent of its underlying infrastructure. I am extremely excited to see the positive impact that our Chef community and customers throughout Europe will gain from this revolutionary technology.”

EMC launches storage provisioning framework for containers

EMC has announced the launch of libStorage, an open source, vendor- and platform-agnostic storage framework released through the EMC {code} program.

Containers have been one of the biggest buzzwords to hit the IT industry through 2015 and 2016, but unifying individual containers has been a challenge for developers. While several container platforms may be running in a single environment, each has its own language, requiring users to treat them as silos. EMC believes libStorage is the solution.

The offering is claimed to provide orchestration through a common model and API, creating centralized storage capabilities for a distributed, container-driven ecosystem. libStorage will create one storage language for speaking with all container platforms and one common method of support.

“The benefits of container technology are widely recognized and gaining ground all the time,” said Josh Bernstein, VP of Technology at EMC {code}. “That provides endless opportunity to optimize containers for IT’s most challenging use cases. Storage is a critical piece of any technology environment, and by focusing on storage within the context of open source, we’re able to offer users—and storage vendors—more functionality, choice and value from their container deployments.”

The offering, which is available on GitHub, will support Cloud Foundry, Apache Mesos, DC/OS, Docker and Kubernetes.
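In practice, a container platform would consume libStorage through a volume driver such as REX-Ray, another EMC {code} project. A hedged sketch of what that looks like from the Docker command line — the volume name and mount path are illustrative, and the commands assume a REX-Ray service is already configured against a storage backend:

```shell
# Provision a volume through the libStorage-backed REX-Ray driver, then
# attach it to a container. Volume and path names are illustrative.
docker volume create --driver rexray --name pgdata
docker run -d --volume-driver rexray -v pgdata:/var/lib/postgresql/data postgres
```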

“DC/OS users—from startups to large enterprises—love the portable container-operations experience our technology offers, and it’s only natural they would desire a portable storage experience as well,” said Tobias Knaup, CTO at Mesosphere. “libStorage promises just this, ensuring users a consistent experience for stateful applications via persistent storage, whatever container format they’re running.”

Container Solutions brings the production environment to the developer’s laptop

London-based Container Solutions has released the latest version of its minimesos project, an open source testing and experiment tool for Apache Mesos, which it claims brings production orchestration testing to the development environment.

The new offering targets the challenge of moving microservice applications from a developer’s laptop to the production environment, which can prove complicated because the target platform is different from the local one. The offering allows developers to bring up a containerised Apache Mesos cluster on their laptops, creating a production-like environment for building, experimenting and testing.

“When we started building a number of Mesos frameworks, we found it hard to run and test them locally,” said Jamie Dobson, CEO of Container Solutions. “So, we ended up writing a few scripts to solve the problem. Those scripts became minimesos, which lets you do everything on your laptop. We later integrated Scope so that developers could visualise their applications. This made minimesos even more useful for exploratory testing.”

The company claims developers can now start a Mesos cluster through the command line or via the Java API. The cluster is logically isolated, as each of the processes runs in a separate Docker container. minimesos is also integrated: it exposes framework, state and task information through its Cluster State API.
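The workflow the company describes is a short command-line loop; a sketch based on the minimesos CLI (subcommand names may differ between releases, and a local Docker engine is assumed):

```shell
# Spin up a containerised Mesos cluster on the local Docker engine,
# inspect it, then tear it down. Assumes minimesos is installed.
minimesos up        # start master, agent and ZooKeeper containers
minimesos info      # show cluster endpoints to point a framework at
minimesos destroy   # remove the cluster's containers when finished
```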

Joyent launches Container-Native offerings for public and hybrid cloud platform

Joyent has launched its next generation container-native (G4) and KVM-based (K4) instance package families, which are now available on its Triton-powered public cloud platform.

The company’s cloud platform runs on containers, as opposed to the traditional VMs the majority of other cloud platforms run on, which it claims notably improves efficiency. The software used to run the Triton Cloud service is 100% open source and available for customers to operate in their own private data centres within a hybrid cloud model.

“Workloads are more efficient on Triton Cloud,” said Bill Fine, Vice President of Product and Marketing at Joyent. “This is because Triton allows you to run containers natively, without having to pre-provision (and pay for) virtual machine hosts. The result is less waste and more cost savings for you.

“Consider our recent blueprint to run WordPress in containers. A minimum running implementation requires six g4-highcpu-128M instances and costs just over $13 per month. That minimal site may be perfect for a small blog or staging a larger one. Should you need to scale it, you can resize the containers without restarting them or scale horizontally with docker-compose scale (or another scheduler of your choice).”
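The resize-and-scale step Fine describes maps directly onto Compose. A minimal sketch — the service definition below is illustrative, not Joyent’s actual WordPress blueprint:

```shell
# Minimal docker-compose.yml with one stateless service that can be scaled out.
# The service and image names are illustrative.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "80"   # unpinned host port, so extra replicas don't clash
EOF
# Against a running engine you would then scale with:
#   docker-compose scale wordpress=3
```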

Joyent’s value proposition and marketing campaigns are seemingly built on the claim that it is cheaper and more efficient than AWS, as the team appears set on taking the fight to the incumbent industry leader. The company claims a notable price-performance advantage: Elasticsearch clusters on Triton complete query requests 50% to 70% faster, sharded MongoDB clusters complete tasks 100% to 150% faster, and standard primary/replica Postgres configurations run up to 200% faster than on AWS.

“The cost of running on Triton is about half the cost of running on AWS,” said Fine. “With enough experimentation and determination you might be able to narrow this cost gap by more efficiently bin-packing your containers into VMs on AWS, but on Triton those efficiencies are built in and the cost and complexity of VM host clustering is removed. Each container just runs (on bare metal) with the resources you specify.”

Docker bolsters security capabilities with Security Scanning launch

Docker has announced the general availability of its Security Scanning product, an offering formerly known as Project Nautilus.

The service, which is available as an add-on to Docker Cloud private repositories and for Official Repositories located on Docker Hub, streamlines software compliance procedures by providing customers with a security profile of all their Docker images. The offering sits alongside Docker Cloud and automatically triggers a series of events as soon as an image is pushed to a repository, providing a complete security profile of the image itself.
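From the user’s side, the trigger is nothing more than a normal push; a sketch assuming a Docker Cloud private repository with scanning enabled (the repository and tag names are illustrative):

```shell
# Tag a locally built image and push it; the push is what kicks off the
# binary-level scan on the Docker Cloud side. Names are illustrative.
docker tag myapp:latest mycompany/myapp:1.0
docker push mycompany/myapp:1.0
```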

“Docker Security Scanning conducts binary level scanning of your images before they are deployed, provides a detailed bill of materials (BOM) that lists out all the layers and components, continuously monitors for new vulnerabilities, and provides notifications when new vulnerabilities are found,” said Docker’s Toli Kuznets on the company’s blog.

“The primary concerns of app dev teams are to build the best software and get it to their customer as fast as possible. However, the software supply chain does not stop with developers, it is a continuous loop of iterations, sharing code with teams and moving across environments. Docker Security Scanning delivers secure content by providing deep insights into Docker images along with a security profile of its components. This information is then available at every stage of the app lifecycle.”

The offering itself splits each Docker image into its respective layers and components, and evaluates the risk associated with each one. Risks are checked against the CVE databases, linked to the specific layer and/or component, and are also monitored on an ongoing basis.

New vulnerabilities found during the ongoing monitoring process are reported to the CVE database, which is then used to assess all other software associated with that component or package, improving software compliance across the board. Docker believes the offering can enhance software compliance and general risk management not just at deployment, but throughout the lifecycle of the software itself.

“With this information, IT teams can proactively manage software compliance requirements by knowing what vulnerabilities impact what pieces of software, reviewing the severity of the vulnerability and making informed decisions on a course of action,” said Kuznets.

The offering is now available to all customers, with Docker currently offering a three-month free trial.

Container adoption hindered by skills gap – survey

New research from Shippable highlights that the use of containers is increasing within the North American market, though the current skills gap is proving to be a glass ceiling for the moment.

Just over half of the respondents to the survey said they were currently using containers in production, and 14% confirmed they were using the technology in the development and testing stages. A healthy 89% believe the use of containers in their organization will increase over the next 12 months.

“Our research and personal experience shows that companies can experience exponential gains in software development productivity through the use of container technology and related tools,” said Avi Cavale, CEO at Shippable. “Companies are realizing the productivity and flexibility gains they were expecting, and use of container technology is clearly on the rise. That said, there are still hurdles to overcome. Companies can help themselves by training internal software teams and partnering with vendors and service providers that have worked with container technology extensively.”

Of those not currently using the technology, a lack of in-house skills was listed as the main reason, though the survey also highlighted that security is still a concern, that the ROI of the technology is still unproven, and that some companies’ infrastructure is not designed to work with containers.

While the rise in awareness of containers has been relatively steady, a number of reports have highlighted that an unhealthy proportion of IT professionals do not understand how to use the technology, or what the business case is. The results here indicate there has at least been progress in understanding the use case: 74% of those using the technology are now shipping new software at least 10% faster, and 8% are shipping more than 50% faster than before.

“In the earlier years of computing, we had dedicated servers, which later evolved with virtualisation,” said Giri Fox, Director of Technical Services at Rackspace. “Containers are part of the next evolution of servers, and have gained large media and technologist attention. In essence, containers are the lightest way to define an application and to transport it between servers. They enable an application to be sliced into small elements and distributed on one or more servers, which in turn improves resource usage and can even reduce costs.

“Containers are more responsive and can run the same task faster. They increase the velocity of application development, and can make continuous integration and deployment easier. They often offer reduced costs for IT; testing and production environments can be smaller than without containers. Plus, the density of applications on a server can be increased which leads to better utilisation.

“As a direct result of these two benefits, the scope for innovation is greater than its previous technologies. This can facilitate application modernisation and allow more room to experiment.”
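Fox’s “lightest way to define an application” point is easy to see in a Dockerfile; a minimal sketch (the base image and script name are illustrative):

```shell
# A complete application definition in three lines; names are illustrative.
cat > Dockerfile <<'EOF'
FROM alpine:3.4                  # small base image keeps the container light
COPY app.sh /app.sh              # the application: here, a single script
CMD ["/bin/sh", "/app.sh"]       # what runs when the container starts
EOF
```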

The survey also showed that while Docker may be one of the foremost names in the containers world, this has not translated into all aspects of usage. The most popular registry is Google Container Registry at 54%, followed by Amazon EC2 Container Registry at 45%, with Docker Hub in third place at 34%. Public cloud was also the most popular platform, accounting for 31% of respondents. 52% of developers said they’re running containerized applications on Google Compute Engine, while 49% are running on Microsoft Azure and 43% on Amazon Web Services.

While containers continue to grow in popularity throughout the industry, the survey highlights that the technology is not quite there yet. North America could be seen as more of a trendsetter than Europe and the rest of the world, and given that usage has only just tipped past 50%, there may still be some way to go before the technology can be considered mainstream. The results are positive, but there is still work to do.