Docker Provisioning Tips from the Experts

By: Morpheus Data

Follow these approaches to enhance deployment and management of your data resources – whether or not you use Docker.

 

What? You haven’t converted all your data to Docker yet? Are you crazy? Get with the container program, son.

Well, as with many hyped technologies, it turns out that Docker specifically (and containers in general) is not the cure for all your data-management ills. That doesn’t mean the technology can’t be applied in beneficial ways by organizations of all types. It just means you have to approach Docker deliberately – with both eyes open and all assumptions pushed aside.

Matt Jaynes writes in Docker Misconceptions that Docker adds complexity to your systems and requires expertise to administer, especially in multi-host production environments. First and foremost, you need an orchestration tool to provision, deploy, and manage the servers Docker runs on. Jaynes recommends Ansible, which doubles as a configuration-management tool.

However, Jaynes also suggests several simpler alternative optimization methods that rival those offered by Docker:

  1. Cloud images: App-management services such as Morpheus let you save a server configuration as an image that you can replicate to create new instances in a flash. (Configuration-management tools then keep all servers identically configured as subsequent small changes roll out.)
  2. Version pinning: You can duplicate Docker’s built-in consistency by using version pinning, which helps avoid conflicts caused by misconfigured servers (a minimal pinning sketch appears after this list).
  3. Version control: Emulate Docker’s image layer caching via version control deploys that use git or another version-control tool to cache applications on your servers. This lets you update via small downloads.
  4. Package deploys: For deploys that require compiling/minifying CSS and JavaScript assets or some other time-consuming operation, you can pre-compile and package the code in a .zip file or with a package manager such as dpkg or rpm.
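Jaynes’ post isn’t tied to any particular tool, but to make the version-pinning idea concrete, here is a minimal sketch in Chef’s recipe DSL (the same DSL used later in this article). The package name and version string are placeholders, not taken from Jaynes’ post:

# pin_versions.rb (hypothetical recipe; the package name and version are stand-ins)
package 'nginx' do
  version '1.4.6-1ubuntu3'   # every server installs exactly this build
  action :install
end

Applied to every server by your configuration-management tool, a pin like this keeps hosts from drifting apart the way unpinned installs can.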
Running Docker in a multi-host production environment requires management of many variables, including the following:
  1. A secured private image repository (index)
  2. The ability to orchestrate container deploys with no downtime
  3. The ability to orchestrate container-deploy roll-backs
  4. The ability to network containers on multiple hosts
  5. Management of container logs
  6. Management of container databases and other data
  7. Creation of images that can accommodate init, logs, and similar components
Finding the perfect recipe for provisioning via Chef

The recent arrival of Chef Provisioning brings the concept of Infrastructure as Code to whole clusters, as John Keiser writes in a November 12, 2014, post on the Chef blog. Infrastructure as Code lets you write your cluster configuration as code, which makes clusters easier to understand. It also allows your clusters to become “testable, repeatable, self-healing, [and] idempotent,” according to Keiser.

Chef Provisioning’s features include the following:
  1. Application clusters can be described with a set of machine resources.
  2. Multiple copies of your application clusters can be deployed for test, integration, production, and other purposes.
  3. Redundancy and availability are improved because clusters can be spread across many clouds and machines.
  4. When orchestrating deployments, you’re assured the database primary comes up before any secondaries.
  5. machine_batch can be used to provision machines in parallel, which speeds up deployments.
  6. machine_image can be used to create standardized images that make rollouts faster without losing the ability to patch.
  7. load_balancer and the machine resource can be used to scale services easily (see the sketch after this list, which combines machine_image and load_balancer).
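As a rough sketch of how the last two features might fit together: the resource names follow the Chef Provisioning DSL Keiser describes, but the image name, recipes, machine count, and balancer name are placeholders, and load_balancer assumes a driver that supports it (such as the AWS driver).

# webcluster.rb (hypothetical sketch; names and counts are illustrative)
require 'chef/provisioning'

# Bake a standardized image once; machines built from it roll out faster
machine_image 'web_base' do
  recipe 'apache2'
end

web_names = (1..3).map { |i| "web#{i}" }

# Stamp out identical web machines from the pre-built image
web_names.each do |name|
  machine name do
    from_image 'web_base'
  end
end

# Put the web machines behind a load balancer to scale the service
load_balancer 'web_lb' do
  machines web_names
end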

Keiser provides the example of a recipe that deploys a database machine with MySQL on it. You simply install Chef and Chef Provisioning, set the CHEF_DRIVER environment variable to your cloud service, and run the recipe:

# mycluster.rb
require 'chef/provisioning'

machine 'db' do
  recipe 'mysql'
end

For example, to provision to your default Amazon Web Services account in ~/.aws/config, set the CHEF_DRIVER variable to the following:

export CHEF_DRIVER=aws   # on Unix
set CHEF_DRIVER=aws      # on Windows

Then you simply run the recipe:

chef-client -z mycluster.rb

Add machine_batch and parallelization to apply the configuration to multiple servers:

# mycluster.rb
require 'chef/provisioning'

machine_batch do
  machine 'db' do
    recipe 'mysql'
  end

  # Create 2 web machines
  1.upto(2) do |i|
    machine "web#{i}" do
      recipe 'apache2'
    end
  end
end

In this example, machine_batch contains three machines and provisions them in parallel. A loop creates the web machines, so you can add more by changing 2 to your desired number; because provisioning runs in parallel, they all come up in roughly the time it takes to create one machine.

Finally, run this command:

chef-client -z mycluster.rb

The db machine will respond with “up to date” rather than indicating that it has been created because machine (like all Chef resources) is idempotent: It knows the db machine is already configured correctly, so it doesn’t do anything.

A simple example of Chef Provisioning in Docker

In an April 28, 2015, post on the Safari blog, Shane Ramey provides the example of using Docker with Chef Provisioning to create a network of nodes on a local workstation. The network’s three node types span four instances: a load balancer, two application servers, and a database server. The rudimentary example merely installs packages and sets Chef node data. It has four functions:

  1. Define the machines in the environment: Chef node data and recipe run_list (a hedged sketch of such a file appears after this list).
  2. Add, query, update, and delete machines by using the environment configuration file.
  3. Share the environment configuration file to allow others to run their own copy of the environment.
  4. Apply version control to the environment configuration.
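Ramey’s actual deploy-environment.rb isn’t reproduced here, but a minimal sketch of that kind of file might look like the following. The machine names, recipes, and node attributes are illustrative assumptions, not Ramey’s code; the file relies on the chef-provisioning-docker driver installed in the steps below:

# deploy-environment.rb (hypothetical sketch, not Ramey's original file)
require 'chef/provisioning'

with_driver 'docker'   # provision to local Docker containers via chef-provisioning-docker

# Load balancer in front of the application servers
machine 'lb1' do
  recipe 'haproxy'
  attribute %w(lb members), %w(app1 app2)   # Chef node data
end

# Two application servers
1.upto(2) do |i|
  machine "app#{i}" do
    recipe 'myapp'
    attribute %w(myapp db_host), 'db1'
  end
end

# Database server
machine 'db1' do
  recipe 'mysql'
end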

There are six steps to the process:

  1. Copy the example file ~/deploy-environment.rb to the local machine.
  2. Download and install Docker: boot2docker for Mac and Windows, or Docker for Linux.
  3. Download and install ChefDK.
  4. Install the chef-provisioning-docker library for Chef.
  5. Set the local machine to use Chef Zero to serve cookbooks.
  6. Select Docker as your driver.

Now when you run docker images, the output will resemble that shown in the screenshot below:

[Screenshot: After following the six steps for Chef Provisioning in Docker, running docker images lists information about each of your Docker images. Source: Safari blog]

When the images launch, they communicate with a defined Chef server, which can be either the local workstation’s Development Chef Server or a production Chef server. Using a local machine as the Chef server requires that it be launched before the Docker instances can check in. To do so, run the following command:

cd ~/chef && chef-zero -H 0.0.0.0 # Listen on all interfaces

To launch the Docker images, run one or more of the commands shown in the screenshot below:

[Screenshot: Launch Docker images by running one of these commands (after initializing the Chef server, if you’re using a local machine, as in the above example). Source: Safari blog]

This technique allows developers working on the same application to share a single environment config file and run everything on the same local machine. Code can be uploaded quickly and simply for deployment in one or more networked environments.


To learn more about best practices for container utilization and cloud management, follow Morpheus on Twitter or LinkedIn, or sign up for a demo today!