Easier Deploys with CircleCI and Custom Docker Containers


In a recent project, we adopted a new method of using CircleCI for our testing and deploys: a custom-made container image that runs project builds and tests. Our repo includes a Dockerfile, a config.yml for CircleCI, and a YAML file for pre-commit.

We took this approach in the interest of eliminating repeated steps in our builds, such as installing the same tools for testing and deployment every time new code was pushed. This created consistency in build environments, spared us redundant work, and made the image available for future projects without our having to copy and paste scripts between repos.

Here, we’ll explain our reasoning, our methods, and how you can start adapting this process for your own tests and builds.

Choose your base image

We wanted a base image with the tooling CircleCI deems necessary for a primary container image. CircleCI has kindly produced a wide array of base images with the tools for various languages built in, so you can make an easy first choice depending on the dominant language in your chosen tools.

For our base, we chose Circle’s Python 3.6 image because we install two Python-based tools — awscli and pre-commit — with pip. CircleCI’s Python image ensured we could install these without first installing Python. Whatever language or version feels right for you, CircleCI advises specifying the exact version you want (a specific tag or digest) rather than using :latest, unless you especially enjoy debugging surprises a few weeks or months after you finish this project. (We followed this convention in this blog post as well. Aside from the main repo link above, all links to our files go to the versions that were current as of the writing of this post.)

Those surprises can happen either because “latest” changes under you as CircleCI updates the image or because the build uses a cached “latest” when you reasonably expected a newer version. We suggest being generally wary of mutable tags, and CircleCI itself warns that it “cannot guarantee that mutable tags will return an up-to-date version of an image.”
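To make that concrete, here’s a minimal sketch of pinning a base image in a Dockerfile (the tag shown is illustrative, not necessarily the one we used, and the digest is a placeholder):

```dockerfile
# Pinned to a specific tag: the build environment stays put until
# you deliberately bump the version.
FROM circleci/python:3.6.4

# Stricter still: pin by digest, so the image can never change under you.
# (Placeholder digest shown.)
# FROM circleci/python@sha256:<digest>

# Avoid this: "latest" is mutable and can change between builds.
# FROM circleci/python:latest
```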

Either way, resist the temptation to take the answer that’s easy today but expensive tomorrow.

Layers of complexity and function

A quick note: the CircleCI image sets the user as “circleci,” so shell commands will either need to be prefaced with “sudo” where they require root, or you’ll need to set the user to root at the top of the Dockerfile and back to “circleci” at the bottom. Our Dockerfile reflects our choice to use “sudo” where needed, rather than switching users.
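Sketched out, the two options look like this (the package name is illustrative, not one of our actual installs):

```dockerfile
# Option 1 (our choice): stay as the "circleci" user, sudo where needed.
RUN sudo apt-get update && sudo apt-get install -y jq

# Option 2: switch to root for the installs, then switch back.
# USER root
# RUN apt-get update && apt-get install -y jq
# USER circleci
```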

This step is where the heavy optimizing comes in. Each RUN instruction adds a layer to the container image, which typically increases its size. One way to minimize this is to chain multiple commands with &&. Docker doesn’t exactly recommend it, describing chained commands as “failure-prone and hard to maintain,” but it’s one way to limit the bloat that can come with multiple RUN instructions.
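As a quick illustration (packages here are stand-ins, not our actual installs), compare the layer counts:

```dockerfile
# Three RUN instructions mean three layers:
# RUN sudo apt-get update
# RUN sudo apt-get install -y jq
# RUN sudo rm -rf /var/lib/apt/lists/*

# Chaining with && collapses them into one layer, and the cleanup step
# actually shrinks that layer instead of burying files in a lower one.
RUN sudo apt-get update && \
    sudo apt-get install -y jq && \
    sudo rm -rf /var/lib/apt/lists/*
```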

With that twist in mind, our Dockerfile becomes fairly straightforward, reminiscent of a set of installation commands you’d use for Vagrant or Packer.  

After setting BUILD_DATE and VCS_REF (the ARGs established in our CircleCI config.yml file) and adding our metadata (more on that in the next section), we install our tools, which are the whole reason we build a customized image atop CircleCI’s provided one in the first place.

Our additions to this custom image came from our test and deploy process; you can read a bit more about it here. We wanted to minimize install time for repeatedly used tools and avoid the risk of a repo being unavailable at a critical moment. (Who wants deploys halted just because the internet is having a bad day?) You, of course, can and should swap out our RUN lines for whatever supports your own needs.
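Put together, a trimmed sketch of such a Dockerfile might look like the following. (Our real file differs; awscli and pre-commit are the tools named above, and the base tag is illustrative.)

```dockerfile
FROM circleci/python:3.6.4

# Build-time metadata passed in from the CircleCI config.
ARG BUILD_DATE
ARG VCS_REF

# Python-based tooling; the CircleCI Python base image means pip
# is already on hand.
RUN sudo pip install awscli pre-commit
```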

While these steps don’t add a tremendous amount of time for a single deploy, adhering to true CI/CD means that these steps happen over and over, and ideally pretty often, adding extra (avoidable) time to each build and additional chances for the downloads to fail. Having these tools prepackaged into a custom image means these downloads and installations happen only when you update the image.

When considering what you might put into a similar image for your purposes, look at your deploy steps and apply the DRY principle. If you’re in the place of considering this kind of optimization, a few clear contenders should emerge.

What we don’t suggest is putting caches on this image. For example, for this container, we don’t pre-cache the packages pre-commit will download. Let CircleCI’s dependency caching handle this.
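For instance, a hedged sketch of what that can look like in config.yml (the cache key is illustrative, though ~/.cache/pre-commit is pre-commit’s default cache location):

```yaml
- restore_cache:
    keys:
      - pre-commit-{{ checksum ".pre-commit-config.yaml" }}
- run: pre-commit run --all-files
- save_cache:
    key: pre-commit-{{ checksum ".pre-commit-config.yaml" }}
    paths:
      - ~/.cache/pre-commit
```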

Finishing touches

At this point, after the container is pushed, you can add some nice-to-haves. We added metadata to MicroBadger using the guidelines at Label Schema, specifying build date, name, description, and other identifying, clarifying information. You could use this space to set up a Slack notification using a webhook or any other little trumpeting announcements and last steps which might make your process easier.
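A sketch of what those labels can look like in the Dockerfile, following Label Schema conventions (the values are illustrative; BUILD_DATE and VCS_REF are the ARGs described earlier):

```dockerfile
# Label Schema metadata; values shown are examples, not our exact ones.
LABEL org.label-schema.schema-version="1.0" \
      org.label-schema.name="circleci-docker-primary" \
      org.label-schema.description="Primary image for builds, tests, and deploys" \
      org.label-schema.build-date=$BUILD_DATE \
      org.label-schema.vcs-ref=$VCS_REF
```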

Faster, pussycat, build, build!

From here, it’s a matter of iteration and establishing the process of future updates. Work out your commands on an instance of your base container, transfer them to a Dockerfile, and build your first image. Run it; does it work like you expect? Does it support all the operations you need? If not, return and revise, same as you would for building an AMI or other machine image.

A quick note: if you’re making a fairly layer-intense build, you may need to strategize to keep the container size manageable. If this is a concern for you, we recommend checking out Docker’s guide to best practices and their guide to multi-stage builds.

Sharing is caring

This section assumes that you want other people and orgs to be able to access your code and use your container. If this isn’t the case for you, consider these steps an educational suggestion.

When you connect your repo to CircleCI, you’ll have the option to set up a webhook so that builds and pushes happen on commits to new or existing branches and on merges to master. Here’s how we did it in config.yml:

- run:
    name: Release container
    command: |
      docker tag circleci-docker-primary trussworks/circleci-docker-primary:$CIRCLE_SHA1
      docker push trussworks/circleci-docker-primary:$CIRCLE_SHA1
      docker tag circleci-docker-primary trussworks/circleci-docker-primary:$CIRCLE_BRANCH
      docker push trussworks/circleci-docker-primary:$CIRCLE_BRANCH

      if [[ $CIRCLE_BRANCH = master ]]; then
        # push default tag
        docker tag circleci-docker-primary trussworks/circleci-docker-primary
        docker push trussworks/circleci-docker-primary

        # notify microbadger to update
        # https://microbadger.com/images/trussworks/circleci-docker-primary
        curl -X POST $MICROBADGER_WEBHOOK
      fi
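The release step above assumes the image was built earlier in the same job. A hedged sketch of that build step, passing in the BUILD_DATE and VCS_REF ARGs mentioned earlier (the flags are our guess at a reasonable invocation, not pulled verbatim from the repo):

```yaml
- run:
    name: Build container
    command: |
      docker build \
        --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
        --build-arg VCS_REF=$CIRCLE_SHA1 \
        -t circleci-docker-primary .
```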

And with that, you have the lifecycle of your container:

  • Push a new branch or merge to master

  • The new commit builds via CircleCI

  • The successful build pushes the newly built container image to Docker Hub

  • Final steps, including notifications or metadata updates

  • A new container version, identifiable by SHA, is available to you and yours

We suggest (as we tend to do with these things) assuming automation will be part of your process and keeping it in mind alongside other best practices as you shape that process. Automation is part of completing the process, rather than a mere nice-to-have. If you and/or your org don’t feel similarly, perhaps consider it. If you try to automate yourself out of a job, you’ll never succeed, but you’ll make your working life and your deploys a lot easier.

If you don’t want to make your own right now, you’re welcome to make use of ours.

Bring it on home now

As in so many areas of programming, consider this approach whenever a build or deploy makes repeated use of a common environment: the same set of tools, the same environment variables, or anything else that might otherwise need to be continually maintained, tailored, or created. Creating a container image helps avoid the “it worked on my machine” issue (as it did for us on a recent demo sprint: after some initial setup, no engineer hit that particular “it won’t run” issue again). It also, as we mentioned, spares you the travails of an uncertain internet. There are lots of official repositories to start from, including other CI tools, such as Concourse and Jenkins, and tools common in web development, including Nginx, MySQL, and Node. To get started, find the steps in your process that are being unnecessarily repeated and work back from there. Iterate, update, and enjoy your drama-free deploys.

Effusive thanks to Jeremy Avnet for his guidance throughout the creation of this post.