Super-Powering Your Enterprise Jenkins CI Pipeline with Docker-Compose

While Jenkins has the flexibility to add an almost infinite number of plugins and functionalities, it was created in a time before containers existed. Originally it was primarily interacted with via the UI, and code-first pipelines did not become the norm until 2016 — a move that coincided with the Pipeline (previously Workflow) plugin coming out of beta and CloudBees open sourcing their pipeline visualization plugin.

But in 2018, containers are the norm. In an effort to help my teams at Capital One, I wanted to enable using containers in each build step with our Jenkins environment. Here is a specific fix to make a Jenkinsfile more readable and make using Docker images for build steps simpler.

Trouble With image.inside()

The “image.inside()” function is a nice convenience for running commands within a Docker container. Using it looks like this:

    node {
      checkout scm
      docker.withRegistry('https://registry.example.com') {
        docker.image('my-custom-image').inside {
          sh 'make test'
        }
      }
    }

However, for your project, you may not want to run the Jenkins build agent as root, but instead as another user. Keeping these separate may make sense for some security situations, but the Docker Jenkins plugin still expects some form of privileged access. As we discovered, running the Jenkins build agent this way can cause the “image.inside()” function to fail and throw exceptions.

While there is a workaround, it makes the code harder to read. In the syntactically correct but non-working (this isn’t actually our code) Groovy example below, without the “img.inside()” function working, we had to concatenate commands in bash when running them in the container. Not the most readable code, and a fix that could get out of hand quickly.

    node {
      docker.withRegistry('https://registry.example.com/', 'svc-acct') {
        def img = docker.image('org/go-builder-base-image:master')
        img.pull()
        checkout scm
        stage('Build') {
          sh 'docker run --privileged -t -v $(pwd):/go/src/git.example.com/group/registrysync -w /go/src/git.example.com/group/registrysync group/go-builder-base-image:master bash -c "go get && go build"'
        }
        stage('Test') {
          sh 'docker run --privileged -t -v $(pwd):/go/src/git.example.com/group/registrysync -w /go/src/git.example.com/group/registrysync group/go-builder-base-image:master bash -c "go test"'
        }
      }
    }

When you compare this code to the CI systems released over the past five years, where you can trivially specify a Docker image for each step, this approach demands a little more work from the developer.

Enter Docker Compose.

Isolating Build Steps Everywhere

Docker Compose is a tool used to orchestrate one or more containers, networks, and volumes at a higher level than using docker run alone. It is available on every one of our Jenkins nodes, and everywhere you have Docker installed. If you don’t have the Docker plugin installed on Jenkins, I highly recommend it. Compose files can also be used with Docker’s orchestration system, Swarm, but Compose does not have to be used that way.

Docker Compose provides a more structured and cleaner way to pass variables and values to your Docker commands. Unlike a Dockerfile, a Compose file can access host environment variables and pass them to the docker command. The Docker team has a great overview of Docker Compose here.

Let’s go back to our current need — to make a Jenkinsfile more readable and make using Docker images for build steps simpler.

The basic pattern we’ll use is to define a service for each build step in a Docker Compose file, with an associated Dockerfile for each of those steps.

In this example, we’ve got a Go project we want to compile. Creating a Dockerfile to do that is pretty simple. We’ll use a base image with the Go environment installed, in which we can call “go build”.

    FROM registry.example.com/group/go-builder-base-image:master
    ARG PROJECT_PATH
    VOLUME $PROJECT_PATH
    WORKDIR $PROJECT_PATH
    CMD CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-s' -x .

You can see we specify a PROJECT_PATH arg, which we then use to specify the volume and set the working directory. Remember: host environment variables are not available inside Dockerfiles, so the value has to be passed in as a build arg.

We can now use this Dockerfile as a build step to compile our Go project in our Docker Compose file.

    version: '3'
    services:
      compile:
        build:
          context: .
          dockerfile: compile.dockerfile
          args:
            - PROJECT_PATH=$PROJECT_PATH
        volumes:
          - .:$PROJECT_PATH

In Docker Compose files you can use environment variables, and that’s exactly what we’ve done with $PROJECT_PATH. This, as you may have guessed, holds the current project’s path defined in the Jenkins build environment. We use it when attaching the volume to ensure our project lands in a specific place in the container.
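For the substitution to work, PROJECT_PATH has to be present in the shell environment that invokes docker-compose. One hedged sketch of setting it in a Jenkinsfile — the path and the use of withEnv are illustrative assumptions, not our actual setup:

```groovy
// Hypothetical sketch: define PROJECT_PATH for the docker-compose
// invocation so the compose file's $PROJECT_PATH resolves.
// The path shown is illustrative.
node {
  checkout scm
  withEnv(['PROJECT_PATH=/go/src/git.example.com/group/registrysync']) {
    sh 'docker-compose -f build-compose.yml run --rm compile'
  }
}
```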

Now that we have the plumbing, let’s look at how this can be used.

“docker-compose -f pipeline-compose.yml run --rm compile”

This command tells docker-compose to use the pipeline-compose.yml file, run the compile “service”, and remove the container once it exits. It is quite a bit easier to read than the workaround we initially used. Here it is to refresh your memory:

“docker run --privileged -t -v $(pwd):/go/src/git.example.com/group/registrysync -w /go/src/git.example.com/group/registrysync group/go-builder-base-image:master go build”

Now, let’s put our new docker-compose step into our Jenkinsfile. This is one way of doing it, but with the flexibility of Jenkins, there are other ways.

    node {
      docker.withRegistry('https://registry.example.com/', 'svc-acct') {
        checkout scm
        stage('Build') {
          sh 'docker-compose -f build-compose.yml run --rm compile'
        }
      }
    }

If we do the same thing with the test step, we then have something which looks like this:

    node {
      docker.withRegistry('https://registry.example.com/', 'svc-acct') {
        checkout scm
        stage('Build') {
          sh 'docker-compose -f build-compose.yml run --rm compile'
        }
        stage('Test') {
          sh 'docker-compose -f build-compose.yml run --rm test'
        }
      }
    }
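The compose file shown earlier only defines the compile service, so for the Test stage to run, a matching test service is needed. Here is a hedged sketch of the full file with both services; test.dockerfile is an assumption, presumed to mirror compile.dockerfile with a go test command instead of go build:

```yaml
# Hypothetical sketch: compile and test services side by side.
# test.dockerfile is assumed to mirror compile.dockerfile with
# CMD go test instead of go build.
version: '3'
services:
  compile:
    build:
      context: .
      dockerfile: compile.dockerfile
      args:
        - PROJECT_PATH=$PROJECT_PATH
    volumes:
      - .:$PROJECT_PATH
  test:
    build:
      context: .
      dockerfile: test.dockerfile
      args:
        - PROJECT_PATH=$PROJECT_PATH
    volumes:
      - .:$PROJECT_PATH
```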

This example is a simple one, but it illustrates the concept well enough.

There are a couple benefits to using Docker Compose in this way:

1) You can orchestrate other containers for dependencies of steps; e.g. a mock API or DB. (You were doing that already, right? RIGHT?)

2) You can perform every step on your local machine with no dependencies outside of Docker being installed.
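As a sketch of that first benefit, a step’s service can declare dependency containers that Compose starts for it. The service names, image tags, and environment variable below are illustrative assumptions:

```yaml
# Hypothetical sketch: the test step gets a throwaway Postgres
# container as a dependency. Compose starts db before test.
version: '3'
services:
  db:
    image: postgres:10-alpine
  test:
    build:
      context: .
      dockerfile: test.dockerfile
    depends_on:
      - db
    environment:
      - DB_HOST=db
```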

That second benefit is really nice and not something everyone thinks about. It gives teams the confidence to know that when they commit, the steps they run locally will also work in the CI environment.
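To illustrate that local workflow, here is roughly what a developer might run before committing; the exported path is an illustrative assumption:

```shell
# Hypothetical local run of the same pipeline steps; requires only
# Docker and docker-compose. The PROJECT_PATH value is illustrative.
export PROJECT_PATH=/go/src/git.example.com/group/registrysync
docker-compose -f build-compose.yml run --rm compile
docker-compose -f build-compose.yml run --rm test
```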

Conclusion

Not every company or project has the same constraints on their Jenkins environment, so not everyone will run into this same issue with the “image.inside()” function. I hope that even if you don’t have the same challenges we’ve run into, you will still consider using Docker Compose with your Jenkins environment, as the benefits to your pipeline extend well beyond this specific workaround.


Darien Ford, Senior Director of Software Engineering

Darien Ford is Senior Director of Software Engineering at Capital One who has supported enterprise Kubernetes initiatives. As accountable executive for container orchestration, he drove the adoption of a company-wide managed container platform. Darien now leads the product and go-to-market teams for an open source and commercial software innovation group. He has worked as an engineering leader across multiple industries, including live video streaming, ad technology, and cell phone gaming.
