Continuous Delivery with Docker on Mesos in less than a minute – Part 1


Except for a few small shortcuts, this post will demonstrate how to start your own functional Continuous Delivery pipeline for a Node.js project in less than one minute.

The setup currently runs on a single Linux host and the only requirements are default installations of Docker and Docker Compose. The main goal is to demonstrate the concept of the Continuous Delivery process implemented using Docker and Mesos.

Writing proper tests, setting up a production-ready Mesos cluster and other complex tasks are out of scope for this post but will come later. In the future, we are also planning to expand this setup to run on a variety of clouds like GCE and AWS, and to add additional components like Flocker, Weave or Consul.

All the code used in this post can be found on the Container Solutions GitHub.

We will use the following tools:

  • HelloWorld project in Node.js
  • Git
  • Jenkins
  • Docker
  • Docker Registry
  • Docker Compose
  • Mesos
  • Mesosphere Marathon
  • Zookeeper for internal use of Mesos and Marathon

Development Environment

Let’s begin the journey with a simple HelloWorld application written in Node.js and running in a Docker container.

In the illustration you can see developers working on their local environments and running their build, test and other processes within Docker containers, represented by light blue squares. Once their code is ready and tested, they commit it to the central Git repo and the new piece of code continues its journey towards production.


My guess is that if you are reading this article, you won’t have too much trouble understanding and implementing this step on your own.

First is the Node.js application, app.js.
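A minimal version looks something like this (the port and the greeting text are assumptions; any small HTTP server will do):

    // app.js - a minimal HTTP server that answers every request with "Hello World"
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello World\n');
    }).listen(8000); // port 8000 is an assumption; keep it in sync with the Dockerfile

    console.log('Server running on port 8000');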

and its configuration file package.json:
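(The name, version and description below are placeholders.)

    {
      "name": "nodejs_app",
      "version": "0.0.1",
      "description": "HelloWorld application for the Continuous Delivery demo",
      "main": "app.js",
      "scripts": {
        "start": "node app.js"
      }
    }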

Once we have these two, we can dockerize them by using Google’s image for Node.js and adding the following Dockerfile:
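(Something along these lines; the exposed port follows the app.js sketch above. Note that google/nodejs has since been deprecated in favour of the official node image, as a commenter points out below.)

    FROM google/nodejs

    WORKDIR /app
    ADD package.json /app/
    RUN npm install
    ADD . /app

    EXPOSE 8000
    CMD ["/nodejs/bin/npm", "start"]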

Our development environment is ready. Now we can build the container and see if the image works.
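Roughly as follows; the image name matches the registry paths used later in this post, and the port mapping is an assumption:

    docker build -t containersol/nodejs_app .
    docker run -d -p 8000:8000 containersol/nodejs_app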

Now you can go to http://localhost:8000 (or whichever port you mapped) and see “Hello World”!
If you change the app.js to show another string, rebuild the image and start a new container, you should be able to see your changes after refreshing the browser.

[Screenshot: “Hello World” displayed in the browser]

This setup has absolutely no tests of any type. This is not because I think they are not needed, but because I wanted to simplify the setup and keep the emphasis on Docker and Mesos/Marathon. Any good developer should understand that a development project is not ready for production if there are no tests.

Now we are free to move to the next step.

Continuous Integration/Delivery


At this step we need a version control system and a build server to pick up the changes and run the build and tests. We will use Jenkins for this purpose, and also for publishing the built artefacts to an artefacts repository. In our case we are using Docker for the deployment, therefore the final artefact will be a Docker image with our application, and it will be pushed to a local Docker Registry once it’s ready.

At this point I take another shortcut: to reduce complexity, I won’t be using a Git repository management tool like GitLab or GitHub. Connecting Jenkins to a Git server is a common task that most configuration managers can perform in their sleep. If you would like to deploy a Git server in Docker, you can use one of the existing GitLab images.

Now, let’s start building up our docker-compose.yml. Docker Compose will be used here as an orchestration engine to deploy all the central services in a single command. Docker Compose is a development tool and should be replaced with more complex automation before setting up a real production system.
A few days ago Docker released the first version of Docker Compose, which replaces Fig. I haven’t tested every corner of it yet, but Docker Compose is meant to be a drop-in replacement for Fig with little to no changes.

Here is the docker-compose.yml for setting up the local Docker Registry:
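(The exact host path for the volume is an assumption; the image is the docker-registry v1 image that was current at the time.)

    registry:
      image: registry
      environment:
        - STORAGE_PATH=/registry
      volumes:
        - ./registry-storage:/registry
      ports:
        - "5000:5000"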

It uses the standard Docker registry image and mounts one volume for persistent storage outside the container. This is needed to keep the built images after we restart the container.

After running
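    docker-compose up -d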

we should have the Docker Registry running at http://localhost:5000.

The next step is to build an image and push it to the registry.
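The image has to be tagged with the registry address, so docker knows where to push it:

    docker build -t localhost:5000/containersol/nodejs_app .
    docker push localhost:5000/containersol/nodejs_app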

At this point you will get the following error from the Docker daemon:
Forbidden. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry to the daemon’s arguments.
The solution for this can be found at this StackOverflow page.
The quickest and easiest solution is to add the following line to /etc/default/docker and restart the Docker daemon:
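    DOCKER_OPTS="--insecure-registry localhost:5000"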

After you fix this issue and push the image to the repo, you can go to this URL to check that your image was successfully pushed: http://localhost:5000/v1/repositories/containersol/nodejs_app/tags/latest

[Screenshot: the registry’s tags URL showing the pushed image]

Next we can add the Jenkins setup to our docker-compose.yml file and restart the system:
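(The volume layout below is a reconstruction; the image is the custom Jenkins image built in the next step.)

    jenkins:
      image: containersol/jenkins_with_docker
      volumes:
        - /var/jenkins_home
        - .:/var/jenkins_home/workspace
        - /var/run/docker.sock:/var/run/docker.sock
        - /usr/bin/docker:/usr/bin/docker
      ports:
        - "8081:8080"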

The Jenkins setup requires some explanation. First, I’m using a common trick: mounting the docker binary and the socket used by the Docker daemon inside the Jenkins container. This is needed to allow Jenkins to run docker commands on the host from within the container. The problem is that only users in the docker group have access to the socket, and the Jenkins container runs as the jenkins user rather than root. For this reason I’m building my own Jenkins image using the following Dockerfile:
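(A sketch; the group id 125 is simply what the docker group happens to be on my host, which is exactly the fragile part discussed in the comments below.)

    FROM jenkins

    USER root
    # Give the jenkins user access to the mounted docker socket.
    # 125 must match the gid of the docker group on the host.
    RUN groupadd -g 125 docker && usermod -a -G docker jenkins
    USER jenkins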

Run this command inside the folder with the Dockerfile above to build the image:
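(The image name is an assumption; it just has to match the image used for the jenkins service in docker-compose.yml.)

    docker build -t containersol/jenkins_with_docker .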

This is an ugly hack that I would love to replace with a better solution at some point. If you have an easy answer for me, please comment on the post.

You can also see that Docker Compose will create persistent storage for the Jenkins container and mount the current folder inside the container. This will be needed later when we run the build, push and deploy scripts.

To finish the continuous integration part of the setup, we need to add a couple of small scripts for building and pushing docker images, and configure Jenkins to run them. repeats the build we did manually, with the addition of an image version. Every Jenkins build will create a new tag for the docker image, in the same way as we would do with other build artefacts. To clarify: I’m not claiming that this is the right way to do versioning. It is a very complex topic and I might write about it some other time, but for now I’ll just skip the lengthy explanations.
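A sketch of the build script, with the registry address and image name following the earlier steps and error handling kept to a minimum:

    #!/bin/bash
    # - build the application image and tag it with the given version
    # usage: ./ <version>
    set -e

    if [ -z "$1" ]; then
        echo "usage: $0 <version>"
        exit 1
    fi

    docker build -t localhost:5000/containersol/nodejs_app:$1 .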

Pay attention to the name of the built image. It includes first the URL of the registry, then the full name of the image and then the tag.
Also, it is worth mentioning that Jenkins is running inside a container and will use this script to build the image, but the docker commands executed within the Jenkins container will actually run on the host, due to the mounting of the socket used by Docker. pushes the image built in the previous step, using the same version as the previous script.
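A matching sketch of the push script, under the same assumptions:

    #!/bin/bash
    # - push the image built by to the local registry
    # usage: ./ <version>
    set -e

    if [ -z "$1" ]; then
        echo "usage: $0 <version>"
        exit 1
    fi

    docker push localhost:5000/containersol/nodejs_app:$1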

The last step to finish continuous integration is to restart the system using Docker Compose and configure a Jenkins job that will run and Jenkins is running at http://localhost:8081.
It is just a standard build job that executes two shell commands with the build version as a parameter:
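    ./ ${BUILD_ID}
    ./ ${BUILD_ID}

(${BUILD_ID} is Jenkins’ built-in build number variable, as the comments below also show.)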

[Screenshot: the Jenkins job configuration with the two shell build steps]

Now we can execute the build and see our new image deployed to the registry. Note that the tags URL now includes the new build number as a tag.

[Screenshot: the registry’s tags URL listing the newly built tag]

In Part 1, I showed how to dockerize a Node.js application on the development machine, and how to deploy Jenkins and the Docker Registry for continuous integration of the Node.js app.
In Part 2, I will continue with the setup of Mesos and Marathon and complete the Continuous Delivery cycle.


Pini Reznik

Pini has 15+ years of experience in delivering software in Israel and the Netherlands. Starting as a developer and moving through technical, managerial and consulting positions in Configuration Management and Operations, Pini acquired a deep understanding of software delivery processes and is currently helping organisations around Europe improve their software delivery pipelines by introducing Docker and other cutting-edge technologies.



  1. You say:

    “This is an ugly hack that I would love to replace with a better solution at some point. If you have easy answer for me, please comment on the post.”

    Could you say why this is an ugly hack? It seems like a very good idea, simpler than classic Docker-in-Docker. I am using it in some more complex scenarios and it works perfectly.

    • By “ugly hack” I mean using the group id as a number.
      This will be unpredictable, as you can never know what the gid will be when you’re moving to another host.

      Alternatively, I could run Jenkins as root to get access to the docker socket.
      From a security point of view it is actually more or less the same: if you have access to the docker socket, you can basically execute arbitrary commands on the host.

  2. I am using Jenkins sharing docker.sock like your example with Docker Compose. All works fine but the performance is very poor compared with a traditional Jenkins war installed on the host. I am sharing docker because my jobs execute docker commands. I have no idea why my jobs are slow.

    jenkins:
      image: jenkins:1.580.3
      volumes:
        - /etc/localtime:/etc/localtime
        - /var/jenkins_home:/var/jenkins_home
        - /var/jenkins_data:/var/jenkins_data
        - /root/.ssh:/root/.ssh
        - /var/run/docker.sock:/var/run/docker.sock
        - /usr/bin/docker:/usr/bin/docker
      ports:
        - "8080:8080"
      environment:
        - JAVA_OPTS=-Xms1024M -Xmx2048M

    nginx:
      image: nginx:1.7.8
      volumes:
        - /home/jenkins/nginx-conf:/etc/nginx/conf.d
        - /var/log/nginx:/var/log/nginx
      ports:
        - "80:80"
      links:
        - jenkins

    Have you any ideas?
    I thank you in advance for your help.

  3. Hi Pini,

    The Jenkins build happens within the docker container, but when you push, you use the private registry.
    Don’t you need to start Jenkins with --net=host to be able to resolve the registry?


  4. hi pini,

    thanks for posting, i’m trying to get a similar setup going as well.

    when you mount the docker binary (/usr/bin/docker), that implies that the image is required to be running the same OS as the host, right?


    • Same OS in the sense of the same kernel. This is due to the fact that the docker binary is statically compiled and has no dependencies.
      So you can safely mount the docker binary from RHEL into an Ubuntu container, for example.

  5. I add the internal user (go, in my case) to the docker group at runtime with


    # define default command
    CMD if [ -n "$DOCKER_GID_ON_HOST" ]; then groupadd -g $DOCKER_GID_ON_HOST docker && gpasswd -a go docker; fi

    When starting the container you can pass in this variable with

    -e "DOCKER_GID_ON_HOST=$(getent group docker | cut -d: -f3)"

    or

    -e "DOCKER_GID_ON_HOST=$(cat /etc/group | grep docker | cut -d: -f3)"

    You could run into trouble though if that gid is already taken. You might need some additional logic to shuffle gids around 🙂

    • I like this solution. The default CMD executes, the group is created and I get a docker container id, seemingly indicating that I have a running container. However, docker ps fails to list it as running; it shows that it exited immediately.

    • This is nice in theory but doesn’t work.

      To have it work at runtime, you need root privileges to be able to run ‘groupadd’ etc., but at runtime we are just running as the ‘jenkins’ user. The Jenkins image does not have ‘sudo’, and either way, the ‘jenkins’ user is not in the sudoers group.

  6. Now you have to use the new node image instead of google/nodejs.

    But after the build (which was successful) I tried to run the container and got: exec: "/nodejs/bin/npm": stat /nodejs/bin/npm: no such file or directory

  7. Hi there,

    Thanks for your very, very helpful tutorial.

    Some corrections on my side to make it run (Ubuntu 14.04 LTS host):

    Jenkins Dockerfile

    RUN groupadd -g 125 docker && usermod -a -G docker jenkins
    => RUN groupadd -g [the host docker group id] docker && usermod -a -G docker jenkins

    I had an issue with “docker: error while loading shared libraries: cannot open shared object file: No such file or directory” and “libapparmor”
    => Add the lines below in the Jenkins Dockerfile before “USER jenkins”

    RUN apt-get update
    RUN apt-get install -y libsystemd-journal0
    RUN apt-get install -y libapparmor-dev

    In Jenkins UI :

    The workspace dir is the default run dir for Jenkins and it depends on your project name, so IMO it’s difficult to put a relative path for the *.sh files.

    I modified the shell calls to use the directory which is linked to the host directory:
    => /var/jenkins_data/ ${BUILD_ID}
    => /var/jenkins_data/ ${BUILD_ID}

    I’m going to the next step 🙂



    • I was trying on Ubuntu 14.04 too and found similar issues.

      For the Jenkins part, instead of installing the missing components into the container, I chose to reuse the host ones:

      jenkins:
        image: cddemo/jenkins_with_docker
        volumes:
          - .:/var/jenkins_home/workspace
          - /var/run/docker.sock:/var/run/docker.sock
          - /usr/bin/docker:/usr/bin/docker
          - /lib/x86_64-linux-gnu/
          - /usr/lib/x86_64-linux-gnu/
          - /lib/x86_64-linux-gnu/
          - /lib/x86_64-linux-gnu/
          - /lib/x86_64-linux-gnu/
          - /lib/x86_64-linux-gnu/
        ports:
          - "8081:8080"

      I tried many times to let Jenkins call the .sh files directly but failed, so I had to copy their content into the build steps.
      The errors were as below:
      Started by user anonymous
      Building in workspace /var/jenkins_home/jobs/nodejs_app/workspace
      [workspace] $ /bin/sh -xe /tmp/
      + /var/jenkins_home/workspace/ 23
      /tmp/ 2: /var/jenkins_home/workspace/ not found
      Build step ‘Execute shell’ marked build as failure
      Finished: FAILURE
      Don’t know why.

      Anyway, @Pini, it’s really a great tutorial. Thanks.

  8. This might be a slightly less painful group workaround/hack. Your mileage might vary.
    If the image is for yourself only and you have access to all servers which it will run on:

    On the server:
    1. groupmod --gid 1234 docker
    2. restart the docker daemon

    In the Dockerfile
    RUN groupmod --gid 1234 docker && usermod -aG docker jenkins

    That way you only have to run “groupmod --gid 1234 docker” on any server once, and all will be good.
