Running Docker in Jenkins (in Docker)

In this post we’re going to take a quick look at how you can mount the Docker sock inside a container in order to create “sibling” containers. One of my colleagues calls this DooD (Docker-outside-of-Docker) to differentiate from DinD (Docker-in-Docker), where a complete and isolated version of Docker is installed inside a container. DooD is simpler than DinD (in terms of configuration at least) and notably allows you to reuse the Docker images and cache on the host. By contrast, you may prefer to use DinD if you want to keep your images hidden and isolated from the host.

To explain how DooD works, we’ll take a look at using it with a Jenkins container so that we can create and test containers from Jenkins jobs. We want to create these containers as the Jenkins user, which makes things a little trickier than using the root user. This is very similar to the technique described by Pini Reznik in Continuous Delivery with Docker on Mesos In Less than a Minute, but we’re going to use sudo to avoid the issues Pini faced with adding the user to the Docker group.

We’ll be using the official Jenkins image as a base, which makes everything pretty straightforward.

Create a new Dockerfile with the following contents:
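Something along these lines should do the trick. This is a sketch that assumes the official jenkins base image and the plugins.sh helper script it ships with; you may want to pin a specific image tag:

    FROM jenkins

    USER root
    # Install sudo and let the jenkins user run any command as root without a password
    RUN apt-get update \
          && apt-get install -y sudo \
          && rm -rf /var/lib/apt/lists/*
    RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

    # Switch back to the jenkins user and install any plug-ins listed in plugins.txt
    USER jenkins
    COPY plugins.txt /usr/share/jenkins/plugins.txt
    RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt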

We need to give the jenkins user sudo privileges in order to be able to run Docker commands inside the container. Alternatively we could have added the jenkins user to the Docker group, which avoids the need to prefix all Docker commands with ‘sudo’, but is non-portable due to the changing gid of the group (as discussed in Pini’s article).

The last two lines process any plug-ins defined in a plugins.txt file. Omit the lines if you don’t want any plug-ins, but I would recommend at least the following:
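For instance, Git support and the Greenballs plug-in, one per line in the form plugin-id:version (the version numbers here are only illustrative; substitute whatever is current):

    scm-api:1.0
    git-client:1.19.0
    git:2.4.0
    greenballs:1.15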

If you don’t want to install any plug-ins, either create an empty file or remove the relevant lines from the Dockerfile. None of the plug-ins are required for the purposes of this blog.

Now build and run the container, mapping in the Docker socket and binary.
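For example (docker-jenkins is an arbitrary image name, and $(which docker) assumes the Docker client binary is on the host’s path):

    docker build -t docker-jenkins .
    docker run -d -p 8080:8080 \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v $(which docker):/usr/bin/docker \
        docker-jenkins

If Docker commands inside the container fail with missing shared library errors (newer clients are dynamically linked), install the Docker client in the image instead and mount only the socket, as discussed in the comments below.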

You should now have a running Jenkins container, accessible at http://localhost:8080, that is capable of running Docker commands. We can quickly test this out with the following steps:

  • Open the Jenkins home page in a browser and click the “create new jobs” link.
  • Enter the item name (e.g. “docker-test”), select “Freestyle project” and click OK.
  • On the configuration page, click “Add build step” then “Execute shell”.
  • In the command box enter “sudo docker run hello-world”
  • Click “Save”.
  • Click “Build Now”.

With any luck, you should now have a green (or blue) ball. If you click on the ball and select “Console Output”, you should see something similar to the following:

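The details will differ, but the key part is the greeting from the hello-world image, roughly:

    Started by user anonymous
    Building in workspace /var/jenkins_home/workspace/docker-test
    + sudo docker run hello-world

    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    ...
    Finished: SUCCESS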

Great! We can now successfully run Docker commands in our Jenkins container. Be aware that there is a significant security issue here, in that the Jenkins user effectively has root access to the host; for example, Jenkins can create containers that mount arbitrary directories on the host. For this reason, it is worth making sure that the container is only accessible internally and to trusted users, and considering using a VM to isolate Jenkins from the rest of the host.

There are other options, principally Docker-in-Docker (DinD) and using HTTPS to talk to the Docker daemon. DinD isn’t really more secure, due to the need to run the container in privileged mode, but it does avoid the need for sudo. The main disadvantage of DinD is that you don’t get to reuse the image cache from the host (although this may be useful if you want a clean environment for your test containers that is isolated from the host). Exposing the daemon over HTTPS doesn’t require sudo and still uses the host’s image cache, but it is arguably the least secure option due to the increased attack surface from opening a port.
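As a rough sketch of the HTTPS route, assuming the host daemon has already been set up to listen on a TLS-protected TCP port (2376 by convention) and the client certificates have been copied into the Jenkins container, a build step would set the standard Docker client environment variables instead of using the socket and sudo:

    # Talk to the host daemon over TCP with TLS rather than the mounted socket
    export DOCKER_HOST=tcp://172.17.0.1:2376            # host address as seen from the container; adjust for your network
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=/var/jenkins_home/.docker   # assumed location of ca.pem, cert.pem and key.pem
    docker run hello-world                              # no sudo required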

I plan to take a more in-depth look at securely setting up Docker on an HTTPS socket in a later blog.


Adrian Mouat

Adrian Mouat is Chief Scientist at Container Solutions and the author of the O'Reilly book "Using Docker". He has been a professional software developer for over 10 years, working on a wide range of projects from small webapps to large data mining platforms.


26 Comments

  1. Very helpful post.

    With DinD you can reuse the host’s Docker cache by sharing -v /var/lib/docker:/var/lib/docker.

    With DooD, many setups also share -v /var/lib/jenkins:/var/lib/jenkins.

    • Thanks Montells – good points. I think sharing directories between two Docker engines is pretty dangerous though – bad things could happen!

  2. When using docker run inside the jenkins container with volumes, you are actually sharing a folder of the host, not a folder within the jenkins container. To make that folder “visible” to jenkins (otherwise it is out of your control), that location should have a parent location that matches the volume that was used to run the jenkins image itself.

    So, an example may enlighten things. I started jenkins using:

    -v /home/ernest/data/jenkins:/var/jenkins_home

    In jenkins, I have job running a docker image using:

    docker run --rm -v /home/ernest/data/jenkins/workspace/artreyu/target:/target -t artreyu-builder

    That container will produce a binary in /target, which ends up in /home/ernest/data/jenkins/workspace/artreyu/target on the host and is therefore visible inside the jenkins container under the artreyu workspace, because its parent location is mounted.

    • Thank you @Adrian!

      I also solved this issue by simply adding the ‘jenkins’ user to the ‘users’ group!

    • Thanks for the article – confirmed what I’d been trying (and failing) to do. Hopefully you can help me even further! 🙂

      I’m necessarily (for underlying system architectural reasons) adding an additional layer of abstraction that is causing me grief with the “add the jenkins user to the docker group” approach. My Jenkins master is running separately and is firing up a docker container for a jenkins slave on which it is then executing a docker build. The slave container doesn’t currently appear to be able to connect to the host’s docker engine with the docker client unless it is using sudo – which I understand is because it can’t connect to the unix socket. The socket is bound on the container (-v /var/run/docker.sock:/var/run/docker.sock) but I’m slightly confused about how access rights work on the bound socket.

      I’m using a pipeline in Jenkins to execute the build and it calls docker from within the Jenkinsfile rather than executing it manually using sh. This keeps the script neater and means I can take advantage of the groovy features (no pun intended) that Cloudbees have made available with their Docker plugin. The problem is I can’t make it use sudo when executing docker commands, so I either need to solve THAT problem, or I need to solve the problem of executing docker without sudo in my jenkins slave container. Any ideas? 🙂 Any help would be greatly appreciated!

    • @Ruy

      Regarding access rights for mounted sockets, they are just the same as they are on the host, but you have a uid issue to deal with. So if you give the jenkins user access to the socket on the host, that doesn’t mean the jenkins user in the container has access, as they may or may not have the same uid.

      I’ve not used the cloudbees plugins, so I can’t help there.

  3. Maybe it’s because it’s the end of a long day today, but what if your Jenkins is already bound to port 8080? I can’t easily change the port for Jenkins.

    • You’re right, I don’t really follow 🙂

      Jenkins runs in a container, where it’s free to use whatever port it likes.

  4. The build of docker I am using on CentOS comes with a dynamically linked docker executable client. I found it necessary to install docker engine into my images, because simply mapping it in gave linker errors when running docker commands. I still map in the socket, and get the desired behavior of using the host system daemon, just minor image bloat.

  5. Since Docker is now dynamically linked, it has dependencies on various libraries (check with ldd /usr/bin/docker).
    You may see libapparmor.so.1 not found, so you need to add this library to the jenkins image.
    Adding RUN apt-get install -y libapparmor-dev to the Dockerfile can help 🙂

    • Yeah, I had the same problem recently. I ended up taking @Blake’s solution and installing Docker into the image, which is slightly annoying. However, it’s also a lot more futureproof and portable than copying in libs. The main issue is that the Docker client and engine can get out-of-sync.

      I’ll update the article when I get a chance.

  6. Currently not working in an EC2 Amazon container with the docker daemon

    From the jenkins docker container I get this message even after I install the libdevmapper package using apt-get.

    docker: error while loading shared libraries: libdevmapper.so.1.02: cannot open shared object file: No such file or directory

    • Hi
      As you have said, I tried installing docker inside the container, but now it says:
      jenkins@4b6639d8c129:/$ docker
      /usr/bin/docker: 2: .: Can’t open /etc/sysconfig/docker

      I have created a container “myjenk” as above and started it with:
      docker run -d -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -p 8080:8080 myjenk

    • @Vidya Don’t mount the docker binary if you’ve installed it inside the container, just mount the socket.

  7. I’m using Docker version:
    Client:
    Version: 1.10.2
    API version: 1.22
    Go version: go1.5.3
    Git commit: c3959b1
    Built: Mon Feb 22 21:37:01 2016
    OS/Arch: linux/amd64

    Server:
    Version: 1.10.2
    API version: 1.22
    Go version: go1.5.3
    Git commit: c3959b1
    Built: Mon Feb 22 21:37:01 2016
    OS/Arch: linux/amd64

    When I access the Jenkins container and try to run a Docker command I get this error: “error while loading shared libraries: libsystemd-journal.so.0: cannot open shared object file: No such file or directory”

    • As mentioned above, this was caused by Docker moving to dynamic libraries. Rather than mounting the Docker binary, just install it inside the Jenkins container and only mount the Docker socket.

  8. I am trying to run docker-machine and docker-compose, but the only command that works is docker. Does anyone know how to run docker-machine and docker-compose in Jenkins?

    • You’d have to download and install docker-compose in the Dockerfile, then make sure it’s on your path. I’m not sure about docker-machine, that’s going to cause some problems; you don’t want to be creating a VM inside a container…
