Running Docker in Jenkins (in Docker)



In this post we’re going to take a quick look at how you can mount the Docker sock inside a container in order to create “sibling” containers. One of my colleagues calls this DooD (Docker-outside-of-Docker) to differentiate from DinD (Docker-in-Docker), where a complete and isolated version of Docker is installed inside a container. DooD is simpler than DinD (in terms of configuration at least) and notably allows you to reuse the Docker images and cache on the host. By contrast, you may prefer to use DinD if you want to keep your images hidden and isolated from the host.

To explain how DooD works, we’ll take a look at using DooD with a Jenkins container so that we can create and test containers in Jenkins tasks. We want to create these containers as the Jenkins user, which makes things a little trickier than using the root user. This is very similar to the technique described by Pini Reznik in Continuous Delivery with Docker on Mesos In Less than a Minute, but we’re going to use sudo to avoid the issues Pini faced with adding the user to the Docker group.

We’ll be using the official Jenkins image as a base, which makes everything pretty straightforward.

Create a new Dockerfile with the following contents:
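A minimal sketch of such a Dockerfile, based on the steps described below and on the official jenkins image’s plugin mechanism at the time (the plugins.sh path and sudoers line are assumptions, not the original listing):

```dockerfile
FROM jenkins
USER root

# Install sudo so the jenkins user can run Docker commands
RUN apt-get update \
    && apt-get install -y sudo \
    && rm -rf /var/lib/apt/lists/*

# Give the jenkins user passwordless sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

USER jenkins

# Process any plug-ins listed in plugins.txt
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
```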

We need to give the jenkins user sudo privileges in order to be able to run Docker commands inside the container. Alternatively we could have added the jenkins user to the Docker group, which avoids the need to prefix all Docker commands with ‘sudo’, but is non-portable due to the changing gid of the group (as discussed in Pini’s article).

The last two lines process any plug-ins defined in a plugins.txt file. Omit the lines if you don’t want any plug-ins, but I would recommend at least the following:
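A plugins.txt file simply lists one plug-in per line in pluginID:version format; for illustration (these IDs and versions are examples, not necessarily the original recommendations):

```
git:2.4.0
greenballs:1.14
```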

If you don’t want to install any plug-ins, either create an empty file or remove the relevant lines from the Dockerfile. None of the plug-ins are required for the purposes of this blog.

Now build and run the container, mapping in the Docker socket and binary.
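Assuming the Dockerfile is in the current directory and we tag the image as myjenkins (the tag is my choice, not necessarily the original), the commands look something like:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myjenkins .

# Run Jenkins, mapping in the host's Docker socket and client binary
docker run -d -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    myjenkins
```

Note that, as several of the comments below discovered, mapping in the binary only works with a statically linked Docker client; with newer, dynamically linked clients, install Docker inside the image instead and mount only the socket.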

You should now have a running Jenkins container, accessible at http://localhost:8080, that is capable of running Docker commands. We can quickly test this out with the following steps:

  • Open the Jenkins home page in a browser and click the “create new jobs” link.
  • Enter the item name (e.g. “docker-test”), select “Freestyle project” and click OK.
  • On the configuration page, click “Add build step” then “Execute shell”.
  • In the command box enter “sudo docker run hello-world”
  • Click “Save”.
  • Click “Build Now”.

With any luck, you should now have a green (or blue) ball. If you click on the ball and select “Console Output”, you should see something similar to the following:
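Abbreviated, the console output should look roughly like this (Docker’s standard hello-world message wrapped in the Jenkins console log):

```
Started by user anonymous
+ sudo docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Finished: SUCCESS
```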


Great! We can now successfully run Docker commands in our Jenkins container. Be aware that there is a significant security issue here: the Jenkins user effectively has root access to the host; for example, Jenkins can create containers that mount arbitrary directories on the host. For this reason, it is worth making sure that the container is only accessible internally to trusted users, and considering using a VM to isolate Jenkins from the rest of the host.

There are other options, principally Docker in Docker (DinD) and using HTTPS to talk to the Docker daemon. DinD isn’t really more secure, due to the need to run the container in privileged mode, but it does avoid the need for sudo. The main disadvantage of DinD is that you don’t get to reuse the image cache from the host (although this may be useful if you want a clean environment for your test containers that is isolated from the host). Exposing the socket via HTTPS doesn’t require sudo and keeps using the host’s image cache, but is arguably the least secure due to the increased attack surface from opening a port.

I plan to take a more in-depth look at securely setting up Docker on an HTTPS socket in a later blog.

If you’d like to learn more about cloud native, grab a copy of the new book.



  1. Very helpful post.

    With DinD you can reuse the outside Docker cache by sharing -v /var/lib/docker:/var/lib/docker.

    With DooD, many scenarios involve sharing -v /var/lib/jenkins:/var/lib/jenkins.

    • Thanks Montells – good points. I think sharing directories between two Docker engines is pretty dangerous though – bad things could happen!

  2. When using docker run inside the jenkins container with volumes, you are actually sharing a folder of the host, not a folder within the jenkins container. To make that folder “visible” to jenkins (otherwise it is out of your control), that location should have a parent location that matches the volume that was used to run the jenkins image itself.

    So, an example may enlighten things. I started jenkins using:

    -v /home/ernest/data/jenkins:/var/jenkins_home

    In jenkins, I have job running a docker image using:

    docker run --rm -v /home/ernest/data/jenkins/workspace/artreyu/target:/target -t artreyu-builder

    That container will produce a binary in /target which ends up in /home/ernest/data/jenkins/workspace/artreyu/target on the host which will be available in the jenkins container at target/artreyu because of mounting its parent location.

  3. Somehow I need to run docker without the ‘sudo’ command.

    Can you tell me if it’s possible?

    Thank you!

    • Thank you @Adrian!

      Also, I solved this issue by simply adding the ‘jenkins’ user to the ‘users’ group!

    • Thanks for the article – it confirmed what I’d been trying (and failing) to do. Hopefully you can help me even further! 🙂

      I’m necessarily (for underlying system architectural reasons) adding an additional layer of abstraction that is causing me grief with the “add the jenkins user to the docker group” approach. My Jenkins master is running separately and is firing up a docker container for a jenkins slave, on which it is then executing a docker build. The slave container doesn’t currently appear to be able to connect to the host’s docker engine with the docker client unless it is using sudo, which I understand is because it can’t connect to the unix socket. The socket is bound on the container (-v /var/run/docker.sock:/var/run/docker.sock) but I’m slightly confused about how access rights work on the bound socket.

      I’m using a pipeline in Jenkins to execute the build and it calls docker from within the Jenkinsfile rather than executing it manually using sh. This keeps the script neater and means I can take advantage of the groovy features (no pun intended) that Cloudbees have made available with their Docker plugin. The problem is I can’t make it use sudo when executing docker commands, so I either need to solve THAT problem, or I need to solve the docker-without-sudo problem in my jenkins slave container. Any ideas? 🙂 Any help would be greatly appreciated!

    • @Ruy

      Regarding access rights for mounted sockets, they are just the same as they are on the host, but you have a uid issue to deal with. So if you give the jenkins user access to the socket on the host, that doesn’t mean the jenkins user in the container has access, as they may or may not have the same uid.

      I’ve not used the cloudbees plugins, so I can’t help there.

  4. Maybe it’s because it’s the end of a long day today, but what if your Jenkins is already bound to port 8080? I can’t easily change the port for Jenkins.

    • You’re right, I don’t really follow 🙂

      Jenkins runs in a container, where it’s free to use whatever port it likes.

  5. The build of docker I am using on CentOS comes with a dynamically linked docker executable client. I found it necessary to install docker engine into my images, because simply mapping it in gave linker errors when running docker commands. I still map in the socket, and get the desired behavior of using the host system daemon, just minor image bloat.

  6. Since Docker is now dynamically linked, it has dependencies on various libraries (check with ldd /usr/bin/docker).
    We may see some libraries listed as “not found”, so we need to add those libraries to the jenkins image.
    Adding RUN apt-get install -y libapparmor-dev to the Dockerfile can help 🙂

    • Yeah, I had the same problem recently. I ended up taking @Blake’s solution and installing Docker into the image, which is slightly annoying. However, it’s also a lot more futureproof and portable than copying in libs. The main issue is that the Docker client and engine can get out-of-sync.

      I’ll update the article when I get a chance.

  7. I had to add “\” at the end of the lines 🙂

    RUN apt-get update && \
    apt-get install -y sudo && \
    rm -rf /var/lib/apt/lists/*

  8. Currently not working in an EC2 Amazon container with the docker daemon.

    From the jenkins docker container I get this message even if I install this libdevmapper package using apt-get.

    docker: error while loading shared libraries: cannot open shared object file: No such file or directory

    • Hi,
      As you said, I tried installing docker inside the container, but now it says:
      jenkins@4b6639d8c129:/$ docker
      /usr/bin/docker: 2: .: Can’t open /etc/sysconfig/docker

      I have created a container “myjenk” as above and started with :
      docker run -d -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -p 8080:8080 myjenk

    • @Vidya Don’t mount the docker binary if you’ve installed it inside the container, just mount the socket.

  9. I’m using Docker version:
    Client:
    Version: 1.10.2
    API version: 1.22
    Go version: go1.5.3
    Git commit: c3959b1
    Built: Mon Feb 22 21:37:01 2016
    OS/Arch: linux/amd64

    Server:
    Version: 1.10.2
    API version: 1.22
    Go version: go1.5.3
    Git commit: c3959b1
    Built: Mon Feb 22 21:37:01 2016
    OS/Arch: linux/amd64

    When I’m accessing the Jenkins-container and try to perform a Docker-command I got this error: “error while loading shared libraries: cannot open shared object file: No such file or directory”

    • As mentioned above, this was caused by Docker moving to dynamic libraries. Rather than mounting the Docker binary, just install it inside the Jenkins container and only mount the Docker socket.

  10. I am trying to run docker-machine and docker-compose, but the only command that works is docker. Does anyone know how to run docker-machine and docker-compose in Jenkins?

    • You’d have to download and install docker-compose in the Dockerfile, then make sure it’s on your path. I’m not sure about docker-machine, that’s going to cause some problems; you don’t want to be creating a VM inside a container…

  11. I just found this page. The way I run DooD is that:
    I created a user ‘jenkins’ on the host with a home folder (not the same folder as I use for the master volume mount).
    I configured its authorized_keys with a newly generated key and put that key in the jenkins ssh credential configuration.
    I then configured a new node as an SSH node with the IP of my host server.
    I then set the master node as only for matching jobs so that they will prefer running on the node.
    I added the jenkins user to the docker group.

    Thus I have a running docker to build images, no need for passwordless sudo and the group does not have to be numerically baked into the Dockerfile.

    This seems to work. I’m sure someone will come up with a myriad of reasons why this is a bad plan, so my question is what are they?

    • If it works for you, that’s great. To me it seems like a lot of work to enable SSH, and it’s a security issue (although so is sharing the Docker socket). Another minor issue is that you need to make sure the UIDs of the Docker users match.

  12. I’m stuck trying to docker inside a jenkins container. I thought someonw might help.
    I’ve mounted docker socks ans well as binaries.
    My goal is to have jenkins create images and push them on an external registry.
    While docker build command runs fine, I’m struggling to make docker-compose build command to work.
    I keep getting a ” error.

    Any idea anyone?

    • Same question without typo 😉

      I’m stuck trying to docker inside a jenkins container. I thought someone might help.
      I’ve mounted docker socks as well as docker and docker-compose binaries.

      My goal is to have jenkins create images and push them onto an external registry.
      While docker build command runs fine, I’m struggling to make docker-compose build command to work.
      I keep getting a ” .IOError: [Errno 2] No such file or directory: u’./docker-compose.yml’ ” error.

      Any idea anyone?

  13. Actually all you need is to run

    sudo chown -R jenkins:jenkins /var/run/docker.sock

    inside the container.
    Won’t affect permissions on host, will allow docker ps from jenkins without sudo.

    • Ha, well I guess it will work. Unfortunately I think most people will be entirely unhappy with this solution: the jenkins user is now effectively root on the host, it assumes that the UID of jenkins is the same in the container and on the host, and it is not portable between hosts.

  14. Another approach, use the “run -u” flag so you can keep the host docker GID out of the Dockerfile. Two steps:

    1) Get the host docker group id
    grep docker /etc/group
    docker run --rm -v /etc/group:/host-etc-group busybox grep docker host-etc-group

    2) Pass the above into the run command:
    docker run -d -u jenkins: -v /var/run/docker.sock:/var/run/docker.sock -v jenkins_home:/var/jenkins_home -p 8080:8080 --name myjenkins customjenkins:1.0.0

    At runtime docker will dynamically assign the GID passed on the -u to the user (i.e. jenkins) running the container process. This way your jenkins image remains portable and you don’t need to sudo your docker commands, thus allowing use of docker-workflow plugins and such. Note that I did install the docker binaries in the jenkins image but used the hosts docker.sock file, both windows and linux.

    What do you think? Any drawbacks to this approach? Will reply to any feedback or questions.

    • The run command above should read as follows (greater-than/less-than characters were dropped when posted and I can’t edit it):

      docker run -d -u jenkins:GID -v /var/run/docker.sock:/var/run/docker.sock -v jenkins_home:/var/jenkins_home -p 8080:8080 --name myjenkins customjenkins:1.0.0

  15. Great post and Q&A!
    Does anyone have experience running this approach with a more recent version of docker?
    I am stuck with trying this approach on CentOS and docker 1.10.3, getting this error for docker load:

    [root@jenkins /]# cat /var/lib/jenkins/software-repository/centos/centos7.2.tar.gz | docker -D -H unix:///docker.sock load
    An error occurred trying to connect: Post http:///docker.sock/v1.22/images/load: write unix /var/run/docker.sock: broken pipe

    Background: we have been using this kind of Jenkins inside Docker approach (similar to the one described in the original post) for quite some time with docker 1.9.1 and Centos 7.2. Now for some reason we needed to upgrade to docker 1.10.3. The discussion here helped me to figure out that I need to install docker inside the Jenkins image and map only the socket, not the binary. I am using this command to start the Jenkins container:

    docker run --privileged=true --rm -v /var/run/docker.sock:/docker.sock:Z -v /var/lib/jenkins:/var/lib/jenkins:Z --name=jenkins1 --hostname=jenkins -p 80:8080

    All docker commands seem to work fine now, also with 1.10.3, except for the docker load (got error above). With 1.9.1 I did not have this problem.
    Does anyone have a solution to this, or experience with the approach and even newer versions of docker? Thanks!

  16. Thanks Adrian for this post and the many others you have written regarding docker security. I completely share your security concerns about the possible options for getting this working.

    We are running Jenkins alongside the docker daemon “without” packaging it in a container. We have a constraint of using a proprietary privilege manager tool that does not allow passwordless sudo. In this case, enabling mutual TLS auth and connecting to the docker daemon socket locally over the HTTPS port seems to be the only plausible solution. In addition to this, we are not opening the docker daemon port to the outside world through IP tables.

    Is this a reasonable solution? Does running Jenkins in docker, with everything else the same, change anything in terms of security?

    Any insights would be greatly appreciated.

    • If you’ve thought about it and are using TLS, I think you should be ok. I would make sure this is all only exposed to the internal network or VPN.

      At the end of the day, allowing Jenkins to run Docker means that anyone that can run Jenkins stuff pretty much has full access to the host. For this reason, you probably don’t want the host to be running anything sensitive alongside jenkins. Note that this is true regardless of how you expose the Docker socket.

      Finally, since I wrote the article, Docker in Docker has come on a lot and might actually be a better solution now. I know uses DinD.

  17. Hi,

    I first encountered this error:

    docker: Error response from daemon: Mounts denied:
    The paths /usr/jenkins and /usr/local/bin/docker
    are not shared from OS X and are not known to Docker.
    You can configure shared paths from Docker -> Preferences… -> File Sharing.

    And then when I changed the docker preferences I get the below error:

    docker: Error response from daemon: error while creating mount source path ‘/usr/local/bin/docker’: chown /usr/local/bin/docker: no such file or directory.

    any ideas?

  18. Nice tutorial but I can’t run docker in the container: docker: error while loading shared libraries: cannot open shared object file: No such file or directory
    Any suggestions? Thanks in advance!!

    • Yeah, so Docker changed to use dynamic libraries. The easiest solution is to install Docker in the container (via the Dockerfile) rather than volume mount the binary.

  19. //ugly as hell, but works

    FROM jenkins
    USER root
    RUN apt-get update \
    && apt-get install -y sudo apt-transport-https ca-certificates curl software-properties-common \
    && rm -rf /var/lib/apt/lists/*
    RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

    RUN curl -fsSL | sudo apt-key add -

    RUN sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"

    RUN sudo apt-get update

    RUN apt-cache policy docker-ce

    RUN sudo apt-get install -y docker-ce

    RUN sudo systemctl status docker

    USER jenkins

  20. hello Adrian Mouat

    Following your steps, it worked well for running a python web app hosted on localhost:5000 with jenkins+gogs; an hour ago I was using the DooD solution.
    But some errors appeared just now, with the browser reporting “The connection was reset”, and nothing I did helped.

    below is my Dockerfile:

    FROM python:3.4-alpine
    ADD ./requirements.txt /code/
    ADD ./ /code/
    WORKDIR /code
    RUN pip install -r requirements.txt
    CMD ["python", ""]

    thanks for your reply 🙂

    • Hi,

      Thanks for the comment. I don’t have a lot to go on here and I don’t really have time to debug this for you, but please leave an update if you manage to figure it out.

      Good luck,

