This blog was written by Adrian before he got on a plane back home. I said I’d post it for him. Enjoy.
If you follow the blogosphere or Hacker News crowd, you will have seen several articles and comments criticizing the state of security in Docker. Some of these were made by people who knew what they were talking about: Dan Walsh said “Containers Don’t Contain”; Jonathan Rudenberg talked of a “total systemic failure of all logic related to image security”; and Alex Larsson criticized an article on running desktop apps in Docker, saying “this gives the apps root access”. There were plenty of others of questionable accuracy and quality that I won’t mention.
Given all this, you would be forgiven for thinking that Docker should be kept away from anything but toy applications and demos. The truth is that, used properly, Docker will only make a system more secure. (I’ve added an explanation at the end as to why the previous quotes aren’t as bad as they may sound.)
The basic reason is one of defence-in-depth. The idea of defence-in-depth is to provide multiple levels of defences for an attacker to breach, similar to how a castle relies on multiple defences such as moats, thick walls and inner keeps. Containers provide an extra level of defence via isolation and control over applications. If we take an existing application running on bare metal and wrap it in a container, we have added an extra layer of defence for our would-be attacker to breach.
Comparing container security to VM security is good in terms of understanding the underlying issues, but falsely suggests that VMs and containers are an either-or proposition. The reality is that, in the short term at least, most deployments will use both technologies, with groups of containers running inside VMs. If you have a multi-tenant deployment, each user’s containers will run in separate VMs, ensuring an extra layer of isolation between users. Similarly, you may choose to run containers processing sensitive data such as bank account details on separate hosts or VMs from containers exposed to the public, such as your NodeJS frontend.
In the medium to long term, we will start seeing more deployments running containers outside of VMs. There are already several technologies trying to bring a VM level of isolation to containers, such as LXD, which brings hypervisor-level security to containers, and the Triton infrastructure, which uses SmartOS technologies including Zones to provide isolation guarantees. There is also an argument that the significantly reduced complexity of containers compared to VMs means that in the long term they are likely to be more secure; for example, Docker doesn’t need complicated C++ code related to device virtualisation. The dangers of such code were recently highlighted by the VENOM vulnerability, which exploited an obscure part of the code related to floppy-disk virtualisation in VMs to gain access to the host.
You probably noticed that I equivocated previously by saying that Docker has to be “used properly” to be secure. It is possible to abuse Docker and make things less secure than running them on the host. The main culprit here is images that run their applications as the root user. Should an attacker manage to exploit a vulnerability in the application, they will be root in the container; but worse, if they then manage to break container isolation, they will be root on the host (users are not namespaced in containers, which is partly what Dan Walsh meant by “Containers Don’t Contain”). The solution is simple: don’t give applications running in a container more rights than you would on the host. You wouldn’t run Mongo as root on your host or in a VM, so don’t do it in a container. If you take some basic care in how you start your applications, containers can only help you.
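As a minimal sketch of dropping root, a Dockerfile can create an unprivileged user and switch to it before the application starts (the base image, user name and `myapp` binary here are illustrative, not from any real image):

```
FROM debian:jessie
# Create a dedicated, unprivileged system user and group for the app
RUN groupadd -r app && useradd -r -g app app
# Hypothetical application binary copied into the image
COPY ./myapp /usr/local/bin/myapp
# Everything from here on runs as "app", not root
USER app
CMD ["myapp"]
```

With the `USER` instruction in place, an attacker who compromises the process lands in the container as an unprivileged user, and a container breakout no longer yields root on the host.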
There are plenty of knobs that can be twiddled and techniques that can be implemented to further increase the isolation and limits imposed by Docker. These include:
- Running minimal images. By reducing the number of binaries and services running in containers, you significantly reduce the surface-area of potential attacks.
- Using read-only filesystems. If an attacker can’t write to the filesystem, they can’t upload malicious scripts, deface HTML files or overwrite sensitive data.
- Limiting kernel calls. By locking down the system calls that a container can make you can again reduce the surface-area of potential attacks. This can be done by using Linux capabilities and SELinux (in the future there will be seccomp support as well).
- Restricting networking. By restricting Docker networking so only linked containers can communicate you stop attackers with access to a container from being able to probe and compromise other containers. This can be achieved by using the --icc=false and --iptables flags when starting the Docker daemon.
- Limiting memory and CPU. By limiting the amount of CPU and memory allocated to a container you can prevent denial-of-service attacks where one container grabs all the resources and stops other containers from running.
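Several of the knobs above are just flags on the Docker daemon and `docker run`. A sketch of what that looks like on the command line (the `myorg/myapp` image and the specific capability and resource values are illustrative assumptions, not recommendations for any particular workload):

```shell
# Daemon: disable inter-container communication so only explicitly
# linked containers can talk to each other
docker -d --icc=false --iptables=true

# Run with a read-only filesystem, a minimal capability set, and
# memory/CPU limits to blunt denial-of-service attacks
docker run -d \
  --read-only \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  -m 512m --cpu-shares 512 \
  myorg/myapp
```

Dropping all capabilities and adding back only what the application genuinely needs is usually easier to audit than removing capabilities one by one.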
In order to build a secure distributed system, you need to build security in layers. Containers add a very strong layer. Used properly, a Docker based system is both secure and efficient. Add in techniques like those mentioned above and you can reach a higher level of security than a pure VM based solution.
So the answer is “yes” — Docker is safe for production.
Regarding the quotes:
- “Containers Don’t Contain” is a great article, and the basic point is that not all resources in containers are namespaced; users, devices and various other things are shared between containers. This is something to be aware of, but does not mean Docker is inherently insecure.
- Jonathan’s issue was with the way images are checksummed and verified. This situation improved considerably in Docker 1.6 with the implementation of digests, but won’t be fully resolved until signing is in place (until then be wary of downloaded images).
- Alex Larsson’s issue was in regard to running X11 apps in containers – if you do this I recommend you build the container image yourself to avoid trojaned binaries, make sure the image defines a user and do not give the image access to the Docker socket.
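For the X11 case, a commonly used invocation mounts only the X11 socket into a container built locally; this is a hedged sketch, with `myuser/x11-app` standing in for an image you have built and verified yourself:

```shell
# Share the host's X11 socket and display with the container.
# Note: the image should define a non-root user, and the Docker
# socket is deliberately NOT mounted into the container.
docker run -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  myuser/x11-app
```

Be aware that sharing the X11 socket still gives the app considerable access to your desktop session; it reduces, rather than eliminates, the attack surface.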