
Docker Image-Builder Qualities

 

When I was writing up the last post, I spent a fair amount of time with Adrian figuring out what I would want in my perfect Docker image-build tool. I’m not going to go into too much detail on each of the build tools here, as I will follow up soon with a more in-depth post on the current tooling.

So without further ado, my wish-list (in no particular order):


Fast

This one is a little vague as it covers a few things. We want a build system that produces our builds as quickly as possible. Builds happen every time someone invokes the CI/CD pipeline, and an image build is one of the fundamental cogs in the process from commit to production. This means we need a fast system: even small savings in build time compound over days, weeks and months into non-trivial savings.

A single company can be building thousands of images per day, and some of these companies don’t use or trust the cache, so images must be rebuilt from scratch every time. Luckily, work is being done to increase confidence in the caching layer: Debian, Nix and a whole host of others are working on producing Reproducible Builds. Unfortunately, the work done in this area doesn’t seem to have inspired the current image build systems. The Docker build cache checks whether a line in the Dockerfile has been run before and, if so, reuses the resulting layer as the cached output. This causes issues when commands are stateful, such as `apt-get update`: packages can be out of date in production while the layer is still ‘seen’ as cacheable. There is definitely scope for a speed increase without sacrificing other qualities.
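
To make the caching problem concrete, here is a hedged sketch (the base image, package and build-argument names are just placeholders): the RUN layer below is keyed on the instruction text alone, so a layer built months ago is reused even though the package lists it fetched are long stale.

FROM ubuntu:16.04

# Changing the value of this build argument invalidates the cache for every
# instruction below it, which is the usual manual workaround today.
ARG CACHE_DATE=unset

# Cached on the instruction text alone: a layer built months ago is reused,
# so the installed openssl can be far behind the current security release.
RUN apt-get update && apt-get install -y openssl

Invalidation is then forced by hand, either wholesale with `docker build --no-cache .` or from the ARG onwards with `docker build --build-arg CACHE_DATE=$(date -I) .`; neither is something the build system works out for you.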

Repeatable

A build needs to be repeatable. If two developers build the same image on their own machines, the results should be bit-for-bit identical. I touched on this in the previous section as a way of increasing the throughput of image builds. It is really important that anyone can independently verify the source code of an image by building it in their own environment. Google has written a really interesting Site Reliability post on this topic; it is definitely worth a read.
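
A quick, hedged way to see how far we are from this today (the image name is a placeholder): build the same context twice and compare the exported bytes.

docker build -t myimage:first .
docker build --no-cache -t myimage:second .
docker save myimage:first  | sha256sum
docker save myimage:second | sha256sum

With today’s docker build the two hashes will almost always differ, because image IDs and layer timestamps are baked into the output; a truly repeatable builder would make them identical.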

I personally think this is because, in many ways, containers are being used as lightweight virtual machines when they should be seen more as an environment containing the minimum set of dependencies needed to run correctly (Oracle calls them micro-containers). We can see the mindset changing in places: smith, Unikernels, Nix and Bazel are all pushing builds to be more deterministic. With this implemented, several other advantageous properties come for free.

Minimal

Image bloat can be a huge problem, especially when network costs are so high. The benefits of stripping out build dependencies and unused code can be significant (around 700MB for carts). Docker approached the issue with multi-stage builds, which are pretty effective at selectively copying required dependencies out of the build environment. This needs to be taken further, to the point of a minimal container being built up from scratch. A lot of work has also been put into moving base images to Alpine Linux, reducing their size by an order of magnitude; a heroic effort by many people around the world, and we appreciate what is being done.
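
As a hedged sketch of that direction (the Go service and its paths are hypothetical), a multi-stage build can keep the whole toolchain in a throwaway stage and ship only the static binary on top of scratch:

# Build stage: the full Go toolchain lives only here and is discarded.
FROM golang:1.10 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: nothing but the compiled binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]

The final image is only as big as the binary itself, rather than the hundreds of megabytes of compiler and package manager that built it.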

Another benefit of a minimal image is a reduced attack surface. We want our containers to be secure, with good defaults that minimize the amount of damage any single container can do. The Docker ecosystem is a complicated interaction of many different technologies, and shrinking images is low-hanging fruit for reducing the surface open to attack.

Verifiable

Being able to verify the source of an image is essential. One wants to be certain that the code being run in production has been marked as safe and built within a trusted environment. Without verifying an image before running it, someone could easily tag a malicious image and have it deployed without any red flags being raised. Verification is essential, and is being worked on in Docker Notary and Quay.io Enterprise.
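
Docker Content Trust, the client side of Notary, already gives a taste of this; a sketch, with the repository name as a placeholder:

# Refuse to pull or run anything that is not signed by a trusted publisher.
export DOCKER_CONTENT_TRUST=1
docker pull registry.example.com/team/service:1.2.3   # fails if no valid signature exists

# Pushes made with content trust enabled are signed with the repository's keys.
docker push registry.example.com/team/service:1.2.3
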

Programmable

Having a powerful build tool is essential; we can see this from the continuing presence of the Makefile. It is very useful to be able to create abstractions and variables which modify the behaviour and ultimately the outcome of the build. The Dockerfile syntax does not allow for arbitrary power: it is essentially a regular language, which severely limits its expressive power. We can see this in the out-of-band build scripts that were present in almost all projects to keep build tools out of the production image (now largely fixed by multi-stage builds). A more expressive language for describing the image we wish to create would allow for a more explicit build; we could return to simply running:

  
docker build . 
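
For contrast, the parameterisation the Dockerfile does offer today is limited to build arguments; a hedged sketch (the image and label names are placeholders):

# Build arguments are the only real knobs: no functions, conditionals or loops.
ARG BASE_IMAGE=alpine:3.7
FROM ${BASE_IMAGE}

# ARGs declared before FROM go out of scope, so they must be re-declared.
ARG APP_VERSION=dev
LABEL version=${APP_VERSION}

They are set with `docker build --build-arg BASE_IMAGE=debian:stretch --build-arg APP_VERSION=1.0 .`, and anything more involved still ends up in a Makefile or wrapper script.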

Declarative

Having a declarative system for defining the desired state of an image makes the most sense. We are moving towards this in everything else: Kubernetes manifest files, Puppet, Ansible, NixOps, Bazel, docker-compose.yml, React. These are all tools that encode state into a single declarative set of files, or declaratively describe the desired state of the system. This should be the case for Dockerfiles too. Take, for example, the Nix expression for the Squid proxy:

 
{ stdenv, fetchurl, perl, openldap, pam, db, cyrus_sasl, libcap
, expat, libxml2, openssl }:

stdenv.mkDerivation rec {
  name = "squid-3.5.27";

  src = fetchurl {
    url = "http://www.squid-cache.org/Versions/v3/3.5/${name}.tar.xz";
    sha256 = "1v7hzvwwghrs751iag90z8909nvyp3c5jynaz4hmjqywy9kl7nsx";
  };

  buildInputs = [
    perl openldap pam db cyrus_sasl libcap expat libxml2 openssl
  ];

  configureFlags = [
    "--enable-ipv6"
    "--disable-strict-error-checking"
    "--disable-arch-native"
    "--with-openssl"
    "--enable-ssl-crtd"
    "--enable-linux-netfilter"
    "--enable-storeio=ufs,aufs,diskd,rock"
    "--enable-removal-policies=lru,heap"
    "--enable-delay-pools"
    "--enable-x-accelerator-vary"
  ];

  meta = with stdenv.lib; {
    description = "A caching proxy for the Web supporting HTTP, HTTPS, FTP, and more";
    homepage = http://www.squid-cache.org;
    license = licenses.gpl2;
    platforms = platforms.linux;
    maintainers = with maintainers; [ fpletz ];
  };
}
 

There is only one call to the network, and what it downloads is verified against a hash. The rest of the build is completely offline (the download can also be done ahead of time). Looking at this file, we can clearly see what the build dependencies of the project are and which flags were used to build it.

When we compare this to the equivalent Dockerfile below, the result depends on a network connection and on when and where the build was run, and the cache cannot be fully utilised because all of the actions take place in a single command. The file is messy: it is hard to determine where each command begins and ends, or what each command is doing, largely because magic environment variables have to be injected to get the package manager to play nicely. Moving on from the RUN command, we copy things into the image, set up some metadata, then set the entrypoint to a bash script. Personally I have found entrypoint scripts to be a bit of a hack. I see their utility for older programs that weren’t built to run inside a container, but most programs today are built from the ground up to support it; there is no need for an entrypoint script.

 
FROM sameersbn/ubuntu:14.04.20170123
MAINTAINER sameer@damagehead.com
ENV SQUID_VERSION=3.3.8 \
    SQUID_CACHE_DIR=/var/spool/squid3 \
    SQUID_LOG_DIR=/var/log/squid3 \
    SQUID_USER=proxy
 
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 80F70E11F0F0D5F10CB20E62F5DA5F09C3173AA6 \
 && echo "deb http://ppa.launchpad.net/brightbox/squid-ssl/ubuntu trusty main" >> /etc/apt/sources.list \
 && apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y squid3-ssl=${SQUID_VERSION}* \
 && mv /etc/squid3/squid.conf /etc/squid3/squid.conf.dist \
 && rm -rf /var/lib/apt/lists/*
 
COPY squid.conf /etc/squid3/squid.conf
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
EXPOSE 3128/tcp 
VOLUME ["${SQUID_CACHE_DIR}"]
ENTRYPOINT ["/sbin/entrypoint.sh"]

Secure

I think this is a huge issue. There have been studies showing how insecure many images on Docker Hub are: lots are out of date, or have vulnerabilities that have already been patched in newer versions. There needs to be a way to integrate security checking into the workflow, with the cache possibly auto-busted when new patches become available. One point I would like to bring to the foreground is deprecation. There is currently an officially unsupported Docker image, `ubuntu:15.10`, with no way to mark it as unsupported or dangerous to use. This needs to be done properly; the current method is simply to eyeball the last build date for a tag and make a call on whether or not it is still safe.
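
Until registries can push that information down to us, the blunt workaround is to rebuild on a schedule with both the cache and the base image refreshed; a sketch with a placeholder registry:

# Nightly CI job: re-pull the base image and ignore every cached layer,
# so security patches published upstream actually land in the new image.
docker build --pull --no-cache -t registry.example.com/team/service:nightly .
docker push registry.example.com/team/service:nightly
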

Portable

It should be possible to specify the architecture one would like to target (such as x86 or ARM) with little extra work. Currently it is possible, but only with third-party tooling. Adrian has a good demo of how to build a multi-arch image using these solutions. Docker has released an official post about the topic, but building for different architectures is still a pretty manual process. There is definitely scope for easing this, and I believe work is being done with qemu to support the goal.
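
One of the current workarounds is the experimental `docker manifest` command, which stitches separately built per-architecture images into a single multi-arch tag; a sketch with placeholder image names (each per-arch image still has to be built and pushed on its own, typically under qemu emulation):

# Combine per-architecture images under one tag that clients resolve automatically.
docker manifest create registry.example.com/team/service:1.0 \
    registry.example.com/team/service:1.0-amd64 \
    registry.example.com/team/service:1.0-arm64

docker manifest annotate registry.example.com/team/service:1.0 \
    registry.example.com/team/service:1.0-arm64 --os linux --arch arm64

docker manifest push registry.example.com/team/service:1.0
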

Final Words

Every one of these points needs work. There is progress in some areas, such as the Docker Trusted Registry for security, or Box, which uses embedded Ruby to build images, but there is still a long way to go before container images can be built effectively and reliably.
