Docker Official Images – The Good, The Bad & The Ugly

The Docker Official Repositories program was launched back in June. It is intended to provide a curated set of images that people can use with some guarantee of quality. This blog post will take a quick look at how to find and use these images, and will point out some potential issues.

The core of the Docker platform lies in the distribution of portable, reliable images, which can save users a lot of effort, particularly with difficult-to-configure software. Whenever possible, it makes sense to use trusted and tested images rather than rolling our own. So the first question is: how do we find an image?

Most public images, including the official ones, are stored online at the Docker Hub. You can search the Docker Hub online or from the command line using the docker search command. For example:
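Searching for Redis, for instance, returns a long list of repositories (the output below is abbreviated and illustrative – names, star counts and descriptions will vary over time):

```
$ docker search redis
NAME                DESCRIPTION                                  STARS   OFFICIAL   AUTOMATED
redis               Redis is an open source key-value store...   823     [OK]
sameersbn/redis                                                  12                 [OK]
...
```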

The need for official repositories becomes immediately clear: how do you choose between the 835 results? Most people will take the obvious route and choose the official build, which is clearly marked and at the top of the list. On a side note, it's a little tricky to get a list of official images – as far as I can tell, you can't restrict searches to official images only. However, they all seem to be built from the docker-library/official-images GitHub project, which can be used as a sort of index. I'm sure the search facilities will be improved soon enough to address this.

Let’s have a look at how easy it is to use the images. In the case of Redis (a key-value store), and most other images, there are pretty good instructions on the Docker Hub, and in a few moments I had something working. We start by grabbing the image:
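On a stock Docker install, that's a single command (output abbreviated – the exact lines depend on your Docker version and which layers are already cached):

```
$ docker pull redis
Pulling repository redis
...
Status: Downloaded newer image for redis:latest
The image you are pulling has been verified
```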

Note that because it is an official image, we get a verification message. Now let's launch our Redis container as a daemon:
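Something like the following, where "myredis" is just an arbitrary name for the container (docker run prints the new container's ID):

```
$ docker run --name myredis -d redis
33a4b78d5cd9...
```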

Now that we have a running Redis server, we need to talk to it somehow. The simplest solution is to launch another Redis container in interactive mode and use the Redis CLI. Note that we are using "--link", which fills some environment variables with the address and port of the Redis instance we launched in the last step.
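A sketch of such a session, assuming the server container was started with the name "myredis" (the IP address in the prompt and the keys set are illustrative):

```
$ docker run --rm -it --link myredis:redis redis \
      sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
172.17.0.2:6379> ping
PONG
172.17.0.2:6379> set docker awesome
OK
172.17.0.2:6379> get docker
"awesome"
```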

Pretty neat huh?

If you want to use your own Redis config, you can create a new image that inherits from the Redis image and overwrites the config file in /data.
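A minimal Dockerfile sketch of that approach (the config file name and the /data path follow the text above – adjust to match the image's documentation):

```dockerfile
# Extend the official image with our own configuration file.
FROM redis
COPY redis.conf /data/redis.conf
# Point the server at it (path assumed per the text above).
CMD ["redis-server", "/data/redis.conf"]
```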

The official repositories have come in for some criticism(1) regarding quality, provenance(2) and size. However, it definitely makes sense to have “official” images which the community can pull together on rather than 835 roughly equivalent images. In order to address some of the criticisms, I would suggest that Docker consider creating a beta or candidacy phase for all official images. This would give the community and Docker a chance to test and investigate the images before they become officially blessed. At the moment, Docker are running the risk of users being burned by “bad” images, which could have very nasty consequences.

It will be interesting to see what comes next for the Hub. Once the quality and provenance issues are addressed, I wonder if we will see work on stacks – bundles of images designed to work together, such as a Postgres database image optimised to work with a Python image using SQLAlchemy.

In a later post, I’ll dig a bit deeper into the images and see what lessons we can learn for creating our own containers.


 

(1) Such as this blog post (note that some of the criticisms have already been addressed, including documentation). Comments on Hub repositories often complain about the base image used, either for being too big or unstable (for example the Java repository).

(2) I spoke about the importance of provenance in my previous blog post. Looking at the Dockerfiles and the process for creating the official images, a few things could be improved. In particular, some images download files from the internet without checking hashes (for example Jenkins and WordPress). All official images should do something similar to the Redis Dockerfile, where downloads are checked against a stored hash, which ensures that the downloaded file is the same file that was used by the maintainer.
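In a Dockerfile this is typically part of a RUN step, but the pattern itself can be exercised directly in a shell. A minimal sketch (the file and its contents are stand-ins for a real release tarball):

```shell
# A stand-in for a release tarball (the real Dockerfile downloads it with curl/wget):
printf 'pretend this is a release tarball\n' > redis.tar.gz

# The maintainer computes and records this hash once, when writing the Dockerfile:
STORED_SHA1=$(sha1sum redis.tar.gz | cut -d' ' -f1)

# At build time the download is verified against the stored hash; sha1sum
# exits non-zero on a mismatch, which would fail the image build:
echo "$STORED_SHA1 *redis.tar.gz" | sha1sum -c -
# prints: redis.tar.gz: OK
```

If the upstream file is ever replaced – maliciously or otherwise – the build breaks instead of silently shipping different bits.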

In the case of packages installed with apt-get, I would argue these should all be pinned to a specific version; otherwise there is no guarantee that the version tested by the maintainer is the same as the one being distributed.
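As a sketch, a pinned install in a Dockerfile might look like this (the package names and version strings are illustrative – check what is actually available with apt-cache madison &lt;package&gt;):

```dockerfile
# Pinning exact versions makes the build reproducible; an unavailable
# version fails the build rather than silently installing something newer.
RUN apt-get update && apt-get install -y \
    redis-server=2:2.8.4-2 \
    ca-certificates=20141019
```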


 


Adrian Mouat

Adrian Mouat is Chief Scientist at Container Solutions and the author of the O'Reilly book "Using Docker". He has been a professional software developer for over 10 years, working on a wide range of projects from small webapps to large data mining platforms.
