Test driving the Docker Universal Control Plane

 

by Michael Müller

The Docker Universal Control Plane (UCP) is the commercial management solution for containerised applications in a public cloud or on-premises. It is built on top of Docker Swarm, and all UCP components themselves run as containers.

This post will guide you through the installation of UCP and the integration of Interlock to dynamically configure HAProxy as a load balancer.

Installation

Docker provides a ucp tool to install the required components. The ucp tool runs via Docker and launches the appropriate containers. It comes with subcommands such as "install", "join" and "uninstall". For an overview of all available commands, use:
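The exact command did not survive in this copy; assuming the docker/ucp image name used at the time, printing the overview would look something like this:

```shell
# Run the ucp tool via Docker and print its built-in help
docker run --rm -it docker/ucp help
```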

There is an interactive installation method which guides you through the process; launch it like this:
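The original command is missing here; a sketch of the interactive installation, assuming the docker/ucp image and its -i (interactive) flag, using the socket mount and host address this setup relies on:

```shell
# Interactive install: the socket mount lets the installer manage
# containers on this host; the host address is from this setup
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i --host-address 172.17.9.102
```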

Since we use Vagrant for this setup, --host-address is used to supply the installer with an accessible host address, in this case the statically assigned IP address of the bridged interface (172.17.9.102). To be able to manage your host's containers from within a container, you need to mount the Docker socket into the container.

The command will pull several images and prompt for the values it needs to complete the installation. When it completes, it prompts you to log in to the UCP GUI.

To add a controller replica for high availability, run the ucp tool on another server with the --replica flag. To achieve high availability you need at least two replicas.
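The command itself is not preserved; a sketch of the join with the --replica flag, where the replica's own IP address is a placeholder:

```shell
# Join this host as a controller replica
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join --replica --host-address <replica-ip>
```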

Now add an Engine node to the UCP cluster:
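The join command is missing from this copy; a sketch using the ucp tool's join subcommand, with the node's IP as a placeholder:

```shell
# Join this host to the cluster as a plain Engine node
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join --host-address <node-ip>
```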

The UCP cluster is now up and running, let’s see what we can do with it.

Running

We'll deploy a container via the GUI. Navigate to Containers and then Deploy Container.

 

Fill out the form with the desired values to start the container.

Press "Run Container" on the right and the container will be started. It is accessible on port 8080.

You can test this by:
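The original snippet is not preserved; a sketch with curl, with the address of the node running the container as a placeholder:

```shell
# Port 8080 is the port the container was started on
curl http://<node-ip>:8080/
```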

It will just return a simple “Hello world”.
The same can be achieved via the CLI. Connect to one of the nodes and run:
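The docker run invocation did not survive in this copy, and the image used in the original post is unknown, so it appears as a placeholder in this sketch:

```shell
# Start the same demo application, publishing port 8080
docker run -d -p 8080:8080 --name demo <image>
```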

You’ll see the container running using:
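The command is missing here; listing running containers with the standard Docker CLI would look like:

```shell
# List running containers across the cluster (via the Swarm endpoint)
docker ps
```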

Or in the Containers view of the GUI.

To verify that the application is running, access the container from outside by identifying the node and port that the container is running on.

Let's add a load balancer and service discovery to the setup, so that containers get registered automatically.

We will use HAProxy as the load balancer, also running in a container. For service discovery, we use Interlock by Docker employee Evan Hazlett. Interlock is described as a "dynamic, event-driven Docker plugin system using Swarm". It makes use of the event stream of Docker Swarm and the Docker API, which allows Interlock to listen for container events like starting and stopping and take action based on them.

It uses the Docker API to fetch some attributes about the started or stopped containers, such as service name, hostname, host IP and port. Based on this information, a new HAProxy configuration is deployed.
We'll use docker-compose to launch Interlock on the UCP cluster. That way, Interlock will also show up as a new application in UCP. The compose file looks like this:
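The compose file itself did not survive in this copy. A rough sketch in the spirit of the Interlock 0.x examples; the image tag, Swarm endpoint and plugin flags are all assumptions:

```yaml
interlock:
  image: ehazlett/interlock:latest   # assumed image name and tag
  ports:
    - "80:80"                        # HAProxy will listen here
  # assumed flags: point Interlock at the Swarm endpoint and
  # enable the haproxy plugin
  command: "--swarm-url tcp://172.17.9.102:2376 --plugin haproxy start"
```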

On the master, we just have to issue:
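Presumably the standard Compose invocation:

```shell
# Launch the interlock application defined in the compose file
docker-compose up -d
```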

This will bring up a container with HAProxy running. You can access HAProxy's stats page at http://172.17.9.102/haproxy?stats


In the GUI under the Applications tab, you'll now find the newly created application.


Now let's start a new container from the CLI and register it with our newly created HAProxy:
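The command is not preserved; reconstructed from the flags described below, with the image as a placeholder:

```shell
# -P publishes all exposed ports on random host ports;
# INTERLOCK_DATA tells Interlock which URL to route to this container
docker run -d -P \
  -e INTERLOCK_DATA='{"hostname":"cli","domain":"ucp-ha.demo"}' \
  <image>
```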

-P publishes all ports exposed inside the container to random ports on the host. We also pass the environment variable -e INTERLOCK_DATA='{"hostname":"cli","domain":"ucp-ha.demo"}' to tell Interlock that this container should be reachable at http://cli.ucp-ha.demo/.

The HAProxy config is automatically recreated, which you can see on the status page of HAProxy:

The application inside the container we just launched can be accessed via the browser or via curl. It will just return "Hello World":
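A sketch of the curl variant; since cli.ucp-ha.demo may not resolve in your environment, the host header can be set explicitly (the HAProxy address is from this setup):

```shell
# HAProxy routes requests based on the Host header Interlock configured
curl -H "Host: cli.ucp-ha.demo" http://172.17.9.102/
```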

Below you can see how Interlock works and recreates the HAProxy configuration.

The same can be achieved via the GUI: enter the container details as above, add the environment variable for Interlock and select the option to automatically expose all ports.

This again will generate the corresponding HAProxy config.

You can now easily scale your application up using the GUI. Navigate to your container, click on the sliders icon, and then go to "Scale" at the top right. Enter the number of containers you want, e.g. 10, and UCP will spin up 10 new containers and reconfigure HAProxy.

Closing thoughts

The setup of a cluster for Docker containers is simple and straightforward. With the addition of Interlock, it's a good starting point for a container infrastructure.

 

But there is still work to do: rescheduling is missing, even though Docker Swarm 1.1 now has basic (experimental) support for it. I couldn't find a way to make UCP start Docker Swarm in experimental mode, but I'm pretty sure this will be added in one of the next releases.

If you want to know more about rescheduling in Docker Swarm, please read the article Rescheduling containers on node failures with Docker Swarm 1.1 by Maximilian Schoefmann.


Michael Müller

Michael has 15+ years of international experience in IT. Before joining Container Solutions, Michael was Head of IT and Cloud Innovations at Swisscom. Together with his team, he established DevOps, Microservices and Containerized Infrastructures at Swisscom.

4 Comments

    • Hi Steve! I hope I understand your question correctly. Interlock is responsible for creating the HAProxy config. When you run a container, the config gets created. If you then send requests with the host header you used when running the containers, traffic gets routed to the containers.

    • Right. I think I figured it out, though it was not as “automagic” as I’d hoped. I now still need to create DNS records upstream and point them to the HAProxy instance. From there, they are reachable. I was hoping that the front-end proxy could also dynamically update the upstream DNS (much like a traditional DHCP server can in a physical/VM world).

    • Good that you figured it out. You could create a domain or subdomain for everything running in UCP and add entries to your DNS like the following (where 1.2.3.4 is the IP of your HAProxy):

      example.com. IN A 1.2.3.4
      *.example.com. IN CNAME example.com.

      So traffic for app.example.com will hit HAProxy, and HAProxy will route it to the container of app.
