Dynamic Zookeeper Cluster with Docker

A while ago I came across a nice feature in Zookeeper: the ability to dynamically reconfigure a cluster. That means adding and removing nodes on the fly, something we've been looking for in our Terraform setup of Mesos clusters. The feature was added in the 3.5 branch, which is not stable yet. But let that not keep us from trying it out.

The idea we had was to put Zookeeper in a Docker image and run a container from that image on every master node in our Mesos cluster. When the container starts, it should either form a cluster on its own or connect to the existing cluster and become a member.

Let's start with the Dockerfile. (Of course the image is also available on Docker Hub.) We start from ubuntu:vivid, install git, ant and OpenJDK, clone the git repo and build the jar. Then we copy the sample configuration file and add two lines: one to specify we're not in standalone mode, and one to specify where the dynamic part of the configuration will live. Finally we add an init script, which we'll use as our entrypoint.


FROM ubuntu:vivid

# Build dependencies: git to fetch the source, ant and a JDK to build the jar
RUN apt-get update \
 && apt-get -y install git ant openjdk-8-jdk \
 && apt-get clean

# Build Zookeeper 3.5.1 from source
RUN mkdir /tmp/zookeeper
WORKDIR /tmp/zookeeper
RUN git clone https://github.com/apache/zookeeper.git .
RUN git checkout release-3.5.1-rc2
RUN ant jar

# Start from the sample config, then disable standalone mode and
# point at the file that will hold the dynamic part of the configuration
RUN cp /tmp/zookeeper/conf/zoo_sample.cfg /tmp/zookeeper/conf/zoo.cfg
RUN echo "standaloneEnabled=false" >> /tmp/zookeeper/conf/zoo.cfg
RUN echo "dynamicConfigFile=/tmp/zookeeper/conf/zoo.cfg.dynamic" >> /tmp/zookeeper/conf/zoo.cfg

# The init script decides whether to bootstrap a new cluster or join an existing one
ADD zk-init.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/zk-init.sh"]

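Building the image is a single docker build, assuming the Dockerfile and zk-init.sh sit in the current directory (the tag below matches the prebuilt image on Docker Hub):

docker build -t containersol/zookeeper .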
When we start the Zookeeper instance, we specify the id and, optionally, the IP address of an existing node in the cluster. If there's an existing node, we query it for its configuration and add the existing servers to the dynamic configuration file, together with the info of the current node. Then we initialize the node and start it, and ask the existing cluster to reconfigure itself to include the new server. Finally we stop Zookeeper on the current node and start it again in the foreground, so our container keeps running. If there's no existing node, we simply start our Zookeeper node on its own.


#!/bin/sh

# First argument: the unique id for this Zookeeper node.
# Second argument (optional): the IP address of a node in an existing cluster.
MYID=$1
ZK=$2

HOSTNAME=`hostname`
IPADDRESS=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1`
cd /tmp/zookeeper

if [ -n "$ZK" ]
then
  # Join an existing cluster: copy its current configuration into our
  # dynamic config file and add this node to it, initially as an observer
  echo "`bin/zkCli.sh -server $ZK:2181 get /zookeeper/config | grep ^server`" >> /tmp/zookeeper/conf/zoo.cfg.dynamic
  echo "server.$MYID=$IPADDRESS:2888:3888:observer;2181" >> /tmp/zookeeper/conf/zoo.cfg.dynamic
  cp /tmp/zookeeper/conf/zoo.cfg.dynamic /tmp/zookeeper/conf/zoo.cfg.dynamic.org
  /tmp/zookeeper/bin/zkServer-initialize.sh --force --myid=$MYID
  ZOO_LOG_DIR=/var/log ZOO_LOG4J_PROP='INFO,CONSOLE,ROLLINGFILE' /tmp/zookeeper/bin/zkServer.sh start
  # Ask the existing cluster to reconfigure and add this node as a participant
  /tmp/zookeeper/bin/zkCli.sh -server $ZK:2181 reconfig -add "server.$MYID=$IPADDRESS:2888:3888:participant;2181"
  # Restart in the foreground so the container keeps running
  /tmp/zookeeper/bin/zkServer.sh stop
  ZOO_LOG_DIR=/var/log ZOO_LOG4J_PROP='INFO,CONSOLE,ROLLINGFILE' /tmp/zookeeper/bin/zkServer.sh start-foreground
else
  # No existing node given: bootstrap a new cluster with this node as its only member
  echo "server.$MYID=$IPADDRESS:2888:3888;2181" >> /tmp/zookeeper/conf/zoo.cfg.dynamic
  /tmp/zookeeper/bin/zkServer-initialize.sh --force --myid=$MYID
  ZOO_LOG_DIR=/var/log ZOO_LOG4J_PROP='INFO,CONSOLE,ROLLINGFILE' /tmp/zookeeper/bin/zkServer.sh start-foreground
fi

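For reference, once the three nodes from the walkthrough below have joined, the dynamic configuration file ends up looking roughly like this (the IP addresses are made-up examples; yours will differ):

# /tmp/zookeeper/conf/zoo.cfg.dynamic
server.1=172.17.0.2:2888:3888:participant;2181
server.2=172.17.0.3:2888:3888:participant;2181
server.3=172.17.0.4:2888:3888:participant;2181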
To run a proper cluster you need to set up a separate host for each Zookeeper instance. It's perfectly possible, however, to create a test setup on your local machine.
We start the first node like this:

docker run --net=host --name zk1 containersol/zookeeper 1
If you want to run a test setup on one host, leave out the --net=host part, or else you'll get a port conflict when starting the other Zookeeper nodes.

The console will show a bunch of INFO messages, and at some point we'll see the message:
LEADING - LEADER ELECTION TOOK - 13.
Next we need the IP address of our node. With --net=host this is simply the IP of your host; in a single-host setup you'll need to inspect the container like this:

docker inspect zk1 | grep IPAddress
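
Or we can let docker inspect extract the address directly with a Go template; this reads the same field the grep above matches:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' zk1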

We pass that IP address as the second argument when starting the second node (substitute the address you found above):

docker run --net=host --name zk2 containersol/zookeeper 2 <ip-of-zk1>

This time we also see a few warnings and even an ERROR, but in the end we have two nodes running. We throw in a third for good measure:

docker run --net=host --name zk3 containersol/zookeeper 3 <ip-of-zk1>
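
To verify the roles in the ensemble, zkServer.sh status tells us whether a node is currently the leader or a follower:

docker exec zk1 /tmp/zookeeper/bin/zkServer.sh status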

We can query the nodes individually for their configuration, to see if they all recognize each other. To do this, we use the CLI script in one of the containers.

docker exec -it zk1 bin/zkCli.sh -server localhost:2181 config | grep ^server
(out) server.1=:2888:3888:participant;0.0.0.0:2181
(out) server.2=:2888:3888:participant;0.0.0.0:2181
(out) server.3=:2888:3888:participant;0.0.0.0:2181

Querying the other servers should yield the same results.
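
Removing a node is the mirror image of adding one. A minimal sketch, assuming we want to retire node 3: tell the cluster to reconfigure without it, then stop the container.

docker exec -it zk1 bin/zkCli.sh -server localhost:2181 reconfig -remove 3
docker stop zk3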

This is only a small part of our effort to create a scalable Mesos cluster with Terraform. You can check out the progress on the Containers branch of the GitHub repo!
