Continuous Delivery with Docker on Mesos in less than a minute – Part 2

In Part 1 I showed how to dockerize a Node.js application on the development machine, then deploy Jenkins and a Docker registry using Docker Compose and use them for continuous integration of the Node.js app.
In Part 2 I continue with the setup of Mesos and Marathon and complete the Continuous Delivery cycle.

Cloud

[Diagram: 15.02.11_diagram3]

If you have never heard of Mesos or Marathon, this is a good point to read a bit about them: here, here or here.

Now that we have functional development and continuous integration environments, we can start building the Mesos cluster.
Here is the full docker-compose.yml file that includes all parts of the system. In addition to the previously configured Jenkins and Docker registry, we now have a Mesos master, a single Mesos slave, Mesosphere Marathon and ZooKeeper for internal Mesos communication.
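The exact file lives in the repository linked at the end of the post. As a rough sketch only (the image names, ports and environment values below are illustrative assumptions, not the repository's exact contents), a Compose v1 file for this stack has roughly this shape:

```yaml
# Jenkins and registry services from Part 1 omitted for brevity.
zookeeper:
  image: jplock/zookeeper
  ports:
    - "2181:2181"

master:
  image: redjack/mesos-master
  links:
    - zookeeper
  ports:
    - "5050:5050"
  environment:
    - MESOS_ZK=zk://zookeeper:2181/mesos
    - MESOS_QUORUM=1

slave:
  image: redjack/mesos-slave
  links:
    - zookeeper
    - master
  environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_CONTAINERIZERS=docker,mesos
  volumes:
    # the slave starts app containers via the host's Docker daemon
    - /var/run/docker.sock:/var/run/docker.sock

marathon:
  image: mesosphere/marathon
  links:
    - zookeeper
    - master
  ports:
    - "8080:8080"
  command: --master zk://zookeeper:2181/mesos --zk zk://zookeeper:2181/marathon
```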

There is not much to explain in docker-compose.yml: all the environment parameters come from the usage instructions for the images on Docker Hub.
The Mesos slave container also uses the trick with the mounted Docker socket, but nothing needs to be fixed here since the slave runs as the root user, which has permission to access the socket.
It is important to note that the Jenkins container now includes a link to Marathon. This is required so that Jenkins can post requests to the Marathon container; we will use it below in the deployment step.

Now we can restart the system and see it all up and running:

The containers start very quickly (as they always do), but it takes about 30 seconds until all the services are online (on an Ubuntu VM running on a MacBook Air).
Mesos will start on http://localhost:5050. You can also see one active slave in the following screenshot; the slave has no publicly exposed port in this setup.

[Screenshot: Mesos UI showing one active slave]

Marathon will be accessible at http://localhost:8080

[Screenshot: Marathon UI]

Deployment

The last part of the journey is to deploy our freshly built Docker image on Mesos using Marathon.

[Diagram: 15.02.11_diagram4]

First we need to create the configuration file for scheduling the application on Marathon; let's call it app_marathon.json:
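The full file is in the repository; a plausible minimal version for a Dockerized app (the app id, image name and port values here are illustrative assumptions, not the repository's exact file) looks like this:

```json
{
  "id": "app",
  "instances": 1,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "containersol/hello-node",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 31000 }
      ]
    }
  }
}
```

The fixed hostPort is what makes the app reachable at http://localhost:31000/ later on.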

Here again there are some shortcuts. For example, an important missing piece is a health check that would tell Marathon whether the application is running.
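Marathon declares health checks in a healthChecks array inside the app definition. An illustrative HTTP check (all values here are assumptions for the sketch) could look like:

```json
"healthChecks": [
  {
    "protocol": "HTTP",
    "path": "/",
    "portIndex": 0,
    "gracePeriodSeconds": 30,
    "intervalSeconds": 10,
    "maxConsecutiveFailures": 3
  }
]
```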

Once we have the JSON file to publish, we can add the last script, deploy.sh, which removes the currently running application and redeploys it using the new image. There are better upgrade strategies but, again, I won’t discuss them here.
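The script itself is in the repository; a minimal sketch of what it could do, assuming Marathon's v2 REST API on localhost:8080 and the app id `app` (both assumptions here, matching the Marathon link set up in docker-compose.yml):

```shell
#!/bin/bash
# deploy.sh - redeploy the app on Marathon: destroy the running instance,
# then post the new app definition so the new image gets pulled.
set -euo pipefail

MARATHON_URL="${MARATHON_URL:-http://localhost:8080}"

remove_app() {
  # Marathon v2 API: DELETE destroys the running application
  curl -s -X DELETE "${MARATHON_URL}/v2/apps/$1"
}

deploy_app() {
  # POST the JSON app definition so Marathon schedules the new image
  curl -s -X POST -H "Content-Type: application/json" \
       -d @"$1" "${MARATHON_URL}/v2/apps"
}

# Usage (uncomment to run against a live Marathon):
# remove_app app
# sleep 2                      # crude wait; better: poll /v2/deployments
# deploy_app app_marathon.json
```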

The last step is to add deploy.sh to the Jenkins job configuration and run the build.

[Screenshot: Jenkins build configuration]

And after the build has finished successfully, we can see the application running on Marathon:

[Screenshot: the application running in Marathon]

And we can see our app at http://localhost:31000/:

[Screenshot: the app responding at http://localhost:31000/]

Now you can try changing your application and triggering the build in Jenkins. In just a few seconds it will find its way through Jenkins, the Docker Hub and Marathon to Mesos!

Here is a short video showing the complete setup in action:

Future Improvements

There are two main directions in which to improve this system: adding more functionality and improving the quality of the setup.

The list of possible extensions is very long. Here are some examples:

  • Extending HelloWorld example to be a proper web application
  • Adding multiple languages
  • Using a multi-container setup deployed on Mesos
  • Adding automated tests on multiple levels (unit tests, system tests, performance tests, etc.)
  • Triggering Jenkins builds automatically from a Git hook
  • Deploying to public clouds such as GCE, AWS, etc.
  • Running on multiple hosts
  • HAProxy setup
  • Auto-scaling with simulation of load using jmeter
  • Deploying a microservices based system
  • Using Flocker for persistent storage
  • Using Weave for networking containers
  • Using Consul for automatic service discovery
  • Adding system monitoring
  • Adding centralised logging

My preferred next step would be to focus on the part of the system facing external users and add HAProxy and auto-scaling capabilities, as shown in the next diagram:
[Diagram: 15.02.11_diagram5]

Final Words

I started working on this set-up mainly to help developers and administrators learn how to use Docker and Mesos in the context of continuous delivery. Over the last few months I saw that the full system is very complex, hard to explain and even harder to set up and start playing with.

For the same reason we asked our friend Marta Marszal to create the excellent visuals that make the story much clearer.

Please feel free to suggest improvements and to grow this example system to help other people learn and experiment.

You are also free to download the diagrams as separate images, or as a PDF with the full story, and re-use them any way you like (please don’t forget to mention us).
[PDF: CD_with_Mesos_Docker]

All the sources can also be found on GitHub: https://github.com/ContainerSolutions/cd_demo

[Diagram: 15.02.11_diagram6]


Pini Reznik

Pini has 15+ years of experience delivering software in Israel and the Netherlands. Starting as a developer and moving through technical, managerial and consulting positions in Configuration Management and Operations, Pini acquired a deep understanding of software delivery processes and currently helps organisations around Europe improve their software delivery pipelines by introducing Docker and other cutting-edge technologies.


16 Comments

  1. In my test it throws this error:

    [root@kubernetes cd_demo]# docker logs a01cf532d5bf
    I0321 08:17:19.334563 1 logging.cpp:172] INFO level logging started!
    I0321 08:17:19.335260 1 main.cpp:142] Build: 2014-11-22 05:29:57 by root
    I0321 08:17:19.335286 1 main.cpp:144] Version: 0.21.0
    I0321 08:17:19.335300 1 main.cpp:147] Git tag: 0.21.0
    I0321 08:17:19.335312 1 main.cpp:151] Git SHA: ab8fa655d34e8e15a4290422df38a18db1c09b5b
    Failed to create a containerizer: Could not create DockerContainerizer: Failed to execute 'docker version': exited with status 127

    What happened? How do I resolve it?
    link github issue: https://github.com/redjack/docker-mesos/issues/11

    • Os: centos 7
      Linux localhost.localdomain 3.10.0-123.13.2.el7.x86_64 #1 SMP Thu Dec 18 14:09:13 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

      docker --version
      Docker version 1.3.2, build 39fa2fa/1.3.2

      docker info:
      Containers: 1
      Images: 37
      Storage Driver: devicemapper
      Pool Name: docker-253:1-436219236-pool
      Pool Blocksize: 65.54 kB
      Data file: /var/lib/docker/devicemapper/devicemapper/data
      Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
      Data Space Used: 1.643 GB
      Data Space Total: 107.4 GB
      Metadata Space Used: 2.47 MB
      Metadata Space Total: 2.147 GB
      Library Version: 1.02.84-RHEL7 (2014-03-26)
      Execution Driver: native-0.2
      Kernel Version: 3.10.0-123.13.2.el7.x86_64
      Operating System: CentOS Linux 7 (Core)

    • I can’t make sharing the Docker daemon and socket work from CentOS to Debian, nor from Debian to CentOS. CentOS uses devicemapper as its storage driver and Debian uses aufs, which creates a conflict. This example runs fine on Debian.

  2. Hi,

    I am using docker-compose to launch the whole stack, but the slave doesn’t register with the master.

    Following is the log on launch

    Recreating cddemo_zookeeper_1…
    Recreating cddemo_master_1…
    Recreating cddemo_slave_1…
    Recreating cddemo_marathon_1…
    Attaching to cddemo_zookeeper_1, cddemo_master_1, cddemo_marathon_1
    zookeeper_1 | JMX enabled by default
    zookeeper_1 | Using config: /opt/zookeeper/bin/../conf/zoo.cfg
    zookeeper_1 | 2015-06-26 12:53:34,581 [myid:] – INFO [main:QuorumPeerConfig@103] – Reading configuration from: /opt/zookeeper/bin/../conf/zoo.cfg
    zookeeper_1 | 2015-06-26 12:53:34,592 [myid:] – INFO [main:DatadirCleanupManager@78] – autopurge.snapRetainCount set to 3
    zookeeper_1 | 2015-06-26 12:53:34,592 [myid:] – INFO [main:DatadirCleanupManager@79] – autopurge.purgeInterval set to 0
    zookeeper_1 | 2015-06-26 12:53:34,592 [myid:] – INFO [main:DatadirCleanupManager@101] – Purge task is not scheduled.
    zookeeper_1 | 2015-06-26 12:53:34,593 [myid:] – WARN [main:QuorumPeerMain@113] – Either no config or no quorum defined in config, running in standalone mode
    zookeeper_1 | 2015-06-26 12:53:34,602 [myid:] – INFO [main:QuorumPeerConfig@103] – Reading configuration from: /opt/zookeeper/bin/../conf/zoo.cfg
    zookeeper_1 | 2015-06-26 12:53:34,602 [myid:] – INFO [main:ZooKeeperServerMain@95] – Starting server
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:host.name=99f5fab81571
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.version=1.7.0_65
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.vendor=Oracle Corporation
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.class.path=/opt/zookeeper/bin/../build/classes:/opt/zookeeper/bin/../build/lib/*.jar:/opt/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/opt/zookeeper/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/zookeeper/bin/../zookeeper-3.4.6.jar:/opt/zookeeper/bin/../src/java/lib/*.jar:/opt/zookeeper/bin/../conf:
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.io.tmpdir=/tmp
    zookeeper_1 | 2015-06-26 12:53:34,609 [myid:] – INFO [main:Environment@100] – Server environment:java.compiler=
    zookeeper_1 | 2015-06-26 12:53:34,611 [myid:] – INFO [main:Environment@100] – Server environment:os.name=Linux
    zookeeper_1 | 2015-06-26 12:53:34,611 [myid:] – INFO [main:Environment@100] – Server environment:os.arch=amd64
    zookeeper_1 | 2015-06-26 12:53:34,611 [myid:] – INFO [main:Environment@100] – Server environment:os.version=3.19.7-200.fc21.x86_64
    zookeeper_1 | 2015-06-26 12:53:34,611 [myid:] – INFO [main:Environment@100] – Server environment:user.name=root
    zookeeper_1 | 2015-06-26 12:53:34,611 [myid:] – INFO [main:Environment@100] – Server environment:user.home=/root
    zookeeper_1 | 2015-06-26 12:53:34,611 [myid:] – INFO [main:Environment@100] – Server environment:user.dir=/opt/zookeeper
    zookeeper_1 | 2015-06-26 12:53:34,615 [myid:] – INFO [main:ZooKeeperServer@755] – tickTime set to 2000
    zookeeper_1 | 2015-06-26 12:53:34,615 [myid:] – INFO [main:ZooKeeperServer@764] – minSessionTimeout set to -1
    zookeeper_1 | 2015-06-26 12:53:34,615 [myid:] – INFO [main:ZooKeeperServer@773] – maxSessionTimeout set to -1
    zookeeper_1 | 2015-06-26 12:53:34,716 [myid:] – INFO [main:NIOServerCnxnFactory@94] – binding to port 0.0.0.0/0.0.0.0:2181
    zookeeper_1 | 2015-06-26 12:53:37,079 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.49:34260
    zookeeper_1 | 2015-06-26 12:53:37,083 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.49:34260; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:37,097 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.49:34260
    zookeeper_1 | 2015-06-26 12:53:37,098 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.49:34262
    zookeeper_1 | 2015-06-26 12:53:37,098 [myid:] – INFO [SyncThread:0:FileTxnLog@199] – Creating new log file: log.1f
    zookeeper_1 | 2015-06-26 12:53:37,098 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.49:34263
    zookeeper_1 | 2015-06-26 12:53:37,098 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.49:34262; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:37,098 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.49:34262
    zookeeper_1 | 2015-06-26 12:53:37,099 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.49:34263; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:37,099 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.49:34263
    zookeeper_1 | 2015-06-26 12:53:37,108 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.49:34265
    zookeeper_1 | 2015-06-26 12:53:37,108 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.49:34265; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:37,108 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.49:34265
    zookeeper_1 | 2015-06-26 12:53:37,110 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0000 with negotiated timeout 10000 for client /172.17.0.49:34260
    zookeeper_1 | 2015-06-26 12:53:37,117 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0001 with negotiated timeout 10000 for client /172.17.0.49:34262
    zookeeper_1 | 2015-06-26 12:53:37,117 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0002 with negotiated timeout 10000 for client /172.17.0.49:34263
    zookeeper_1 | 2015-06-26 12:53:37,118 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0003 with negotiated timeout 10000 for client /172.17.0.49:34265
    master_1 | I0626 12:53:36.598446 1 logging.cpp:172] INFO level logging started!
    master_1 | I0626 12:53:36.598634 1 main.cpp:167] Build: 2014-11-22 05:29:57 by root
    master_1 | I0626 12:53:36.598644 1 main.cpp:169] Version: 0.21.0
    master_1 | I0626 12:53:36.598647 1 main.cpp:172] Git tag: 0.21.0
    master_1 | I0626 12:53:36.598651 1 main.cpp:176] Git SHA: ab8fa655d34e8e15a4290422df38a18db1c09b5b
    master_1 | I0626 12:53:37.064618 1 leveldb.cpp:176] Opened db in 465.562252ms
    master_1 | I0626 12:53:37.072793 1 leveldb.cpp:183] Compacted db in 8.124325ms
    master_1 | I0626 12:53:37.072886 1 leveldb.cpp:198] Created db iterator in 47850ns
    master_1 | I0626 12:53:37.072902 1 leveldb.cpp:204] Seeked to beginning of db in 3551ns
    master_1 | I0626 12:53:37.072963 1 leveldb.cpp:273] Iterated through 0 keys in the db in 54525ns
    master_1 | I0626 12:53:37.073155 1 replica.cpp:741] Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfa754700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfa754700):ZOO_INFO@log_env@716: Client environment:host.name=master
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfa754700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfa754700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.7-200.fc21.x86_64
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfa754700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu May 7 22:00:21 UTC 2015
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfc758700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfc758700):ZOO_INFO@log_env@716: Client environment:host.name=master
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfc758700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfc758700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.7-200.fc21.x86_64
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfc758700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu May 7 22:00:21 UTC 2015
    master_1 | I0626 12:53:37.075690 8 log.cpp:238] Attempting to join replica to ZooKeeper group
    master_1 | 2015-06-26 12:53:37,075:1(0x7f1dfa754700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfc758700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfa754700):ZOO_INFO@log_env@741: Client environment:user.home=/root
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfa754700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfa754700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zookeeper:2181 sessionTimeout=10000 watcher=0x7f1e01d48a0a sessionId=0 sessionPasswd= context=0x7f1df0001350 flags=0
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfc758700):ZOO_INFO@log_env@741: Client environment:user.home=/root
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfc758700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp
    master_1 | 2015-06-26 12:53:37,076:1(0x7f1dfc758700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zookeeper:2181 sessionTimeout=10000 watcher=0x7f1e01d48a0a sessionId=0 sessionPasswd= context=0x7f1de0000eb0 flags=0
    master_1 | I0626 12:53:37.076454 13 recover.cpp:437] Starting replica recovery
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@716: Client environment:host.name=master
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.7-200.fc21.x86_64
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu May 7 22:00:21 UTC 2015
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@716: Client environment:host.name=master
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.7-200.fc21.x86_64
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu May 7 22:00:21 UTC 2015
    master_1 | I0626 12:53:37.077245 1 main.cpp:292] Starting Mesos master
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@741: Client environment:user.home=/root
    marathon_1 | MESOS_NATIVE_JAVA_LIBRARY is not set. Searching in /usr/lib /usr/local/lib.
    marathon_1 | MESOS_NATIVE_LIBRARY, MESOS_NATIVE_JAVA_LIBRARY set to ‘/usr/lib/libmesos.so’
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1dfbf57700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zookeeper:2181 sessionTimeout=10000 watcher=0x7f1e01d48a0a sessionId=0 sessionPasswd= context=0x7f1de80012e0 flags=0
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@741: Client environment:user.home=/root
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@log_env@753: Client environment:user.dir=/tmp
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df9f53700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zookeeper:2181 sessionTimeout=10000 watcher=0x7f1e01d48a0a sessionId=0 sessionPasswd= context=0x7f1dd8002160 flags=0
    master_1 | 2015-06-26 12:53:37,077:1(0x7f1df6f12700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.47:2181]
    master_1 | I0626 12:53:37.077888 13 recover.cpp:463] Replica is in EMPTY status
    master_1 | I0626 12:53:37.079658 9 replica.cpp:638] Replica in EMPTY status received a broadcasted recover request
    master_1 | I0626 12:53:37.080459 8 recover.cpp:188] Received a recover response from a replica in EMPTY status
    master_1 | I0626 12:53:37.080941 13 recover.cpp:554] Updating replica status to STARTING
    master_1 | 2015-06-26 12:53:37,106:1(0x7f1df6711700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.47:2181]
    master_1 | I0626 12:53:37.108281 10 master.cpp:318] Master 20150626-125337-822088108-5050-1 (master) started on 172.17.0.49:5050
    master_1 | 2015-06-26 12:53:37,110:1(0x7f1df6f12700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.47:2181], sessionId=0x14e2fee2b9e0000, negotiated timeout=10000
    master_1 | 2015-06-26 12:53:37,117:1(0x7f1df6711700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.47:2181], sessionId=0x14e2fee2b9e0001, negotiated timeout=10000
    master_1 | 2015-06-26 12:53:37,127:1(0x7f1df4f0e700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.47:2181]
    master_1 | 2015-06-26 12:53:37,127:1(0x7f1def7fe700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.47:2181]
    master_1 | 2015-06-26 12:53:37,127:1(0x7f1df4f0e700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.47:2181], sessionId=0x14e2fee2b9e0002, negotiated timeout=10000
    master_1 | 2015-06-26 12:53:37,127:1(0x7f1def7fe700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.47:2181], sessionId=0x14e2fee2b9e0003, negotiated timeout=10000
    master_1 | I0626 12:53:37.128437 9 group.cpp:313] Group process (group(2)@172.17.0.49:5050) connected to ZooKeeper
    master_1 | I0626 12:53:37.128478 9 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (1, 0, 0)
    master_1 | I0626 12:53:37.128492 9 group.cpp:385] Trying to create path ‘/mesos/log_replicas’ in ZooKeeper
    master_1 | I0626 12:53:37.128736 13 group.cpp:313] Group process (group(4)@172.17.0.49:5050) connected to ZooKeeper
    master_1 | I0626 12:53:37.128758 11 group.cpp:313] Group process (group(3)@172.17.0.49:5050) connected to ZooKeeper
    master_1 | I0626 12:53:37.128787 6 group.cpp:313] Group process (group(1)@172.17.0.49:5050) connected to ZooKeeper
    master_1 | I0626 12:53:37.128798 13 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
    master_1 | I0626 12:53:37.128826 6 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
    master_1 | I0626 12:53:37.128811 11 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
    master_1 | I0626 12:53:37.128836 13 group.cpp:385] Trying to create path ‘/mesos’ in ZooKeeper
    master_1 | I0626 12:53:37.128849 11 group.cpp:385] Trying to create path ‘/mesos’ in ZooKeeper
    master_1 | I0626 12:53:37.128841 6 group.cpp:385] Trying to create path ‘/mesos/log_replicas’ in ZooKeeper
    master_1 | I0626 12:53:37.136361 10 master.cpp:366] Master allowing unauthenticated frameworks to register
    master_1 | I0626 12:53:37.136402 10 master.cpp:371] Master allowing unauthenticated slaves to register
    master_1 | I0626 12:53:37.142256 8 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 61.067633ms
    master_1 | I0626 12:53:37.142412 8 replica.cpp:320] Persisted replica status to STARTING
    master_1 | I0626 12:53:37.143751 7 recover.cpp:463] Replica is in STARTING status
    master_1 | I0626 12:53:37.144172 10 contender.cpp:131] Joining the ZK group
    master_1 | I0626 12:53:37.144224 8 master.cpp:1202] Successfully attached file ‘/var/log/mesos-master.INFO’
    master_1 | I0626 12:53:37.145093 8 replica.cpp:638] Replica in STARTING status received a broadcasted recover request
    master_1 | I0626 12:53:37.145418 7 recover.cpp:188] Received a recover response from a replica in STARTING status
    master_1 | I0626 12:53:37.145638 8 recover.cpp:554] Updating replica status to VOTING
    master_1 | I0626 12:53:37.153756 7 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 7.981304ms
    master_1 | I0626 12:53:37.153848 7 replica.cpp:320] Persisted replica status to VOTING
    master_1 | I0626 12:53:37.154075 8 recover.cpp:568] Successfully joined the Paxos group
    master_1 | I0626 12:53:37.154477 8 recover.cpp:452] Recover process terminated
    master_1 | I0626 12:53:37.170176 6 network.hpp:424] ZooKeeper group memberships changed
    master_1 | I0626 12:53:37.170949 10 group.cpp:659] Trying to get ‘/mesos/log_replicas/0000000000′ in ZooKeeper
    master_1 | I0626 12:53:37.443980 7 detector.cpp:138] Detected a new leader: (id=’0’)
    master_1 | I0626 12:53:37.444546 7 group.cpp:659] Trying to get ‘/mesos/info_0000000000′ in ZooKeeper
    master_1 | I0626 12:53:37.444823 9 network.hpp:466] ZooKeeper group PIDs: { log-replica(1)@172.17.0.41:5050 }
    master_1 | I0626 12:53:37.449951 9 contender.cpp:247] New candidate (id=’2’) has entered the contest for leadership
    master_1 | I0626 12:53:37.450053 10 network.hpp:424] ZooKeeper group memberships changed
    master_1 | I0626 12:53:37.450242 6 group.cpp:659] Trying to get ‘/mesos/log_replicas/0000000000’ in ZooKeeper
    master_1 | I0626 12:53:37.450275 12 detector.cpp:433] A new leading master (UPID=master@172.17.0.41:5050) is detected
    master_1 | I0626 12:53:37.450832 8 master.cpp:1263] The newly elected leader is master@172.17.0.41:5050 with id 20150626-124831-687870380-5050-1
    master_1 | I0626 12:53:37.451058 6 group.cpp:659] Trying to get ‘/mesos/log_replicas/0000000001’ in ZooKeeper
    master_1 | I0626 12:53:37.451669 7 network.hpp:466] ZooKeeper group PIDs: { log-replica(1)@172.17.0.41:5050, log-replica(1)@172.17.0.49:5050 }
    master_1 | I0626 12:53:41.806807 9 http.cpp:478] HTTP request for ‘/master/state.json’
    marathon_1 | [2015-06-26 12:53:43,888] INFO Starting Marathon 0.8.2 (mesosphere.marathon.Main$:87)
    marathon_1 | [2015-06-26 12:53:45,059] INFO Connecting to Zookeeper… (mesosphere.marathon.Main$:37)
    marathon_1 | [2015-06-26 12:53:45,064] INFO Client environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,064] INFO Client environment:host.name=a136b2e373b7 (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,064] INFO Client environment:java.version=1.7.0_79 (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,064] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,064] INFO Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:java.class.path=./bin/../target/marathon-assembly-0.8.2.jar (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:os.version=3.19.7-200.fc21.x86_64 (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,065] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,066] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,067] INFO Client environment:user.dir=/marathon (org.apache.zookeeper.ZooKeeper:97)
    marathon_1 | [2015-06-26 12:53:45,068] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=10000 watcher=com.twitter.common.zookeeper.ZooKeeperClient$3@4cd06050 (org.apache.zookeeper.ZooKeeper:379)
    marathon_1 | [2015-06-26 12:53:45,077] INFO Opening socket connection to server zookeeper/172.17.0.47:2181 (org.apache.zookeeper.ClientCnxn:1061)
    zookeeper_1 | 2015-06-26 12:53:45,082 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.53:52379
    marathon_1 | [2015-06-26 12:53:45,083] INFO Socket connection established to zookeeper/172.17.0.47:2181, initiating session (org.apache.zookeeper.ClientCnxn:950)
    zookeeper_1 | 2015-06-26 12:53:45,085 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.53:52379; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:45,085 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.53:52379
    zookeeper_1 | 2015-06-26 12:53:45,089 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0004 with negotiated timeout 10000 for client /172.17.0.53:52379
    marathon_1 | [2015-06-26 12:53:45,091] INFO Session establishment complete on server zookeeper/172.17.0.47:2181, sessionid = 0x14e2fee2b9e0004, negotiated timeout = 10000 (org.apache.zookeeper.ClientCnxn:739)
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@716: Client environment:host.name=a136b2e373b7
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.7-200.fc21.x86_64
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu May 7 22:00:21 UTC 2015
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@741: Client environment:user.home=/root
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@log_env@753: Client environment:user.dir=/marathon
    marathon_1 | 2015-06-26 12:53:45,730:1(0x7fe3527fc700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zookeeper:2181 sessionTimeout=10000 watcher=0x7fe35ae35a60 sessionId=0 sessionPasswd= context=0x7fe2d4000d60 flags=0
    marathon_1 | 2015-06-26 12:53:45,731:1(0x7fe350ff9700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.47:2181]
    zookeeper_1 | 2015-06-26 12:53:45,731 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.53:52380
    zookeeper_1 | 2015-06-26 12:53:45,731 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.53:52380; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:45,731 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.53:52380
    marathon_1 | 2015-06-26 12:53:45,735:1(0x7fe350ff9700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.47:2181], sessionId=0x14e2fee2b9e0005, negotiated timeout=10000
    zookeeper_1 | 2015-06-26 12:53:45,735 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0005 with negotiated timeout 10000 for client /172.17.0.53:52380
    marathon_1 | [2015-06-26 12:53:45,889] INFO Registering in Zookeeper with hostname:a136b2e373b7 (mesosphere.marathon.MarathonModule:140)
    marathon_1 | [2015-06-26 12:53:45,943] INFO Adding HTTP support. (mesosphere.marathon.MarathonApp$$anon$1:47)
    marathon_1 | [2015-06-26 12:53:45,943] INFO No HTTPS support configured. (mesosphere.marathon.MarathonApp$$anon$1:50)
    marathon_1 | [2015-06-26 12:53:45,946] INFO Starting up (mesosphere.marathon.MarathonSchedulerService:148)
    marathon_1 | [2015-06-26 12:53:45,947] INFO Beginning run (mesosphere.marathon.MarathonSchedulerService:153)
    marathon_1 | [2015-06-26 12:53:45,947] INFO Will offer leadership after 500 milliseconds backoff (mesosphere.marathon.MarathonSchedulerService:334)
    marathon_1 | [2015-06-26 12:53:45,948] INFO jetty-8.y.z-SNAPSHOT (org.eclipse.jetty.server.Server:272)
    zookeeper_1 | 2015-06-26 12:53:46,000 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020002, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020006, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020001, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020005, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020003, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020004, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [SessionTracker:ZooKeeperServer@347] – Expiring session 0x14e2fe981020000, timeout of 10000ms exceeded
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020002
    zookeeper_1 | 2015-06-26 12:53:46,001 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020006
    zookeeper_1 | 2015-06-26 12:53:46,002 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020001
    zookeeper_1 | 2015-06-26 12:53:46,002 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020005
    zookeeper_1 | 2015-06-26 12:53:46,002 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020003
    zookeeper_1 | 2015-06-26 12:53:46,002 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020004
    zookeeper_1 | 2015-06-26 12:53:46,002 [myid:] – INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] – Processed session termination for sessionid: 0x14e2fe981020000
    master_1 | I0626 12:53:46.009491 10 network.hpp:424] ZooKeeper group memberships changed
    master_1 | I0626 12:53:46.009886 10 group.cpp:659] Trying to get ‘/mesos/log_replicas/0000000001′ in ZooKeeper
    master_1 | I0626 12:53:46.010826 12 network.hpp:466] ZooKeeper group PIDs: { log-replica(1)@172.17.0.49:5050 }
    master_1 | I0626 12:53:46.010905 11 detector.cpp:138] Detected a new leader: (id=’2’)
    master_1 | I0626 12:53:46.011155 8 group.cpp:659] Trying to get ‘/mesos/info_0000000002’ in ZooKeeper
    master_1 | I0626 12:53:46.011996 9 detector.cpp:433] A new leading master (UPID=master@172.17.0.49:5050) is detected
    master_1 | I0626 12:53:46.012110 7 master.cpp:1263] The newly elected leader is master@172.17.0.49:5050 with id 20150626-125337-822088108-5050-1
    master_1 | I0626 12:53:46.012138 7 master.cpp:1276] Elected as the leading master!
    master_1 | I0626 12:53:46.012161 7 master.cpp:1094] Recovering from registrar
    master_1 | I0626 12:53:46.012356 9 registrar.cpp:313] Recovering registrar
    master_1 | I0626 12:53:46.013794 7 log.cpp:656] Attempting to start the writer
    master_1 | I0626 12:53:46.016116 7 replica.cpp:474] Replica received implicit promise request with proposal 1
    marathon_1 | [2015-06-26 12:53:46,034] INFO Registering com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:113)
    master_1 | I0626 12:53:46.029824 7 leveldb.cpp:306] Persisting metadata (8 bytes) to leveldb took 13.589458ms
    master_1 | I0626 12:53:46.035176 7 replica.cpp:342] Persisted promised to 1
    marathon_1 | [2015-06-26 12:53:46,035] INFO Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:113)
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.chaos.validation.ConstraintViolationExceptionMapper as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:113)
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.marathon.api.MarathonExceptionMapper as a provider class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:113)
    master_1 | I0626 12:53:46.036655 7 coordinator.cpp:230] Coordinator attemping to fill missing position
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.marathon.api.v2.AppsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.marathon.api.v2.TasksResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.marathon.api.v2.EventSubscriptionsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.marathon.api.v2.QueueResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,036] INFO Registering mesosphere.marathon.api.v2.GroupsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,037] INFO Registering mesosphere.marathon.api.v2.InfoResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,037] INFO Registering mesosphere.marathon.api.v2.LeaderResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,037] INFO Registering mesosphere.marathon.api.v2.DeploymentsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,037] INFO Registering mesosphere.marathon.api.v2.ArtifactsResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    marathon_1 | [2015-06-26 12:53:46,037] INFO Registering mesosphere.marathon.api.v2.SchemaResource as a root resource class (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:116)
    master_1 | I0626 12:53:46.038199 11 replica.cpp:375] Replica received explicit promise request for position 0 with proposal 2
    marathon_1 | [2015-06-26 12:53:46,039] INFO Initiating Jersey application, version ‘Jersey: 1.18.1 02/19/2014 03:28 AM’ (com.sun.jersey.server.impl.application.WebApplicationImpl:815)
    master_1 | I0626 12:53:46.047005 11 leveldb.cpp:343] Persisting action (8 bytes) to leveldb took 8.723117ms
    master_1 | I0626 12:53:46.047082 11 replica.cpp:676] Persisted action at 0
    master_1 | I0626 12:53:46.048579 10 replica.cpp:508] Replica received write request for position 0
    master_1 | I0626 12:53:46.048688 10 leveldb.cpp:438] Reading position from leveldb took 44616ns
    master_1 | I0626 12:53:46.054891 10 leveldb.cpp:343] Persisting action (14 bytes) to leveldb took 6.131471ms
    master_1 | I0626 12:53:46.054965 10 replica.cpp:676] Persisted action at 0
    master_1 | I0626 12:53:46.055663 10 replica.cpp:655] Replica received learned notice for position 0
    master_1 | I0626 12:53:46.062644 10 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 6.924248ms
    master_1 | I0626 12:53:46.062738 10 replica.cpp:676] Persisted action at 0
    master_1 | I0626 12:53:46.062788 10 replica.cpp:661] Replica learned NOP action at position 0
    master_1 | I0626 12:53:46.063309 10 log.cpp:672] Writer started with ending position 0
    master_1 | I0626 12:53:46.065325 10 leveldb.cpp:438] Reading position from leveldb took 35270ns
    master_1 | I0626 12:53:46.067427 13 registrar.cpp:346] Successfully fetched the registry (0B) in 55.02592ms
    master_1 | I0626 12:53:46.067654 13 registrar.cpp:445] Applied 1 operations in 55472ns; attempting to update the ‘registry’
    master_1 | I0626 12:53:46.069710 7 log.cpp:680] Attempting to append 116 bytes to the log
    master_1 | I0626 12:53:46.069885 13 coordinator.cpp:340] Coordinator attempting to write APPEND action at position 1
    master_1 | I0626 12:53:46.070451 9 replica.cpp:508] Replica received write request for position 1
    master_1 | I0626 12:53:46.080544 9 leveldb.cpp:343] Persisting action (133 bytes) to leveldb took 10.036587ms
    master_1 | I0626 12:53:46.080629 9 replica.cpp:676] Persisted action at 1
    master_1 | I0626 12:53:46.081200 9 replica.cpp:655] Replica received learned notice for position 1
    master_1 | I0626 12:53:46.089679 9 leveldb.cpp:343] Persisting action (135 bytes) to leveldb took 8.430152ms
    master_1 | I0626 12:53:46.089752 9 replica.cpp:676] Persisted action at 1
    master_1 | I0626 12:53:46.089803 9 replica.cpp:661] Replica learned APPEND action at position 1
    master_1 | I0626 12:53:46.091164 6 registrar.cpp:490] Successfully updated the ‘registry’ in 23328us
    master_1 | I0626 12:53:46.091336 6 registrar.cpp:376] Successfully recovered registrar
    master_1 | I0626 12:53:46.091621 6 master.cpp:1121] Recovered 0 slaves from the Registry (80B) ; allowing 10mins for slaves to re-register
    master_1 | I0626 12:53:46.091702 13 log.cpp:699] Attempting to truncate the log to 1
    master_1 | I0626 12:53:46.091852 8 coordinator.cpp:340] Coordinator attempting to write TRUNCATE action at position 2
    master_1 | I0626 12:53:46.092433 10 replica.cpp:508] Replica received write request for position 2
    master_1 | I0626 12:53:46.098285 10 leveldb.cpp:343] Persisting action (16 bytes) to leveldb took 5.803022ms
    master_1 | I0626 12:53:46.098371 10 replica.cpp:676] Persisted action at 2
    master_1 | I0626 12:53:46.098894 7 replica.cpp:655] Replica received learned notice for position 2
    marathon_1 | [2015-06-26 12:53:46,104] INFO Binding com.codahale.metrics.jersey.InstrumentedResourceMethodDispatchAdapter to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    master_1 | I0626 12:53:46.104647 7 leveldb.cpp:343] Persisting action (18 bytes) to leveldb took 5.672383ms
    master_1 | I0626 12:53:46.104771 7 leveldb.cpp:401] Deleting ~1 keys from leveldb took 40673ns
    master_1 | I0626 12:53:46.104791 7 replica.cpp:676] Persisted action at 2
    master_1 | I0626 12:53:46.104827 7 replica.cpp:661] Replica learned TRUNCATE action at position 2
    marathon_1 | [2015-06-26 12:53:46,116] INFO Binding mesosphere.chaos.validation.ConstraintViolationExceptionMapper to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,116] INFO Binding mesosphere.marathon.api.MarathonExceptionMapper to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,117] INFO Binding com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,131] INFO HV000001: Hibernate Validator 5.1.2.Final (org.hibernate.validator.internal.util.Version:27)
    marathon_1 | [2015-06-26 12:53:46,469] INFO Using HA and therefore offering leadership (mesosphere.marathon.MarathonSchedulerService:341)
    marathon_1 | [2015-06-26 12:53:46,479] INFO Set group member ID to member_0000000001 (com.twitter.common.zookeeper.Group:426)
    marathon_1 | [2015-06-26 12:53:46,488] INFO Candidate /marathon/leader/member_0000000001 is now leader of group: [member_0000000001] (com.twitter.common.zookeeper.CandidateImpl:152)
    marathon_1 | [2015-06-26 12:53:46,488] INFO Elected (Leader Interface) (mesosphere.marathon.MarathonSchedulerService:252)
    marathon_1 | [2015-06-26 12:53:46,520] INFO Elect leadership (mesosphere.marathon.MarathonSchedulerService:299)
    marathon_1 | [2015-06-26 12:53:46,520] INFO Migration successfully applied for version Version(0, 8, 2) (mesosphere.marathon.state.Migration:69)
    marathon_1 | [2015-06-26 12:53:46,520] INFO Running driver (mesosphere.marathon.MarathonSchedulerService:187)
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@716: Client environment:host.name=a136b2e373b7
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.7-200.fc21.x86_64
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Thu May 7 22:00:21 UTC 2015
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@741: Client environment:user.home=/root
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@log_env@753: Client environment:user.dir=/marathon
    marathon_1 | 2015-06-26 12:53:46,521:1(0x7fe3592fa700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=zookeeper:2181 sessionTimeout=10000 watcher=0x7fe35ae35a60 sessionId=0 sessionPasswd= context=0x7fe2d0000d30 flags=0
    marathon_1 | I0626 12:53:46.521939 60 sched.cpp:157] Version: 0.22.1
    marathon_1 | 2015-06-26 12:53:46,522:1(0x7fe3337de700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.47:2181]
    zookeeper_1 | 2015-06-26 12:53:46,522 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] – Accepted socket connection from /172.17.0.53:52385
    zookeeper_1 | 2015-06-26 12:53:46,522 [myid:] – WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@822] – Connection request from old client /172.17.0.53:52385; will be dropped if server is in r-o mode
    zookeeper_1 | 2015-06-26 12:53:46,522 [myid:] – INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] – Client attempting to establish new session at /172.17.0.53:52385
    marathon_1 | [2015-06-26 12:53:46,524] INFO Reset offerLeadership backoff (mesosphere.marathon.MarathonSchedulerService:329)
    marathon_1 | [INFO] [06/26/2015 12:53:46.525] [marathon-akka.actor.default-dispatcher-2] [akka://marathon/user/MarathonScheduler/$a] Starting scheduler actor
    marathon_1 | 2015-06-26 12:53:46,529:1(0x7fe3337de700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.47:2181], sessionId=0x14e2fee2b9e0006, negotiated timeout=10000
    marathon_1 | I0626 12:53:46.529448 49 group.cpp:313] Group process (group(1)@172.17.0.53:49023) connected to ZooKeeper
    marathon_1 | I0626 12:53:46.529505 49 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
    marathon_1 | I0626 12:53:46.529512 49 group.cpp:385] Trying to create path ‘/mesos’ in ZooKeeper
    zookeeper_1 | 2015-06-26 12:53:46,529 [myid:] – INFO [SyncThread:0:ZooKeeperServer@617] – Established session 0x14e2fee2b9e0006 with negotiated timeout 10000 for client /172.17.0.53:52385
    marathon_1 | I0626 12:53:46.531232 46 detector.cpp:138] Detected a new leader: (id=’2′)
    marathon_1 | I0626 12:53:46.531301 46 group.cpp:659] Trying to get ‘/mesos/info_0000000002’ in ZooKeeper
    marathon_1 | I0626 12:53:46.531805 48 detector.cpp:452] A new leading master (UPID=master@172.17.0.49:5050) is detected
    marathon_1 | I0626 12:53:46.531838 48 sched.cpp:254] New master detected at master@172.17.0.49:5050
    marathon_1 | I0626 12:53:46.532074 48 sched.cpp:264] No credentials provided. Attempting to register without authentication
    master_1 | I0626 12:53:46.532624 9 master.cpp:1520] Received re-registration request from framework 20150626-124831-687870380-5050-1-0000 (marathon) at scheduler-433f22d8-2cf1-46bc-ab3b-ab37ed14cb7b@172.17.0.53:49023
    master_1 | I0626 12:53:46.533144 9 master.cpp:1573] Re-registering framework 20150626-124831-687870380-5050-1-0000 (marathon) at scheduler-433f22d8-2cf1-46bc-ab3b-ab37ed14cb7b@172.17.0.53:49023
    master_1 | I0626 12:53:46.533725 10 hierarchical_allocator_process.hpp:329] Added framework 20150626-124831-687870380-5050-1-0000
    marathon_1 | I0626 12:53:46.533907 48 sched.cpp:448] Framework registered with 20150626-124831-687870380-5050-1-0000
    marathon_1 | [2015-06-26 12:53:46,534] INFO Registered as 20150626-124831-687870380-5050-1-0000 to master ‘20150626-125337-822088108-5050-1’ (mesosphere.marathon.MarathonScheduler:55)
    marathon_1 | [INFO] [06/26/2015 12:53:46.536] [marathon-akka.actor.default-dispatcher-4] [akka://marathon/user/MarathonScheduler/$a] Scheduler actor ready
    marathon_1 | [2015-06-26 12:53:46,541] INFO Binding mesosphere.marathon.api.v2.AppsResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,541] INFO Stored framework ID ‘20150626-124831-687870380-5050-1-0000’ (mesosphere.mesos.util.FrameworkIdUtil:49)
    marathon_1 | [2015-06-26 12:53:46,547] INFO Binding mesosphere.marathon.api.v2.TasksResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,548] INFO Binding mesosphere.marathon.api.v2.EventSubscriptionsResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,549] INFO Binding mesosphere.marathon.api.v2.QueueResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,554] INFO Binding mesosphere.marathon.api.v2.GroupsResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,555] INFO Binding mesosphere.marathon.api.v2.InfoResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,556] INFO Binding mesosphere.marathon.api.v2.LeaderResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,557] INFO Binding mesosphere.marathon.api.v2.DeploymentsResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,562] INFO Binding mesosphere.marathon.api.v2.ArtifactsResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,563] INFO Binding mesosphere.marathon.api.v2.SchemaResource to GuiceManagedComponentProvider with the scope “Singleton” (com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory:168)
    marathon_1 | [2015-06-26 12:53:46,580] INFO Started SelectChannelConnector@0.0.0.0:8080 (org.eclipse.jetty.server.AbstractConnector:338)
    master_1 | I0626 12:53:48.810047 8 http.cpp:344] HTTP request for ‘/master/redirect’
    marathon_1 | [2015-06-26 12:54:01,527] INFO Syncing tasks for all apps (mesosphere.marathon.SchedulerActions:457)
    marathon_1 | [INFO] [06/26/2015 12:54:01.528] [marathon-akka.actor.default-dispatcher-3] [akka://marathon/deadLetters] Message [mesosphere.marathon.MarathonSchedulerActor$TasksReconciled$] from Actor[akka://marathon/user/MarathonScheduler/$a#1476646106] to Actor[akka://marathon/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings ‘akka.log-dead-letters’ and ‘akka.log-dead-letters-during-shutdown’.
    marathon_1 | [2015-06-26 12:54:01,529] INFO Requesting task reconciliation with the Mesos master (mesosphere.marathon.SchedulerActions:482)
    master_1 | I0626 12:54:01.530545 9 master.cpp:3556] Performing implicit task state reconciliation for framework 20150626-124831-687870380-5050-1-0000 (marathon) at scheduler-433f22d8-2cf1-46bc-ab3b-ab37ed14cb7b@172.17.0.53:49023
    master_1 | I0626 12:54:28.780117 11 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:54:38.953477 10 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:54:48.961133 10 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:54:58.968745 13 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:55:09.220623 9 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:55:19.237870 11 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:55:29.243477 13 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:55:39.537282 13 http.cpp:478] HTTP request for ‘/master/state.json’
    master_1 | I0626 12:55:49.544208 6 http.cpp:478] HTTP request for ‘/master/state.json’

    What is going wrong?

    • Interesting: based on the "Attaching to …" statement, no slave container could be found at all.
      You may try using another image, as I did, and tweak the slave by mounting some more libs; see the comments below.
      Good luck!

  3. Excellent tutorial, but I need some explanation of how to introduce a test/acceptance/production workflow with these tools.

  4. Great tutorial! One question from my research into accomplishing something like this: how do you deal with micro-services needing to talk to other micro-services across different slaves? For example, if ms-A on slave 1 needs to talk to ms-B, which happens to be on slave 2? I'd love any suggestions, as this is a reality for me and I am trying to decide if Mesos is right for me.

  5. Great tutorial!

    Some changes I had to make while running on Ubuntu 14.04 64-bit + Docker (1.11.0, build 4dc5990) + Docker Compose (1.6.2, build 4d72027):
    1. Mesos: use the 0.28.1-2.0.20.ubuntu1404 images instead:
    - mesosphere/mesos-master:0.28.1-2.0.20.ubuntu1404
    - mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
    Otherwise the slave would not start up properly, with errors like "Could not create DockerContainerizer: Failed to execute 'docker version': exited with status 127" or "Insufficient version of Docker! Please upgrade to >= 1.0.0".

    2. Again, while running docker-in-docker in the Mesos slave, we need to mount some libs from the host into the container:
    volumes:
    - /var/run/docker.sock:/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /sys:/sys:ro
    - mesosslave-stuff:/var/log
    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
    - /usr/lib/x86_64-linux-gnu/libapparmor.so.1:/usr/lib/x86_64-linux-gnu/libapparmor.so.1
    - /lib/x86_64-linux-gnu/libcgmanager.so.0:/lib/x86_64-linux-gnu/libcgmanager.so.0
    - /lib/x86_64-linux-gnu/libnih.so.1:/lib/x86_64-linux-gnu/libnih.so.1
    - /lib/x86_64-linux-gnu/libnih-dbus.so.1:/lib/x86_64-linux-gnu/libnih-dbus.so.1
    - /lib/x86_64-linux-gnu/libgcrypt.so.11:/lib/x86_64-linux-gnu/libgcrypt.so.11
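
    Pulled together, the slave service in docker-compose.yml would look roughly like this. This is only a sketch: the image tag and volume mounts come from the two fixes above, while the links and environment entries are assumed to match the original tutorial's setup:

```yaml
slave:
  image: mesosphere/mesos-slave:0.28.1-2.0.20.ubuntu1404
  links:
    - zookeeper
    - master
  environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_CONTAINERIZERS=docker,mesos
  volumes:
    - /var/run/docker.sock:/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /sys:/sys:ro
    # ...plus the remaining host library mounts listed above
    - /lib/x86_64-linux-gnu/libsystemd-journal.so.0:/lib/x86_64-linux-gnu/libsystemd-journal.so.0
```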

    And now the last issue that I'm still working on: while building in Jenkins, everything goes fine except a 404 error at the last step (attached below). Jenkins even claims it's a successful build, but it's definitely a failure; I will fix it later.
    ———————
    Console Output
    Started by user anonymous
    Building in workspace /var/jenkins_home/jobs/nodejs_app/workspace
    [workspace] $ /bin/bash /tmp/hudson6129765154639361511.sh
    Sending build context to Docker daemon 5.12 kB

    Step 1 : FROM google/nodejs

    Step 8 : ENTRYPOINT /nodejs/bin/npm start
    ---> Using cache
    ---> ccf16d7522ef
    Successfully built ccf16d7522ef
    [workspace] $ /bin/bash /tmp/hudson4993813342014631578.sh
    The push refers to a repository [192.168.56.118:5000/cddemo/nodejs_app]

    Pushing tag for rev [ccf16d7522ef] on {http://192.168.56.118:5000/v1/repositories/cddemo/nodejs_app/tags/latest}
    [workspace] $ /bin/bash /tmp/hudson8144804204041478425.sh
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed

    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    100 1665 100 1372 100 293 95423 20378 --:--:-- --:--:-- --:--:-- 98000

    Error 404 Not Found

    HTTP ERROR 404
    Problem accessing /v2/apps. Reason:

    Powered by Jetty://

    Finished: SUCCESS

    • Finally, it works.

      Two more tips for those who are trying to walk through this tutorial:
      1. In most cases, I couldn't use localhost and had to use the real IP in the docker-compose.yml (yes, we can use docker-compose instead of Fig, with a newly created docker-compose.yml file containing the same content copied from the tutorial's fig.yml);
      2. If Marathon keeps pending in the deployment stage because of "waiting for resource offer", we need to review the resources automatically allocated by Mesos. In my case, I had to reduce the memory requirement (from 512m in this tutorial to 200m), as the memory allocated to Mesos in my environment is just 496m.

      Again, good job @Pini. Thanks!
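
      Following up on tip 2: the memory requirement lives in the Marathon app definition that the deploy step posts to Marathon. A minimal sketch with the reduced value (the app id, image name, and container port are illustrative, not from the tutorial; "hostPort": 0 asks Marathon for a dynamic host port):

```json
{
  "id": "nodejs-app",
  "cpus": 0.5,
  "mem": 200,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "localhost:5000/cddemo/nodejs_app",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8000, "hostPort": 0 }]
    }
  }
}
```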

  6. First, I would like to thank Pini for the detailed posts.
    I followed the posts and completed the setup with a few corrections due to changes in Docker image availability.

    In Part 1:
    1. I used the lkwg82/jenkins_with_docker image for the Jenkins+Docker image (can't find containersol/jenkins_with_docker)
    2. With the above image there is no need for the "RUN groupadd -g 125 docker && usermod -a -G docker jenkins" trick
    3. I had to copy build.sh, push.sh and deploy.sh to /var/lib/docker/volumes/jenkins-stuff/_data/workspace/my_webapp/

    In Part 2:
    1. The Hello World app is not on port 31000; it's running on a dynamic port in the 31xxx range, which you can view under the running instance's info in the Marathon UI.
    Example (in this case the port is 31804):
    app.d11c3f71-c12c-11e6-ab20-0242ac110006
    f965a9580cb2:31804
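
    The dynamic port can also be read from Marathon's REST API (GET /v2/apps/<app-id>/tasks). A sketch of extracting it from the JSON response; the response body here is a hard-coded illustrative sample rather than a live call:

```shell
# Illustrative sample of a Marathon GET /v2/apps/<app-id>/tasks response
response='{"tasks":[{"host":"f965a9580cb2","ports":[31804]}]}'

# Pull out the first dynamically assigned host port (31xxx range)
port=$(printf '%s' "$response" | sed -n 's/.*"ports":\[\([0-9]*\)\].*/\1/p')
echo "The app is reachable on port $port"
```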

  7. One more item:
    I had to add "MESOS_LAUNCHER=posix" to the configuration to make it work.

    environment:
    - MESOS_MASTER=zk://zookeeper:2181/mesos
    - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
    - MESOS_CONTAINERIZERS=docker,mesos
    - MESOS_ISOLATOR=cgroups/cpu,cgroups/mem
    - MESOS_LOG_DIR=/var/log
    - MESOS_LAUNCHER=posix
