Wheel of fortune
This is the story of how we came to bring a wheel of fortune to DockerCon EU 2015!
At Container Solutions we are working on Cisco’s Mantl: we’re creating the Mesos framework for Elasticsearch, Logstash, and Kibana, building Terraform providers for Cisco’s UCS, and contributing to various marketing efforts.
Anyone creating a modern microservices platform will need to run it on a stack. There are many supporting services for such a platform: Consul and etcd for service discovery and key/value storage, Terraform and Ansible for provisioning, Mesos and Marathon for resource management and scheduling, and many others. These tools are valuable by themselves, but no single one of them solves every need. The burden of finding out which tools provide which services, figuring out how they work together, and maintaining any connecting code falls on the development team of each microservices platform. This leads to every team inventing its own stack.
Mantl seeks to change that. With Mantl we’ve created an infrastructure that consists of proven tech, connected together with open source "glue code." We use this term for connecting applications, for example a Marathon-to-Consul bridge.
We’ve chosen Mesos as the fundamental layer because of its flexibility, its proven battle readiness, and the ability to run other schedulers like Kubernetes on top of it. The flexibility Mesos offers through custom frameworks is often necessary when scheduling stateful services like Elasticsearch.
What is the developer experience?
We’re moving the project forward, but what is the experience like for the framework’s users? The developers or DevOps teams that design for Mantl should be offered every comfort. The best way to experience what it’s like to develop an application for Mantl is by doing just that. Thus the idea of creating a demo application was born.
We wanted to have a nice display object to take to DockerCon in Barcelona and have some hands-on experience with the Internet of Things. Mantl would be a great backend for an IoT service.
Our first idea was to attach a speedometer to a bicycle or a home trainer, so we pitched that to the rest of the company. Jamie misunderstood and thought we were talking about a wheel of fortune, which led to some hilarity, and also to Marktplaats, our local version of Craigslist. There we found a man called "Spider" who supplied us with a sturdy wheel for a very reasonable price.
In our enthusiasm, we ordered a set of devices for counting the ticks on the wheel. We already had a Raspberry Pi and an Arduino, and got a Particle Photon. The Photon is a particularly cool device: it is Wi-Fi enabled and is managed through the cloud. Unfortunately, that also proved its downfall, because we expected the Wi-Fi at the conference to be suboptimal (it was worse than that…)
It turns out that measuring the movement of the "brake" of a wheel of fortune is challenging. The brake consists of a strip of sturdy material that makes a noise when you spin the wheel, and if you spin fast, there’s a lot of movement on the brake. We tried several sensors: a vibration sensor, but there was way too much vibration and no reading that made sense; a knock sensor, which is basically a microphone that we hoped would send spikes of audio output to the Arduino, but there were no spikes, just a constant high signal; and an LED with a light dependent resistor (LDR) fixed to the brake, which was promising electrically but finicky mechanically. In the end we went with a solution that counts the ticks in batches rather than individually: a Hall effect sensor and 4 magnets placed on the back of the wheel. This resulted in a nice and steady signal, even at high speeds. It’s exactly what a bike speedometer does.
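A bike speedometer turns magnet pulses into speed with simple arithmetic. Here is a minimal Python sketch of that calculation for a 4-magnet setup; the function and the timing window are illustrative, not code from the project:

```python
# Convert Hall effect pulses into wheel speed, the way a bike
# speedometer does. 4 magnets -> 4 pulses per full revolution.
MAGNETS = 4

def rpm(pulses, seconds):
    """Pulses counted in a time window -> revolutions per minute."""
    revolutions = pulses / MAGNETS
    return revolutions / seconds * 60

# e.g. 20 pulses in 5 seconds is 5 revolutions, i.e. 60 RPM
print(rpm(20, 5))  # → 60.0
```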
The Arduino sketch we used is in the Wheel of Fortune GitHub repo. The sectors branch has a much more concise version than the one we used at DockerCon. There’s not much going on in the code besides reading the value from the sensor and sending it over the serial port. The receiving end is a simple Python script that reads from the Arduino’s serial-to-USB interface and, for every tick, creates 5 threads that each send one request to a fixed backend address. With 4 magnets on the wheel, every full revolution sends 20 requests.
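The actual script lives in the repo; the following is only a rough sketch of the receiving side. The device path, baud rate and backend URL are illustrative assumptions, and pyserial is the assumed serial library:

```python
# Sketch of the receiving end: read ticks from the Arduino's serial-to-USB
# interface and fan each tick out into 5 backend requests.
import threading
import urllib.request

BACKEND = "http://localhost:8080/tick"  # hypothetical backend address
THREADS_PER_TICK = 5                    # 4 ticks/revolution x 5 = 20 requests

def fan_out(n, target):
    """Start n threads, each running target() once."""
    threads = [threading.Thread(target=target) for _ in range(n)]
    for t in threads:
        t.start()
    return threads

def send_tick():
    urllib.request.urlopen(BACKEND).read()

def read_loop():
    import serial                        # pyserial, assumed installed
    port = serial.Serial("/dev/ttyUSB0", 9600)
    while True:
        port.readline()                  # one line per magnet passing
        fan_out(THREADS_PER_TICK, send_tick)

if __name__ == "__main__":
    read_loop()
```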
We want the sensor data to go to a microservice backend. The backend consists of several parts.
Resource management and scheduler
The backend needs to run on a Mantl cluster. Unfortunately, we didn’t have a cluster available at the conference (spotty Wi-Fi strikes again), so we opted for the second-best option: a local minimesos install. Minimesos creates a fully functional Mesos, Marathon and ZooKeeper install, running in Docker containers. We did have to create a separate branch because of the Træfik integration (keep reading!) and a newer version of Marathon that spins up containers much faster.
The first part any request hits is Træfik. We’ve chosen this reverse proxy because of its Docker backend. In an earlier version of this application we used Consul as our service discovery service. This led to problems because Consul can health check at most once per second, which is not fast enough for the particular trick we wanted to pull off. Don’t get me wrong, Consul is great and I would certainly use it in production. But our backing services are potentially very short lived and there aren’t many of them; scaling down in terms of numbers of requests turns out to be a difficult task. We’re using Træfik’s Docker backend, which means it automatically parses Docker’s container config to enable routes. The branched version of minimesos mostly just ignores all the Mesos and Marathon containers. The load balancer runs in a Docker container as well; see the docker-compose file in the infra directory.
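Conceptually, a Docker-backed reverse proxy builds its routing table from container metadata and skips containers without routing labels, which is what the branched minimesos achieves for the Mesos and Marathon containers. A toy Python illustration of that idea; the label names here are invented, not Træfik’s own:

```python
# Toy illustration of a Docker-backed reverse proxy: build a routing
# table from container labels, skipping unlabelled infrastructure
# containers. All names and addresses below are made up.
containers = [
    {"name": "cattlestore-1", "ip": "172.17.0.5",
     "labels": {"proxy.host": "wheel.local", "proxy.port": "8000"}},
    {"name": "mesos-master", "ip": "172.17.0.2", "labels": {}},  # skipped
]

def routes(containers):
    table = {}
    for c in containers:
        labels = c["labels"]
        if "proxy.host" not in labels:
            continue                     # ignore Mesos/Marathon containers
        backend = "http://%s:%s" % (c["ip"], labels["proxy.port"])
        table.setdefault(labels["proxy.host"], []).append(backend)
    return table

print(routes(containers))  # → {'wheel.local': ['http://172.17.0.5:8000']}
```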
Our backing service consists of one or more cattlestore services, connected through Træfik. These are the red squares in the center of the architecture diagram. Each service is a simple Go app running in a container (there’s a pattern here…) Since there’s only one sensor connected to one wheel, it would be unfair not to limit the backing service in some way. This is why each cattlestore has a counter and a maximum number of requests. Each request to /tick increments the counter, and when the maximum is reached the application exits. This mimics dying backend servers. There’s also an /info endpoint that returns the value of the counter without incrementing it. The maximum number of requests is a random value between 10 and 30. Marathon knows how many containers should be running and will automatically schedule a new container after one dies.
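The real cattlestore is a Go app in the repo; here is a minimal Python sketch of the same behaviour. The port, JSON shape and exit mechanics are assumptions:

```python
# Minimal sketch of a cattlestore-style backing service: a counter,
# a random lifetime maximum, and /tick and /info endpoints.
import json
import random
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class Budget:
    """Requests served so far, against a random lifetime maximum."""
    def __init__(self):
        self.max = random.randint(10, 30)   # max requests, between 10 and 30
        self.served = 0

    def tick(self):
        self.served += 1
        return self.served >= self.max      # True -> this instance should die

    def info(self):
        return {"served": self.served, "max": self.max}

budget = Budget()

class Store(BaseHTTPRequestHandler):
    def do_GET(self):
        # /tick increments the counter; /info only reports it
        done = budget.tick() if self.path == "/tick" else False
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(budget.info()).encode())
        if done:
            sys.exit(0)  # mimic a dying backend; Marathon starts a new one

if __name__ == "__main__":
    HTTPServer(("", 8000), Store).serve_forever()
```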
The herder handles the scheduling of cattlestore containers, together with Marathon. It does this by finding all the running cattlestore containers, getting their current status by consuming their /info endpoints, and making a scheduling decision based on that. The current scheduling algorithm is simply based on the load of the system, defined as the number of requests received divided by the maximum number of requests possible, and on a reservoir, which is the total number of requests still possible. If both are too low, more containers will be scheduled through Marathon. The herder also serves the frontend and pushes new data to it using websockets.
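In Python terms, the decision the herder makes each round might look like this sketch. The thresholds and the shape of the /info dicts are illustrative assumptions:

```python
# Sketch of the herder's scaling decision. `statuses` holds the dicts
# returned by each cattlestore's /info endpoint; thresholds are made up.
def needs_more(statuses, load_floor=0.5, reservoir_floor=20):
    served = sum(s["served"] for s in statuses)
    capacity = sum(s["max"] for s in statuses)
    if capacity == 0:
        return True                    # nothing running at all
    load = served / capacity           # requests received / max possible
    reservoir = capacity - served      # total requests still possible
    # Per the rule above: when both run too low, schedule more containers.
    return load < load_floor and reservoir < reservoir_floor

# One small instance with little headroom left -> scale up
print(needs_more([{"served": 2, "max": 12}]))  # → True
```

A real herder would then call Marathon’s API to raise the instance count accordingly.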
The frontend is a simple Angular site designed by our partner company, remember to play. It receives data through a websocket. Each bubble is a cattlestore backing service; the size of the bubble signifies the number of possible requests, and the size and colour of the inner circle show the number of requests served so far.
So, what did we learn? First off, building a scheduling algorithm from scratch is not easy.
Second, scaling down in terms of numbers of requests per backend instance, combined with a small total number of instances and the requirement to react quickly to changing input, is not really supported by current backend systems. Most are built for very many requests per instance, which is why Consul’s once-per-second health-check limit makes sense. It would be interesting to see how this architecture changes when using something like Amazon’s Lambda.
Third, in distributed systems it’s hard to scale services down, especially if they are stateful. We can’t have Marathon kill one of our Elastic master nodes to free up resources, so we need to build a framework and do lots of bookkeeping. Conversely, scaling up is fairly easy: spin up new instances and have them find each other. Discussing the dying instances, we came to the conclusion that we still view containers as virtual machines: they have to stay running for as long as possible. Instead, if a service senses it is superfluous, it could remove itself from the pool; if it thinks its load is too high, it could spawn a sibling. Taken together, these rules could form the start of a decentralised scheduler for cloud native applications. That’s going to be a different blog post, though!
Finally, here’s a summary video for your entertainment.
So you see, through play and experimentation we reach new insights. And we had a blast at DockerCon!