As a service provider in the programmable infrastructure space, we at Container Solutions are constantly investigating new tools and technologies to help our clients deploy and maintain large and complex systems.
Most of our work is related to containers, and specifically to Docker, and over time we have experimented with multiple tools to manage containers at scale, most notably Apache Mesos, CoreOS and Kubernetes.
Over time we have gravitated towards Apache Mesos, which is a stepping stone towards the DCOS, and currently most of our clients use it for scheduling containers at scale. However, following our meeting with Google's Eric Brewer, I started to realise the importance of Kubernetes.
Few would dispute that engineers at Google know what they are doing in the field of large-scale distributed computing.
Google's paper on Borg makes it very clear that Kubernetes has an excellent heritage and that Google's engineers have learnt many important lessons over the past 10 years.
But in spite of the trust in Google's engineers, Kubernetes does not make immediate sense to someone who has only recently started using containers.
In my experience, most container users still see them as faster VMs; in other words, each container holds an independent piece of functionality with a distinct lifecycle and dependencies on other containers.
Such an approach is very useful in the earlier stages of adoption and helps enormously to improve development environments. However, it creates significant difficulties when containers are used in production-like environments.
It turns out that separate services usually require more than a single container: they require a group of co-located containers, one for the main functionality and a few more for supporting activities such as logging, monitoring, database services and so on.
Such a group of containers is called a pod in Kubernetes. It is a very elegant abstraction that eliminates the need to schedule these co-located groups manually and to define shared network and storage configuration for each one of them. Pods allow users to focus on the higher-level challenges of orchestrating complex deployments: scaling, service-level management and so on.
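To make the idea concrete, here is a minimal sketch of a pod manifest in the style of the Kubernetes v1 API. The names and images (`example/web-app`, `example/log-shipper`) are hypothetical placeholders for a main container and a logging sidecar:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  labels:
    app: web
    tier: frontend
spec:
  containers:
  - name: app                  # the main functionality
    image: example/web-app:1.0
    ports:
    - containerPort: 8080
  - name: log-shipper          # a supporting, co-located container
    image: example/log-shipper:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs                 # storage shared by both containers
    emptyDir: {}
```

The two containers share the pod's network namespace and can share volumes, so the app and the log shipper can communicate over localhost and through the shared `emptyDir` volume without any extra configuration.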
Such higher-level, system-wide operations are managed by Replication Controllers, with the help of Services, which together ensure that the right number of pods is running and that load is balanced according to need.
The entire system is managed using Labels that allow easy and effective selection of groups of pods that belong to various sub-systems or environments.
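As a rough sketch (again with hypothetical names), a Replication Controller and a Service tied together by labels might look like this in the v1 API:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-frontend
spec:
  replicas: 3                  # the controller keeps three pods running
  selector:
    app: web
    tier: frontend
  template:
    metadata:
      labels:                  # pods created from this template carry these labels
        app: web
        tier: frontend
    spec:
      containers:
      - name: app
        image: example/web-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:                    # selects every pod carrying these labels
    app: web
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080
```

The Replication Controller keeps three replicas of the pod template alive, while the Service load-balances traffic across every pod matching the label selector; additional labels could equally identify sub-systems or environments.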
All this said, Kubernetes does not provide resource-utilisation techniques as sophisticated as those of Apache Mesos. But the openness of these tools allows them to be integrated: for example, a Kubernetes framework running on Mesos lets us enjoy the best of both worlds.
All of this is important, but before you bet the future of your company on a particular technology, you should try to understand the vendor, their reasoning and their future aspirations.
I think Google's goal is easy to read. Competing in the VM-hosting market would be too difficult, as it is already massively dominated by Amazon, and Google has no competitive advantage in this field. In fact, even the VMs on Google Compute Engine run inside containers.
Looking at Google's history, the commodity market was never attractive to it; Google prefers to create its own market, one in which it has a very strong advantage over other vendors. Such a market is container hosting.
Of course, containers can be hosted on VMs too, but hosting them on bare metal provides better performance, and container-specific tooling makes such systems even more efficient.
By open-sourcing Kubernetes, Google is trying to push containers even further. It did not, however, give away all of its assets in this field; it wants to make sure that users will eventually gravitate to its cloud. After reading the Borg paper, it is easy to see the sophistication of Google's internal cloud, including its scheduling, storage and network management and many other features.
Right now only a few public cloud providers allow running containers directly on the hardware. Rackspace is one of them; Joyent, with their Triton platform, is another. SoftLayer has the potential to compete in this space too, as it allows us to provision bare-metal servers fully automatically.
My bet is that Google is going to give Amazon a good fight in the containers space over the next couple of years. It will require significant investment in the technology needed to run containers in multi-tenant environments, and such investments are being made as we speak. And since Google has been using containers for a decade, it has a clear competitive advantage in the future containers market. For us, users and service providers in the containers space, it is clear that Kubernetes, with Google behind it, is the power to watch in the coming year.
Want to know more about monitoring in the cloud native era? Download our whitepaper below.