
Is Dynamic Management the Real Ops Disruptor?


In our last post we looked closely at container packaging - what it means and why everyone’s going crazy about it. In this post, we're going to look at another aspect of Cloud Native - dynamic management.

Dynamic Management

Dynamic infrastructure management, sometimes called programmable infrastructure, automates data centre tasks that are currently done by hand by ops folk. This has several benefits:

  • Improved ops team productivity.
  • Systems that can react faster and more consistently to failure or attack and are therefore more resilient.
  • Systems that can have more component parts (i.e. be bigger).
  • Systems that can manage their resources more efficiently and are therefore cheaper to run.

Dynamic management relies on a new kind of tool called a container orchestrator.

What is an Orchestrator?

According to Wikipedia, “Orchestration is the automated arrangement, coordination, and management of computer systems”.

Orchestration tools have been around for a long time for controlling virtual machines (VMs) running on physical servers. VM orchestrators underpin the modern cloud - they allow cloud providers to pack many VMs efficiently onto huge servers and manage them there. Without them, operating the cloud would cost too much.

However, container orchestrators can do even more than VM orchestrators.

Container Orchestrators

New container orchestrators like Kubernetes, DC/OS, Nomad or Swarm remotely control containers running on any machine within a defined set called a cluster. Amongst other things, these orchestrators dynamically manage the cluster to automatically spot and restart failed applications (aka fault tolerance) and ensure the resources of the cluster are being used efficiently (aka bin packing).

The basic idea of any orchestrator (VM or container) is that we puny humans don’t need to control individual machines; we just set high-level directives and let the orchestrator worry about what’s happening on any particular server.
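
For example, with Kubernetes we can declare that we want three replicas of an application, each with a modest CPU and memory request, and leave it to the orchestrator to decide which machines they run on and to restart them if they fail. The following is a minimal sketch using the official Kubernetes Python client; it assumes access to a running cluster, and the “web” name, nginx image and resource figures are purely illustrative.

    from kubernetes import client, config

    # Connect using the local kubeconfig (e.g. ~/.kube/config).
    config.load_kube_config()

    # A high-level directive: "run three copies of this image, each needing
    # roughly a tenth of a CPU and 128Mi of memory". Where they actually run
    # is the orchestrator's problem, not ours.
    container = client.V1Container(
        name="web",
        image="nginx:1.25",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "100m", "memory": "128Mi"},
        ),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

If a node dies or a container crashes, the orchestrator notices the replica count has dropped below three and schedules a replacement somewhere with spare capacity - no human picks a server.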

We mentioned in the previous post that containers are lightweight compared to VMs and highly transient (they may only exist for seconds or minutes). We are already dependent on VM orchestrators to operate virtualised data centres because there are so many VMs. Within a containerised data centre there will be orders of magnitude more containers, so container orchestrators will almost certainly be required to manage them effectively.

Is Dynamic Management Just Orchestration?

Right now, dynamic management is mostly what we can do out of the box with orchestrators (better resource utilisation and automated resilience), although even that entry-level functionality is extremely useful. We know of several companies that have cut some of their hosting bills by 75% by using container orchestrators in production.
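
To see why the savings can be this large, consider a toy version of the bin-packing problem an orchestrator’s scheduler solves. The Python sketch below uses a simple first-fit-decreasing heuristic over made-up CPU requests; real schedulers are far more sophisticated, but the principle of consolidating many small workloads onto fewer machines is the same.

    # Toy bin packing: how many 4-CPU nodes do these container CPU requests
    # need if we pack them, versus one dedicated node per container?
    def nodes_needed(cpu_requests, node_capacity=4.0):
        nodes = []  # remaining free capacity on each node used so far
        for request in sorted(cpu_requests, reverse=True):
            for i, free in enumerate(nodes):
                if free >= request:
                    nodes[i] -= request
                    break
            else:
                # No existing node has room, so add a new one.
                nodes.append(node_capacity - request)
        return len(nodes)

    requests = [1.0, 0.5, 0.5, 1.5, 0.25, 0.25, 2.0, 1.0]
    print(len(requests), "dedicated nodes vs", nodes_needed(requests), "packed nodes")

In this made-up example the eight containers fit onto two nodes instead of eight - a 75% reduction in machines - although, of course, real savings depend entirely on the workloads involved.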

However, the orchestrators also let third parties write tools to control containers under their management. In future, these tools could do even more useful things, like improving security further or cutting hosting costs and energy consumption even more.

Automation

The purpose of dynamic management is to automate data centres. We can do that with container orchestrators because of the three revolutionary features of containers:

  • a standard application packaging format
  • a lightweight application isolation mechanism (which leads to fast instantiation speeds)
  • a standard application control interface.

We have never had these features before in a commonly adopted form (in this case, Docker-compatible containers), but with them data centres can be operated:

  • at greater scale
  • more efficiently (in terms of resources)
  • more productively (in terms of manpower)
  • more securely.

Orchestrators play a key role in delivering the Cloud Native goals of scale and margin, but they can also help automate deployment, which improves the third goal: feature velocity, or speed.
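
The three container features listed above are easy to see in code. Below is a minimal, hedged sketch using the Docker SDK for Python: the image is the standard packaging format, the container is up in roughly a second thanks to lightweight isolation, and the same small control interface works for any containerised application - which is exactly the surface orchestrators automate. The nginx image is just an example.

    import docker

    # Talk to the local Docker daemon.
    client = docker.from_env()

    # Standard packaging format: any Docker-compatible image.
    # Lightweight isolation: the container starts in roughly a second.
    container = client.containers.run("nginx:1.25", detach=True)

    # Standard control interface: the same calls work for any application.
    container.reload()
    print(container.name, container.status)

    container.stop()
    container.remove()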

Sounds Marvellous. Is There a Catch?

As we’ve discussed, dynamic management relies on container features like very fast instantiation - seconds or even sub-second, compared to minutes for VMs.

The problem is that many tools designed for applications running in VMs do not yet respond quickly enough for dynamically managed containers. Many firewalls and load balancers cannot handle applications that appear and disappear in seconds, and the same is true of service discovery, logging and monitoring services. I/O operations can also be a problem for extremely short-lived processes.

These issues are being addressed with tools that are much more container-friendly, but companies may have to swap some old, familiar tools for newer ones to be able to use dynamic management. It may also make more sense in a container world to hold state in managed stateful services, like Databases-as-a-Service, rather than battle the requirements of fast I/O.
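
“Container-friendly” here usually means reacting to the orchestrator’s API rather than relying on a static list of backends. As a hedged illustration, the sketch below uses the official Kubernetes Python client’s watch API to stream endpoint changes as containers come and go; the “default” namespace and the print statement stand in for whatever a real service-discovery tool or load balancer would do with the updated address list.

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Stream endpoint changes as containers appear and disappear, instead of
    # polling or assuming a fixed set of backends.
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_endpoints, namespace="default"):
        endpoints = event["object"]
        addresses = [
            address.ip
            for subset in (endpoints.subsets or [])
            for address in (subset.addresses or [])
        ]
        # A container-friendly load balancer would update its backends here.
        print(event["type"], endpoints.metadata.name, addresses)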

Which Came First, The Container or the Orchestrator?

Companies that start by running containers in production often move on to using orchestrators because they can save so much hosting money. Many early container adopters, like the Financial Times online or Cloud66, initially wrote their own orchestrators, but they are now adopting off-the-shelf products like Kubernetes as those mature.

So is the first step in a Cloud Native strategy always to adopt containers, quickly followed by orchestrators? Not necessarily. Many companies start with microservices instead, as we’ll see in our next post.

Read more about our work in The Cloud Native Attitude.


Art: Banksy's Charlie Brown https://www.flickr.com/photos/lord-jim/5453155654/in/photolist-9iSRc5-9iPJhi-9iSUud-9iPMmx
