Trying out Kubernetes


by Remmelt Pit

In this post I’ll walk through the steps to get a simple Kubernetes setup running. A single-node Kubernetes setup is convenient for kicking the tires, testing, and local development.

You will need to have some background knowledge about the concepts behind Kubernetes, so you’ll know what I’m referring to when I write about nodes, pods, replication controllers and services.

I am running OS X, so these instructions are geared towards that operating system. With some minor adjustments they’ll work on Linux and Windows, too.

You will need to have Docker, Docker Compose, Docker Machine and kubectl installed. These packages are all available in Homebrew. The “kubernetes-cli” package contains kubectl.
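On OS X that boils down to a single Homebrew invocation (the package names below are the ones in Homebrew at the time of writing; check with brew search if any have moved):

```shell
# Install the prerequisites via Homebrew.
# The "kubernetes-cli" formula provides the kubectl binary.
brew install docker docker-compose docker-machine kubernetes-cli
```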

Setting up Kubernetes

A single node Kubernetes “cluster” is remarkably easy to set up using hyperkube, which is a Docker image with the Kubernetes binary inside.

See this repository for the Kubernetes demo code that I’m describing here.
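For reference, a hyperkube-based Compose file looks roughly like the sketch below. This is modelled on the official “Running Kubernetes locally via Docker” guide; the image tags and flags are assumptions, and the repository linked above is authoritative:

```yaml
# Sketch of a single-node hyperkube setup (Compose v1 format).
etcd:
  image: gcr.io/google_containers/etcd:2.0.12
  net: host
  command: /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
master:
  image: gcr.io/google_containers/hyperkube:v1.0.1
  net: host
  privileged: true
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: /hyperkube kubelet --api-servers=http://localhost:8080 --address=0.0.0.0 --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests
proxy:
  image: gcr.io/google_containers/hyperkube:v1.0.1
  net: host
  privileged: true
  command: /hyperkube proxy --master=http://127.0.0.1:8080
```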

The Docker Compose file holds all required containers and stitches them together. Issue the docker-compose up command and presto: your very own Kubernetes node!
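Assuming the docker-compose.yml from the repository above sits in the current directory, that is:

```shell
# Start etcd, the master (hyperkube kubelet) and the proxy in the background.
docker-compose up -d
```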

Check if everything is running using the docker-compose commands.

The ps command should show a number of running containers.
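For example (container names will vary with your project directory):

```shell
# List the containers started by Compose; all should be in state "Up".
docker-compose ps
```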

Now we’re ready to use the kubectl command. We can pass it a server and port using the --server flag, or we can create an SSH tunnel to the Docker machine.
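With Docker Machine, the tunnel can be set up over plain ssh using the key Docker Machine generated (the machine name default is an assumption; substitute yours):

```shell
# Forward local port 8080 to the Kubernetes API server inside the Docker machine.
# -f: background, -N: no remote command, just the tunnel.
ssh -i ~/.docker/machine/machines/default/id_rsa \
    -f -N -L 8080:localhost:8080 \
    docker@$(docker-machine ip default)
```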

After that, kubectl can connect to localhost:8080, which is the default for earlier versions of Kubernetes. For current versions, you need to provide the --server flag, which is most conveniently done by defining a context:
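Something along these lines (the names local are arbitrary):

```shell
# Define a cluster entry, a context pointing at it, and make it the default,
# so kubectl no longer needs --server on every invocation.
kubectl config set-cluster local --server=http://localhost:8080
kubectl config set-context local --cluster=local
kubectl config use-context local
```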

Now let’s run a demo application on there. Create a replication controller and start the first pod:
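On Kubernetes 1.0, kubectl run creates a replication controller. The image below is a placeholder; substitute the demo image from the repository:

```shell
# Creates a replication controller named kube-demo running one pod,
# labelled run=kube-demo. nginx stands in for the demo image here.
kubectl run kube-demo --image=nginx --port=80
```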

Verify the replication controller was created:
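```shell
# rc is the short name for replicationcontroller.
kubectl get rc
```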

Check the pod was started:
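```shell
# The pod name will be kube-demo- followed by a generated suffix.
kubectl get pods
```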

Create the service:
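A service can be created straight from the replication controller with kubectl expose; the NodePort type makes it reachable from outside the cluster (port 80 is an assumption matching the placeholder image):

```shell
# Expose the replication controller's pods as a service on a node port.
kubectl expose rc kube-demo --port=80 --type=NodePort
```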

The services can be displayed by running:
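```shell
kubectl get services
```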

Find the port Kubernetes assigned to the service:
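The assigned port shows up in the service description:

```shell
# The NodePort line holds the port Kubernetes picked for the service.
kubectl describe service kube-demo | grep NodePort
```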

Now you can use curl to make requests to the pod:
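Assuming the node port found above is 31000 (yours will differ) and the Docker machine is called default:

```shell
# Request the service through the Docker machine's IP and the node port.
curl http://$(docker-machine ip default):31000
```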

The returned value will be the hostname of the container as assigned by Kubernetes.

Now let’s make things a little more exciting by scaling the service up. We’ll scale up to three pods:
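```shell
# Tell the replication controller to maintain three replicas.
kubectl scale rc kube-demo --replicas=3
```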

After a while (check by running kubectl get pods) the curl commands will start returning the other pods’ hostnames as well.

The replication controller will keep the correct number of replicas running. See what happens when you run this command:
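This picks an arbitrary pod belonging to the controller and deletes it:

```shell
# Delete one of the pods managed by the replication controller.
kubectl delete pod $(kubectl get -o yaml po -l run=kube-demo \
  | egrep -o "kube-demo-[a-z0-9]{5}" | head -1)
```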

Check the list of running pods (kubectl get pods) and see that there are still three available. One is probably still starting up because it was just started by the replication controller.

As you can see, it’s easy to start playing around with Kubernetes on your local machine.


  1. The last command

    kubectl delete pod $(kubectl get -o yaml po -l run=kube-demo | egrep -o "kube-demo-[a-z0-9]{5}" | head -1)

    relies on formatting of output. It is quite unstable. I would rather use Go templates. kubectl command supports them with the --template option.

  2. Thanks for posting, great intro! With Kubernetes just releasing the 1.0 version, things got serious 🙂 Did you know Red Hat is the second contributor to the project? Next month I’ll talk about Kubernetes and its integration in OpenShift at the Docker Randstad November Meetup (you probably know that already 🙂) see you soon!

    • Doesn’t quite work for me – whatever I request from “kubectl get” I get a response that it’s not there:
      for “rc”: the server doesn’t have a resource type “replicationcontrollers”
      for “pods”: the server doesn’t have a resource type “pods”
      for “services”: the server doesn’t have a resource type “services”

      Using API I can see that replication controller is created in default namespace, but there are no pods.
