
From Pet to Cattle – Running Sonar on Kubernetes

Recently we ran into a bit of trouble with our pet CI server. It was being used as a Jenkins server and a Postgres database, and Sonar was also running on it - but that wasn't obvious, because Sonar was just a manually run Docker Compose setup on the machine. Sure enough, someone turned off the server not knowing that Sonar was also running there. As Sonar is part of our minimesos build, our builds stopped working.

I thought about the best way to move Sonar and Postgres off the Jenkins server. They were already "containerized" - started as a Docker Compose setup - so I thought: let's keep that, but run the containers on a proper cluster which will keep them alive and take the management of the underlying VMs out of our hands. For this I looked to Google Container Engine, which is a managed Kubernetes setup. I can use the gcloud client for Google Cloud to create a cluster and do a standard Kubernetes deployment to run the application there.

Stuff I want to create

  1. One Sonar container. I used sonarqube:5.3.
  2. One Postgres container. I used postgres:9.5.3.
  3. Kubernetes deployment files for both services.
  4. Kubernetes service files for both services.
  5. Persistent storage for the database.
  6. A Kubernetes cluster. The easiest way to get one is to use Google Container Engine.
  7. Secret for storing the database password.
  8. Certificate for the DNS name sonar.infra.container-solutions.com.
  9. Loadbalancer and DNS in GCE.

A bit about pets and cattle

I refer to the analogy often used in #DevOps circles to distinguish servers that are created by SSH-ing to the machine from those that are created by a fully automated process. The former is called a pet - it has a name (in our case jenkins-ci-4) and there is no reproducible way to create it. Killing it is considered extreme cruelty. As opposed to pets, cattle can be killed with impunity - because as we all know cows just get resurrected if you shoot them in the head. Let's say these analogies are not perfect, but I hope you get what I mean. What I wanted to achieve with the new Sonar deployment was that the whole infrastructure would be automated, so we no longer have to worry about accidental shutdowns or unclear ways of configuring stuff.

How it's done

Services

I started out by converting the original docker-compose.yml file to two Kubernetes deployments. A deployment in Kubernetes describes how a Pod (in this case one Pod = one Docker container) should be created, including its:

  • Image name
  • Exposed ports
  • Environment variables
  • Volumes

Here is what I ended up with for the Sonar container:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonar
spec:
  replicas: 1
  template:
    metadata:
      name: sonar
      labels:
        name: sonar
    spec:
      containers:
        - image: sonarqube:5.3
          args:
            - -Dsonar.web.context=/sonar
          name: sonar
          env:
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://sonar-postgres:5432/sonar
          ports:
            - containerPort: 9000
              name: sonar

This defines that we want a single replica of the sonarqube:5.3 image running and connecting to the Postgres database. I didn't feel like running several instances, because Kubernetes will always restart this one if it fails, and that's enough for such a rarely used internal service. What's really nice here is that I can reference the Postgres server by the internal DNS name sonar-postgres. Another nice thing is that I can reference a secret defined in Kubernetes for the database password, so I get secret management out of the box.
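
As a quick sanity check once both pods are running (which happens further down in this post), you can resolve that internal name from inside the Sonar pod. This is just a sketch: the pod name is a placeholder for whatever kubectl get pods reports, and I'm assuming the image ships the standard getent tool.

# resolve the internal service name through the cluster DNS, from inside the Sonar pod
kubectl exec <sonar-pod-name> -- getent hosts sonar-postgres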

Additionally I need to define a service for Sonar. Adding type: LoadBalancer to the service definition tells Kubernetes to use the underlying platform (GCE in this case) to create a load balancer to expose our service over the internet.


apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar
  name: sonar
spec:
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    name: sonar
  type: LoadBalancer

The Postgres deployment looks like this:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  template:
    metadata:
      name: sonar-postgres
      labels:
        name: sonar-postgres
    spec:
      containers:
        - image: postgres:9.5.3
          name: sonar-postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: POSTGRES_USER
              value: sonar
          ports:
            - containerPort: 5432
              name: postgresport
          volumeMounts:
            # This name must match the volumes.name below.
            - name: data-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data-disk
          gcePersistentDisk:
            # This disk must already exist.
            pdName: minimesos-sonar-postgres-disk
            fsType: ext4

You can see that I'm referencing the same password as in the Sonar deployment. Additionally I'm attaching a GCE persistent disk to keep our DB data safe. I'll get back to creating this disk a bit later.

The service definition for Postgres doesn't contain a LoadBalancer entry as it doesn't need to be publicly accessible. It just makes port 5432 accessible inside the cluster under the DNS name sonar-postgres.


apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    name: sonar-postgres

Persistent storage

Getting persistent storage is a point where we have to step out of Kubernetes-world and use GCE directly. Kubernetes won't create the persistent disk for us, but it does play nice with ones created in GCE (see the gcePersistentDisk entry above). Creating the volume in GCE is very simple:


gcloud compute disks create --size 200GB minimesos-sonar-postgres-disk

I also had to migrate the existing data. For that I attached the new volume to the old pet machine:

gcloud compute instances attach-disk jenkins-ci-4 --disk minimesos-sonar-postgres-disk --device-name postgresdisk

formatted and mounted it:

/usr/share/google/safe_format_and_mount /dev/disk/by-id/google-postgresdisk /postgresdisk

...copied the files...

then detached the volume:

gcloud compute instances detach-disk jenkins-ci-4 --disk minimesos-sonar-postgres-disk
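
Before handing the disk over to the cluster, it's worth checking that it's no longer attached to the old machine. Describing the disk shows its status (assuming it lives in the same zone as the cluster); the users field should be empty once it's detached:

# inspect the disk; an empty "users" list means nothing is attached to it
gcloud compute disks describe minimesos-sonar-postgres-disk --zone europe-west1-d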

Creating the Cluster

Creating a new Kubernetes cluster is of course super-easy - that's the point of Google Container Engine. You still use gcloud, but the second parameter is container instead of compute to dive into container-land.

gcloud container clusters create minimesos-sonar --machine-type n1-standard-2 --zone europe-west1-d

You can of course specify a lot more parameters here.
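
As an illustration only (not what we actually ran), the node count can be set explicitly alongside the machine type and zone:

# a hypothetical three-node variant of the same cluster
gcloud container clusters create minimesos-sonar --machine-type n1-standard-2 --zone europe-west1-d --num-nodes 3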

Secrets

Secrets are a really cool Kubernetes feature that takes care of an annoying problem for us - distributing the database password to two separate containers. There are of course more sophisticated solutions out there, like Hashicorp's Vault, but for this simple setup Kubernetes' secret support is great. We first create a secret with this command:

kubectl create secret generic postgres-pwd --from-file=./password

The ./password is a file on disk that contains nothing other than the password itself. I then use the password by injecting it as an environment variable into the containers. You can see that in the deployment definition files above.
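
To double-check that the secret made it into the cluster without printing its value, you can describe it; this lists the key (password) and its size in bytes, but not the contents:

kubectl describe secret postgres-pwd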

This is the first time we used the kubectl command. kubectl is the client for controlling your Kubernetes cluster. gcloud is the client for the Google Cloud service, while kubectl controls a single Kubernetes cluster by communicating with that cluster's API server. kubectl is independent of Google Cloud and can control any Kubernetes cluster, including ones that weren't created using gcloud. It is important to point your kubectl at the newly created cluster by running gcloud container clusters get-credentials minimesos-sonar. This will create an entry in your ~/.kube/config file and possibly set the new cluster as the current context. You can verify this using kubectl cluster-info. If you don't see the IPs of your cluster listed, switch to the new context with kubectl config use-context.
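
A minimal sketch of that sequence, assuming the cluster name and zone from the create command above:

# fetch credentials for the new cluster and add them to ~/.kube/config
gcloud container clusters get-credentials minimesos-sonar --zone europe-west1-d
# show which context kubectl is currently pointed at
kubectl config current-context
# confirm the API server and service addresses of the new cluster
kubectl cluster-info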

Fire engines.. blast off!

We can now launch both services by passing the 4 yaml files to kubectl apply. This command will compare the current state of the cluster with the desired state described in the config files and make any necessary changes. In this case it means Kubernetes needs to start everything up.

kubectl apply -f sonar-postgres-deployment.yaml -f sonar-deployment.yaml -f sonar-postgres-service.yaml -f sonar-service.yaml

Run kubectl get deployments,services,pods to view the state of all our freshly started Kubernetes things:

kubectl get deployments,services,pods
NAME                             DESIRED          CURRENT          UP-TO-DATE   AVAILABLE   AGE
sonar                            1                1                1            1           1m
sonar-postgres                   1                1                1            1           1m
NAME                             CLUSTER-IP       EXTERNAL-IP      PORT(S)      AGE
kubernetes                       10.139.240.1     <none>           443/TCP      9d
sonar                            10.139.246.151   104.155.45.237   80/TCP       1m
sonar-postgres                   10.139.241.166   <none>           5432/TCP     1m
NAME                             READY            STATUS           RESTARTS     AGE
sonar-117517980-fgkqx            1/1              Running          0            1m
sonar-postgres-176201253-2hu79   1/1              Running          0            1m

This shows that all the deployments, services and pods are working. A really cool feature of kubectl is that it can fetch data about multiple resource types in one command. We can see that the sonar service has an external IP - this is because we defined a load balancer in the service definition. I can now navigate to http://104.155.45.237/sonar to see the Sonar web UI.
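
If the EXTERNAL-IP column still shows <pending>, GCE is still provisioning the load balancer; the -w flag keeps watching the service until the address shows up:

kubectl get service sonar -w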

Let's do a simple exercise to see how cattle-ish our new deployment really is:

kubectl delete -f sonar-postgres-deployment.yaml -f sonar-deployment.yaml -f sonar-postgres-service.yaml -f sonar-service.yaml

Just swapping kubectl apply for kubectl delete tears down both services as if they never existed, and of course we can start them again without any hassle. It's also easy to make changes to the configuration and run another kubectl apply to apply them.

To be continued

This post is running quite long and there is still work left to do. We need to expose the service over DNS - a temporary IP that changes with every restart of the Pod won't do. We also want HTTPS access to the service, for which we'll need to get a certificate and set up automatic renewal for it. We'll also need a proxy in front of the Sonar service to terminate the HTTPS connection. It will be quite some work to achieve this, so I'll leave it for a new post.
