Kubernetes The Hard Way Explained - Chapter 1

Kelsey Hightower’s tutorial is the go-to place for aspiring Kubernetes administrators who want to learn the ins and outs of the platform. With the CNCF’s official Certified Kubernetes Administrator programme out today, we can only expect this great resource to gain even more attention.

I used Kubernetes The Hard Way (KTHW) a lot when I was learning Kubernetes and preparing for the CNCF exam. This blog post is about the things that didn’t fit into the tutorial: explanations of how Kubernetes works that give context to the individual steps. Read each section before you start working on that part of the tutorial (or afterwards, if you find you have waded into water that is too deep). Chapter titles are numbered to match the chapters in the tutorial; some are skipped because I have nothing to add.

Chapter 1 - Prerequisites for Google Cloud Platform (GCP)

KTHW has you set things up on the Google Cloud Platform (not to be confused with Google Container Engine, which is Google’s fully managed Kubernetes service). It is reasonable to limit the tutorial to a single platform, but I would like to make sure newcomers know that Kubernetes is not in any way tied to Google’s cloud. You can run Kubernetes anywhere, from bare metal to any cloud provider. The difference lies in the ease of setup and maintenance, and in the integration features provided.

Setting up Kubernetes on the Google Cloud Platform (GCP) means we have to create the underlying infrastructure. This includes creating Compute Instances (VMs) to run the Kubernetes master and worker processes, firewall rules to enable traffic between our Nodes, and routing rules to make sure packets addressed to a Pod running on another Node reach it over the GCP network. Finally, we have to create a GCP load balancer so that the API servers can be reached from the Internet. The process would be quite similar on other cloud providers, with some differences in how the components are named and connected.
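
To give a feel for what this involves, here is a rough sketch of the kind of gcloud commands you will be running. The resource names, IP ranges and machine type below are illustrative placeholders, not the exact values the tutorial uses:

  # Create a VPC network and a subnet for the cluster nodes
  gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
  gcloud compute networks subnets create kubernetes \
    --network kubernetes-the-hard-way \
    --range 10.240.0.0/24

  # Allow internal traffic between Nodes and Pods, and external access to SSH and the API server port
  gcloud compute firewall-rules create allow-internal \
    --network kubernetes-the-hard-way \
    --allow tcp,udp,icmp \
    --source-ranges 10.240.0.0/24,10.200.0.0/16
  gcloud compute firewall-rules create allow-external \
    --network kubernetes-the-hard-way \
    --allow tcp:22,tcp:6443,icmp \
    --source-ranges 0.0.0.0/0

  # Create a worker instance; --can-ip-forward lets it forward Pod traffic
  gcloud compute instances create worker-0 \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.20 \
    --subnet kubernetes \
    --can-ip-forward

  # Route one worker's Pod CIDR to that worker's internal IP
  gcloud compute routes create route-to-worker-0 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.20 \
    --destination-range 10.200.0.0/24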

The setup is not going to be simple - this is the price we pay for running a distributed system whose components can’t find and trust each other in trivial ways (as they can when they run inside the same process, or at least on the same OS). Fortunately, once Kubernetes is up and running it will take care of these complexities for your applications.

The diagram below shows the components you will create in GCP (firewall rules and routing rules are not displayed). Note that the combination of an address, a forwarding rule and a target pool with a health check is what makes up a load balancer on GCP.
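
For reference, here is a sketch of how those load balancer pieces fit together on the command line. The resource names, region and controller instance names are illustrative, and the default zone is assumed to be configured:

  # A static public IP address for reaching the API servers
  gcloud compute addresses create kubernetes-the-hard-way --region us-west1

  # A health check and a target pool containing the controller instances
  gcloud compute http-health-checks create kubernetes --request-path /healthz
  gcloud compute target-pools create kubernetes-target-pool --http-health-check kubernetes
  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  # The forwarding rule ties the address, the port and the target pool together
  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address kubernetes-the-hard-way \
    --ports 6443 \
    --region us-west1 \
    --target-pool kubernetes-target-pool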

You will be performing tasks on GCP using the gcloud utility, which connects to the GCP API from your machine. You can control GCP either with this utility or through the web-based console at console.cloud.google.com. As the tutorial is based on gcloud, I advise you to stick to it, but do visit the web console to see a nice overview of the objects you have created. Note that once Kubernetes is set up, you will interact with the cluster using the cloud-agnostic kubectl utility.
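
A few commands you are likely to use along the way; the project ID and zone below are placeholders for your own values:

  # Point gcloud at your project and pick a default zone
  gcloud config set project my-kubernetes-project
  gcloud config set compute/zone us-west1-c

  # Inspect what you have created so far
  gcloud compute instances list
  gcloud compute firewall-rules list

  # Once the cluster is up, the cloud-agnostic kubectl takes over
  kubectl get nodes
  kubectl get componentstatuses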

Overview of GCP components created during the tutorial

Kubernetes in Production

Even if you complete the tutorial and understand the ins and outs of the various components and networking configurations, keep in mind that a production cluster will:

  • Have to be secure - TLS communication between nodes, user authorisation, network segmentation.
  • Have to be highly available (if it goes down it takes all your applications with it).
  • Need upgrades, which will require changes to many components over all your Nodes.
  • Have to be tuned in case of performance issues.
  • Have to be fixed if there are issues.

Don't underestimate these tasks - setting up a cluster and maintaining it in a production setting are not the same level of difficulty. If you don’t have the in-house expertise or the hours to put in, you should consider going for a managed solution where a third party takes care of cluster management for you. Google provides Google Container Engine (GKE), a mature solution that has been in production for years now. Microsoft is hard at work on a similar product, Azure Container Service (ACS), on its Azure cloud. In its current form it can install and upgrade Kubernetes clusters but doesn’t provide managed master functionality. Besides the big cloud providers, there are a number of companies providing fully managed Kubernetes solutions on any cloud or even on premise. GiantSwarm, CoreOS Tectonic, Kubermatic and Apprenda are the most important players in this space. Mesosphere also recently started offering Kubernetes on top of DC/OS.

Up next

I hope this was useful. Next time I will dive into chapter 2 to explain the client tools that will be used throughout the tutorial.

