Utilizing Caches When Building Go Projects On Google Cloud Build

 


Moore’s law has ensured an exponential rise in computing power, so for decades there has been little pressure to optimize our software to use fewer CPU cycles, less memory, and ultimately, less electricity. However, the explosive growth that computing has gone through, and the challenges we face as humanity, have put us in a tough spot. We need to consume less, use our resources more wisely, and consider the impact of our actions on future generations. With this in mind, we can take small, simple steps to ensure our actions have less of a negative impact.

This post is loosely coupled with Faster builds in Docker with Go 1.11. I assume familiarity with the contents of that post before proceeding.

 

Consider using Go for your next (cloud-oriented) project. Why? Go has a great mixture of accessibility, minimalism, and practicality. With Go and Docker, it’s quick and easy to build containerized software. Compilation times are low, and with the new module functionality, dependencies are much more straightforward to manage.

Let’s have a look at how we can manage our dependencies and re-use as much as possible when implementing CI with Go, Google Cloud Build, and Docker, thus saving precious Google Cloud time. We will create a builder image as a base layer containing our project dependencies, with no additional tooling or extra Dockerfiles: just a standard multi-stage Dockerfile definition. We will use docker-compose to bootstrap our dependencies and run tests:
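A minimal docker-compose.yml along these lines might look as follows (the service name, build target, and test command are assumptions for illustration):

```yaml
# docker-compose.yml (illustrative sketch; names are assumptions)
version: "3.4"
services:
  test:
    # The builder stage doubles as our cache image
    image: gcr.io/${PROJECT_ID}/${REPO_NAME}:builder
    build:
      context: .
      target: builder
    command: go test ./...
```

Running the tests locally is then a single `docker-compose run test`.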

 

We define our test service, which has an image tag of the form gcr.io/${PROJECT_ID}/${REPO_NAME}:builder.

 

The $PROJECT_ID and $REPO_NAME variables will be supplied by Cloud Build.

If we run tests locally, we can create a .env file in the project root directory to define those variables for docker-compose. We found that using docker-compose to bootstrap dependencies and run tests is the best fit for Cloud Build: it’s supported nicely and improves reproducibility. Now let’s have a look at the Dockerfile:
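A sketch of such a multi-stage Dockerfile, assuming a Go 1.11 module-based project (base images and binary name are assumptions):

```dockerfile
# Dockerfile (illustrative sketch)
FROM golang:1.11 AS builder
WORKDIR /app
# Copy module files first so the dependency download is cached
# as its own layer and only invalidated when go.mod/go.sum change
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Final stage: a small runtime image with just the binary
FROM alpine:3.8
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```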

 

As you can see, we use a two-stage build. That’s the standard builder pattern expressed as a multi-stage definition. It’s time to see what our Cloud Build definition file looks like.
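A cloudbuild.yaml implementing this flow could be sketched as follows (step details, image names, and versions are assumptions):

```yaml
# cloudbuild.yaml (illustrative sketch; details are assumptions)
steps:
  # 1. Rebuild only the builder stage, re-using the cached image if present
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        docker pull gcr.io/$PROJECT_ID/$REPO_NAME:builder || true
        docker build --target builder \
          --cache-from gcr.io/$PROJECT_ID/$REPO_NAME:builder \
          -t gcr.io/$PROJECT_ID/$REPO_NAME:builder .
  # 2. Run the tests via docker-compose
  - name: 'docker/compose:1.24.0'
    args: ['run', 'test']
    env:
      - 'PROJECT_ID=$PROJECT_ID'
      - 'REPO_NAME=$REPO_NAME'
  # 3. Build the final image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA', '.']
# Push both the cache image and the final image on success
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME:builder'
  - 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA'
```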

Notice the --target flag in the first step. Docker allows us to build only a specific stage, ignoring everything after it. This is exactly what we need in order to use the cache image in subsequent builds.

In the first step, the builder image is rebuilt if our dependencies have changed; if the cache image doesn’t exist, it is built from scratch. The second and last steps run the tests and, on success, build and deploy the final image.

 

Re-using the dependencies layer in Cloud Build steps sped up our build times by a factor of 1.5. Not the greatest achievement, I must admit; however, the more your project grows, the more minutes end up being shaved off your build times. Additionally, this approach establishes an efficiency mindset. Let’s not waste our build resources, and let’s consciously apply a scarcity mindset to the CI/CD process. Change happens in small increments, bit by bit.

 

All resources used in this post are available in the GitHub repository.

This post was inspired by the white paper released by Paul Johnston & Anne Currie titled ETHICS WHITEPAPER – THE STATE OF DATA CENTRE ENERGY USE IN 2018.

Consider signing the petition Sustainable servers by 2024.
