Lifting Developers’ Productivity

With BuildKit CLI for kubectl, a drop-in replacement for docker build

Developing and testing software on Kubernetes usually means building and moving container images from the developer workstation to a container registry and, ultimately, to Kubernetes. Considering all the moving parts, the complexity and context switches break the development flow. Gone are the days when developers could just hit a command and have the newly built software ready for testing a few seconds later.

What if there were a magical tool you could simply feed instructions like: “Build this Dockerfile here and replace any existing image on that Kubernetes cluster with the result, as fast as possible!” Spoiler alert: that is exactly what BuildKit CLI for kubectl is there to do.

Building container images with Docker is the typical method for most users, but it is not the only option. In environments where the workload already runs inside a container, such as Kubernetes, a CI/CD pipeline, or a serverless function, it is not always possible to run Docker.

Where the required Docker features were historically unavailable, alternative tools emerged that replicate just the needed functionality. In this post, we not only introduce a unique method for building container images with BuildKit CLI for kubectl, but also discuss two solutions that save developers hours of time and lead to a much cleaner CI/CD pipeline.

Before we dive into the details, let’s break Docker down into its different functionalities. Docker serves multiple purposes and is in essence a monolithic application bundling several functions. This overview shows Docker’s functionalities and their single-purpose alternatives, allowing us to determine where BuildKit CLI for kubectl fits.

Depending on the use case, multiple single-purpose tools carved out of Docker exist that replicate the needed functionality. Even better, these tools usually work well together because they conform to the Open Container Initiative (OCI) specifications.

BuildKit CLI for kubectl

BuildKit CLI for kubectl is a plugin for kubectl, the Kubernetes command-line tool. The plugin extends the functionality of kubectl, allowing you to build container images without a local Docker installation.

VMware open sourced BuildKit CLI for kubectl in 2020. The initial product announcement sums up the purpose of the tool pretty well:

A key feature of this new tool is that it strives to make the images you build immediately available in the container runtime of your Kubernetes cluster so you can “bounce” your pod(s) to pick up a freshly built image with virtually no overhead.

BuildKit CLI for kubectl has the following key features:

  • Dockerfiles are parsed the same way as with the existing docker build
  • Container images are built within Kubernetes, to leverage the power of your Kubernetes cluster
  • Builds OCI compatible images
  • Supports building multi-architecture images
  • No local registry needed for local Kubernetes development

Inner-Loop Productivity Flow

As developers, we spend a significant amount of time in the so-called “Inner Loop.” This is an iterative process in which code is written, built, and tested repeatedly. All of this takes place before we share our work with the team and the rest of the world through code reviews and a CI workflow. This tight loop is the productive phase of the development process, so we want to spend as much time in it as possible.

Iterations should be fast and involve minimal friction. For example, if it takes 30 minutes to complete one loop, we can average around 10-12 loops a day. By shrinking that time to three minutes, we can theoretically make over 100 iterations of writing and testing code in a single day. That is a huge productivity boost, without the “context switches” that happen while waiting for code to compile and deploy.

Mitch Denny has written a good summary of the inner loop and how to tune it.

Inner loop and outer loop development workflow

There are two ways to speed up the inner loop: more hardware power to reduce the time it takes to build and deploy, or shortcuts that skip steps in the cycle. BuildKit CLI for kubectl falls into the latter category. It lets us skip superfluous steps, reducing context switches and wait time and resulting in more iterations. The diagram above illustrates the two methods of deploying an application to Kubernetes. The common practice today is:

  • Compile the code and package the artifacts
  • Build the container image locally
  • Push the container image to a container registry
  • Execute the deployment
  • Let Kubernetes pull the container image from the container registry
  • Start the application

With BuildKit CLI for kubectl, we no longer need to repeat all of these steps every time we want to test our changes. After the initial full cycle, we only need the Kubernetes cluster to build the new image and restart the pods that require updates. With this lean image build workflow, the whole cycle should take just a few seconds.

Common practice workflow with and without BuildKit CLI for kubectl

Enough theory! Let us examine this workflow in practice and replace our local docker build with kubectl build in our Kubernetes development workflow.

Installation

Download the binaries for your platform from GitHub at vmware-tanzu/buildkit-cli-for-kubectl/releases.

The best way to work with kubectl plugins is to use Krew, the plugin manager for kubectl. Unfortunately, at the time of writing, buildkit-cli-for-kubectl is not yet supported. Once the buildkit-cli-for-kubectl Krew plugin is available, the installation will be as simple as:

kubectl krew install buildkit-cli-for-kubectl

For now, after downloading the archive for your platform, extract it:

cat darwin-*.tgz | tar -C /usr/local/bin -xvf -
# Example for macOS; on Linux, extract the linux-*.tgz archive the same way

We can verify the successful installation and plugin registration with kubectl:

kubectl build --help
or
kubectl buildkit --help

The buildkit-cli-for-kubectl package contains two binaries: buildkit and build, where build is simply an alias for buildkit build. Now that we have installed buildkit-cli-for-kubectl and verified its functionality, we can try to make something useful with it.
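As an additional sanity check, kubectl itself can list every plugin binary it discovers on the PATH; the output should now include the freshly installed build commands (exact names and paths depend on the release archive and install location):

kubectl plugin list
# example output:
# /usr/local/bin/kubectl-build
# /usr/local/bin/kubectl-buildkit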

Example Application

For our example, we are going to use a simple Hello-World Go application. There is no need to install anything locally, as the Go toolchain and everything else needed is contained in the Dockerfile.

Clone the minimal Hello-World application from buildkit-cli-for-kubectl-demo-app:

git clone https://github.com/container-registry/buildkit-cli-for-kubectl-demo-app.git
cd buildkit-cli-for-kubectl-demo-app

Building an Image with BuildKit CLI for kubectl

To build container images, we need to have a Kubernetes cluster up and running. One easy option to set up a local Kubernetes cluster is to use minikube or k3d. Note that k3d needs a workaround.
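If there is no cluster at hand, a small throwaway one is enough for this walkthrough. A minimal sketch using minikube (the resource values are only a suggestion):

# start a small local cluster and confirm kubectl can reach it
minikube start --cpus=4 --memory=8192
kubectl get nodes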

To build a container image with BuildKit CLI for kubectl, simply run:

kubectl build -t hello-world -f Dockerfile .

The CLI build syntax is the same as that of docker build. In this example, the first build took around 70 seconds, which is not fast enough to count as a quick iteration.

$ kubectl build -t 8gears.container-registry.com/examples/hello-world -f Dockerfile .
[+] Building 69.2s (17/17) FINISHED                                                                                                                                                                       
 => [internal] booting buildkit
 => => waiting for 1 pods to be ready 
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 501B
 => [internal] load .dockerignore
 => => transferring context: 2B 
 => [internal] load metadata for docker.io/library/alpine:latest
 => [internal] load metadata for docker.io/library/golang:latest
 => [stage-1 1/4] FROM 
...
 => [builder 1/5] FROM 
...
 => [stage-1 2/4] RUN addgroup -S example-group && adduser -S -D example -G example-group
 => [stage-1 3/4] WORKDIR /home/example
 => [builder 2/5] WORKDIR /hello-world
 => [builder 3/5] COPY go.mod .
 => [builder 4/5] COPY . .
 => [builder 5/5] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /bin/hello-world .
 => [stage-1 4/4] COPY --from=builder /bin/hello-world ./
 => exporting to oci image format
 => => exporting layers
 => => exporting manifest sha256:...
 => => exporting config sha256:...
 => => sending tarball
 => loading image to docker runtime via pod buildkit-55fbd66677-m9v56  

However, subsequent runs become much faster. This Kubernetes cluster can finish the build in 3.2 seconds, which is about enough time to switch from the IDE to the browser and hit refresh.

$ kubectl build -t 8gears.container-registry.com/examples/hello-world -f Dockerfile .
[+] Building 3.2s (16/16) FINISHED

We can also see a BuildKit deployment in our cluster.

$ kubectl get deploy

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
buildkit         1/1     1            1           2m

With Minikube alone, you would have to enable the registry addon or use some other hack to make a locally built image available to the cluster. With BuildKit CLI for kubectl, this is optional.
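For comparison, the classic Minikube workaround points the local Docker client at the Docker daemon inside the Minikube VM so that images land directly in the cluster. A sketch of that approach (it assumes Minikube runs with the Docker container runtime):

# build against Minikube's Docker daemon instead of the local one
eval $(minikube docker-env)
docker build -t hello-world -f Dockerfile .
# switch back to the local Docker daemon afterwards
eval $(minikube docker-env --unset)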

Deploying an Application

In the previous step, we only built a container image on the remote Kubernetes cluster but never used it. A container image just sitting in the cluster like this is quite useless. To prove that we can achieve a quick inner-loop cycle, let us deploy the hello-world application and measure how long it takes for a change to appear in the cluster.

The initial deployment:

$ kubectl apply -f deployment.yaml

deployment.apps/hello-world created
service/hello-world created

We can first verify the application deployment:

$ kubectl get deploy,svc

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/buildkit      1/1     1            1           36m
deployment.apps/hello-world   1/1     1            1           4m19s

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/hello-world   ClusterIP   10.39.140.45   <none>        80/TCP    4m18s

To verify the application itself, we can bypass the ingress or load balancer with kubectl port-forward and forward our local port 8081 to the remote service port 80.

$ kubectl port-forward service/hello-world 8081:80  

Forwarding from 127.0.0.1:8081 -> 8080
Forwarding from [::1]:8081 -> 8080

# Open a second terminal and verify that the application responds.  
$ curl localhost:8081

Hello, world!
Version: 1.0.0
Hostname: hello-world-d8b44fbff-pt2bm

Now that we have verified our application is up and running, we can change our code and measure how long it takes for the change to show up in the cluster. Perform a minor code change in the main.go file.

// Change in line 26
_, _ = fmt.Fprintf(w, "Hello, reader!\n")

Let’s measure the time it takes:

time ( \
kubectl build -t 8gears.container-registry.com/examples/hello-world:latest -f Dockerfile . &&\
kubectl rollout restart deployment/hello-buildkit-example &&\
kubectl rollout status deployment/hello-buildkit-example &&\
curl --retry 5 52.42.1.123:30080
)

The result is that the code change made it into our Kubernetes Cluster within 13 seconds.

[+] Building 4.1s (14/14) FINISHED
 => [internal] load build definition from Dockerfile
 .....
 => loading image to docker runtime via pod buildkit-7f7bf48987-brc2r
deployment.apps/hello-buildkit-example restarted
Waiting for deployment "hello-buildkit-example" rollout to finish: 1 old replicas are pending termination...
deployment "hello-buildkit-example" successfully rolled out
Hello, reader!
Version: 1.0.0
Hostname: hello-buildkit-example-78894bfbfc-rnz4p

0.72s user 0.43s system 5% cpu 12.918 total

Is 13 seconds much faster than the traditional way? Well, it depends on a few factors:

  • Container image size: larger images take more time to upload to and download from the container registry.
  • Kubernetes deployment stack complexity: a stack with multiple pods, services, and other resources takes more time to update.
  • Container build time: building is CPU and I/O intensive and is usually faster on server hardware.

Deploying the same application the traditional way takes about 33 seconds.

time docker build -t 8gears.container-registry.com/examples/hello-world:latest -f Dockerfile .
0.26s user 0.18s system 3% cpu 13.529 total

time docker push 8gears.container-registry.com/examples/hello-world:latest  
0.17s user 0.11s system 2% cpu 11.789 total

time kubectl apply -f deployment.yaml && kubectl rollout status  && curl 52.42.1.123:30080
0.43s user 0.18s system 6% cpu 8.700 total

Even with this sample application, that is already a roughly 60% time improvement. With real-life scenarios and complex application stacks, the time savings are even bigger. No more waiting and context switching for minutes until the changes reach our Kubernetes cluster.

Now that we are using BuildKit CLI for kubectl, we do not even have to update our deployments or Helm charts with new image tags. We can also skip cleaning up a polluted container registry or setting up a retention policy.

Because there are far fewer moving parts, it becomes easier to create a workflow that automatically rebuilds and redeploys our changes whenever the code changes.
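A minimal sketch of such a loop, assuming the entr file watcher is installed locally (any comparable watcher works the same way):

# rebuild in-cluster and restart the deployment whenever a Go file changes
find . -name '*.go' | entr sh -c '
  kubectl build -t 8gears.container-registry.com/examples/hello-world:latest -f Dockerfile . &&
  kubectl rollout restart deployment/hello-buildkit-example'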

What more is possible with BuildKit CLI for kubectl?

BuildKit CLI for kubectl also works great with container registries. We can, for example, push the built images to our container registry, such as c8n.io. A CLI secret can be obtained from the c8n.io container registry under User Profile -> CLI secret. With it, create a Kubernetes container registry secret that will be used to push and pull the image.

kubectl create secret docker-registry c8n-secret \
 --docker-server='c8n.io' \
 --docker-username='USER_NAME_CASE_SENSITIVE' \
 --docker-password='REG_SECRET'

# Verify the secret was created:
kubectl get secret c8n-secret

Build and Push Container Image with BuildKit CLI for kubectl

BuildKit CLI for kubectl can also push images after building them by appending the --push and --registry-secret flags to the build command:

kubectl build --push --registry-secret c8n-secret \
  -t c8n.io/USER_NAME/hello-world:1.0 \
  -f Dockerfile .
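To double-check that the push succeeded, we can list the tags in the repository, for example with the crane tool from go-containerregistry (an assumption on our side; any registry client or the registry web UI works just as well):

# log in to the private registry, then list the available tags
crane auth login c8n.io -u 'USER_NAME' -p 'REG_SECRET'
crane ls c8n.io/USER_NAME/hello-world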

Change Deployment to use our Private Registry

There are a few approaches to accessing a private container registry from Kubernetes. An elegant solution that does not require changing the deployment is to add imagePullSecrets to a service account.

We can reuse the image pull secret we created earlier and attach it to the default service account.

# update service account for all deployments to use image pull secret
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "c8n-secret"}]}'
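To confirm that the patch took effect, we can read the pull secrets back from the service account:

kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets}'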

Then update the deployment to use the image from the private registry:

apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
          - name: hello-buildkit-example
            image: c8n.io/user_name/hello-world:1.0 # use private registry
            ...

Redeploy the updated deployment.yaml file to let Kubernetes pull the image from our private container registry.
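Applying the updated manifest and watching the pods roll over:

kubectl apply -f deployment.yaml
kubectl get pods -w   # the new pod should now pull its image from c8n.io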

Closing Thoughts

BuildKit CLI for kubectl is highly valuable when developing software on a local or remote Kubernetes cluster. Docker is CPU and I/O intensive while building images, making everything else on the workstation sluggish. In situations like this, we usually just wait until the build finishes, which breaks our workflow and slows down our inner-loop cycle. The Docker remote host feature is an alternative, but it requires a provisioned VM with its own setup and maintenance overhead. Since we already work with Kubernetes, BuildKit CLI for kubectl is the perfect fit. It allows us to take deployment shortcuts and use the superior processing power of our Kubernetes cluster.

It is extremely easy to get started with BuildKit CLI for kubectl by using it as a drop-in replacement for docker build. It gives its users a way to take shortcuts without needing two separate workflows for the inner and outer loop.

While still in pre-release, it already shows a large number of downloads and is considered stable for use. Try it out the next time you find yourself waiting for your changes to reach your Kubernetes cluster.


Published — January 7, 2021
