Using Flux to Automate Simple Tasks
Introduction⌗
In this post I assume you're familiar with Flux and Kustomize. If you are not, please read the article below first.
When you operate pieces of infrastructure, like me and this blog, you frequently need to execute tasks, and they can become very repetitive. For this post we'll take this blog as an example: it has an RSS feed, an index, and a sitemap, all of which are cached, so the cache needs to be flushed whenever the blog is updated. I used to do this by hand; now it's fully automated.
Flushing the cache⌗
When I publish a new post I want Cloudflare to clear the cache of certain pages so the new updates are accessible and searchable quickly.
In this approach we leverage the powers of Flux and Kustomize. Flux takes care of (re)deploying the Kubernetes components at the right time, and Kustomize ensures the Job actually gets forced to update by using a ConfigMapGenerator.
Setting up Flux⌗
Setting up Flux is quite simple: you just specify the overlays with a dependency between them. Below are the resources for our cache-buster (`ci-cache-buster`) and the blog overlay `sites-siebjee`.
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: ci-cache-buster
  namespace: flux-system
spec:
  interval: 5m
  dependsOn:
    - name: sites-siebjee # We only want to reconcile when overlay sites-siebjee is updated and ready.
  sourceRef:
    kind: GitRepository
    name: playground
  path: ./k8s/ci/sites/cache-buster
  prune: true
  wait: true
  force: true # Force creation of resources; Jobs are immutable.
```
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: sites-siebjee
  namespace: flux-system
spec:
  interval: 5m
  dependsOn:
    - name: core-ingress-nginx # We're using ingress, so it's nice to only deploy once that's ready.
  sourceRef:
    kind: GitRepository
    name: playground
  path: ./k8s/sites/sites/siebjee
  prune: true
```
Here you see that overlay `ci-cache-buster` has a dependency on `sites-siebjee`. Therefore, `ci-cache-buster` will only reconcile once `sites-siebjee` has finished its reconciliation and is in a Ready state.
And not to forget the image repository and policy; we'll need them later.
```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: siebjee
  namespace: flux-system
spec:
  image: europe-west4-docker.pkg.dev/my-project/sites/siebjee
  interval: 10m
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: siebjee
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: siebjee
  filterTags:
    pattern: '^(?P<version>[0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3})$'
    extract: '$version'
  policy:
    semver:
      range: '>0.1.0'
```
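As a quick sanity check of that `filterTags` pattern, you can replay it against a few candidate tags with plain `grep`. Note that POSIX ERE (`grep -E`) doesn't support the named capture group, so this sketch drops the `(?P<version>...)` wrapper and keeps only the matching part:

```shell
# The tag pattern from the ImagePolicy above, minus the (?P<version>...)
# named group, which grep -E does not support.
pattern='^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$'

check() {
  if printf '%s\n' "$1" | grep -Eq "$pattern"; then
    echo "$1: accepted"
  else
    echo "$1: rejected"
  fi
}

check "0.14.0"     # plain semver tag: accepted
check "latest"     # mutable tag: rejected
check "1.2.3-rc1"  # pre-release suffix: rejected by this pattern
```

Only plain `x.y.z` tags survive the filter, which is exactly what you want when the semver policy picks the newest release.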
Cache-buster overlay⌗
We won't cover the overlay `sites-siebjee`, as it is just a simple deployment and not the focus of this post.
The trick in this whole approach is the ConfigMapGenerator. It appends a hash-like suffix to the ConfigMap name; for instance, `env-vars` turns into `env-vars-abcdef`, and Kustomize rewrites the ConfigMap references to match. One note here: this only works with native Kubernetes resources.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: cache-buster
resources:
  - namespace.yaml
  - job.yaml
  - rbac.yaml
configMapGenerator:
  - name: env-vars
    literals:
      - APP_IMAGE=dummy
```
And the Job, with a reference to `env-vars`; as mentioned, Kustomize will replace that name.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cache-buster
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: savage
      initContainers:
        - name: init
          image: bitnami/kubectl:1.24.3
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: env-vars
          command:
            - /bin/sh
            - -ce
          args:
            - |-
              until kubectl get deployment siebjee --namespace siebjee -o json | jq -erc --arg APP_IMAGE "${APP_IMAGE}" '.spec.template.spec.containers[0] | select(.image == $APP_IMAGE)'; do
                echo "Deployment does not yet have the right image..."
                sleep 1
              done
              echo "Waiting till deployment rollout has completed."
              kubectl rollout status deployment/siebjee --namespace siebjee
      containers:
        - name: siebjee
          image: curlimages/curl:7.85.0
          envFrom:
            - configMapRef:
                name: env-vars
            - secretRef:
                name: secret-cf # Create this one yourself for your own Cloudflare account.
          command:
            - /bin/sh
            - -c
          args:
            - |
              curl -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache" \
                -H "Authorization: ${TOKEN}" \
                -H "Content-Type: application/json" \
                --data '{"files":["https://siebjee.nl","https://siebjee.nl/posts/index.xml", "https://siebjee.nl/sitemap.xml"]}'
```
And, if you want, the RBAC and Namespace as well:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: savage
  namespace: cache-buster
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: savage # ClusterRoles are cluster-scoped, so no namespace here.
rules:
  - apiGroups:
      - apps
    resources:
      - deployments
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: adam-savage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: savage
subjects:
  - kind: ServiceAccount
    name: savage
    namespace: cache-buster
---
apiVersion: v1
kind: Namespace
metadata:
  name: cache-buster
```
I keep the above YAMLs in `k8s/sites/base/siebjee` as my base overlay. Now to create the actual magic: following my own pattern, let's create the overlay components that make this work.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: cache-buster
resources:
  - ../../base/cache-buster
patchesStrategicMerge:
  - env-vars.yaml
```
And the ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-vars
data:
  APP_IMAGE: europe-west4-docker.pkg.dev/my-project/sites/siebjee:0.14.0 # {"$imagepolicy": "flux-system:siebjee"}
```
When you run a `kustomize build` on the overlay, you'll see that `env-vars` has a hash-like suffix appended to its name.
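For illustration, the relevant parts of the build output look roughly like the sketch below. The `-abcdef` suffix is a placeholder; the real hash depends on the ConfigMap's contents.

```yaml
# Sketch of the generated ConfigMap in the `kustomize build` output:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-vars-abcdef
  namespace: cache-buster
data:
  APP_IMAGE: europe-west4-docker.pkg.dev/my-project/sites/siebjee:0.14.0
---
# ...and the Job's envFrom reference is rewritten to match:
envFrom:
  - configMapRef:
      name: env-vars-abcdef
```

Because the hash changes whenever the data changes, the Job spec changes too, which is what triggers the forced recreation.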
Now, every time I update the image, Flux updates it inside the ConfigMap, Kustomize creates a new ConfigMap on reconciliation, and Flux force-(re)creates the `cache-buster` Job.
Closing note⌗
There are more tools and approaches for tasks like this, but they require additional tooling. I built it this way because I like to keep my tooling footprint as low as possible: the more tools you run, the more nodes you'll eventually need just to support your tool set, and therefore the more $$ you spend.