Using Flux in Kubernetes
Introduction
In this post, I will assume you’re familiar with the CI/CD and GitOps concepts. If you are not, please read the articles below first.
Flux, also known as The GitOps Toolkit, is a tool with many different components that implement the GitOps way of working in kubernetes. It uses CustomResourceDefinitions (CRDs) and kubernetes controllers to orchestrate your resources in kubernetes.
Flux also operates autonomously to update HelmReleases and images based on your ImageRepository and ImagePolicy resources. This process goes via the git repository: flux first commits the change to the repository, and the controller(s) then pick up the change.
I shall briefly touch upon the components while we create them.
Helm
Helm is a package manager for kubernetes, written in Go, with template support that allows you to make a kubernetes package (a chart) out of your application.
Kustomize
Kustomize is a configuration management tool for kubernetes. With it you can also package your application and modify it to your needs via overlays, while maintaining a structured approach.
Where helm uses golang templating, kustomize patches manifests on top of a base manifest, allowing you to make small or large changes.
Kustomize also has a SecretGenerator and a ConfigMapGenerator (the kustomize generators) which you can use to make some sweet deployments. These generators append a hash of the content to your chosen name, and replace that name throughout your set of manifests (though not inside custom resources), allowing you to do a roll-out of your deployment on ConfigMap changes (see the sketch below). With helm, you need to do this yourself via labels and hashing of your ConfigMap.
You can use helm and kustomize together. In fact, they complement one another.
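As a minimal sketch of such a generator (the file names and the app-config name are illustrative), a kustomization.yaml could look like this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
configMapGenerator:
  - name: app-config # rendered as app-config-<content hash>
    literals:
      - LOG_LEVEL=debug

Running kustomize build renders the ConfigMap as app-config-<hash> and rewrites every reference to app-config accordingly, so changing a literal produces a new name and, with that, a rolling update of your deployment.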
Setting up flux
Prerequisites
You can install flux in different ways. One is the bootstrap method, where the flux cli tool will bootstrap the cluster for you. Alternatively, and this is my preferred method, you can do it the DIY way. That also allows you to easily integrate it into a CI/CD process without needing the flux cli tool.
The DIY approach is what I will go through here.
Installation
Installing flux
The command listed below will generate an install.yaml file containing all the required resources for the components you have selected.
flux install \
  --watch-all-namespaces=true \
  --namespace=flux-system \
  --components-extra=image-reflector-controller,image-automation-controller \
  --export > install.yaml
With this YAML, you can install the flux components manually to test, or commit it to the git repository that also maintains your kubernetes cluster.
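To test manually, you can apply the generated file directly with kubectl:

kubectl apply -f install.yaml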
Adding your GitRepository
Once flux is installed, we can add our git repository. Please be aware that there are different methods of doing this, as Azure DevOps, GitHub and others each require some specifics. They are documented here. For this example we’ll take GitHub into consideration.
The below YAML will add a GitRepository to the flux-system namespace. It can be any namespace, but I like to have them in flux-system.
Copy the below contents to k8s/infrastructure/git-repositories/kubernetes.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: kubernetes
  namespace: flux-system
spec:
  interval: 5m0s
  url: https://github.com/YourGitHubUserName/YourRepository
  ref:
    branch: master
Note: Please update the url to match what you have.
If it is a private repository, please follow the instructions provided by flux documented here.
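As a rough sketch for an HTTPS-based private repository: create a basic-auth secret (the name github-credentials is just an example) and reference it from the GitRepository via spec.secretRef.

kubectl create secret generic github-credentials \
  --namespace=flux-system \
  --from-literal=username=git \
  --from-literal=password=<your-personal-access-token>

Then add to the GitRepository spec:

  secretRef:
    name: github-credentials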
Kustomize overlay concept
Below is a listing of a basic overlay structure that I personally love. It is highly flexible and extensible.
└── k8s                # primary kubernetes manifests directory
    ├── apps           # primary apps manifests directory
    │   ├── base       # the base directory for all "apps" in kubernetes. Cluster kustomizations refer to this
    │   └── demo       # for demo cluster deployment
    ├── clusters       # primary cluster manifests directory
    │   └── demo       # for demo cluster deployment
    ├── core           # core k8s infrastructure manifests directory
    │   ├── base       # the base directory for all "core" components in kubernetes. Cluster kustomizations refer to this
    │   └── demo       # for demo cluster deployment
    └── infrastructure # kubernetes infrastructure that will be deployed as-is in every cluster
The apps overlay is for your applications: those that require components from the core overlay.
The core overlay is for the core components that make applications work. Think here of ingress-nginx, ingress-haproxy, external-secrets or external-dns. The apps overlay depends on these to operate as designed.
The infrastructure overlay is a small overlay without any complexity, for the common components you need across your clusters. You cannot adjust these per cluster like the overlays described above; it is very static. Here I put StorageClass, HelmRepository, CSI drivers, or even ImagePolicy / ImageRepository resources.
Please note that the above structure is based on my personal preference.
In my daily job I also have a ci overlay. This overlay is partially dependent on apps, as argo is placed there, and also dependent on core.
Preparing for the overlays
The flux Kustomization resource does work recursively, but for more fine-grained control we need to create a few files in a structured manner so that the entire flux setup we’re going to build becomes autonomous.
Copy the below contents to k8s/clusters/demo/configuration.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: cluster-configuration
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: kubernetes
  path: ./k8s/clusters/demo/config
  prune: true
  validation: client
This Kustomization will take care of the config path for this cluster. If you later add another overlay, for example ci, you can do so without reconfiguring anything by hand.
Creating the config
Now that we have a Kustomization watching for config files, we can add the infrastructure, core, and apps config files.
Copy the below contents to k8s/clusters/demo/config/infrastructure.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: config-infrastructure
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: kubernetes
  path: ./k8s/infrastructure
  prune: true
  validation: client
Copy the below contents to k8s/clusters/demo/config/core.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: config-core
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: kubernetes
  path: ./k8s/clusters/demo/core
  prune: true
  validation: client
Copy the below contents to k8s/clusters/demo/config/apps.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: config-apps
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: kubernetes
  path: ./k8s/clusters/demo/apps
  prune: true
  validation: client
These files are similar to k8s/clusters/demo/configuration.yaml: they watch their respective directories for new additions. This way we made flux recursively check k8s/clusters/demo for new files.
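Once these files are committed and pushed, you can verify that they reconcile (assuming you have the flux cli available):

flux get kustomizations --namespace flux-system

Keep in mind that config-core and config-apps will only report Ready once their target paths actually exist in the repository; we will create those below.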
Creating your kustomize overlays
Infrastructure
k8s/infrastructure
├── git-repositories
├── helm-repositories
├── image-automation
└── notifications
The infrastructure overlay is a simple one; it holds only those components that are generic across all clusters.
Copy the earlier created GitRepository YAML to k8s/infrastructure/git-repositories/kubernetes.yaml.
ImageUpdateAutomation
The image reflector controller scans the image registries described in your ImageRepository resources. Once it has found new tags, they are compared against the policies defined in ImagePolicy to see if the new tag(s) need to be deployed. If so, the image automation controller performs the update using the definition in the ImageUpdateAutomation resource. By default the resulting commit message is very short and not explanatory. To get a more meaningful message from the ImageUpdateAutomation controller I use the below contents.
Copy the below contents to k8s/infrastructure/image-automation/policy.yaml
apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  sourceRef:
    kind: GitRepository
    name: kubernetes
  git:
    checkout:
      ref:
        branch: master
    commit:
      author:
        name: fluxcdbot
        email: [email protected]
      messageTemplate: |
        An automated update from FluxBot [ci skip]

        Files:
        {{ range $filename, $_ := .Updated.Files -}}
        - {{ $filename }}
        {{ end -}}

        Objects:
        {{ range $resource, $_ := .Updated.Objects -}}
        - {{ $resource.Kind }} {{ $resource.Name }}
        {{ end -}}

        Images:
        {{ range .Updated.Images -}}
        - {{.}}
        {{ end -}}
  interval: 1m0s
  update:
    strategy: Setters
The commit message would look similar to this:
Author: fluxcdbot <[email protected]>
Date:   Fri Aug 12 14:07:25 2022 +0000

    An automated update from FluxBot [ci skip]

    Files:
    - k8s/apps/demo/yourApplication/kustomization.yaml

    Objects:
    - Kustomization

    Images:
    - my-registry/yourApplication:0.8.1
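For this automation to have something to act on, you also need an ImageRepository and an ImagePolicy, plus a setter marker on the line that should be updated. A minimal sketch (the my-app names, registry and semver range are illustrative):

apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImageRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  image: my-registry/my-app
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImagePolicy
metadata:
  name: my-app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-app
  policy:
    semver:
      range: ">=0.1.0"

The Setters strategy only rewrites lines that carry a marker pointing at the policy, for example:

    image: my-registry/my-app:0.8.0 # {"$imagepolicy": "flux-system:my-app"}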
Notifications
As I’ve explained in What is GitOps?, notifications are important, especially with flux. If something goes wrong, you need to know.
Copy the below contents to k8s/infrastructure/notifications/kustomize-overlay.yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: kustomize
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error
  eventSources:
    - kind: Kustomization
      name: '*'
Copy the below contents to k8s/infrastructure/notifications/slack.yaml
apiVersion: v1
kind: Secret
metadata:
  name: slack-webhook-url
  namespace: flux-system
type: Opaque
data:
  address: changeme
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: demo
  secretRef:
    name: slack-webhook-url
Note: Please update data.address with the right URL for your webhook in slack. Keep in mind that values under data: must be base64-encoded; alternatively, use stringData: to provide the plain URL.
Once flux does what it does, it will notify you when something is wrong with the kustomize overlay.
Core
k8s/core
├── base
└── demo
The core overlay is a bit more complex, because here we’re actually going to use kustomize. A kustomize setup consists of two or more sets of files: one of them is the base, and the others are what I call the overlays. The base has the actual manifests making up a deployment in kubernetes, and the overlay has the minor adjustments.
Note: Modifying YAML arrays with patchesStrategicMerge works fine on standard kubernetes resources, but is somewhat unpredictable on resources backed by CustomResourceDefinitions (see the sketch below for an alternative).
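If you run into that unpredictability, a JSON 6902 patch gives you deterministic control over list entries. A hypothetical sketch for an overlay kustomization.yaml (the target and patch file name are illustrative):

patchesJson6902:
  - target:
      group: helm.toolkit.fluxcd.io
      version: v2beta1
      kind: HelmRelease
      name: ingress-nginx
    path: json-patch.yaml

With json-patch.yaml containing, for example:

- op: replace
  path: /spec/values/controller/replicaCount
  value: 3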
Base files
Let’s make the base files for the ingress controller.
Copy the below contents to k8s/core/base/ingress-nginx/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
Copy the below contents to k8s/core/base/ingress-nginx/release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 5m
  chart:
    spec:
      chart: ingress-nginx
      version: ">=4.0.0 <5.0.0"
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: flux-system
      interval: 60m
  values:
    controller:
      replicaCount: 2
Copy the below contents to k8s/core/base/ingress-nginx/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ingress-nginx
resources:
  - namespace.yaml
  - release.yaml
Now to test if it is all working, please run the following command
kustomize build k8s/core/base/ingress-nginx
This should print one long YAML with the above resources to STDOUT. If not, please retrace your steps.
And not to forget: the HelmRepository needs to be at k8s/infrastructure/helm-repositories/ingress-nginx.yaml, otherwise the HelmRelease cannot find its chart.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  url: https://kubernetes.github.io/ingress-nginx
  interval: 60m
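Once this is committed, you can check that the chart index was fetched (again assuming the flux cli):

flux get sources helm --namespace flux-system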
This collection of manifests makes up the base components to deploy an ingress-nginx controller HelmRelease. But we still need to create a few more files to make the overlay work the way it needs to.
Demo cluster overlay
Now let’s make the cluster overlay files for the demo cluster.
Copy the below contents to k8s/core/demo/ingress-nginx/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
resources:
  - ../../base/ingress-nginx
patchesStrategicMerge:
  - release.yaml
Copy the below contents to k8s/core/demo/ingress-nginx/release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    overlay: demo
spec:
  values:
    controller:
      replicaCount: 2
      service:
        type: NodePort
Note: If you deploy this in a public cloud, it’s probably fine to leave out spec.values.controller.service.type.
Now to test if it is all working, please run the following command
kustomize build k8s/core/demo/ingress-nginx
And this time you should see something similar to before, but the HelmRelease has an additional label of overlay: demo, and possibly the service type NodePort.
Enabling the overlay
The newly created base and overlay are not yet enabled on the demo cluster. We need to add one more file to the flux-system namespace.
Copy the below contents to k8s/clusters/demo/core/ingress-nginx.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: core-ingress-nginx
  namespace: flux-system
spec:
  interval: 5m
  dependsOn:
    - name: config-infrastructure
  sourceRef:
    kind: GitRepository
    name: kubernetes
  path: ./k8s/core/demo/ingress-nginx
  prune: true
Above, you see that dependsOn is linked to config-infrastructure, because the helm repository needs to be configured beforehand.
Once flux does what it does, it will create an ingress-nginx controller.
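To verify, you can check both the release and the resulting pods:

flux get helmreleases --namespace ingress-nginx
kubectl get pods --namespace ingress-nginx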
Apps
k8s/apps
├── base
└── demo
The apps overlay is quite comparable to the core overlay, so you can follow a similar approach here. We’ll deploy a simple echo deployment.
echo base
Copy the below contents to k8s/apps/base/echo/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: echo
Copy the below contents to k8s/apps/base/echo/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: echo
  labels:
    app.kubernetes.io/name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: echo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: echo
    spec:
      containers:
        - image: hashicorp/http-echo
          imagePullPolicy: IfNotPresent
          name: echo
          ports:
            - containerPort: 5678
          args:
            # kubernetes expands $(MESSAGE) from the container environment
            - -text=$(MESSAGE)
          envFrom:
            - configMapRef:
                name: env-vars
          resources:
            limits:
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
Copy the below contents to k8s/apps/base/echo/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: echo
spec:
  ports:
    - name: web
      port: 5678
      targetPort: 5678
      protocol: TCP
  selector:
    app.kubernetes.io/name: echo
Copy the below contents to k8s/apps/base/echo/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: echo
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  ingressClassName: nginx
  rules: []
Copy the below contents to k8s/apps/base/echo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: echo
resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
configMapGenerator:
  - name: env-vars
    literals:
      - MESSAGE=Hello from base overlay
Now to test if it is all working, please run the following command
kustomize build k8s/apps/base/echo
Demo cluster overlay
Copy the below contents to k8s/apps/demo/echo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: echo
resources:
  - ../../base/echo
patchesStrategicMerge:
  - ingress.yaml
  - configmap.yaml
Copy the below contents to k8s/apps/demo/echo/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: echo
spec:
  tls:
    - hosts:
        - example.com
      secretName: example.com
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  name: web
Copy the below contents to k8s/apps/demo/echo/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-vars
  namespace: echo
data:
  MESSAGE: Hello from echo overlay
Now to test if it is all working, please run the following command
kustomize build k8s/apps/demo/echo
Enabling the overlay
The newly created base and overlay are not yet enabled on the demo cluster. We need to add one more file to the flux-system namespace.
Copy the below contents to k8s/clusters/demo/apps/echo.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps-echo
  namespace: flux-system
spec:
  interval: 5m
  dependsOn:
    - name: core-ingress-nginx
  sourceRef:
    kind: GitRepository
    name: kubernetes
  path: ./k8s/apps/demo/echo
  prune: true
Above, you see that dependsOn is linked to core-ingress-nginx, because it requires the ingress controller to be deployed.
Once flux does what it does, it will create an echo deployment.
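To verify the deployment (assuming the example.com host from the overlay actually resolves to your ingress controller and the TLS secret exists):

kubectl get pods --namespace echo
curl -k https://example.com/

The curl output should be the MESSAGE value from the patched ConfigMap: Hello from echo overlay.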
FAQ
Q: Why do you put some resources in flux-system and others not?
That is a good question. It is basically open for debate and up to your personal preference, but this is how I taught myself.
If I were using flux for a multi-tenant setup, I’d take a slightly different approach: flux-system would be only for flux itself, and each tenant would need to put the GitRepository and other resources in their respective namespace.
But for a single tenant setup, I keep the flux controller resources like GitRepository, ImagePolicy, ImageRepository, Alert etc. in the flux-system namespace, and the HelmReleases in the target namespace. This way, it is organized for me. But in all honesty, it is up to you.
Q: Do you have an example of what has been written here?
Yes, I have. Please take a look at my kubernetes repository here.
Q: The kustomization resource of flux can also do patches. Why don’t you use that?
The flux Kustomization resource does not have the ConfigMap generator I mentioned earlier. Also, I prefer having a kustomization.yaml on the overlay side. This way, I am more in control.
Q: You kept the rules in the base ingress.yaml empty, why?
Some clusters live in separate accounts or projects and may simply have different DNS domains. Or you may want to stay more flexible here.
Q: What are the differences between both kustomize kinds?
Flux has kustomize.toolkit.fluxcd.io, which is their own custom resource (see their docs). kustomization.kustomize.config.k8s.io, on the other hand, comes from the kubernetes-sigs kustomize project.
Q: Can you describe the exact flow of an Image Update?
Yes I can. The chronological order for flux to update an image is as follows (see the commands after this list to inspect the flow by hand):
- Image reflector controller scans all defined registries for new tags.
- Image reflector controller reflects the image metadata in kubernetes resources.
- Image automation controller scans all images against the defined image policies.
- Image automation controller updates the respective images that require an update based on the image policy.
- Image automation controller commits the changes.
- Source controller detects new hash in the defined repository.
- Source controller pulls new changes.
- The respective controllers (e.g. the helm and/or kustomize controllers) process the new changes.
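If you want to inspect or nudge this flow by hand, the flux cli can help (my-app is an illustrative ImageRepository name):

flux get images all --namespace flux-system
flux reconcile image repository my-app --namespace flux-system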