Install FluxCD and ChartMuseum in a Kubernetes cluster
Preparing a monorepo CI/CD workflow by installing FluxCD & ChartMuseum
In the previous article we set up our local Kubernetes cluster, and now we can start having some real fun.
In this post, we'll install some cool infrastructure tools for our cluster that will make our lives much easier.
What I'm talking about are FluxCD and ChartMuseum.
All the code you will see in this article can be found here.
What is FluxCD?
Flux is a set of continuous and progressive delivery solutions for Kubernetes that are open and extensible.
Put simply, Flux handles the continuous deployment part for us by keeping our cluster in sync with the Kubernetes manifests we have in our repository.
Moreover, Flux also helps us with our infrastructure apps that are not part of the normal CD process. You'll see what I'm talking about a bit later.
If you want to read more about Flux, here's a link.
What is ChartMuseum?
ChartMuseum is an open-source repository for helm charts.
We'll use ChartMuseum to host the helm charts of our services. You'll see this in action in the upcoming articles.
Now that we have a plan and know what to do, let's start working.
Install FluxCD
SSH into the master node and run curl -s https://fluxcd.io/install.sh | sudo bash.
Flux requires read/write permissions to your repository, so you need to generate a personal access token.
Follow this link to create a token.
Now let's bootstrap flux:
export GITHUB_TOKEN=<personal_access_token>

# --owner      the owner of the repository
# --branch     the branch that you want Flux to sync
# --path       the path in the repo which Flux will sync
flux bootstrap github \
  --owner=cioti \
  --branch=main \
  --repository=devops-chapter \
  --path=clusters/staging \
  --personal
That's it. If everything went well, you should see the Flux controllers when you run kubectl get all -n flux-system.
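If you want an extra sanity check (optional, just how I like to verify the install), the Flux CLI can validate the setup for you:

# confirm the cluster meets the prerequisites and the controllers are healthy
flux check

# watch the controller pods come up
kubectl get pods -n flux-system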
Install ChartMuseum
We will mostly take advantage of FluxCD as part of our CI/CD pipeline, but we can also use it to install and sync our infrastructure apps like NGINX, ChartMuseum, etc.
Folder structure
Previously we configured Flux to watch the /clusters/staging path in our repo, so this is where we have to add all the app manifests that we want Flux to install.
We could just throw everything into this folder, but I prefer to keep things organized and split the infrastructure apps from the future services, so we'll go with the following folder structure:
├── apps
├── infrastructure
└── clusters
    ├── production
    └── staging
Kustomize
Kustomize is a configuration management tool natively supported by kubectl that helps us manage our Kubernetes manifests.
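Because kubectl supports it natively, you can always preview what a kustomization renders before Flux ever touches it. This is optional and purely for local inspection; the folder below is the one we create in the next step:

# render the manifests that a kustomization would produce
kubectl kustomize ./infrastructure/sources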
Let's chart!
First, we'll create a HelmRepository kind in /infrastructure/sources/chartmuseum.yaml.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: chartmuseum
spec:
  url: https://chartmuseum.github.io/charts
  interval: 10m
And the corresponding kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
resources:
- chartmuseum.yaml
Next, let's add all the ChartMuseum manifests in /infrastructure/chartmuseum.
Because I'm using a fresh local k8s cluster, I also have to create a storage class:
#storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
#namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: chartmuseum
  labels:
    name: chartmuseum
#persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: chartmuseum-pv
  labels:
    name: chartmuseum-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/chartmuseum # path on the disk of this PV
  nodeAffinity: # nodeAffinity is required when using local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s2 # hostname of a worker node
                - k8s3 # hostname of a worker node
#persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: chartmuseum-pvc
  namespace: chartmuseum
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
#release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ac-chartmuseum
  namespace: chartmuseum
spec:
  interval: 5m
  chart:
    spec:
      chart: chartmuseum
      version: "3.6.2"
      sourceRef:
        kind: HelmRepository
        name: chartmuseum
        namespace: flux-system
      interval: 1m
  values:
    env:
      open:
        STORAGE: local
        DISABLE_API: false
    persistence:
      enabled: true
      path: /mnt/chartmuseum
      existingClaim: chartmuseum-pvc
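The values above are only the ones I needed; if you're curious what else the chart exposes, you can dump its default values with the Helm CLI (assuming you have helm installed on your machine):

# add the ChartMuseum repo locally and print the chart's default values
helm repo add chartmuseum https://chartmuseum.github.io/charts
helm show values chartmuseum/chartmuseum --version 3.6.2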
#kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: chartmuseum
resources:
- storage-class.yaml
- namespace.yaml
- persistent-volume.yaml
- persistent-volume-claim.yaml
- release.yaml
NOTE: I had to manually SSH into the k8s2 and k8s3 machines and create the persistent volume mount directory with mkdir /mnt/chartmuseum.
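One optional extra: Flux's kustomize-controller will generate a kustomization.yaml for a folder that doesn't have one, but if you prefer to be explicit you can also add a top-level /infrastructure/kustomization.yaml that simply points at the two subfolders (this file is my own addition, Flux doesn't strictly need it):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sources
  - chartmuseum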
And finally, let's create /clusters/staging/infrastructure.yaml:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
Now push the changes and let Flux take it from here.
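If you'd like to watch the rollout, these are the commands I usually reach for; note that the service name and port below come from my HelmRelease name and the chart's defaults, so adjust them if yours differ:

# check that the Git sync and the Helm release reconciled successfully
flux get kustomizations -n flux-system
flux get helmreleases -n chartmuseum

# port-forward to ChartMuseum and hit its health endpoint
kubectl port-forward svc/ac-chartmuseum -n chartmuseum 8080:8080
curl http://localhost:8080/health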
If you encounter any issues send me a message and I'll be happy to help.