
flux2-hub-spoke-example


This repository showcases how to run Flux on a central Kubernetes cluster and have it manage the GitOps continuous delivery of apps and infrastructure workloads on multiple clusters.

Prerequisites

For this example, you need to install the following tools:

  • flux (the Flux CLI)
  • kind
  • kubectl
  • kustomize

On macOS and Linux, you can install them using the following commands:

brew install fluxcd/tap/flux
brew install kind kubectl kustomize

To follow the guide, you'll need a GitHub account and a personal access token that can create repositories (check all permissions under repo).

Repository structure

The Git repository contains the following top directories:

  • deploy dir contains the HelmRelease definitions for the apps and infrastructure workloads
  • clusters dir contains the apps and infrastructure Kustomize overlays for each target cluster
  • hub dir contains the Flux configuration for the central cluster and the sync definitions for each target cluster

├── deploy
│   ├── apps
│   │   ├── podinfo.yaml
│   │   └── kustomization.yaml
│   ├── infra-configs
│   │   ├── cluster-issuers.yaml
│   │   └── kustomization.yaml
│   ├── infra-controllers
│   │   ├── cert-manager.yaml
│   │   ├── ingress-nginx.yaml
│   │   └── kustomization.yaml
│   └── tenants
├── clusters
│   ├── production
│   └── staging
│       ├── apps
│       │   ├── kustomization.yaml
│       │   └── podinfo-values.yaml
│       ├── infra-configs
│       ├── infra-controllers
│       └── tenants
└── hub
    ├── flux-system
    ├── production.yaml
    └── staging.yaml
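
To give an idea of what the deploy entries contain, here is a sketch of what deploy/apps/podinfo.yaml could look like. The chart source, version and intervals below are illustrative, not the repository's exact definitions:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 30m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 30m
  chart:
    spec:
      chart: podinfo
      version: "6.x"
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:
    replicaCount: 1

Per-cluster overlays such as clusters/staging/apps/podinfo-values.yaml then patch only the values that differ between environments.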

Bootstrap the cluster fleet

To bootstrap the cluster fleet, first create the Kubernetes kind clusters by running the following command:

make fleet-up

The above command will create the following clusters:

  • flux-hub - the central cluster where Flux will run
  • flux-staging - the target cluster where Flux will deploy the clusters/staging workloads
  • flux-production - the target cluster where Flux will deploy the clusters/production workloads
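
Under the hood, the fleet scripts use kind. As a rough sketch (not the repository's actual scripts), each cluster could be created with kind create cluster --config from a minimal config such as:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: flux-hub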

After the clusters are created, kubeconfig files for staging and production are generated and stored as Kubernetes secrets in the flux-hub cluster, so that Flux can access the target clusters.

$ kubectl get secrets -A
NAMESPACE     NAME                     TYPE
production    cluster-kubeconfig       Opaque
staging       cluster-kubeconfig       Opaque
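
These secrets are what make the hub-and-spoke model work: the Flux Kustomizations in the hub dir reference them via spec.kubeConfig, so reconciliation runs on the hub while the objects are applied to the remote cluster. As a sketch of what hub/staging.yaml might contain (the name, path and interval are illustrative):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: staging-apps
  namespace: staging
spec:
  interval: 10m
  path: ./clusters/staging/apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  kubeConfig:
    secretRef:
      name: cluster-kubeconfig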

Fork this repository on your personal GitHub account and export your GitHub access token, username and repo name:

export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>
export GITHUB_REPO=<repository-name>

Then, bootstrap Flux on the hub cluster:

flux bootstrap github \
    --context=kind-flux-hub \
    --owner=${GITHUB_USER} \
    --repository=${GITHUB_REPO} \
    --branch=main \
    --personal \
    --path=hub

The bootstrap command commits the manifests for the Flux components in the hub/flux-system dir and creates a deploy key with read-only access on GitHub, so that Flux can pull changes from the repository inside the cluster.
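
Besides the component manifests, bootstrap also commits the sync configuration. The generated hub/flux-system/gotk-sync.yaml looks roughly like this (the repository URL is shown with placeholders):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system
  url: ssh://git@github.com/${GITHUB_USER}/${GITHUB_REPO}
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./hub
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system

The flux-system Kustomization syncs the hub dir, which in turn contains the Kustomizations that target the staging and production clusters.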

Wait for Flux to reconcile the infrastructure and apps workloads on the target clusters with:

watch flux get kustomizations -A

Once the Flux Kustomizations are ready, you can list the workloads deployed in the target clusters. For example, in the staging cluster:

$ kubectl --context kind-flux-staging get pods -A
NAMESPACE            NAME                                                 READY
cert-manager         cert-manager-6dc66985d4-fkv95                        1/1
cert-manager         cert-manager-cainjector-c7d4dbdd9-2kr6f              1/1
cert-manager         cert-manager-webhook-847d7676c9-vqn6f                1/1
ingress-nginx        ingress-nginx-controller-55474d95c5-mq8mj            1/1
podinfo              podinfo-66f4ccb98c-bt99t                             1/1

Customize the workloads

If you want to ship workloads to the production cluster with a different configuration, you can use Kustomize patches in the clusters/production overlay to change the Flux HelmRelease values.

For example, to change the number of replicas for ingress-nginx in the production cluster, you can create a patch file in clusters/production/infra-controllers/ingress-nginx-values.yaml:

apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: ingress-nginx
spec:
  values:
    controller:
      replicaCount: 2

Then reference the patch in the clusters/production/infra-controllers/kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../../deploy/infra-controllers
patches:
  - target:
      kind: HelmRelease
      name: ingress-nginx
    path: ingress-nginx-values.yaml

Verify that the patch is correctly applied with:

kustomize build ./clusters/production/infra-controllers/
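
If the patch applies cleanly, the rendered HelmRelease should contain the merged values. Abridged to the fields touched by the patch, the output should look similar to:

apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: ingress-nginx
spec:
  values:
    controller:
      replicaCount: 2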

After you commit the changes to the repository, Flux will automatically apply the changes.

You can trigger a manual reconciliation with:

flux -n production reconcile ks infra-controllers --with-source

To verify the new replica count, list the deployments in the production cluster:

$ kubectl --context kind-flux-production -n ingress-nginx get deploy
NAME                       READY   UP-TO-DATE
ingress-nginx-controller   2/2     2

Testing

After making changes to the manifests, you can validate them locally with kubeconform by running:

make validate

Any change to the Kubernetes manifests or to the repository structure should be validated in CI before a pull request is merged into the main branch and synced to the clusters.

This repository contains the following GitHub CI workflows:

  • the test workflow validates that the Kubernetes manifests and Kustomize overlays conform to the Flux OpenAPI spec (a sketch of such a workflow follows this list)
  • the e2e workflow starts the Kubernetes cluster fleet in CI and tests the setup by running Flux on kind clusters
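
As a rough sketch of the test workflow (the real one lives under .github/workflows and may differ in tooling and step names):

name: test
on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Go is preinstalled on ubuntu-latest runners; kubeconform is a Go tool
      - name: Install kubeconform
        run: go install github.com/yannh/kubeconform/cmd/kubeconform@latest
      # Assumes kustomize is available on the runner (it is preinstalled
      # on GitHub-hosted Ubuntu images) and that make validate wires the
      # two tools together
      - name: Validate manifests and overlays
        run: make validate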

Teardown

To delete the cluster fleet, run the following command:

make fleet-down