Flux repository for Homelab Kubernetes applications, synced automatically to clusters installed with the k8s-bootstrap project.
The structure is based on the flux2-kustomize-helm-example project from FluxCD.
To sync the homelab cluster environment of this repository to a cluster
running Flux Operator, configure a
sync block in your FluxInstance like so:
```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
spec:
  sync:
    name: k8s-apps
    kind: GitRepository
    url: "https://github.com/jack-luke/k8s-apps.git"
    ref: "refs/heads/main"
    path: "clusters/homelab"
```
```
├── apps              # Applications to be deployed to the cluster
│   ├── base
│   │   └── <app>/
│   └── homelab
├── clusters          # Environment definitions
│   └── homelab
├── infrastructure    # Services that are deployed to support applications
│   ├── configs
│   │   └── <app>/
│   └── controllers
│       └── <app>/
└── policy            # Policy engines that govern cluster compliance
    ├── controllers
    │   └── <app>/
    └── policies
```

/infrastructure/controllers contains the infra-controllers Kustomization
and resources to install the infrastructure service and any CRDs it utilises.
Depends on the policies Kustomization, to ensure that all policy rules are in
place prior to installing these resources.
/infrastructure/configs contains the infra-configs Kustomization and
resources to configure infrastructure services, such as Gateways,
ClusterIssuers etc.
Depends on the infra-controllers Kustomization to ensure that all CRDs
are installed prior to installing these resources.
/apps installs cluster applications. This contains the 'base' configuration for each
application, and a directory for every cluster environment defined in
/clusters containing overlays to configure the infrastructure and
applications for the target environment.
Depends on the infra-configs Kustomization, to ensure that all CRDs and
supporting services are installed prior to installing these resources.
/policy/controllers contains the policy-controllers Kustomization and the
policy engines deployed to the cluster, ensuring their CRDs exist before
policies are applied.
/policy/policies contains the policies Kustomization and all policy rules
that should be applied to the cluster.
Depends on the policy-controllers Kustomization, to ensure that all CRDs and
policy engines are installed prior to installing these resources.
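This dependency chain (policies → infra-controllers → infra-configs → apps) maps onto the `dependsOn` field of Flux Kustomizations. A minimal sketch, with resource names taken from the descriptions above and the source/path details assumed:

```yaml
# Sketch: infra-configs waits for infra-controllers, which in turn
# waits for the policies Kustomization (field names per the Flux API).
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-configs
  namespace: flux-system
spec:
  dependsOn:
    - name: infra-controllers   # CRDs must be installed before configs are applied
  interval: 10m                 # assumed reconciliation interval
  sourceRef:
    kind: GitRepository
    name: k8s-apps              # assumed source name, matching the FluxInstance sync above
  path: ./infrastructure/configs
  prune: true
```

Flux holds off reconciling a Kustomization until everything it depends on reports Ready, which is what enforces the install ordering described above.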
Environment-specific network configuration such as DNS names and external IPs are configured using Flux post build variable substitution.
Values are set using a ConfigMap named network-config in the flux-system
namespace. This ConfigMap just has to exist at Kustomize build time, so it can
be provisioned with the cluster if needed.
| ConfigMap Key | Description |
|---|---|
| EXTERNAL_ACCESS_CIDR | CIDR that is allowed to access cluster services from outside the cluster. Used in NetworkPolicies and Gateway resources for ingress rules. |
| KUBERNETES_API_CIDR | CIDR that the Kubernetes API can be reached on. Used in NetworkPolicies to allow apps API access. |
| KUBERNETES_API_PORT | Port that the Kubernetes API is exposed on. Used in NetworkPolicies to allow apps API access. |
| OIDC_GATEWAY_IP | External IP that MetalLB assigns to the Envoy OIDC Gateway. |
| INTERNAL_GATEWAY_IP | External IP that MetalLB assigns to the Envoy Internal Gateway. |
| GATEWAY_HOSTNAME | Hostname that the Envoy Gateway can be reached on externally. Used for Gateway routing and OIDC callback URLs. |
| VECTOR_HOSTNAME | Hostname that the Vector event forwarding route can be reached on externally. Used to allow Vector instances across the homelab to forward events. |
| VAULT_EXTERNAL_IP | External IP that MetalLB assigns to the Vault server. |
| VAULT_HOSTNAME | Hostname that the Vault server can be reached on externally. Used for OIDC provider URLs. |
| MEMCACHED_EXTERNAL_IP | External IP that MetalLB assigns to the Memcached Proxy. |
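As a sketch of how these values are consumed (assuming a Kustomization named infra-configs), Flux reads the ConfigMap at apply time via `postBuild.substituteFrom` and substitutes any `${VAR}` references in the built manifests:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-configs
  namespace: flux-system
spec:
  # ...source, path etc. omitted
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: network-config    # must exist in flux-system at build time
```

A manifest in the repository would then reference a key such as `${GATEWAY_HOSTNAME}` directly, and Flux replaces it before applying.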
All applications and services in the repository leverage Vault and Vault Secrets Operator for secrets management.
> **Note:** The details of the secret locations and authentication role names required for each secret are documented in the README.md files of the individual applications.
The diagrams below show how secrets are set up and managed in the cluster.
Certificates are delivered to the cluster via CI/CD during setup. This allows Vault to communicate over TLS, and allows Vault Secrets Operator to verify the Vault server's certificate.
Then, Vault Secrets Operator delivers CA bundles for the Vault server and Vault PKI issuer to Trust Manager so that it can distribute them cluster-wide.
Now the Vault server is set up with TLS, and all cluster workloads are able to trust the Vault server and any certificates it issues.
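A Trust Manager Bundle for this kind of cluster-wide distribution might look like the following sketch (resource and secret names are illustrative, not taken from this repository):

```yaml
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: vault-ca                  # illustrative name
spec:
  sources:
    - secret:
        name: vault-ca-secret     # CA bundle delivered by Vault Secrets Operator (assumed name)
        key: ca.crt
  target:
    # Trust Manager writes the bundle into a ConfigMap in each namespace,
    # where workloads can mount it to verify Vault-issued certificates.
    configMap:
      key: ca-bundle.crt
```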
To set up secure communications between workloads, a Cert Manager ClusterIssuer signs certificate requests with the Vault PKI issuing CA.
These signed certificates are then automatically rendered as TLS secrets for the workloads to use. Trust Manager distributes the CA bundle for the Vault PKI issuing CA to every namespace, which workloads then use to verify the certificates of their neighbours.
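As a hedged sketch of this pattern (the Vault address, PKI path, auth role, and workload names below are assumptions, not this repository's actual values):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer                         # illustrative name
spec:
  vault:
    server: https://vault.example.com:8200   # assumed Vault address
    path: pki/sign/cluster                   # assumed PKI signing role path
    caBundleSecretRef:                       # Vault server CA so the issuer can verify Vault's TLS
      name: vault-server-ca
      key: ca.crt
    auth:
      kubernetes:
        mountPath: /v1/auth/kubernetes
        role: cert-manager                   # assumed Vault auth role
        serviceAccountRef:
          name: cert-manager
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: influxdb-tls                         # illustrative workload certificate
spec:
  secretName: influxdb-tls                   # rendered as a TLS secret for the workload to mount
  dnsNames:
    - influxdb.monitoring.svc
  issuerRef:
    name: vault-issuer
    kind: ClusterIssuer
```

Workloads such as Grafana would then verify this certificate against the Vault PKI CA bundle that Trust Manager distributes to their namespace.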
The diagram below shows an example of how this works with Grafana reading data from InfluxDB.
- Vault Tutorial: Build a CA with an offline root
- Cert Manager Vault Issuer Tutorial
- Trust Manager Usage
Envoy Gateway is used to provide simple and secure access to cluster services, with OIDC authentication and TLS provided by Vault.
The diagram below shows how access to Grafana is secured via the Gateway with Envoy Gateway custom resources.
Central to OIDC authentication is the SecurityPolicy custom resource. When users access the Gateway, the SecurityPolicy forces all requests to be authenticated with OIDC before routing them to the backend service.
The general workflow is as follows:
- Unauthenticated user accesses Gateway via browser.
- SecurityPolicy detects the unauthenticated request and redirects the user to the Vault UI, submitting OIDC client credentials provided by Vault Secrets Operator.
- User logs into Vault UI, which is hosting the OIDC provider.
- Upon login success, Vault issues an OAuth 2.0 token and redirects to the Gateway using the callback URL.
- SecurityPolicy is now served a request with a valid OAuth 2.0 token, so it passes the request to the Grafana backend.
- Gateway terminates client TLS and establishes TLS communication to the Grafana backend.
- To prevent a second authentication in Grafana, basic auth credentials are injected into the request and the login form is disabled.
- User is served the requested resources from Grafana.
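The authentication steps above correspond roughly to an Envoy Gateway SecurityPolicy like the following sketch (the issuer URL, route name, secret name, and callback URL are assumptions):

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: grafana-oidc              # illustrative name
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: grafana               # route to the Grafana backend (assumed name)
  oidc:
    provider:
      # Assumed Vault OIDC provider discovery URL
      issuer: https://vault.example.com:8200/v1/identity/oidc/provider/default
    clientID: grafana
    clientSecret:                 # OIDC client credentials written by Vault Secrets Operator
      name: grafana-oidc-client
    # Assumed external hostname; maps to GATEWAY_HOSTNAME in network-config
    redirectURL: https://gateway.example.com/oauth2/callback
    scopes:
      - openid
```

Requests to the targeted route that lack a valid token are redirected to the issuer's authorization endpoint, which is what drives the browser flow listed above.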