This repository contains the Helm chart for deploying LitmusEdge. The documentation below covers chart prerequisites, installation methods (from the public OCI registry or the local sources in this repo), common configuration options, and lifecycle operations such as upgrades and uninstalling.
- A running Kubernetes cluster (v1.24 or newer recommended).
- `kubectl` configured to communicate with your cluster.
- Helm 3.11 or newer installed on the workstation that will manage the deployment.
- Cluster policies must permit pods to run with the `NET_ADMIN` Linux capability. The chart always requests this capability for the LitmusEdge container, as sketched below.
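For reference, the capability is requested through the container's security context. The following is a minimal sketch using standard Kubernetes fields; the chart's rendered output may include additional settings around it:

```yaml
# Sketch only: the NET_ADMIN capability as requested on the LitmusEdge container.
# Field names are standard Kubernetes; surrounding detail may differ in the chart.
securityContext:
  capabilities:
    add:
      - NET_ADMIN
```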
The Helm project provides an installation script that downloads verified binaries from the official release archives. Refer to the official Helm installation guide for platform-specific guidance.
Confirm the installation with:
```bash
helm version
```

The `litmusedge` chart deploys the core LitmusEdge services and persistent storage required for the LitmusEdge control plane. The chart is published to the LitmusEdge OCI registry for connected environments and can also be installed directly from this repository.
- Primary chart name: `litmusedge`
- Default container image:
  - Repository: `litmusedge.azurecr.io/litmusedge-std-docker`
  - Tag: `latest`
Set `image.repository` and `image.tag` to match the version you intend to deploy, especially when pinning to a specific LitmusEdge release for production or air-gapped environments.
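As a minimal illustration, a values override that pins the image might look like the following (the tag is a placeholder for the release you intend to run):

```yaml
# my-values.yaml (illustrative) — pin the LitmusEdge image
image:
  repository: litmusedge.azurecr.io/litmusedge-std-docker
  tag: <pinned-version>   # placeholder; use the release you have validated
```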
- Create or select a namespace for LitmusEdge. The examples below use `litmusedge-system`:

  ```bash
  kubectl create namespace litmusedge-system
  ```
- Choose the installation source that fits your workflow. Both options deploy the same release artifacts:
  - Using the public OCI registry

    ```bash
    helm install litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
      --version 4.0.1 \
      --namespace litmusedge-system
    ```

    Provide `--values my-values.yaml` to supply custom overrides and `--create-namespace` if the namespace does not already exist. Use `helm search repo` or `helm show chart oci://litmusedge.azurecr.io/helm/litmusedge --version <version>` to review newly published releases before installation.
  - Using the local chart sources

    ```bash
    git clone https://github.com/<your-org>/le-helm.git
    cd le-helm
    helm install litmusedge . \
      --namespace litmusedge-system \
      --values my-values.yaml
    ```
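After either installation path completes, you can confirm the release and its pods with standard Helm and kubectl commands (not chart-specific):

```bash
helm status litmusedge --namespace litmusedge-system
kubectl get pods --namespace litmusedge-system
```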
Update your override file with the desired changes (for example, a new `image.tag`) and run:
```bash
helm upgrade litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --values my-values.yaml
```

Only required for local chart: change the chart reference to the local path when running the upgrade from a cloned repository.
```bash
helm upgrade litmusedge . \
  --namespace litmusedge-system \
  --values my-values.yaml
```

When targeting a newer LitmusEdge release, set `image.tag` (and optionally `image.repository`) to the desired build before running the upgrade, and update the `--version` flag so Helm retrieves the matching chart package from the OCI registry.
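To review what was deployed after an upgrade, or to roll back if the new revision misbehaves, the standard Helm lifecycle commands apply (the revision number below is illustrative):

```bash
helm history litmusedge --namespace litmusedge-system
helm rollback litmusedge 1 --namespace litmusedge-system   # roll back to revision 1
```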
Remove the release while leaving the namespace in place:
```bash
helm uninstall litmusedge --namespace litmusedge-system
```

To delete the namespace and all remaining resources managed outside Helm, follow up with:
```bash
kubectl delete namespace litmusedge-system
```

Deleting the namespace removes any persistent volume claims or secrets that remain after the Helm release is gone. Omit the `kubectl delete` command if you plan to redeploy to the same namespace shortly thereafter.
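If you are unsure what would be removed, list the remaining objects first (standard kubectl usage):

```bash
kubectl get pvc,secret --namespace litmusedge-system
```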
Override the image values so the cluster can pull from an internal registry:
```yaml
# my-values.yaml
image:
  repository: <your-private-registry>/litmusedge-std-docker
  tag: <pinned-version>
imagePullSecrets:
  - name: <registry-secret>
```

Ensure the secret exists in the target namespace prior to installation:
```bash
kubectl create secret docker-registry <registry-secret> \
  --namespace litmusedge-system \
  --docker-server=<your-private-registry> \
  --docker-username=<user> \
  --docker-password=<password>
```

Pass the file to `helm install`/`helm upgrade` as shown above.
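You can confirm the secret is present in the namespace before installing (standard kubectl usage):

```bash
kubectl get secret <registry-secret> --namespace litmusedge-system
```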
The following table highlights commonly customized settings for the litmusedge chart. All values are defined in values.yaml.
| Value | Description | Default |
|---|---|---|
| `namespace` | Optional override for the namespace metadata applied to chart objects. | `""` (release namespace) |
| `nameOverride` / `fullnameOverride` | Override generated resource names. | `""` |
| `image.repository` | Container image registry/repository. | `litmusedge.azurecr.io/litmusedge-std-docker` |
| `image.tag` | Image tag to deploy (pin to a specific release for production). | `latest` |
| `image.pullPolicy` | Image pull policy. | `IfNotPresent` |
| `imagePullSecrets` | List of Kubernetes secrets for pulling from private registries. | `[]` |
| `serviceAccount.create` | Whether Helm should create a service account. | `true` |
| `serviceAccount.name` | Name of an existing service account to use. | `""` |
| `serviceAccount.annotations` | Extra annotations for the service account. | `{}` |
| `dataVolume.enabled` | Mount an application data volume. | `true` |
| `dataVolume.mountPath` | Mount path for the data volume. | `/var/lib/litmusedge` |
| `persistence.create` | Whether Helm should create/manage the PVC (`true`) or bind to an existing claim (`false`). | `true` |
| `persistence.pvcName` | Name of the PVC to create or bind. Defaults to `<release>-<dataVolume.name>`. | `""` |
| `persistence.storageClassName` | StorageClass for the persistent volume claim. | `""` |
| `persistence.accessModes` | PVC access modes. | `[ReadWriteOnce]` |
| `persistence.size` | Requested PVC size. | `10Gi` |
| `service.enabled` | Whether to create a Service for the deployment. | `true` |
| `service.type` | Kubernetes service type exposing LitmusEdge. | `ClusterIP` |
| `service.port` / `service.targetPort` | External / container ports when no custom list is supplied. | `443` / `443` |
| `service.annotations` / `service.labels` | Extra metadata for the Service. | `{}` |
| `containerPorts` | Optional list of container ports exposed by the Deployment. | `[]` (defaults to the service mapping or `443`) |
| `resources.requests` | CPU/memory requests for the pod. | `1` / `1024Mi` |
| `resources.limits` | CPU/memory limits for the pod. | `4` / `4096Mi` |
| `nodeSelector` | Node selector labels. | `{}` |
| `tolerations` | Tolerations for taints. | `[]` |
| `affinity` | Pod affinity/anti-affinity rules. | `{}` |
Refer to `values.yaml` for additional options and nested structures that can be overridden via your custom `values.yaml` or the `--set` flag.
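For quick experiments, the same nested values can be overridden on the command line with `--set`; the example below assumes the nested layout shown in the table above:

```bash
helm upgrade litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --set image.tag=<pinned-version> \
  --set persistence.size=20Gi
```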
You can bind the Deployment to a pre-created PersistentVolumeClaim by setting:
```yaml
persistence:
  create: false
  pvcName: litmusedge-data
```

When `persistence.create` is `false`:
- The chart does not create a new PVC.
- The Deployment's volume directly references `persistence.pvcName` (or the default name if omitted).
- Values such as `persistence.size`, `storageClassName`, and `accessModes` are ignored since the PVC already exists.
To have Helm manage the PVC lifecycle, keep `persistence.create` set to `true`. Helm will create the claim if it does not exist, using the name in `persistence.pvcName` (or the default release-based name when unset), and reuse it on subsequent upgrades.
These options cover both pre-provisioned storage and releases where the chart should manage its own claim.
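For the Helm-managed case, a values snippet consistent with the table above might look like this (storage class and size are placeholders for your environment):

```yaml
persistence:
  create: true
  storageClassName: <storage-class>   # placeholder; must exist in your cluster
  accessModes:
    - ReadWriteOnce
  size: 20Gi                          # placeholder size
```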
LitmusEdge can expose ports in two ways. You can define `containerPorts` directly on the Deployment, or define `service.ports` on the Service. If neither is set, the chart uses a single port `443`.

All ports on the Deployment must be numbers only. Do not use strings like `"443"`.

When you set `containerPorts`, the template uses them as-is and ignores `service.port` and `service.ports` for the Deployment's port list.
`values.yaml`

```yaml
containerPorts:
  - name: https
    containerPort: 443
    protocol: TCP
  - name: metrics
    containerPort: 9100
    protocol: TCP
service:
  enabled: true
  type: ClusterIP
  port: 443
  targetPort: 443
```

Effect
- Deployment exposes container ports `443` and `9100`
- Service exposes a single port `443` to `targetPort: 443`
When you define `service.ports`, the template derives each Deployment container port from each item:
- If `targetPort` is a number, it uses that number for the container port
- If `targetPort` is a string, it uses the numeric `port` for the container port, and copies the string into the port `name` if the name was empty
`values.yaml`

```yaml
containerPorts: [] ## not set
service:
  enabled: true
  type: ClusterIP
  ports:
    - name: http
      port: 80          ## number
      targetPort: 8080  ## number
      protocol: TCP
    - port: 443         ## number
      targetPort: https ## string
      protocol: TCP
```

Rendered behavior
- Deployment container ports:
  - `8080` named `http`
  - `443` named `https`
- Service ports:
  - `80 -> 8080/TCP` named `http`
  - `443 -> https/TCP` named `https`
If you set a single `service.port` and `service.targetPort` as a string, the Service uses the string, while the Deployment uses the numeric `service.port` as its container port.
`values.yaml`

```yaml
containerPorts: []
service:
  enabled: true
  type: ClusterIP
  port: 8443        ## number
  targetPort: https ## string
  name: https
  protocol: TCP
```

Rendered behavior
- Deployment container ports:
  - `8443`
- Service ports:
  - `8443 -> https/TCP` named `https`
You can run LitmusEdge without a Service. The Deployment will still expose a container port so sidecars or node-local traffic can connect.
`values.yaml`

```yaml
service:
  enabled: false
containerPorts: [] ## optional
```

Rendered behavior
- No Service object is created
- Deployment container ports:
  - If `containerPorts` is set, those ports are used
  - Else a single port `443` is used by default
- If `containerPorts` is non-empty, the Deployment uses it and ignores `service.ports` for the Deployment port list
- Else if `service.ports` is set, the Deployment derives per-port container ports as described above
- Else the Deployment exposes a single container port
  - Uses numeric `service.targetPort` when present
  - Else uses numeric `service.port`
  - Else falls back to `443`
- Deployment container ports
  - `containerPorts[].containerPort` must be a number, not a string
  - `protocol` should be `TCP` or `UDP`
- Service ports
  - `ports[].port` must be a number
  - `ports[].targetPort` can be a number or a string
  - `name` should be a valid DNS label if set
The deployment requires the `NET_ADMIN` capability on the LitmusEdge container. If the pod stays in `Pending` or reports errors such as `CreateContainerConfigError` or `Error creating: pods ... are forbidden: unable to validate against any pod security policy`, verify that the namespace and service account are allowed to use this capability. Cluster-level admission controllers such as Pod Security Standards, OPA Gatekeeper, or custom PSP replacements may need explicit exceptions that allow `NET_ADMIN`. Update the relevant policies and re-run `kubectl describe pod <pod-name>` to confirm the capability is accepted.
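How you grant the exception depends on your admission setup. As one illustration, clusters that enforce the built-in Pod Security Standards can relax the enforcement level on the LitmusEdge namespace (adjust this to your own policy model before using it):

```bash
# Illustration only: allow privileged-level pods (including added capabilities
# such as NET_ADMIN) in the LitmusEdge namespace under Pod Security Standards.
kubectl label namespace litmusedge-system \
  pod-security.kubernetes.io/enforce=privileged --overwrite
```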
If the PersistentVolumeClaim (PVC) remains in `Pending` or fails to bind during installation, explicitly set `persistence.storageClassName` to a StorageClass that exists in your cluster (for example, `persistence.storageClassName="default"`). Apply the override in your values file or by passing `--set persistence.storageClassName=<class-name>` on the Helm command line.
```bash
helm install litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --namespace litmusedge-system \
  --set persistence.storageClassName=default
```

To bind the Deployment to a static PVC name, set `persistence.create=false` and supply `persistence.pvcName` with the existing resource. This skips PVC creation, allowing the release to reference pre-provisioned storage or keep a predictable claim name.
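If the claim does not exist yet, you can pre-provision it with a standard PersistentVolumeClaim manifest along these lines (the storage class and size are placeholders for your environment); the install command below then binds the release to that claim:

```yaml
# pvc.yaml (illustrative) — pre-provisioned claim referenced via persistence.pvcName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: litmusedge-data
  namespace: litmusedge-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <storage-class>   # placeholder
  resources:
    requests:
      storage: 10Gi
```

Apply it with `kubectl apply -f pvc.yaml` before running the install.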
```bash
helm install litmusedge . \
  --namespace litmusedge-system \
  --set persistence.create=false \
  --set persistence.pvcName=litmusedge-data
```

Use `helm template` to review the resources Helm will apply before deploying:
```bash
helm template litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --values my-values.yaml
```

Only required for local chart: render manifests from the cloned sources by pointing Helm at the local path.
```bash
helm template litmusedge . \
  --namespace litmusedge-system \
  --values my-values.yaml
```
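For an additional sanity check, you can lint the local chart sources or dry-run the rendered manifests against the API server; both are standard Helm and kubectl workflows rather than chart-specific commands:

```bash
# Lint the chart with your overrides applied (from the cloned repository).
helm lint . --values my-values.yaml

# Validate the rendered manifests server-side without creating anything.
helm template litmusedge . --namespace litmusedge-system --values my-values.yaml \
  | kubectl apply --dry-run=server --namespace litmusedge-system -f -
```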