LitmusEdge Helm Chart

This repository contains the Helm chart for deploying LitmusEdge. The documentation below covers chart prerequisites, installation methods (from the public OCI registry or the local sources in this repo), common configuration options, and lifecycle operations such as upgrades and uninstalling.

Prerequisites

  • A running Kubernetes cluster (v1.24 or newer recommended).
  • kubectl configured to communicate with your cluster.
  • Helm 3.11 or newer installed on the workstation that will manage the deployment.
  • Cluster policies must permit pods to run with the NET_ADMIN Linux capability. The chart always requests this capability for the LitmusEdge container.
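
  If your cluster enforces the Pod Security Standards, note that NET_ADMIN is rejected at the baseline and restricted levels, so the target namespace typically needs the privileged level. A minimal illustration, to be adapted to your own policy tooling:

  kubectl label namespace litmusedge-system \
    pod-security.kubernetes.io/enforce=privileged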

Installing Helm on Linux (from the official source)

The Helm project provides an installation script that downloads verified binaries from the official release archives. Refer to the official Helm installation guide for platform-specific guidance.
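
For example, the script can be fetched and run as documented on helm.sh:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh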

Confirm the installation with:

helm version
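
The output reports the client build, along these lines (your version, commit, and Go toolchain will differ):

version.BuildInfo{Version:"v3.14.4", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.21.9"}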

Overview

The litmusedge chart deploys the core LitmusEdge services and persistent storage required for the LitmusEdge control plane. The chart is published to the LitmusEdge OCI registry for connected environments and can also be installed directly from this repository.

Chart details

  • Primary chart name: litmusedge
  • Default container image:
    • Repository: litmusedge.azurecr.io/litmusedge-std-docker
    • Tag: latest

Set image.repository and image.tag to match the version you intend to deploy, especially when pinning to a specific LitmusEdge release for production or air-gapped environments.
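
For example, a minimal override file that pins the image (the tag is a placeholder; substitute the release you intend to run):

# my-values.yaml
image:
  repository: litmusedge.azurecr.io/litmusedge-std-docker
  tag: "<pinned-version>"

Pinning an explicit tag instead of latest keeps rollouts deliberate and upgrades reproducible.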

Installing the chart

  1. Create or select a namespace for LitmusEdge. The examples below use litmusedge-system:

    kubectl create namespace litmusedge-system
  2. Choose the installation source that fits your workflow. Both options deploy the same release artifacts:

    • Using the public OCI registry

      helm install litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
        --version 4.0.1 \
        --namespace litmusedge-system

      Provide --values my-values.yaml to supply custom overrides and --create-namespace if the namespace does not already exist; a combined example follows this list. Because OCI registries are not indexed by helm search repo, use helm show chart oci://litmusedge.azurecr.io/helm/litmusedge --version <version> to review a newly published release before installation.

    • Using the local chart sources

      git clone https://github.com/<your-org>/le-helm.git
      cd le-helm
      helm install litmusedge . \
        --namespace litmusedge-system \
        --values my-values.yaml
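
As referenced above, a first-time installation from the OCI registry that also creates the namespace and applies overrides can combine the flags:

helm install litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --create-namespace \
  --values my-values.yaml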

Upgrading to a newer LitmusEdge version

Update your override file with the desired changes (for example, a new image.tag) and run:

helm upgrade litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --values my-values.yaml

If you installed from the local chart sources, run the upgrade against the local path instead:

helm upgrade litmusedge . \
  --namespace litmusedge-system \
  --values my-values.yaml

When targeting a newer LitmusEdge release, set image.tag (and optionally image.repository) to the desired build before running the upgrade, and update the --version flag so Helm retrieves the matching chart package from the OCI registry.
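
For example, an upgrade that moves both the chart and the application image forward might look like this (both versions are placeholders):

helm upgrade litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version <new-chart-version> \
  --namespace litmusedge-system \
  --set image.tag=<new-image-tag> \
  --values my-values.yaml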

Uninstalling the chart

Remove the release while leaving the namespace in place:

helm uninstall litmusedge --namespace litmusedge-system

To delete the namespace and all remaining resources managed outside Helm, follow up with:

kubectl delete namespace litmusedge-system

Deleting the namespace removes any persistent volume claims or secrets that remain after the Helm release is gone. Omit the kubectl delete command if you plan to redeploy to the same namespace shortly thereafter.
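
Before removing the namespace, you can list what remains and would be deleted with it, for example:

kubectl get pvc,secrets --namespace litmusedge-system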

Air-gapped deployments

Override the image values so the cluster can pull from an internal registry:

# my-values.yaml
image:
  repository: <your-private-registry>/litmusedge-std-docker
  tag: <pinned-version>
imagePullSecrets:
  - name: <registry-secret>

Ensure the secret exists in the target namespace prior to installation:

kubectl create secret docker-registry <registry-secret> \
  --namespace litmusedge-system \
  --docker-server=<your-private-registry> \
  --docker-username=<user> \
  --docker-password=<password>

Pass the values file to helm install or helm upgrade as shown above.

Configuration values

The following table highlights commonly customized settings for the litmusedge chart. All values are defined in values.yaml.

| Value | Description | Default |
| --- | --- | --- |
| namespace | Optional override for the namespace metadata applied to chart objects. | "" (release namespace) |
| nameOverride / fullnameOverride | Override generated resource names. | "" |
| image.repository | Container image registry/repository. | litmusedge.azurecr.io/litmusedge-std-docker |
| image.tag | Image tag to deploy (pin to a specific release for production). | latest |
| image.pullPolicy | Image pull policy. | IfNotPresent |
| imagePullSecrets | List of Kubernetes secrets for pulling from private registries. | [] |
| serviceAccount.create | Whether Helm should create a service account. | true |
| serviceAccount.name | Name of an existing service account to use. | "" |
| serviceAccount.annotations | Extra annotations for the service account. | {} |
| dataVolume.enabled | Mount an application data volume. | true |
| dataVolume.mountPath | Mount path for the data volume. | /var/lib/litmusedge |
| persistence.create | Whether Helm should create/manage the PVC (true) or bind to an existing claim (false). | true |
| persistence.pvcName | Name of the PVC to create or bind. Defaults to <release>-<dataVolume.name>. | "" |
| persistence.storageClassName | StorageClass for the persistent volume claim. | "" |
| persistence.accessModes | PVC access modes. | [ReadWriteOnce] |
| persistence.size | Requested PVC size. | 10Gi |
| service.enabled | Whether to create a Service for the deployment. | true |
| service.type | Kubernetes service type exposing LitmusEdge. | ClusterIP |
| service.port / service.targetPort | External / container ports when no custom list is supplied. | 443 / 443 |
| service.annotations / service.labels | Extra metadata for the Service. | {} |
| containerPorts | Optional list of container ports exposed by the Deployment. | [] (defaults to the service mapping or 443) |
| resources.requests | CPU/memory requests for the pod. | 1 / 1024Mi |
| resources.limits | CPU/memory limits for the pod. | 4 / 4096Mi |
| nodeSelector | Node selector labels. | {} |
| tolerations | Tolerations for taints. | [] |
| affinity | Pod affinity/anti-affinity rules. | {} |

Refer to values.yaml for additional options and nested structures that can be overridden via your custom values.yaml or the --set flag.
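
For example, the same overrides can be applied from the command line without a values file (the values shown are illustrative):

helm upgrade litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --set service.type=LoadBalancer \
  --set image.tag=<pinned-version>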

Using an existing PVC

You can bind the Deployment to a pre-created PersistentVolumeClaim by setting:

persistence:
  create: false
  pvcName: litmusedge-data

When persistence.create is false:

  • The chart does not create a new PVC.
  • The Deployment’s volume directly references persistence.pvcName (or the default name if omitted).
  • Values such as persistence.size, storageClassName, and accessModes are ignored since the PVC already exists.

To have Helm manage the PVC lifecycle, keep persistence.create set to true. Helm will create the claim if it does not exist using the name in persistence.pvcName (or the default release-based name when unset) and reuse it on subsequent upgrades.

These options cover both pre-provisioned storage and charts that should manage their own claims.
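
If you pre-provision the claim yourself, a minimal PVC matching the example above could look like this (storage class, size, and file name are illustrative):

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: litmusedge-data
  namespace: litmusedge-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply it with kubectl apply -f pvc.yaml before installing the release with persistence.create=false.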

Port configuration

LitmusEdge can expose ports in two ways. You can define containerPorts directly on the Deployment, or define service.ports on the Service. If neither is set, the chart uses a single port 443.

All ports on the Deployment must be numbers only. Do not use strings like "443".

1) One or many container ports on the Deployment

When you set containerPorts, the template uses them as-is and ignores service.port and service.ports for the Deployment's port list.

values.yaml

containerPorts:
  - name: https
    containerPort: 443
    protocol: TCP
  - name: metrics
    containerPort: 9100
    protocol: TCP

service:
  enabled: true
  type: ClusterIP
  port: 443
  targetPort: 443

Effect

  • Deployment exposes container ports 443 and 9100
  • Service exposes a single port 443 to targetPort: 443

2) Multi-port Service with mixed targetPort types

When you define service.ports, the template derives each Deployment container port from each item:

  • If targetPort is a number, it uses that number for the container port
  • If targetPort is a string, it uses the numeric service port for the container port and copies the string into the port name when no name was set

values.yaml

containerPorts: []   ## not set

service:
  enabled: true
  type: ClusterIP
  ports:
    - name: http
      port: 80          ## number
      targetPort: 8080  ## number
      protocol: TCP
    - port: 443         ## number
      targetPort: https ## string
      protocol: TCP

Rendered behavior

  • Deployment container ports:

    • 8080 named http
    • 443 named https
  • Service ports:

    • 80 -> 8080/TCP named http
    • 443 -> https/TCP named https

3) Single-port Service with string targetPort

If you set a single service.port and service.targetPort as a string, the Service uses the string, while the Deployment uses the numeric service.port as its container port.

values.yaml

containerPorts: []

service:
  enabled: true
  type: ClusterIP
  port: 8443          ## number
  targetPort: https   ## string
  name: https
  protocol: TCP

Rendered behavior

  • Deployment container ports:

    • 8443
  • Service ports:

    • 8443 -> https/TCP named https

4) Service disabled

You can run LitmusEdge without a Service. The Deployment will still expose a container port so sidecars or node-local traffic can connect.

values.yaml

service:
  enabled: false

containerPorts: []   ## optional

Rendered behavior

  • No Service object is created

  • Deployment container ports:

    • If containerPorts is set, those ports are used
    • Else a single port 443 is used by default

Behavior reference

  • If containerPorts is non-empty, the Deployment uses it and ignores service.ports for the Deployment port list

  • Else if service.ports is set, the Deployment derives per-port container ports as described above

  • Else the Deployment exposes a single container port

    • Uses numeric service.targetPort when present
    • Else uses numeric service.port
    • Else falls back to 443
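
Expressed in Helm template syntax, the precedence above amounts to roughly the following sketch; it is an illustration of the documented behavior, not the chart's literal template:

{{- /* Sketch only: port precedence for the Deployment's container ports */ -}}
{{- if .Values.containerPorts }}
ports:
  {{- toYaml .Values.containerPorts | nindent 2 }}
{{- else if .Values.service.ports }}
ports:
  {{- range .Values.service.ports }}
  # a numeric targetPort wins; a string targetPort falls back to the numeric port
  - containerPort: {{ if kindIs "string" .targetPort }}{{ .port }}{{ else }}{{ .targetPort }}{{ end }}
  {{- end }}
{{- else }}
ports:
  # single-port fallback: numeric targetPort, else port, else 443
  - containerPort: {{ if and .Values.service.targetPort (not (kindIs "string" .Values.service.targetPort)) }}{{ .Values.service.targetPort }}{{ else }}{{ .Values.service.port | default 443 }}{{ end }}
{{- end }}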

Type requirements

  • Deployment container ports

    • containerPorts[].containerPort must be a number, not a string
    • protocol should be TCP or UDP
  • Service ports

    • ports[].port must be a number
    • ports[].targetPort can be a number or a string
    • name should be a valid DNS label if set

Troubleshooting

Pod admission or start failures due to missing NET_ADMIN capability

The deployment requires the NET_ADMIN capability on the LitmusEdge container. If the pod remains in Pending with messages such as CreateContainerConfigError or Error creating: pods ... are forbidden: unable to validate against any pod security policy, verify that the namespace and service account are allowed to use this capability. Cluster-level admission controllers such as Pod Security Standards, OPA Gatekeeper, or custom PSP replacements may need explicit exceptions that allow NET_ADMIN. Update the relevant policies and re-run kubectl describe pod <pod-name> to confirm the capability is accepted.
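
For reference, the capability request corresponds to a container security context along these lines (a sketch of the standard Kubernetes field, not the chart's exact rendered output):

securityContext:
  capabilities:
    add:
      - NET_ADMIN

Any admission policy must permit containers that add NET_ADMIN; under the Pod Security Standards this generally means labeling the namespace for the privileged level.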

PersistentVolumeClaim creation failures

If the PersistentVolumeClaim (PVC) remains in Pending or fails to bind during installation, explicitly set persistence.storageClassName to a StorageClass that exists in your cluster (for example, persistence.storageClassName="default"). Apply the override in your values file or by passing --set persistence.storageClassName=<class-name> on the Helm command line.

helm install litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --namespace litmusedge-system \
  --set persistence.storageClassName=default

To bind the Deployment to a static PVC name, set persistence.create=false and supply persistence.pvcName with the existing resource. This skips PVC creation, allowing the release to reference pre-provisioned storage or keep a predictable claim name.

helm install litmusedge . \
  --namespace litmusedge-system \
  --set persistence.create=false \
  --set persistence.pvcName=litmusedge-data

Inspecting rendered manifests

Use helm template to review the resources Helm will apply before deploying:

helm template litmusedge oci://litmusedge.azurecr.io/helm/litmusedge \
  --version 4.0.1 \
  --namespace litmusedge-system \
  --values my-values.yaml

If you are working from the cloned sources, render the manifests by pointing Helm at the local path:

helm template litmusedge . \
  --namespace litmusedge-system \
  --values my-values.yaml
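
To surface admission or schema problems before a real install, you can also pipe the rendered manifests into a server-side dry run, for example:

helm template litmusedge . \
  --namespace litmusedge-system \
  --values my-values.yaml | kubectl apply --dry-run=server -f - --namespace litmusedge-system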
