Introduce label selector for watching ConfigMaps and Secrets #1258

Merged
matheuscscp merged 1 commit into main from watch-label on Jul 17, 2025

Conversation

@matheuscscp (Member) commented Jul 15, 2025

Part of: fluxcd/flux2#5446
Closes: #1086

@matheuscscp marked this pull request as ready for review July 15, 2025 18:37
@matheuscscp requested a review from stefanprodan July 15, 2025 18:37
@stefanprodan (Member) commented Jul 16, 2025

Before we merge this, please run the following tests:

Load test

  • Configure the controller with --watch-configs-label-selector=owner!=helm (a kustomize patch for this is sketched after this list)
  • Create a cluster with 1K HelmReleases, 1K ConfigMaps and 1K Secrets
  • Each HelmRelease should reference a ConfigMap and a Secret via valuesFrom
  • Trigger an update to all 2000 configs
  • The controller should finish reconciling all 1000 HRs without going OOM or hitting CPU throttling
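
For reference, one way to set that flag is a kustomize patch on the helm-controller Deployment; the sketch below assumes the standard flux-system bootstrap layout, and the file names and patch target are assumptions, not part of this PR:

# flux-system/kustomization.yaml (sketch; assumes the standard Flux bootstrap layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      # exclude ConfigMaps/Secrets labeled owner=helm from the controller watch
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: "--watch-configs-label-selector=owner!=helm"
    target:
      kind: Deployment
      name: helm-controller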

Failure recovery test

  • Create a HelmRelease with upgrade retries set to 2 (a sketch of the test objects follows this list)
  • Have a ConfigMap with the podinfo image tag set to a non-existent tag
  • Wait for the HR to fail the upgrade and hit the max retries
  • Update the CM with a valid image tag
  • The HR should reconcile and become ready
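
Roughly, the test objects could look like this; the names, namespace and chart reference are assumptions, and spec.upgrade.remediation.retries is the field behind "upgrade retries":

apiVersion: v1
kind: ConfigMap
metadata:
  name: podinfo-values
  namespace: flux-system
data:
  values.yaml: |
    image:
      tag: 6.9.1-does-not-exist  # intentionally broken tag so the upgrade fails
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: failure-recovery
  namespace: flux-system
spec:
  interval: 1h
  releaseName: podinfo-helm
  chartRef:
    kind: OCIRepository
    name: podinfo-chart
  upgrade:
    remediation:
      retries: 2  # retry the failed upgrade twice before giving up
  valuesFrom:
  - kind: ConfigMap
    name: podinfo-values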

@stefanprodan added the enhancement label Jul 16, 2025
@matheuscscp (Member, Author)

Failure recovery test looks ok:

k get hr -w
NAME               AGE   READY   STATUS
failure-recovery   71s   True    Helm install succeeded for release flux-system/podinfo-helm.v1 with chart podinfo@6.9.1
failure-recovery   4m12s   True    Helm install succeeded for release flux-system/podinfo-helm.v1 with chart podinfo@6.9.1
failure-recovery   4m12s   Unknown   Fulfilling prerequisites
failure-recovery   4m12s   Unknown   Running 'upgrade' action with timeout of 2m0s
failure-recovery   4m12s   Unknown   Running 'upgrade' action with timeout of 2m0s
failure-recovery   6m12s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   6m12s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   6m12s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   6m14s   False     Helm rollback to previous release flux-system/podinfo-helm.v1 with chart podinfo@6.9.1 succeeded
failure-recovery   6m14s   False     Helm rollback to previous release flux-system/podinfo-helm.v1 with chart podinfo@6.9.1 succeeded
failure-recovery   6m25s   Unknown   Fulfilling prerequisites
failure-recovery   6m25s   Unknown   Running 'upgrade' action with timeout of 2m0s
failure-recovery   8m25s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   8m25s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   8m25s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   8m25s   False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   8m28s   False     Helm rollback to previous release flux-system/podinfo-helm.v3 with chart podinfo@6.9.1 succeeded
failure-recovery   8m28s   False     Helm rollback to previous release flux-system/podinfo-helm.v3 with chart podinfo@6.9.1 succeeded
failure-recovery   8m39s   Unknown   Fulfilling prerequisites
failure-recovery   8m39s   Unknown   Running 'upgrade' action with timeout of 2m0s
failure-recovery   10m     False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   10m     False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   10m     False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   10m     False     Helm upgrade failed for release flux-system/podinfo-helm with chart podinfo@6.9.1: context deadline exceeded
failure-recovery   10m     False     Helm rollback to previous release flux-system/podinfo-helm.v5 with chart podinfo@6.9.1 succeeded
failure-recovery   10m     False     Helm rollback to previous release flux-system/podinfo-helm.v5 with chart podinfo@6.9.1 succeeded
failure-recovery   13m     Unknown   Fulfilling prerequisites
failure-recovery   13m     Unknown   Running 'upgrade' action with timeout of 2m0s
failure-recovery   13m     Unknown   Running 'upgrade' action with timeout of 2m0s
failure-recovery   13m     True      Helm upgrade succeeded for release flux-system/podinfo-helm.v8 with chart podinfo@6.9.1
failure-recovery   13m     True      Helm upgrade succeeded for release flux-system/podinfo-helm.v8 with chart podinfo@6.9.1
failure-recovery   13m     True      Helm upgrade succeeded for release flux-system/podinfo-helm.v8 with chart podinfo@6.9.1
failure-recovery   13m     True      Helm upgrade succeeded for release flux-system/podinfo-helm.v8 with chart podinfo@6.9.1
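
For reference, the "update the CM with a valid image tag" step amounts to re-applying the values ConfigMap with a tag that exists, e.g. (same assumed names as in the sketch above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: podinfo-values
  namespace: flux-system
data:
  values.yaml: |
    image:
      tag: 6.9.1  # valid tag, matching the podinfo 6.9.1 image used in the test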

@matheuscscp (Member, Author) commented Jul 16, 2025

For the load test, I used this ResourceSet:

apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSet
metadata:
  name: benchmark
  namespace: benchmark
spec:
  inputs:
  - helmReleases: 1000
    ui:
      color: "#ff006fff"
      message: "Hello from Podinfo!"
  resourcesTemplate: |
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: OCIRepository
    metadata:
      name: podinfo-chart
      namespace: benchmark
    spec:
      interval: 1h
      url: oci://ghcr.io/stefanprodan/charts/podinfo
      # Default values
      # https://github.com/stefanprodan/podinfo/blob/master/charts/podinfo/values.yaml
      ref:
        semver: 6.8.0
    <<- range $i := until (int inputs.helmReleases) >>
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: podinfo-values-<< add $i 1 >>
      namespace: benchmark
    data:
      values.yaml: |
        ui:
          color: "<< inputs.ui.color >>"
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: podinfo-secret-<< add $i 1 >>
      namespace: benchmark
    type: Opaque
    stringData:
      values.yaml: |
        ui:
          message: "<< inputs.ui.message >>"
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: podinfo-<< add $i 1 >>
      namespace: benchmark
    spec:
      chartRef:
        kind: OCIRepository
        name: podinfo-chart
      interval: 1h
      driftDetection:
        mode: enabled
      values:
        replicaCount: 0
      valuesFrom:
      - kind: ConfigMap
        name: podinfo-values-<< add $i 1 >>
      - kind: Secret
        name: podinfo-secret-<< add $i 1 >>
    <<- end >>

I applied it to the cluster and waited for all the HelmReleases to reconcile. I then updated the values of inputs.ui.color (ConfigMap) and inputs.ui.message (Secret) and waited for the ResourceSet reconciliation to finish:

color: '#0f006fff'
message: Hallo from Podinfo!
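
In other words, the inputs section of the ResourceSet above was changed to:

spec:
  inputs:
  - helmReleases: 1000
    ui:
      color: "#0f006fff"
      message: "Hallo from Podinfo!"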

It took 1m24s for flux-operator to update all 1k ConfigMaps and 1k Secrets:

{
    "level": "info",
    "ts": "2025-07-16T16:28:29.109Z",
    "msg": "Reconciliation finished in 1m24s",
    "controller": "resourceset",
    "controllerGroup": "fluxcd.controlplane.io",
    "controllerKind": "ResourceSet",
    "ResourceSet": {
        "name": "benchmark",
        "namespace": "benchmark"
    },
    "namespace": "benchmark",
    "name": "benchmark",
    "reconcileID": "78ab5fb2-6acf-4e65-860f-282218d25dce"
}

After helm-controller reconciled all HelmReleases, this was the last log:

{
    "level": "info",
    "ts": "2025-07-16T16:35:40.987Z",
    "msg": "release in-sync with desired state",
    "controller": "helmrelease",
    "controllerGroup": "helm.toolkit.fluxcd.io",
    "controllerKind": "HelmRelease",
    "HelmRelease": {
        "name": "podinfo-118",
        "namespace": "benchmark"
    },
    "namespace": "benchmark",
    "name": "podinfo-118",
    "reconcileID": "1c2dc8fd-9840-434c-8908-b814969c0bca"
}

So it took 7m11s for helm-controller to reconcile all 1k HelmReleases. The helm-controller did not OOM or restart at any point during the process (the pod status below shows restartCount: 0).

This is the helm-controller pod:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    prometheus.io/port: "8080"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-07-16T16:10:43Z"
  generateName: helm-controller-798dbbbdb7-
  labels:
    app: helm-controller
    pod-template-hash: 798dbbbdb7
  name: helm-controller-798dbbbdb7-8rj5b
  namespace: flux-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: helm-controller-798dbbbdb7
    uid: 07f9a517-5cf5-4817-954a-1a9b044c318f
  resourceVersion: "481496"
  uid: 3915b02b-45a4-4d03-bf00-73df39267a28
spec:
  containers:
  - args:
    - --events-addr=http://notification-controller.flux-system.svc.cluster.local./
    - --watch-all-namespaces=true
    - --log-level=info
    - --log-encoding=json
    - --enable-leader-election
    - --concurrent=10
    - --requeue-dependency=5s
    - --watch-configs-label-selector=owner!=helm
    env:
    - name: RUNTIME_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: GOMAXPROCS
      valueFrom:
        resourceFieldRef:
          containerName: manager
          divisor: "0"
          resource: limits.cpu
    - name: GOMEMLIMIT
      valueFrom:
        resourceFieldRef:
          containerName: manager
          divisor: "0"
          resource: limits.memory
    image: ghcr.io/matheuscscp/fluxcd/helm-controller:wcms-v3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: healthz
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: manager
    ports:
    - containerPort: 8080
      name: http-prom
      protocol: TCP
    - containerPort: 9440
      name: healthz
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /readyz
        port: healthz
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 64Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp
      name: temp
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-gmgvb
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-192-168-65-202.eu-west-2.compute.internal
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 2000000000
  priorityClassName: system-cluster-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1337
  serviceAccount: helm-controller
  serviceAccountName: helm-controller
  terminationGracePeriodSeconds: 600
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: temp
  - name: kube-api-access-gmgvb
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-07-16T16:10:45Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-07-16T16:10:43Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-07-16T16:10:45Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-07-16T16:10:45Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-07-16T16:10:43Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://7427ec6fdf6551b1d4f605cbe9321d7843985c83c013daac9f56208f269efe09
    image: ghcr.io/matheuscscp/fluxcd/helm-controller:wcms-v3
    imageID: ghcr.io/matheuscscp/fluxcd/helm-controller@sha256:9397a644317fcb6624bb171bd582f59b485c77a9eb50140478fb260d9e4048aa
    lastState: {}
    name: manager
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-07-16T16:10:44Z"
  hostIP: 192.168.65.202
  hostIPs:
  - ip: 192.168.65.202
  phase: Running
  podIP: 192.168.78.20
  podIPs:
  - ip: 192.168.78.20
  qosClass: Burstable
  startTime: "2025-07-16T16:10:43Z"
This is the node it ran on:
apiVersion: v1
kind: Node
metadata:
  annotations:
    alpha.kubernetes.io/provided-node-ip: 192.168.65.202
    csi.volume.kubernetes.io/nodeid: '{"efs.csi.aws.com":"i-02d6fc59c6089957e"}'
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2025-07-15T15:57:44Z"
  labels:
    alpha.eksctl.io/cluster-name: flux-e2e
    alpha.eksctl.io/nodegroup-name: workers-amd64
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: t3.medium
    beta.kubernetes.io/os: linux
    eks.amazonaws.com/capacityType: ON_DEMAND
    eks.amazonaws.com/nodegroup: workers-amd64
    eks.amazonaws.com/nodegroup-image: ami-07ed2d7574e038a0b
    eks.amazonaws.com/sourceLaunchTemplateId: lt-09b47b0b34c0e8ee4
    eks.amazonaws.com/sourceLaunchTemplateVersion: "1"
    failure-domain.beta.kubernetes.io/region: eu-west-2
    failure-domain.beta.kubernetes.io/zone: eu-west-2c
    k8s.io/cloud-provider-aws: d7fb4381d12591c35b50e746c5efb9cd
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: ip-192-168-65-202.eu-west-2.compute.internal
    kubernetes.io/os: linux
    node.kubernetes.io/instance-type: t3.medium
    topology.k8s.aws/zone-id: euw2-az1
    topology.kubernetes.io/region: eu-west-2
    topology.kubernetes.io/zone: eu-west-2c
  name: ip-192-168-65-202.eu-west-2.compute.internal
  resourceVersion: "539091"
  uid: 4291fc24-9fd9-4bfe-af5d-8996126955e1
spec:
  providerID: aws:///eu-west-2c/i-02d6fc59c6089957e
status:
  addresses:
  - address: 192.168.65.202
    type: InternalIP
  - address: 3.8.23.8
    type: ExternalIP
  - address: ip-192-168-65-202.eu-west-2.compute.internal
    type: InternalDNS
  - address: ip-192-168-65-202.eu-west-2.compute.internal
    type: Hostname
  - address: ec2-3-8-23-8.eu-west-2.compute.amazonaws.com
    type: ExternalDNS
  allocatable:
    cpu: 1930m
    ephemeral-storage: "95491281146"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 3364452Ki
    pods: "17"
  capacity:
    cpu: "2"
    ephemeral-storage: 104779756Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 3919460Ki
    pods: "17"
  conditions:
  - lastHeartbeatTime: "2025-07-16T16:40:56Z"
    lastTransitionTime: "2025-07-15T15:57:41Z"
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: "2025-07-16T16:40:56Z"
    lastTransitionTime: "2025-07-15T15:57:41Z"
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: "2025-07-16T16:40:56Z"
    lastTransitionTime: "2025-07-15T15:57:41Z"
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: "2025-07-16T16:40:56Z"
    lastTransitionTime: "2025-07-15T15:57:57Z"
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/aws-efs-csi-driver@sha256:240e08c62b5626705dfd8beabeb4980985a3f3324a6eddf1b6ae260ba0ad931b
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/aws-efs-csi-driver:v2.1.9
    sizeBytes: 114401249
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init@sha256:ce36e6fc8457a3c79eab29ad7ca86ebc9220056c443e15502eeab7ceeef8496f
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.19.0-eksbuild.1
    sizeBytes: 62982633
  - names:
    - 709825985650.dkr.ecr.us-east-1.amazonaws.com/controlplane/fluxcd/source-controller@sha256:c305483df5dbe4c8be074427c3d52dae38ee0150dd6ec7e1db8365b48648fb7d
    sizeBytes: 60657540
  - names:
    - 709825985650.dkr.ecr.us-east-1.amazonaws.com/controlplane/fluxcd/kustomize-controller@sha256:e7ac7fe956cc3a20dd4585247e57c86cadc16bb479055b3d33848b2b0f479584
    sizeBytes: 50162935
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni@sha256:efada7e5222a3376dc170b43b569f4dea762fd58186467c233b512bd6ab5415b
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni:v1.19.0-eksbuild.1
    sizeBytes: 48672727
  - names:
    - ghcr.io/matheuscscp/fluxcd/helm-controller@sha256:7214bb918711f9b1674ed7fb414ea293db522a4fe135ae8a56c842ef079e15a3
    - ghcr.io/matheuscscp/fluxcd/helm-controller:wcms-v1
    sizeBytes: 48426984
  - names:
    - ghcr.io/matheuscscp/fluxcd/helm-controller@sha256:34f49ef28987f6f21faef4b4243e22c6be770c2a13810934b44b97e9166fac6d
    - ghcr.io/matheuscscp/fluxcd/helm-controller:wcms-v2
    sizeBytes: 48415651
  - names:
    - ghcr.io/matheuscscp/fluxcd/helm-controller@sha256:9397a644317fcb6624bb171bd582f59b485c77a9eb50140478fb260d9e4048aa
    - ghcr.io/matheuscscp/fluxcd/helm-controller:wcms-v3
    sizeBytes: 48403188
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon/aws-network-policy-agent@sha256:f3280f090b6c5d3128357d8710db237931f5e1089e8017ab3d9cece429d77954
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon/aws-network-policy-agent:v1.1.5-eksbuild.1
    sizeBytes: 40739177
  - names:
    - 709825985650.dkr.ecr.us-east-1.amazonaws.com/controlplane/fluxcd/helm-controller@sha256:2062e8b084e0036df51d9c09906cd3d5fa6a5aaac6ed61394893c5ff9ea7eecd
    sizeBytes: 39417031
  - names:
    - 709825985650.dkr.ecr.us-east-1.amazonaws.com/controlplane/fluxcd/image-automation-controller@sha256:2a9afc5ddada343161c714157573d68883dc91e71d48dcd5f0eb863e018ebe8c
    sizeBytes: 36154502
  - names:
    - ghcr.io/stefanprodan/podinfo@sha256:262578cde928d5c9eba3bce079976444f624c13ed0afb741d90d5423877496cb
    - ghcr.io/stefanprodan/podinfo:6.9.1
    sizeBytes: 32325055
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/kube-proxy@sha256:5ed7b40f2b07b992318718d8264324747ecc24b4ea8fab26095b8e569980eff6
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/kube-proxy:v1.30.6-minimal-eksbuild.3
    sizeBytes: 31494047
  - names:
    - 709825985650.dkr.ecr.us-east-1.amazonaws.com/controlplane/fluxcd/flux-operator@sha256:a2a259e204957b039d1efba0a1075a9bf6c7cefc74f922add49fb1c6aa2621a7
    - 709825985650.dkr.ecr.us-east-1.amazonaws.com/controlplane/fluxcd/flux-operator:v0.24.1
    sizeBytes: 22989444
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/csi-provisioner@sha256:d8b225ac582fd89b88a4a2bbdc32cd643f55af73b88d07d0bdbd01cd312bc852
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/csi-provisioner:v5.2.0-eks-1-33-3
    sizeBytes: 17836801
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/eks-pod-identity-agent@sha256:00a1acdec7ba92dace2866f5a4b46a4393ee9e9975f286c8aa821957956da5c9
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/eks-pod-identity-agent:v0.1.29
    sizeBytes: 14555294
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/livenessprobe@sha256:7e5ef199541463f0b7276402c9db18bc1d1f6f71d02610e964112f8bebbb234f
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/livenessprobe:v2.15.0-eks-1-33-3
    sizeBytes: 9040841
  - names:
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/csi-node-driver-registrar@sha256:ff35c932856005095ca4eba7bac7ec3642203aac3cefd6689d3658a76777334a
    - 602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/csi-node-driver-registrar:v2.13.0-eks-1-33-3
    sizeBytes: 8928231
  - names:
    - 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.10
    - localhost/kubernetes/pause:latest
    sizeBytes: 318731
  nodeInfo:
    architecture: amd64
    bootID: b2253739-ee42-4332-a377-8d98c9068f0f
    containerRuntimeVersion: containerd://1.7.27
    kernelVersion: 6.1.141-155.222.amzn2023.x86_64
    kubeProxyVersion: v1.30.11-eks-473151a
    kubeletVersion: v1.30.11-eks-473151a
    machineID: ec2e21d93cfd688516949af48073a232
    operatingSystem: linux
    osImage: Amazon Linux 2023.7.20250623
    systemUUID: ec2e21d9-3cfd-6885-1694-9af48073a232

@matheuscscp requested a review from stefanprodan July 16, 2025 17:13
@matheuscscp force-pushed the watch-label branch 6 times, most recently from bc2f64b to ea692e6 on July 17, 2025 08:08
Signed-off-by: Matheus Pimenta <matheuscscp@gmail.com>
@stefanprodan (Member) left a comment


LGTM

Thanks @matheuscscp 🏅

@matheuscscp merged commit 3bb7850 into main Jul 17, 2025
6 checks passed
@matheuscscp deleted the watch-label branch July 17, 2025 09:38
Labels: enhancement (New feature or request)
Linked issue: Watch ConfigMaps/Secrets referenced in HelmReleases