Chore/switch ci sidecars to polling #974

Merged
merged 3 commits into from
Mar 17, 2021

Conversation

@dduportal (Contributor) commented on Mar 17, 2021

This PR introduces the following changes:

Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
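
In the Helmfile diffs below, the change boils down to two new environment variables on the config-reload sidecar (kiwigrid/k8s-sidecar): METHOD=SLEEP and SLEEP_TIME. With METHOD=SLEEP the sidecar stops watching the Kubernetes API and instead re-lists the labelled JCasC ConfigMaps every SLEEP_TIME seconds. A minimal sketch of Helm values that could produce those env vars follows; the key layout under configAutoReload.env is an assumption for illustration, not taken from this PR:

  # values.yaml sketch (assumed keys, for illustration only)
  controller:
    sidecars:
      configAutoReload:
        env:
          - name: METHOD
            value: SLEEP        # poll instead of using the Kubernetes watch API
          - name: SLEEP_TIME
            value: "300"        # polling interval in seconds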
@dduportal requested review from a team, garethjevans, olblak and timja on March 17, 2021 at 11:06
@timja previously approved these changes on Mar 17, 2021
@infra-ci-jenkins-io

Helmfile Diff
datadog, datadog, DaemonSet (apps) has changed:
  # Source: datadog/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: datadog
    labels:
      helm.sh/chart: "datadog-2.10.3"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    selector:
      matchLabels:
        app: datadog
    template:
      metadata:
        labels:
          app: datadog
        name: datadog
        annotations:
-         checksum/clusteragent_token: 30e39f6b6fa8415cf2ba8cb15d41c60aeb5d68984152881bf8d5d456ad21167b
+         checksum/clusteragent_token: 70be731410f60c8d3aecde12db5c8ef523fa0018f61c2a036401d1bb716593e6
          checksum/api_key: 824730bdc1979b502f87073ea3428a966fdb6cbe371c25ed8ccacbdaf2e3479b
          checksum/install_info: 4cfc444efdc4aa906aad4156228c33de1749f6c6c1a1a655923d3a125ff7089f
          checksum/autoconf-config: 74234e98afe7498fb5daf1f36ac2d78acc339464f950703b8c019892f982b90b
          checksum/confd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
          checksum/checksd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
      spec:
        containers:
        - name: agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["agent", "run"]
          resources:
            {}
          ports:
          - containerPort: 8125
            name: dogstatsdport
            protocol: UDP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_DOGSTATSD_PORT
              value: "8125"
            - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_APM_ENABLED
              value: "false"
            - name: DD_LOGS_ENABLED
              value: "true"
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "false"
            - name: DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE
              value: "true"
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "clusterchecks endpointschecks"
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: pointerdir
              mountPath: /opt/datadog-agent/run
              mountPropagation: None
            - name: logpodpath
              mountPath: /var/log/pods
              mountPropagation: None
              readOnly: true
            - name: logdockercontainerpath
              mountPath: /var/lib/docker/containers
              mountPropagation: None
              readOnly: true
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
        - name: trace-agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["trace-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          ports:
          - containerPort: 8126
            hostPort: 8126
            name: traceport
            protocol: TCP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_APM_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_APM_RECEIVER_PORT
              value: "8126"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          livenessProbe:
            initialDelaySeconds: 15
            periodSeconds: 15
            tcpSocket:
              port: 8126
            timeoutSeconds: 5
        - name: process-agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["process-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_PROCESS_AGENT_ENABLED
              value: "true"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_SYSTEM_PROBE_ENABLED
              value: "false"
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: passwd
              mountPath: /etc/passwd
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
        initContainers:
            
        - name: init-volume
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - cp -r /etc/datadog-agent /opt
          volumeMounts:
            - name: config
              mountPath: /opt/datadog-agent
          resources:
            {}
        - name: init-config
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - for script in $(find /etc/cont-init.d/ -type f -name '*.sh' | sort) ; do bash $script ; done
          volumeMounts:
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: config
              mountPath: /etc/datadog-agent
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
          resources:
            {}
        volumes:
        - name: installinfo
          configMap:
            name: datadog-installinfo
        - name: config
          emptyDir: {}
        - hostPath:
            path: /var/run
          name: runtimesocketdir
          
        - name: logdatadog
          emptyDir: {}
        - name: tmpdir
          emptyDir: {}
        - hostPath:
            path: /proc
          name: procdir
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroups
        - name: s6-run
          emptyDir: {}
        - hostPath:
            path: /etc/passwd
          name: passwd
        - hostPath:
            path: /var/lib/datadog-agent/logs
          name: pointerdir
        - hostPath:
            path: /var/log/pods
          name: logpodpath
        - hostPath:
            path: /var/lib/docker/containers
          name: logdockercontainerpath
        tolerations:
        affinity:
          {}
        serviceAccountName: datadog
        nodeSelector:
          kubernetes.io/os: linux
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
      type: RollingUpdate
datadog, datadog-cluster-agent, Deployment (apps) has changed:
  # Source: datadog/templates/cluster-agent-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: datadog-cluster-agent
    labels:
      helm.sh/chart: "datadog-2.10.3"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    replicas: 1
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    selector:
      matchLabels:
        app: datadog-cluster-agent
    template:
      metadata:
        labels:
          app: datadog-cluster-agent
        name: datadog-cluster-agent
        annotations:
-         checksum/clusteragent_token: 8ce267e2e9131486ae7d6db2e89cce808d48d6a1ea27b7ddf48b9900b4a50a1d
+         checksum/clusteragent_token: 8f8a01a71e208132b26d9f50fb18d729481c3b30344e63afdad357b8af38bb9f
          checksum/api_key: 824730bdc1979b502f87073ea3428a966fdb6cbe371c25ed8ccacbdaf2e3479b
          checksum/application_key: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/install_info: 4cfc444efdc4aa906aad4156228c33de1749f6c6c1a1a655923d3a125ff7089f
          ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
          ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
          ad.datadoghq.com/cluster-agent.instances: |
            [{
              "prometheus_url": "http://%%host%%:5000/metrics",
              "namespace": "datadog.cluster_agent",
              "metrics": [
                "go_goroutines", "go_memstats_*", "process_*",
                "api_requests",
                "datadog_requests", "external_metrics", "rate_limit_queries_*",
                "cluster_checks_*"
              ]
            }]
  
      spec:
        serviceAccountName: datadog-cluster-agent
        containers:
        - name: cluster-agent
          image: "gcr.io/datadoghq/cluster-agent:1.11.0"
          imagePullPolicy: IfNotPresent
          resources:
            {}
          ports:
          - containerPort: 5005
            name: agentport
            protocol: TCP
          env:
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
                  optional: true
            - name: DD_CLUSTER_CHECKS_ENABLED
              value: "true"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "kube_endpoints kube_services"
            - name: DD_EXTRA_LISTENERS
              value: "kube_endpoints kube_services"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_LEADER_ELECTION
              value: "true"
            - name: DD_LEADER_LEASE_DURATION
              value: "60"
            - name: DD_COLLECT_KUBERNETES_EVENTS
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: datadog-cluster-agent
                  key: token
            - name: DD_KUBE_RESOURCES_NAMESPACE
              value: datadog
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_EXPLORER_CONTAINER_SCRUBBING_ENABLED
              value: "true"
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
        volumes:
          - name: installinfo
            configMap:
              name: datadog-installinfo
        nodeSelector:
          kubernetes.io/os: linux
datadog, datadog-cluster-agent, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

grafana, grafana, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
grafana, grafana, StatefulSet (apps) has changed:
  # Source: grafana/templates/statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: grafana
    namespace: grafana
    labels:
      helm.sh/chart: grafana-6.6.3
      app.kubernetes.io/name: grafana
      app.kubernetes.io/instance: grafana
      app.kubernetes.io/version: "7.4.3"
      app.kubernetes.io/managed-by: Helm
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: grafana
        app.kubernetes.io/instance: grafana
    serviceName: grafana-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: grafana
          app.kubernetes.io/instance: grafana
        annotations:
          checksum/config: cd8a06918ba8f33f62727d8992e445809f0f16d59659ed0bf2686fcabc6ea66e
          checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
-         checksum/secret: df439726712e8aaa0c4f23514ba6c1f9abbe3ca1ae8e716588596ecbb38a955a
+         checksum/secret: 74458fe5c21d2b74b66c8d3d6d123d40df14132e83f97e0816972c5d5ee7b493
      spec:
        
        serviceAccountName: grafana
        securityContext:
          fsGroup: 472
          runAsGroup: 472
          runAsUser: 472
        initContainers:
          - name: init-chown-data
            image: "busybox:1.31.1"
            imagePullPolicy: IfNotPresent
            securityContext:
              runAsNonRoot: false
              runAsUser: 0
            command: ["chown", "-R", "472:472", "/var/lib/grafana"]
            resources:
              {}
            volumeMounts:
              - name: storage
                mountPath: "/var/lib/grafana"
        containers:
          - name: grafana
            image: "grafana/grafana:7.4.3"
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - name: config
                mountPath: "/etc/grafana/grafana.ini"
                subPath: grafana.ini
              - name: ldap
                mountPath: "/etc/grafana/ldap.toml"
                subPath: ldap.toml
              - name: storage
                mountPath: "/var/lib/grafana"
              - name: config
                mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
                subPath: datasources.yaml
            ports:
              - name: service
                containerPort: 80
                protocol: TCP
              - name: grafana
                containerPort: 3000
                protocol: TCP
            env:
              - name: GF_SECURITY_ADMIN_USER
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-user
              - name: GF_SECURITY_ADMIN_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-password
              
            livenessProbe:
              failureThreshold: 10
              httpGet:
                path: /api/health
                port: 3000
              initialDelaySeconds: 60
              timeoutSeconds: 30
            readinessProbe:
              httpGet:
                path: /api/health
                port: 3000
            resources:
              limits:
                cpu: 200m
                memory: 256Mi
              requests:
                cpu: 100m
                memory: 128Mi
        volumes:
          - name: config
            configMap:
              name: grafana
          - name: ldap
            secret:
              secretName: grafana
              items:
                - key: ldap-toml
                  path: ldap.toml
        # nothing
    volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: 
        resources:
          requests:
            storage: 50

jenkins-infra, jenkins-infra, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
jenkins-infra, jenkins-infra, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: jenkins-infra
    namespace: jenkins-infra
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "jenkins-infra"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: jenkins-infra
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "jenkins-infra"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "jenkins-infra"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
          checksum/config: 20f61d5d0b46862c0ae7c0b42d1a80f59be1de543e6b30e4faf367d90ce83bd6
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.284-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.284-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
                   -Dcasc.reload.token=$(POD_NAME) -XshowSettings:vm -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:MaxRAM=4g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+UseG1GC
  
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "jenkins-infra-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'jenkins-infra'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
+             - name: METHOD
+               value: SLEEP
+             - name: SLEEP_TIME
+               value: "300"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: jenkins-infra
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-infra
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: jenkins-infra

release, default-release-jenkins, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
release, default-release-jenkins, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: default-release-jenkins
    namespace: release
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: default-release-jenkins
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "default-release-jenkins"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "default-release-jenkins"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
          checksum/config: 3df3eb60ff5fed2d1a6b4a2474078ba8c707ae7a90bd659d08f707ea59b6e67c
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.277.1-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.277.1-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
                   -Dcasc.reload.token=$(POD_NAME) -XshowSettings:vm -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:MaxRAM=4g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+UseG1GC
  
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "default-release-jenkins-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'release'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
+             - name: METHOD
+               value: SLEEP
+             - name: SLEEP_TIME
+               value: "300"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: default-release-jenkins
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: default-release-jenkins
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: default-release-jenkins
release, default-release-jenkins-jenkins-config-ldap-settings, ConfigMap (v1) has changed:
  # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: default-release-jenkins-jenkins-config-ldap-settings
    namespace: release
    labels:
      "app.kubernetes.io/name": jenkins
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
      default-release-jenkins-jenkins-config: "true"
  data:
    ldap-settings.yaml: |-
      jenkins:
        securityRealm:
          ldap:
            configurations:
              - server: "${LDAP_SERVER}"
                rootDN: "${LDAP_ROOT_DN}"
                managerDN: "${LDAP_MANAGER_DN}"
                managerPasswordSecret: "${LDAP_MANAGER_PASSWORD}"
                mailAddressAttributeName: "mail"
                userSearch: cn={0}
                userSearchBase: "ou=people"
                groupSearchBase: "ou=groups"
            disableMailAddressResolver: false
            groupIdStrategy: "caseInsensitive"
            userIdStrategy: "caseInsensitive"
            cache:
              size: 100
              ttl: 300
-     advisor-settings: |
-     jenkins:
-       disabledAdministrativeMonitors:
-         - com.cloudbees.jenkins.plugins.advisor.Reminder
-     advisor:
-       acceptToS: true
-       ccs:
-       - "damien.duportal@gmail.com"
-       email: "jenkins@oblak.com"
-       excludedComponents:
-         - "ItemsContent"
-         - "GCLogs"
-         - "Agents"
-         - "RootCAs"
-         - "SlaveLogs"
-         - "HeapUsageHistogram"
-       nagDisabled: true
release, default-release-jenkins-jenkins-config-advisor-settings, ConfigMap (v1) has been added:
- 
+ # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: default-release-jenkins-jenkins-config-advisor-settings
+   namespace: release
+   labels:
+     "app.kubernetes.io/name": jenkins
+     "helm.sh/chart": "jenkins-3.2.4"
+     "app.kubernetes.io/managed-by": "Helm"
+     "app.kubernetes.io/instance": "default-release-jenkins"
+     "app.kubernetes.io/component": "jenkins-controller"
+     default-release-jenkins-jenkins-config: "true"
+ data:
+   advisor-settings.yaml: |-
+     jenkins:
+       disabledAdministrativeMonitors:
+         - com.cloudbees.jenkins.plugins.advisor.Reminder
+     advisor:
+       acceptToS: true
+       ccs:
+       - "damien.duportal@gmail.com"
+       email: "jenkins@oblak.com"
+       excludedComponents:
+         - "ItemsContent"
+         - "GCLogs"
+         - "Agents"
+         - "RootCAs"
+         - "SlaveLogs"
+         - "HeapUsageHistogram"
+       nagDisabled: true

@dduportal
Contributor Author

@timja the duration of 300 looked like a lot. I changed it to 60 to ensure the ConfigMaps are checked often (and to avoid waiting too long when deploying). WDYT?
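
For reference, with METHOD=SLEEP the kiwigrid/k8s-sidecar polls the labelled ConfigMaps every SLEEP_TIME seconds rather than keeping a watch open, so lowering the value trades slightly more API traffic for faster pickup of JCasC changes. The rendered config-reload container would then carry something like the following (a sketch of the expected env block after this change, not copied from a rendered diff):

              - name: METHOD
                value: SLEEP
              - name: SLEEP_TIME
                value: "60"      # re-check the casc ConfigMaps every 60 seconds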

@infra-ci-jenkins-io

Helmfile Diff
datadog, datadog, DaemonSet (apps) has changed:
  # Source: datadog/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: datadog
    labels:
      helm.sh/chart: "datadog-2.10.3"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    selector:
      matchLabels:
        app: datadog
    template:
      metadata:
        labels:
          app: datadog
        name: datadog
        annotations:
-         checksum/clusteragent_token: db3c22bd2b4e1a22682dce9a9eaf5c0f4db147c3c92787e6f811fc38988af33a
+         checksum/clusteragent_token: 17805e317f04a96dd02b8419653f209ced7dd38d6e8040f92b192e394c4fdbf7
          checksum/api_key: 824730bdc1979b502f87073ea3428a966fdb6cbe371c25ed8ccacbdaf2e3479b
          checksum/install_info: 4cfc444efdc4aa906aad4156228c33de1749f6c6c1a1a655923d3a125ff7089f
          checksum/autoconf-config: 74234e98afe7498fb5daf1f36ac2d78acc339464f950703b8c019892f982b90b
          checksum/confd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
          checksum/checksd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
      spec:
        containers:
        - name: agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["agent", "run"]
          resources:
            {}
          ports:
          - containerPort: 8125
            name: dogstatsdport
            protocol: UDP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_DOGSTATSD_PORT
              value: "8125"
            - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_APM_ENABLED
              value: "false"
            - name: DD_LOGS_ENABLED
              value: "true"
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "false"
            - name: DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE
              value: "true"
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "clusterchecks endpointschecks"
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: pointerdir
              mountPath: /opt/datadog-agent/run
              mountPropagation: None
            - name: logpodpath
              mountPath: /var/log/pods
              mountPropagation: None
              readOnly: true
            - name: logdockercontainerpath
              mountPath: /var/lib/docker/containers
              mountPropagation: None
              readOnly: true
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
        - name: trace-agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["trace-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          ports:
          - containerPort: 8126
            hostPort: 8126
            name: traceport
            protocol: TCP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_APM_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_APM_RECEIVER_PORT
              value: "8126"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          livenessProbe:
            initialDelaySeconds: 15
            periodSeconds: 15
            tcpSocket:
              port: 8126
            timeoutSeconds: 5
        - name: process-agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["process-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_PROCESS_AGENT_ENABLED
              value: "true"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_SYSTEM_PROBE_ENABLED
              value: "false"
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: passwd
              mountPath: /etc/passwd
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
        initContainers:
            
        - name: init-volume
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - cp -r /etc/datadog-agent /opt
          volumeMounts:
            - name: config
              mountPath: /opt/datadog-agent
          resources:
            {}
        - name: init-config
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - for script in $(find /etc/cont-init.d/ -type f -name '*.sh' | sort) ; do bash $script ; done
          volumeMounts:
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: config
              mountPath: /etc/datadog-agent
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
          resources:
            {}
        volumes:
        - name: installinfo
          configMap:
            name: datadog-installinfo
        - name: config
          emptyDir: {}
        - hostPath:
            path: /var/run
          name: runtimesocketdir
          
        - name: logdatadog
          emptyDir: {}
        - name: tmpdir
          emptyDir: {}
        - hostPath:
            path: /proc
          name: procdir
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroups
        - name: s6-run
          emptyDir: {}
        - hostPath:
            path: /etc/passwd
          name: passwd
        - hostPath:
            path: /var/lib/datadog-agent/logs
          name: pointerdir
        - hostPath:
            path: /var/log/pods
          name: logpodpath
        - hostPath:
            path: /var/lib/docker/containers
          name: logdockercontainerpath
        tolerations:
        affinity:
          {}
        serviceAccountName: datadog
        nodeSelector:
          kubernetes.io/os: linux
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
      type: RollingUpdate
datadog, datadog-cluster-agent, Deployment (apps) has changed:
  # Source: datadog/templates/cluster-agent-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: datadog-cluster-agent
    labels:
      helm.sh/chart: "datadog-2.10.3"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    replicas: 1
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    selector:
      matchLabels:
        app: datadog-cluster-agent
    template:
      metadata:
        labels:
          app: datadog-cluster-agent
        name: datadog-cluster-agent
        annotations:
-         checksum/clusteragent_token: e294b94e49cc9934b986e9785066d535bee14e1e7346c8e47f2265ce117fd4e1
+         checksum/clusteragent_token: 370ce8ca2b07c4ea602303d4da5ea5e5f40fdcec750e73b453fe80d9d2540f75
          checksum/api_key: 824730bdc1979b502f87073ea3428a966fdb6cbe371c25ed8ccacbdaf2e3479b
          checksum/application_key: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/install_info: 4cfc444efdc4aa906aad4156228c33de1749f6c6c1a1a655923d3a125ff7089f
          ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
          ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
          ad.datadoghq.com/cluster-agent.instances: |
            [{
              "prometheus_url": "http://%%host%%:5000/metrics",
              "namespace": "datadog.cluster_agent",
              "metrics": [
                "go_goroutines", "go_memstats_*", "process_*",
                "api_requests",
                "datadog_requests", "external_metrics", "rate_limit_queries_*",
                "cluster_checks_*"
              ]
            }]
  
      spec:
        serviceAccountName: datadog-cluster-agent
        containers:
        - name: cluster-agent
          image: "gcr.io/datadoghq/cluster-agent:1.11.0"
          imagePullPolicy: IfNotPresent
          resources:
            {}
          ports:
          - containerPort: 5005
            name: agentport
            protocol: TCP
          env:
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
                  optional: true
            - name: DD_CLUSTER_CHECKS_ENABLED
              value: "true"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "kube_endpoints kube_services"
            - name: DD_EXTRA_LISTENERS
              value: "kube_endpoints kube_services"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_LEADER_ELECTION
              value: "true"
            - name: DD_LEADER_LEASE_DURATION
              value: "60"
            - name: DD_COLLECT_KUBERNETES_EVENTS
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: datadog-cluster-agent
                  key: token
            - name: DD_KUBE_RESOURCES_NAMESPACE
              value: datadog
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_EXPLORER_CONTAINER_SCRUBBING_ENABLED
              value: "true"
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
        volumes:
          - name: installinfo
            configMap:
              name: datadog-installinfo
        nodeSelector:
          kubernetes.io/os: linux
datadog, datadog-cluster-agent, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

grafana, grafana, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
grafana, grafana, StatefulSet (apps) has changed:
  # Source: grafana/templates/statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: grafana
    namespace: grafana
    labels:
      helm.sh/chart: grafana-6.6.3
      app.kubernetes.io/name: grafana
      app.kubernetes.io/instance: grafana
      app.kubernetes.io/version: "7.4.3"
      app.kubernetes.io/managed-by: Helm
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: grafana
        app.kubernetes.io/instance: grafana
    serviceName: grafana-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: grafana
          app.kubernetes.io/instance: grafana
        annotations:
          checksum/config: cd8a06918ba8f33f62727d8992e445809f0f16d59659ed0bf2686fcabc6ea66e
          checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
-         checksum/secret: 1e9e2492ffda9ec1d777cc99bda0ff6bea88010425b0d59169a1671aaf6d8946
+         checksum/secret: 65d155fa6a0cfe3ba6d0aa8c99683bad91f231d9faaf257f6171883c16e729b5
      spec:
        
        serviceAccountName: grafana
        securityContext:
          fsGroup: 472
          runAsGroup: 472
          runAsUser: 472
        initContainers:
          - name: init-chown-data
            image: "busybox:1.31.1"
            imagePullPolicy: IfNotPresent
            securityContext:
              runAsNonRoot: false
              runAsUser: 0
            command: ["chown", "-R", "472:472", "/var/lib/grafana"]
            resources:
              {}
            volumeMounts:
              - name: storage
                mountPath: "/var/lib/grafana"
        containers:
          - name: grafana
            image: "grafana/grafana:7.4.3"
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - name: config
                mountPath: "/etc/grafana/grafana.ini"
                subPath: grafana.ini
              - name: ldap
                mountPath: "/etc/grafana/ldap.toml"
                subPath: ldap.toml
              - name: storage
                mountPath: "/var/lib/grafana"
              - name: config
                mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
                subPath: datasources.yaml
            ports:
              - name: service
                containerPort: 80
                protocol: TCP
              - name: grafana
                containerPort: 3000
                protocol: TCP
            env:
              - name: GF_SECURITY_ADMIN_USER
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-user
              - name: GF_SECURITY_ADMIN_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-password
              
            livenessProbe:
              failureThreshold: 10
              httpGet:
                path: /api/health
                port: 3000
              initialDelaySeconds: 60
              timeoutSeconds: 30
            readinessProbe:
              httpGet:
                path: /api/health
                port: 3000
            resources:
              limits:
                cpu: 200m
                memory: 256Mi
              requests:
                cpu: 100m
                memory: 128Mi
        volumes:
          - name: config
            configMap:
              name: grafana
          - name: ldap
            secret:
              secretName: grafana
              items:
                - key: ldap-toml
                  path: ldap.toml
        # nothing
    volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: 
        resources:
          requests:
            storage: 50

jenkins-infra, jenkins-infra, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
jenkins-infra, jenkins-infra, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: jenkins-infra
    namespace: jenkins-infra
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "jenkins-infra"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: jenkins-infra
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "jenkins-infra"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "jenkins-infra"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
          checksum/config: 20f61d5d0b46862c0ae7c0b42d1a80f59be1de543e6b30e4faf367d90ce83bd6
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.284-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.284-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
                   -Dcasc.reload.token=$(POD_NAME) -XshowSettings:vm -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:MaxRAM=4g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+UseG1GC
  
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "jenkins-infra-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'jenkins-infra'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
+             - name: METHOD
+               value: SLEEP
+             - name: SLEEP_TIME
+               value: "60"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: jenkins-infra
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-infra
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: jenkins-infra

release, default-release-jenkins, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
release, default-release-jenkins, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: default-release-jenkins
    namespace: release
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: default-release-jenkins
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "default-release-jenkins"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "default-release-jenkins"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
          checksum/config: 3df3eb60ff5fed2d1a6b4a2474078ba8c707ae7a90bd659d08f707ea59b6e67c
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.277.1-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.277.1-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
                   -Dcasc.reload.token=$(POD_NAME) -XshowSettings:vm -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:MaxRAM=4g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+UseG1GC
  
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "default-release-jenkins-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'release'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
+             - name: METHOD
+               value: SLEEP
+             - name: SLEEP_TIME
+               value: "60"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: default-release-jenkins
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: default-release-jenkins
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: default-release-jenkins
release, default-release-jenkins-jenkins-config-ldap-settings, ConfigMap (v1) has changed:
  # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: default-release-jenkins-jenkins-config-ldap-settings
    namespace: release
    labels:
      "app.kubernetes.io/name": jenkins
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
      default-release-jenkins-jenkins-config: "true"
  data:
    ldap-settings.yaml: |-
      jenkins:
        securityRealm:
          ldap:
            configurations:
              - server: "${LDAP_SERVER}"
                rootDN: "${LDAP_ROOT_DN}"
                managerDN: "${LDAP_MANAGER_DN}"
                managerPasswordSecret: "${LDAP_MANAGER_PASSWORD}"
                mailAddressAttributeName: "mail"
                userSearch: cn={0}
                userSearchBase: "ou=people"
                groupSearchBase: "ou=groups"
            disableMailAddressResolver: false
            groupIdStrategy: "caseInsensitive"
            userIdStrategy: "caseInsensitive"
            cache:
              size: 100
              ttl: 300
-     advisor-settings: |
-     jenkins:
-       disabledAdministrativeMonitors:
-         - com.cloudbees.jenkins.plugins.advisor.Reminder
-     advisor:
-       acceptToS: true
-       ccs:
-       - "damien.duportal@gmail.com"
-       email: "jenkins@oblak.com"
-       excludedComponents:
-         - "ItemsContent"
-         - "GCLogs"
-         - "Agents"
-         - "RootCAs"
-         - "SlaveLogs"
-         - "HeapUsageHistogram"
-       nagDisabled: true
release, default-release-jenkins-jenkins-config-advisor-settings, ConfigMap (v1) has been added:
- 
+ # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: default-release-jenkins-jenkins-config-advisor-settings
+   namespace: release
+   labels:
+     "app.kubernetes.io/name": jenkins
+     "helm.sh/chart": "jenkins-3.2.4"
+     "app.kubernetes.io/managed-by": "Helm"
+     "app.kubernetes.io/instance": "default-release-jenkins"
+     "app.kubernetes.io/component": "jenkins-controller"
+     default-release-jenkins-jenkins-config: "true"
+ data:
+   advisor-settings.yaml: |-
+     jenkins:
+       disabledAdministrativeMonitors:
+         - com.cloudbees.jenkins.plugins.advisor.Reminder
+     advisor:
+       acceptToS: true
+       ccs:
+       - "damien.duportal@gmail.com"
+       email: "jenkins@oblak.com"
+       excludedComponents:
+         - "ItemsContent"
+         - "GCLogs"
+         - "Agents"
+         - "RootCAs"
+         - "SlaveLogs"
+         - "HeapUsageHistogram"
+       nagDisabled: true

@olblak
Member

olblak commented Mar 17, 2021

@timja the duration of 300 looked like a lot. I changed it to 60 to ensure the ConfigMaps are checked often (and to avoid waiting too long when deploying). WDYT?

Is this seconds or minutes?

@dduportal
Contributor Author

Is this seconds or minutes?

As per the documentation at https://github.com/kiwigrid/k8s-sidecar, it is in seconds:

SLEEP_TIME
description: How many seconds to wait before updating config-maps/secrets when using SLEEP method.
required: false
default: 60
type: integer

Let me add a comment in the values file to help with this.
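For reference, here is a condensed sketch of the config-reload sidecar environment once this change is applied, pulled from the rendered StatefulSets above; the comments summarise what each variable does according to the k8s-sidecar documentation (only METHOD and SLEEP_TIME are new in this PR).

env:
  - name: LABEL
    value: "jenkins-infra-jenkins-config"    # only ConfigMaps carrying this label are picked up
  - name: FOLDER
    value: "/var/jenkins_home/casc_configs"  # where the collected JCasC files are written
  - name: REQ_URL
    value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
  - name: REQ_METHOD
    value: "POST"                            # the JCasC reload endpoint is called with a POST on each change
  - name: METHOD
    value: "SLEEP"                           # poll the Kubernetes API instead of keeping a watch open
  - name: SLEEP_TIME
    value: "60"                              # seconds to wait between two polls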

Signed-off-by: Damien Duportal <damien.duportal@gmail.com>
@infra-ci-jenkins-io

Helmfile Diff
datadog, datadog, DaemonSet (apps) has changed:
  # Source: datadog/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: datadog
    labels:
      helm.sh/chart: "datadog-2.10.3"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    selector:
      matchLabels:
        app: datadog
    template:
      metadata:
        labels:
          app: datadog
        name: datadog
        annotations:
-         checksum/clusteragent_token: 225fb00eb02b4bf70a31e20af7f7290759738eea9e80d029508a7244318ac2b9
+         checksum/clusteragent_token: 2a766973f037905301b10e3e18730c15a31f03c62cd9f8c060a19cb193f35634
          checksum/api_key: 824730bdc1979b502f87073ea3428a966fdb6cbe371c25ed8ccacbdaf2e3479b
          checksum/install_info: 4cfc444efdc4aa906aad4156228c33de1749f6c6c1a1a655923d3a125ff7089f
          checksum/autoconf-config: 74234e98afe7498fb5daf1f36ac2d78acc339464f950703b8c019892f982b90b
          checksum/confd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
          checksum/checksd-config: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
      spec:
        containers:
        - name: agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["agent", "run"]
          resources:
            {}
          ports:
          - containerPort: 8125
            name: dogstatsdport
            protocol: UDP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_DOGSTATSD_PORT
              value: "8125"
            - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_APM_ENABLED
              value: "false"
            - name: DD_LOGS_ENABLED
              value: "true"
            - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
              value: "false"
            - name: DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE
              value: "true"
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "clusterchecks endpointschecks"
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: pointerdir
              mountPath: /opt/datadog-agent/run
              mountPropagation: None
            - name: logpodpath
              mountPath: /var/log/pods
              mountPropagation: None
              readOnly: true
            - name: logdockercontainerpath
              mountPath: /var/lib/docker/containers
              mountPropagation: None
              readOnly: true
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
        - name: trace-agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["trace-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          ports:
          - containerPort: 8126
            hostPort: 8126
            name: traceport
            protocol: TCP
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_APM_ENABLED
              value: "true"
            - name: DD_APM_NON_LOCAL_TRAFFIC
              value: "true"
            - name: DD_APM_RECEIVER_PORT
              value: "8126"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          livenessProbe:
            initialDelaySeconds: 15
            periodSeconds: 15
            tcpSocket:
              port: 8126
            timeoutSeconds: 5
        - name: process-agent
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["process-agent", "-config=/etc/datadog-agent/datadog.yaml"]
          resources:
            {}
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
            - name: DD_CLUSTER_AGENT_ENABLED
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                    name: datadog-cluster-agent
                    key: token
            - name: DD_PROCESS_AGENT_ENABLED
              value: "true"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_SYSTEM_PROBE_ENABLED
              value: "false"
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
          volumeMounts:
            - name: config
              mountPath: /etc/datadog-agent
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: tmpdir
              mountPath: /tmp
              readOnly: false
            - name: cgroups
              mountPath: /host/sys/fs/cgroup
              mountPropagation: None
              readOnly: true
            - name: passwd
              mountPath: /etc/passwd
              readOnly: true
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
        initContainers:
            
        - name: init-volume
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - cp -r /etc/datadog-agent /opt
          volumeMounts:
            - name: config
              mountPath: /opt/datadog-agent
          resources:
            {}
        - name: init-config
          image: "jenkinsciinfra/datadog@sha256:feec56f2f4596213abc4be382f466154077215c10bd5d8e0b22a29783530f5de"
          imagePullPolicy: IfNotPresent
          command: ["bash", "-c"]
          args:
            - for script in $(find /etc/cont-init.d/ -type f -name '*.sh' | sort) ; do bash $script ; done
          volumeMounts:
            - name: logdatadog
              mountPath: /var/log/datadog
            - name: config
              mountPath: /etc/datadog-agent
            - name: procdir
              mountPath: /host/proc
              mountPropagation: None
              readOnly: true
            - name: runtimesocketdir
              mountPath: /host/var/run
              mountPropagation: None
              readOnly: true
          env:
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
            - name: DD_KUBERNETES_KUBELET_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: KUBERNETES
              value: "yes"
            - name: DOCKER_HOST
              value: unix:///host/var/run/docker.sock
          resources:
            {}
        volumes:
        - name: installinfo
          configMap:
            name: datadog-installinfo
        - name: config
          emptyDir: {}
        - hostPath:
            path: /var/run
          name: runtimesocketdir
          
        - name: logdatadog
          emptyDir: {}
        - name: tmpdir
          emptyDir: {}
        - hostPath:
            path: /proc
          name: procdir
        - hostPath:
            path: /sys/fs/cgroup
          name: cgroups
        - name: s6-run
          emptyDir: {}
        - hostPath:
            path: /etc/passwd
          name: passwd
        - hostPath:
            path: /var/lib/datadog-agent/logs
          name: pointerdir
        - hostPath:
            path: /var/log/pods
          name: logpodpath
        - hostPath:
            path: /var/lib/docker/containers
          name: logdockercontainerpath
        tolerations:
        affinity:
          {}
        serviceAccountName: datadog
        nodeSelector:
          kubernetes.io/os: linux
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 10%
      type: RollingUpdate
datadog, datadog-cluster-agent, Deployment (apps) has changed:
  # Source: datadog/templates/cluster-agent-deployment.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: datadog-cluster-agent
    labels:
      helm.sh/chart: "datadog-2.10.3"
      app.kubernetes.io/name: "datadog"
      app.kubernetes.io/instance: "datadog"
      app.kubernetes.io/managed-by: "Helm"
      app.kubernetes.io/version: "7"
  spec:
    replicas: 1
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      type: RollingUpdate
    selector:
      matchLabels:
        app: datadog-cluster-agent
    template:
      metadata:
        labels:
          app: datadog-cluster-agent
        name: datadog-cluster-agent
        annotations:
-         checksum/clusteragent_token: 6da8861172c20f155dbfe4fd6d884d7e17f4f0efd5b68e4849dd6b5847fb5d01
+         checksum/clusteragent_token: b8a09432b225bdef70a81e9adcff477dacc858304295ed2883c6dca6a89066a4
          checksum/api_key: 824730bdc1979b502f87073ea3428a966fdb6cbe371c25ed8ccacbdaf2e3479b
          checksum/application_key: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/install_info: 4cfc444efdc4aa906aad4156228c33de1749f6c6c1a1a655923d3a125ff7089f
          ad.datadoghq.com/cluster-agent.check_names: '["prometheus"]'
          ad.datadoghq.com/cluster-agent.init_configs: '[{}]'
          ad.datadoghq.com/cluster-agent.instances: |
            [{
              "prometheus_url": "http://%%host%%:5000/metrics",
              "namespace": "datadog.cluster_agent",
              "metrics": [
                "go_goroutines", "go_memstats_*", "process_*",
                "api_requests",
                "datadog_requests", "external_metrics", "rate_limit_queries_*",
                "cluster_checks_*"
              ]
            }]
  
      spec:
        serviceAccountName: datadog-cluster-agent
        containers:
        - name: cluster-agent
          image: "gcr.io/datadoghq/cluster-agent:1.11.0"
          imagePullPolicy: IfNotPresent
          resources:
            {}
          ports:
          - containerPort: 5005
            name: agentport
            protocol: TCP
          env:
            - name: DD_HEALTH_PORT
              value: "5555"
            - name: DD_API_KEY
              valueFrom:
                secretKeyRef:
                  name: "datadog"
                  key: api-key
                  optional: true
            - name: DD_CLUSTER_CHECKS_ENABLED
              value: "true"
            - name: DD_EXTRA_CONFIG_PROVIDERS
              value: "kube_endpoints kube_services"
            - name: DD_EXTRA_LISTENERS
              value: "kube_endpoints kube_services"
            - name: DD_LOG_LEVEL
              value: "INFO"
            - name: DD_LEADER_ELECTION
              value: "true"
            - name: DD_LEADER_LEASE_DURATION
              value: "60"
            - name: DD_COLLECT_KUBERNETES_EVENTS
              value: "true"
            - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
              value: datadog-cluster-agent
            - name: DD_CLUSTER_AGENT_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: datadog-cluster-agent
                  key: token
            - name: DD_KUBE_RESOURCES_NAMESPACE
              value: datadog
            - name: DD_ORCHESTRATOR_EXPLORER_ENABLED
              value: "true"
            - name: DD_ORCHESTRATOR_EXPLORER_CONTAINER_SCRUBBING_ENABLED
              value: "true"
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /live
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /ready
              port: 5555
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 5
          volumeMounts:
            - name: installinfo
              subPath: install_info
              mountPath: /etc/datadog-agent/install_info
              readOnly: true
        volumes:
          - name: installinfo
            configMap:
              name: datadog-installinfo
        nodeSelector:
          kubernetes.io/os: linux
datadog, datadog-cluster-agent, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret

grafana, grafana, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
grafana, grafana, StatefulSet (apps) has changed:
  # Source: grafana/templates/statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: grafana
    namespace: grafana
    labels:
      helm.sh/chart: grafana-6.6.3
      app.kubernetes.io/name: grafana
      app.kubernetes.io/instance: grafana
      app.kubernetes.io/version: "7.4.3"
      app.kubernetes.io/managed-by: Helm
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: grafana
        app.kubernetes.io/instance: grafana
    serviceName: grafana-headless
    template:
      metadata:
        labels:
          app.kubernetes.io/name: grafana
          app.kubernetes.io/instance: grafana
        annotations:
          checksum/config: cd8a06918ba8f33f62727d8992e445809f0f16d59659ed0bf2686fcabc6ea66e
          checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
          checksum/sc-dashboard-provider-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
-         checksum/secret: 38a256e4b494b7855cab2e90c6091bce1c29b5df32cb63a8493abde884e5399f
+         checksum/secret: ac9bdf4810df6cbd0c9caeaf413aa90c491709d2fa597e8cf927660dfa76bc8b
      spec:
        
        serviceAccountName: grafana
        securityContext:
          fsGroup: 472
          runAsGroup: 472
          runAsUser: 472
        initContainers:
          - name: init-chown-data
            image: "busybox:1.31.1"
            imagePullPolicy: IfNotPresent
            securityContext:
              runAsNonRoot: false
              runAsUser: 0
            command: ["chown", "-R", "472:472", "/var/lib/grafana"]
            resources:
              {}
            volumeMounts:
              - name: storage
                mountPath: "/var/lib/grafana"
        containers:
          - name: grafana
            image: "grafana/grafana:7.4.3"
            imagePullPolicy: IfNotPresent
            volumeMounts:
              - name: config
                mountPath: "/etc/grafana/grafana.ini"
                subPath: grafana.ini
              - name: ldap
                mountPath: "/etc/grafana/ldap.toml"
                subPath: ldap.toml
              - name: storage
                mountPath: "/var/lib/grafana"
              - name: config
                mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
                subPath: datasources.yaml
            ports:
              - name: service
                containerPort: 80
                protocol: TCP
              - name: grafana
                containerPort: 3000
                protocol: TCP
            env:
              - name: GF_SECURITY_ADMIN_USER
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-user
              - name: GF_SECURITY_ADMIN_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: grafana
                    key: admin-password
              
            livenessProbe:
              failureThreshold: 10
              httpGet:
                path: /api/health
                port: 3000
              initialDelaySeconds: 60
              timeoutSeconds: 30
            readinessProbe:
              httpGet:
                path: /api/health
                port: 3000
            resources:
              limits:
                cpu: 200m
                memory: 256Mi
              requests:
                cpu: 100m
                memory: 128Mi
        volumes:
          - name: config
            configMap:
              name: grafana
          - name: ldap
            secret:
              secretName: grafana
              items:
                - key: ldap-toml
                  path: ldap.toml
        # nothing
    volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: 
        resources:
          requests:
            storage: 50

jenkins-infra, jenkins-infra, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
jenkins-infra, jenkins-infra, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: jenkins-infra
    namespace: jenkins-infra
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "jenkins-infra"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: jenkins-infra
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "jenkins-infra"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "jenkins-infra"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
          checksum/config: 20f61d5d0b46862c0ae7c0b42d1a80f59be1de543e6b30e4faf367d90ce83bd6
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.284-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.284-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
                   -Dcasc.reload.token=$(POD_NAME) -XshowSettings:vm -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:MaxRAM=4g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+UseG1GC
  
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
              requests:
                cpu: "2"
                memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "jenkins-infra-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'jenkins-infra'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
+             - name: METHOD
+               value: SLEEP
+             - name: SLEEP_TIME
+               value: "60"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: jenkins-infra
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-infra
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: jenkins-infra

release, default-release-jenkins, Secret (v1) has changed:
+ Changes suppressed on sensitive content of type Secret
release, default-release-jenkins, StatefulSet (apps) has changed:
  # Source: jenkins/charts/jenkins/templates/jenkins-controller-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: default-release-jenkins
    namespace: release
    labels:
      "app.kubernetes.io/name": 'jenkins'
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
  spec:
    serviceName: default-release-jenkins
    replicas: 1
    selector:
      matchLabels:
        "app.kubernetes.io/component": "jenkins-controller"
        "app.kubernetes.io/instance": "default-release-jenkins"
    template:
      metadata:
        labels:
          "app.kubernetes.io/name": 'jenkins'
          "app.kubernetes.io/managed-by": "Helm"
          "app.kubernetes.io/instance": "default-release-jenkins"
          "app.kubernetes.io/component": "jenkins-controller"
        annotations:
          checksum/config: 3df3eb60ff5fed2d1a6b4a2474078ba8c707ae7a90bd659d08f707ea59b6e67c
      spec:
        securityContext:
      
          runAsUser: 1000
          fsGroup: 1000
          runAsNonRoot: true
        serviceAccountName: "jenkins-controller"
        initContainers:
          - name: "init"
            image: "jenkins/jenkins:2.277.1-jdk11"
            imagePullPolicy: "Always"
            command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
            resources:
              limits:
-               cpu: 2000m
-               memory: 4096Mi
+               cpu: "2"
+               memory: 4Gi
              requests:
-               cpu: 50m
-               memory: 256Mi
+               cpu: "2"
+               memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_home
                name: jenkins-home
              - mountPath: /var/jenkins_config
                name: jenkins-config
              - mountPath: /usr/share/jenkins/ref/plugins
                name: plugins
              - mountPath: /var/jenkins_plugins
                name: plugin-dir
        containers:
          - name: jenkins
            image: "jenkins/jenkins:2.277.1-jdk11"
            imagePullPolicy: "Always"
            args: [ "--httpPort=8080"]
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: JAVA_OPTS
                value: >-
-                  -Dcasc.reload.token=$(POD_NAME) 
+                  -Dcasc.reload.token=$(POD_NAME) -XshowSettings:vm -XX:+UseStringDeduplication -XX:+ParallelRefProcEnabled -XX:+DisableExplicitGC -XX:MaxRAM=4g -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/ -XX:+UseG1GC
+ 
              - name: JENKINS_OPTS
                value: >-
                  
              - name: JENKINS_SLAVE_AGENT_PORT
                value: "50000"
              - name: SECRETS
                value: /var/jenkins_secrets
              - name: CASC_JENKINS_CONFIG
                value: /var/jenkins_home/casc_configs
            ports:
              - containerPort: 8080
                name: http
              - containerPort: 50000
                name: agent-listener
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            startupProbe:
              failureThreshold: 12
              httpGet:
                path: '/login'
                port: http
              initialDelaySeconds: 120
              periodSeconds: 10
              timeoutSeconds: 5
            resources:
              limits:
-               cpu: 2000m
-               memory: 4096Mi
+               cpu: "2"
+               memory: 4Gi
              requests:
-               cpu: 50m
-               memory: 256Mi
+               cpu: "2"
+               memory: 4Gi
            volumeMounts:
              - mountPath: /var/jenkins_secrets
                name: jenkins-secrets
                readOnly: true
              - mountPath: /var/jenkins_home
                name: jenkins-home
                readOnly: false
              - mountPath: /var/jenkins_config
                name: jenkins-config
                readOnly: true
              - mountPath: /usr/share/jenkins/ref/plugins/
                name: plugin-dir
                readOnly: false
              - name: sc-config-volume
                mountPath: /var/jenkins_home/casc_configs
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-username
                subPath: jenkins-admin-user
                readOnly: true
              - name: admin-secret
                mountPath: /run/secrets/chart-admin-password
                subPath: jenkins-admin-password
                readOnly: true
          - name: config-reload
            image: "kiwigrid/k8s-sidecar:0.1.275"
            imagePullPolicy: IfNotPresent
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: LABEL
                value: "default-release-jenkins-jenkins-config"
              - name: FOLDER
                value: "/var/jenkins_home/casc_configs"
              - name: NAMESPACE
                value: 'release'
              - name: REQ_URL
                value: "http://localhost:8080/reload-configuration-as-code/?casc-reload-token=$(POD_NAME)"
              - name: REQ_METHOD
                value: "POST"
              - name: REQ_RETRY_CONNECT
                value: "10"
+             - name: METHOD
+               value: SLEEP
+             - name: SLEEP_TIME
+               value: "60"
            resources:
              {}
            volumeMounts:
              - name: sc-config-volume
                mountPath: "/var/jenkins_home/casc_configs"
              - name: jenkins-home
                mountPath: /var/jenkins_home
  
        volumes:
        - name: jenkins-secrets
          secret:
            secretName: jenkins-secrets
        - name: plugins
          emptyDir: {}
        - name: jenkins-config
          configMap:
            name: default-release-jenkins
        - name: plugin-dir
          emptyDir: {}
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: default-release-jenkins
        - name: sc-config-volume
          emptyDir: {}
        - name: admin-secret
          secret:
            secretName: default-release-jenkins
release, default-release-jenkins-jenkins-config-ldap-settings, ConfigMap (v1) has changed:
  # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: default-release-jenkins-jenkins-config-ldap-settings
    namespace: release
    labels:
      "app.kubernetes.io/name": jenkins
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
      default-release-jenkins-jenkins-config: "true"
  data:
    ldap-settings.yaml: |-
      jenkins:
        securityRealm:
          ldap:
            configurations:
              - server: "${LDAP_SERVER}"
                rootDN: "${LDAP_ROOT_DN}"
                managerDN: "${LDAP_MANAGER_DN}"
                managerPasswordSecret: "${LDAP_MANAGER_PASSWORD}"
+               mailAddressAttributeName: "mail"
                userSearch: cn={0}
+               userSearchBase: "ou=people"
+               groupSearchBase: "ou=groups"
+           disableMailAddressResolver: false
+           groupIdStrategy: "caseInsensitive"
+           userIdStrategy: "caseInsensitive"
            cache:
              size: 100
              ttl: 300
release, default-release-jenkins-jenkins-config-matrix-settings, ConfigMap (v1) has changed:
  # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: default-release-jenkins-jenkins-config-matrix-settings
    namespace: release
    labels:
      "app.kubernetes.io/name": jenkins
      "helm.sh/chart": "jenkins-3.2.4"
      "app.kubernetes.io/managed-by": "Helm"
      "app.kubernetes.io/instance": "default-release-jenkins"
      "app.kubernetes.io/component": "jenkins-controller"
      default-release-jenkins-jenkins-config: "true"
  data:
    matrix-settings.yaml: |-
      jenkins:
        authorizationStrategy:
          globalMatrix:
            permissions:
              - "Overall/Administer:release-core"
-             - "Overall/SystemRead:all"
-             - "Overall/Read:all"
-             - "Job/Read:all"
+             - "Overall/SystemRead:authenticated"
+             - "Overall/Read:authenticated"
+             - "Job/Read:authenticated"
release, default-release-jenkins-jenkins-config-advisor-settings, ConfigMap (v1) has been added:
- 
+ # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: default-release-jenkins-jenkins-config-advisor-settings
+   namespace: release
+   labels:
+     "app.kubernetes.io/name": jenkins
+     "helm.sh/chart": "jenkins-3.2.4"
+     "app.kubernetes.io/managed-by": "Helm"
+     "app.kubernetes.io/instance": "default-release-jenkins"
+     "app.kubernetes.io/component": "jenkins-controller"
+     default-release-jenkins-jenkins-config: "true"
+ data:
+   advisor-settings.yaml: |-
+     jenkins:
+       disabledAdministrativeMonitors:
+         - com.cloudbees.jenkins.plugins.advisor.Reminder
+     advisor:
+       acceptToS: true
+       ccs:
+       - "damien.duportal@gmail.com"
+       email: "jenkins@oblak.com"
+       excludedComponents:
+         - "ItemsContent"
+         - "GCLogs"
+         - "Agents"
+         - "RootCAs"
+         - "SlaveLogs"
+         - "HeapUsageHistogram"
+       nagDisabled: true
release, default-release-jenkins-jenkins-config-system-settings, ConfigMap (v1) has been added:
- 
+ # Source: jenkins/charts/jenkins/templates/jcasc-config.yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: default-release-jenkins-jenkins-config-system-settings
+   namespace: release
+   labels:
+     "app.kubernetes.io/name": jenkins
+     "helm.sh/chart": "jenkins-3.2.4"
+     "app.kubernetes.io/managed-by": "Helm"
+     "app.kubernetes.io/instance": "default-release-jenkins"
+     "app.kubernetes.io/component": "jenkins-controller"
+     default-release-jenkins-jenkins-config: "true"
+ data:
+   system-settings.yaml: |-
+     jenkins:
+       disabledAdministrativeMonitors:
+         - "jenkins.security.QueueItemAuthenticatorMonitor"

@olblak
Member

olblak commented Mar 17, 2021

IMHO 60 is fine; we can still adjust that value later. Thanks for this PR!

@olblak olblak merged commit 8154932 into jenkins-infra:master Mar 17, 2021
@dduportal dduportal deleted the chore/switch-ci-sidecars-to-polling branch March 17, 2021 13:36