Description
Kustomize has a feature to append a hash of the contents of a ConfigMap/Secret as part of its name. This feature allows `kubectl apply` to automatically trigger a rolling update of a Deployment whenever the contents of the ConfigMap/Secret change, while also remaining a no-op when the contents do not change.
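For illustration, a minimal kustomization.yaml using this feature might look like the following sketch (deployment.yaml, app-config, and the literal value are hypothetical):

```yaml
# kustomization.yaml: minimal sketch of the name-suffix-hash behavior
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml        # a Deployment that consumes the generated ConfigMap
configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=debug
```

Kustomize emits the ConfigMap with a name like app-config-&lt;hash&gt; and rewrites the Deployment's reference to match, so changing the generated contents produces a new name (and therefore a rollout), while unchanged contents keep the same name.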
This poses some challenges for Argo CD:
- The nameSuffixHash leaves a bunch of old, unused ConfigMaps/Secrets lying around in the namespace. Argo CD subsequently considers the application OutOfSync (which it in fact is), because there are extra resources in the cluster which are no longer defined in git.
- Although Argo CD can automatically delete the old ConfigMaps using the `--prune` flag to `argocd app sync`, doing so might be undesirable in some circumstances. For example, during a rolling update, Argo CD would delete the old ConfigMaps referenced by the previous Deployment's pods, even when the rolling update has not yet fully completed.
Describe the solution you'd like
Some proposals:
- We need a way to perform `argocd app sync` without the CLI erroring out when it detects that there are resources that require pruning. To allow users to describe their intent, I propose a new annotation which can be set on resources; if it is found on an "extra" resource object, the CLI would not error (one way to attach it to generated resources is sketched after this list). Something like:
```yaml
metadata:
  annotations:
    argocd.argoproj.io/ignore-sync-condition: Extra
```
- The annotation to ignore extra resources should also affect application sync status. In other words, if we find extra ConfigMaps lying around the namespace, and those extra ConfigMaps have the `argocd.argoproj.io/ignore-sync-condition: Extra` annotation, then that ConfigMap should not contribute to the overall OutOfSync condition of the application.
- Today, pruning extra objects in Argo CD is all or nothing. As mentioned above, it is problematic to prune all of the extra objects early, because that deletes the previous ConfigMap out from under old Deployments which are still referencing it. Even if we introduce the annotation to ignore extra ConfigMaps, they will accumulate indefinitely until `argocd app sync --prune` is invoked.
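Assuming the annotation proposed above existed, one way a user could attach it to every generated ConfigMap/Secret is kustomize's generatorOptions (a sketch only; the annotation itself is the proposal, not an existing Argo CD feature):

```yaml
# kustomization.yaml excerpt: add the proposed (hypothetical) annotation to all generated resources
generatorOptions:
  annotations:
    argocd.argoproj.io/ignore-sync-condition: Extra
```

Note that this would only suppress the CLI error and the OutOfSync status; the old ConfigMaps would still accumulate.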
To mitigate the accumulation, Argo CD could implement some sort of ConfigMap/Secret garbage collection. Note that there are proposals in Kubernetes core to GC ConfigMaps/Secrets (kubernetes/kubernetes#22368), so I'm hesitant to implement anything specific to Argo CD.
Furthermore, detecting whether a ConfigMap or Secret is still "referenced" might be very difficult. One heuristic might be to look at all pods in the namespace and see if any pod spec still references the ConfigMap/Secret as a volume mount, env var, etc. However, this may be tricky to get right, is subject to timing issues (e.g. a Deployment not yet fully rolled out), and might not work in all cases (e.g. if the ConfigMap is retrieved via a K8s API query).
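To make that heuristic concrete, these are the pod spec fields it would need to scan (the pod and ConfigMap names below are illustrative; Secrets have an analogous set of fields):

```yaml
# The ways a pod spec can reference a ConfigMap
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:        # single-key environment variable reference
          name: app-config-d87mg5tg68
          key: LOG_LEVEL
    envFrom:
    - configMapRef:             # whole-ConfigMap environment reference
        name: app-config-d87mg5tg68
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:                  # volume reference
      name: app-config-d87mg5tg68
```

Anything that reads the ConfigMap at runtime through the Kubernetes API would not appear in any of these fields, which is why the heuristic cannot be complete.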