Description
What steps did you take:
We are unable to use any version newer than v0.59.4 with app-deploy; every deploy fails with a resource conflict (approved diff no longer matches).
By process of elimination we have tested the following versions:
0.63.3: FAIL
0.62.1: FAIL
0.61.0: FAIL
0.60.2: FAIL
0.60.0: FAIL
0.59.4: SUCCESS
What happened:
We are deploying the full cluster configuration from scratch via kapp app-deploy, roughly 800 resources in total, within a single app. This works amazingly well with kapp, far better than Helm!
However, since v0.60.0 we encounter this error on the first apply:
- update daemonset/aws-node (apps/v1) namespace: kube-system: Failed to update due to resource conflict (approved diff no longer matches): Updating resource daemonset/aws-node (apps/v1) namespace: kube-system: API server says: Operation cannot be fulfilled on daemonsets.apps "aws-node": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict): Recalculated diff:
3, 3 - annotations:
4, 3 - deprecated.daemonset.template.generation: "1"
9, 7 - app.kubernetes.io/managed-by: Helm
11, 8 - app.kubernetes.io/version: v1.19.0
12, 8 - helm.sh/chart: aws-vpc-cni-1.19.0
14, 9 + kapp.k14s.io/app: "1733129085919676830"
14, 10 + kapp.k14s.io/association: v1.ca251169611f162ef5186bbf4f512ca0
326,323 - revisionHistoryLimit: 10
332,328 - creationTimestamp: null
337,332 + kapp.k14s.io/app: "1733129085919676830"
337,333 + kapp.k14s.io/association: v1.ca251169611f162ef5186bbf4f512ca0
356,353 - - hybrid
357,353 - - auto
362,357 - - name: ANNOTATE_POD_IP
363,357 - value: "false"
384,377 - - name: CLUSTER_NAME
385,377 - value: o11n-eks-int-4151
399,390 - value: "false"
400,390 + value: "true"
402,393 + - name: MINIMUM_IP_TARGET
402,394 + value: "25"
405,398 - value: v1.19.0
406,398 - - name: VPC_ID
407,398 - value: vpc-23837a4a
408,398 + value: v1.18.2
409,400 - value: "1"
410,400 + value: "0"
410,401 + - name: WARM_IP_TARGET
410,402 + value: "5"
411,404 - value: "1"
412,404 + value: "0"
422,415 - image: 602401143452.dkr.ecr.us-east-2.amazonaws.com/amazon-k8s-cni:v1.19.0-eksbuild.1
423,415 - imagePullPolicy: IfNotPresent
424,415 + image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.18.2
431,423 - failureThreshold: 3
433,424 - periodSeconds: 10
434,424 - successThreshold: 1
440,429 - protocol: TCP
448,436 - failureThreshold: 3
450,437 - periodSeconds: 10
451,437 - successThreshold: 1
455,440 - cpu: 25m
456,440 + cpu: 50m
456,441 + memory: 80Mi
461,447 - terminationMessagePath: /dev/termination-log
462,447 - terminationMessagePolicy: File
489,473 - image: 602401143452.dkr.ecr.us-east-2.amazonaws.com/amazon/aws-network-policy-agent:v1.1.5-eksbuild.1
490,473 - imagePullPolicy: Always
491,473 + image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-network-policy-agent:v1.1.2
494,477 - cpu: 25m
495,477 + cpu: 50m
495,478 + memory: 80Mi
500,484 - terminationMessagePath: /dev/termination-log
501,484 - terminationMessagePolicy: File
511,493 - dnsPolicy: ClusterFirst
519,500 - image: 602401143452.dkr.ecr.us-east-2.amazonaws.com/amazon-k8s-cni-init:v1.19.0-eksbuild.1
520,500 - imagePullPolicy: Always
521,500 + image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.18.2
524,504 - cpu: 25m
525,504 + cpu: 50m
525,505 + memory: 80Mi
527,508 - terminationMessagePath: /dev/termination-log
528,508 - terminationMessagePolicy: File
533,512 - restartPolicy: Always
534,512 - schedulerName: default-scheduler
536,513 - serviceAccount: aws-node
544,520 - type: ""
548,523 - type: ""
552,526 - type: ""
568,541 - maxSurge: 0
- update daemonset/kube-proxy (apps/v1) namespace: kube-system: Failed to update due to resource conflict (approved diff no longer matches): Updating resource daemonset/kube-proxy (apps/v1) namespace: kube-system: API server says: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict): Recalculated diff:
3, 3 - annotations:
4, 3 - deprecated.daemonset.template.generation: "1"
10, 8 + kapp.k14s.io/app: "1733129085919676830"
10, 9 + kapp.k14s.io/association: v1.5c5a114581f350e2b57df0ed7799471d
134,134 + kapp.k14s.io/app: "1733129085919676830"
134,135 + kapp.k14s.io/association: v1.5c5a114581f350e2b57df0ed7799471d
153,155 - - auto
159,160 - - --hostname-override=$(NODE_NAME)
160,160 - env:
161,160 - - name: NODE_NAME
162,160 - valueFrom:
163,160 - fieldRef:
164,160 - apiVersion: v1
165,160 - fieldPath: spec.nodeName
166,160 - image: 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/kube-proxy:v1.29.10-minimal-eksbuild.3
167,160 + image: 602401143452.dkr.ecr.us-east-2.amazonaws.com/eks/kube-proxy:v1.29.7-eksbuild.2
171,165 - cpu: 100m
172,165 + cpu: 50m
172,166 + memory: 45Mi
My assumption is that a webhook or a controller is modifying certain fields of these resources between kapp's diff and apply, so the recalculated diff no longer matches the approved one.
However, we need to be able to configure the EKS cluster via kapp even when such a temporary clash occurs.
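For illustration, a partial mitigation might be a kapp Config rebase rule that lets the live (existing) value win for the fields a controller keeps rewriting, so the recalculated diff stays stable. This is only a sketch: the DaemonSet name and field path below are taken from the diff above and would need to be adjusted to whatever the controller actually modifies.

```yaml
# Sketch of a possible workaround, not a fix for the retry behaviour itself.
# It tells kapp to prefer the value already on the cluster ("existing") for the
# env vars of the aws-node containers, instead of contesting them with a controller.
apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
- path: [spec, template, spec, containers, {allIndexes: true}, env]
  type: copy
  sources: [existing, new]
  resourceMatchers:
  - kindNamespaceNameMatcher: {kind: DaemonSet, namespace: kube-system, name: aws-node}
```

The obvious drawback is that kapp then never applies our own values for those fields, which is why we would much prefer kapp to simply retry the update.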
What did you expect:
kapp to retry the conflicting updates against the latest version of the resources instead of failing the deploy.
@praveenrewar
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help work on this issue.