Empty StatefulSet nodeAffinity gets built with string value "null" in complex scenario #5171
This issue is currently awaiting triage. SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @varshaprasad96
Dug a bit deeper into the issue. From the findings, as @sbuliarca mentioned, it looks like this happens in special cases where we have 2 patches: the strategic merge patch is applied two times. After the first patch, the output seems to be along the lines of: […]
We can see that the […]. Whereas when the second patch is performed, the parser considers both these nodes as […]. However, when I apply a single patch with […], the issue does not reproduce.
Based on how deep I could dig, this seems to be an issue with the yaml parser rather than with Kustomize itself. It may be an issue with the yaml library (https://github.com/go-yaml/yaml) that kustomize uses.
If this is an issue with the yaml library, then it depends on kubernetes-sigs/yaml#76 before we can properly fix it, and we should then file and fix the issue in the new go-yaml fork when it's ready. In the meantime, I think we can't do much about yaml parser issues in kustomize, so I will close this for now. You can keep an eye on kubernetes-sigs/yaml#76 for the state of progress; I will try to find time to update that one soon.
Related issues:
* kubernetes-sigs#5031
* kubernetes-sigs#5171

After noting this behaviour was not present in d89b448, a `git bisect` pointed to the change 1b7db20. The issue with that change is that upon seeing a `null` node it would replace it with a node whose value was equivalent but without a `!!null` tag. This meant that one application of a patch would have the desired result: the field would be `null` in the output, but on a second application of a similar patch the field would be rendered as `"null"`. To avoid this, define a special flag node that is `null` but never removed during merges. The added `TestApplySmPatch_Idempotency` test verifies this behaviour. However, this approach may change the value of the node in the output, e.g. if originally there was `field: ~` it would be replaced by `field: null`. The added test case in `kyaml/yaml/merge2/scalar_test.go` demonstrates this behaviour (this test currently fails, as I expect the desired outcome is to preserve the null marker). See also kubernetes-sigs#5365 for an alternative approach.
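For context on the `field: ~` versus `field: null` distinction above: YAML has several equivalent spellings of the null value, all of which resolve to the `!!null` tag, while a quoted value is an ordinary string. An illustrative fragment (not taken from the repo):

```yaml
a: null    # canonical null
b: ~       # tilde form, also null
c:         # empty value, also null
d: "null"  # NOT null: the four-character string "null"
```

Losing the `!!null` tag on a node is what pushes the serializer from the first spellings to the last one.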
I believe this has the same underlying cause as #5031. I've opened a PR with a proposed solution, #5519, which has more details.
Related issues:
* kubernetes-sigs#5031
* kubernetes-sigs#5171

After noting this behaviour was not present in d89b448, a `git bisect` pointed to the change 1b7db20. The issue with that change is that upon seeing a `null` node it would replace it with a node whose value was equivalent but without a `!!null` tag. This meant that one application of a patch would have the desired result: the field would be `null` in the output, but on a second application of a similar patch the field would be rendered as `"null"`. To avoid this, define a new attribute on `RNode`s that is checked before clearing any node we should keep. The added `TestApplySmPatch_Idempotency` test verifies this behaviour. See also kubernetes-sigs#5365 for an alternative approach.
What happened?
Starting with kustomize 5.0, we're having an issue where, in a StatefulSet, an empty nodeAffinity gets built with output like:
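(Illustrative reconstruction; the original report's embedded snippet was lost in extraction, but per the issue title the symptom is a quoted "null" value:)

```yaml
affinity:
  nodeAffinity: "null"
```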
which is invalid, as the field holds the string "null" rather than a null value.
We worked around it by patching to drop the empty spec, but maybe this hides some other bug.
What did you expect to happen?
The empty nodeAffinity spec should be removed or built as:
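For example (a sketch; the field names are assumed from the issue title):

```yaml
affinity:
  nodeAffinity: null
```

Alternatively, the `affinity`/`nodeAffinity` keys could be dropped from the output entirely.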
How can we reproduce it (as minimally and precisely as possible)?
The scenario is a bit complex, so I put it in this repo: https://github.com/sbuliarca/kustomize-bug-replication/
There are several key pieces to the puzzle, and without all of them it cannot be reproduced:
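Per the maintainers' analysis elsewhere in this thread, the trigger involves applying two strategic merge patches that null out the affinity. A hypothetical sketch of one such patch (resource names invented; see the linked repo for the real setup):

```yaml
# patch-clear-affinity.yaml — hypothetical strategic merge patch; names invented
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  template:
    spec:
      affinity:
        nodeAffinity: null
```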
Expected output
Actual output
Kustomize version
5.0.3
Operating system
macOS