Creation timestamp set to invalid value "null" when using kustomize v5.0.0 #5031
I see that your original resource for the CRD includes the timestamp field in question, and previous versions of Kustomize were stripping it out. The behaviour change comes from a bug we fixed in 5.0, where our Strategic Merge Patch implementation violated the spec by removing null values.

Please reopen if removing the null from the original document does not solve the problem.

/triage resolved
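To illustrate the spec point, here is a minimal sketch using kyaml's `merge2` package, which kustomize's strategic-merge-patch support builds on (assuming the `MergeStrings` helper; exact output depends on the kyaml version):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/kustomize/kyaml/yaml"
	"sigs.k8s.io/kustomize/kyaml/yaml/merge2"
)

func main() {
	original := `
metadata:
  name: mycrd
  creationTimestamp: null
`
	patch := `
metadata:
  name: mycrd
`
	// Merge the (empty) patch into the original. Under the spec-compliant
	// behaviour shipped in v5.0, the explicit null in the original survives;
	// pre-5.0 versions stripped it from the output instead.
	out, err := merge2.MergeStrings(patch, original, false, yaml.MergeOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```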
@KnVerey: Closing this issue.
I agree, but I believe it does? In manual tests, I see either
That's interesting. I observed that even though my original CRD base had
I'm getting the same issue: a `creationTimestamp: "null"` appears in the output. I managed to make a minimal replication; it seems to take two patches to trigger.

mycrd.yaml:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mycrd
  creationTimestamp: null
spec: {}
```

kustomization.yaml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - mycrd.yaml
patches:
  - patch: |
      apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      metadata:
        name: mycrd
      # empty patch
  - patch: |
      apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      metadata:
        name: mycrd
      # empty patch
```

A `kustomize build` then produces:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: "null"
  name: mycrd
spec: {}
```

Please reopen!
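The two-patch trigger can also be reproduced below the CLI. A minimal sketch against kyaml's `merge2` package (assuming `MergeStrings`; the stringified `"null"` only appears on affected versions):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/kustomize/kyaml/yaml"
	"sigs.k8s.io/kustomize/kyaml/yaml/merge2"
)

func main() {
	doc := `
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mycrd
  creationTimestamp: null
spec: {}
`
	emptyPatch := `
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mycrd
`
	// Apply the same no-op patch twice, mirroring the two `patches` entries above.
	// On affected kyaml versions the first merge keeps `creationTimestamp: null`,
	// while the second renders it as the string "null".
	for i := 1; i <= 2; i++ {
		out, err := merge2.MergeStrings(emptyPatch, doc, false, yaml.MergeOptions{})
		if err != nil {
			panic(err)
		}
		doc = out
		fmt.Printf("after merge %d:\n%s\n", i, doc)
	}
}
```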
Quote from the API machinery:

Two things here:
Edit (answering my own questions): I faced the issue in the context of kubebuilder. It has nothing to do with kustomize: controller-gen is responsible for adding `creationTimestamp: null` to the generated manifests. But the issue related to kustomize still stands.
@nrvnrvn: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen

Thanks for the minimal example @james-callahan! I was able to reproduce with Kustomize 5.0.1. Please note that in addition to removing the timestamp from the source, another workaround that works is to delete it by including `creationTimestamp: null` in the final patch.
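A minimal sketch of that deletion workaround at the kyaml level, again assuming `merge2.MergeStrings`: per strategic-merge-patch semantics, an explicit null in a patch removes the field from the target, even if a previous build stringified it.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/kustomize/kyaml/yaml"
	"sigs.k8s.io/kustomize/kyaml/yaml/merge2"
)

func main() {
	doc := `
metadata:
  name: mycrd
  creationTimestamp: "null"
`
	// An explicit null in the patch deletes the destination field.
	patch := `
metadata:
  name: mycrd
  creationTimestamp: null
`
	out, err := merge2.MergeStrings(patch, doc, false, yaml.MergeOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // creationTimestamp is gone from the output
}
```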
@KnVerey: Reopened this issue.
For those who are coming from kubebuilder/operator-sdk/controller-gen, I have a PR up to remove `creationTimestamp: null` from the generated manifests.
Added a repro MR for anyone who wants to pick this up.
This tickles a bug in kustomize: kubernetes-sigs/kustomize#5031 (comment)
/assign
Related issues:
* kubernetes-sigs#5031
* kubernetes-sigs#5171

After noting this behaviour was not present in d89b448, a `git bisect` pointed to the change 1b7db20. The issue with that change is that upon seeing a `null` node it would replace it with a node whose value was equivalent but without a `!!null` tag. This meant that one application of a patch would have the desired result: the field would be `null` in the output; but on a second application of a similar patch the field would be rendered as `"null"`.

To avoid this, define a special flag node that is `null` but never removed during merges. The added `TestApplySmPatch_Idempotency` test verifies this behaviour. However, this approach may change the value of the node in the output, e.g. if originally there was `field: ~` it would be replaced by `field: null`. The added test case in `kyaml/yaml/merge2/scalar_test.go` demonstrates this behaviour (this test currently fails, as I expect the desired outcome is to preserve the null marker).

See also kubernetes-sigs#5365 for an alternative approach.
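The tag-dropping mechanism is easy to see with plain `gopkg.in/yaml.v3` (used here only for illustration; kyaml carries its own fork): the same scalar value `null` renders bare or quoted depending on its tag.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	// The same scalar value "null" serializes differently depending on its tag:
	// with the !!null tag it renders bare; as a string it must be quoted.
	for _, tag := range []string{"!!null", "!!str"} {
		doc := &yaml.Node{Kind: yaml.MappingNode, Content: []*yaml.Node{
			{Kind: yaml.ScalarNode, Tag: "!!str", Value: "creationTimestamp"},
			{Kind: yaml.ScalarNode, Tag: tag, Value: "null"},
		}}
		out, err := yaml.Marshal(doc)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s", tag, out)
		// !!null -> creationTimestamp: null
		// !!str  -> creationTimestamp: "null"
	}
}
```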
Related issues:
* kubernetes-sigs#5031
* kubernetes-sigs#5171

After noting this behaviour was not present in d89b448, a `git bisect` pointed to the change 1b7db20. The issue with that change is that upon seeing a `null` node it would replace it with a node whose value was equivalent but without a `!!null` tag. This meant that one application of a patch would have the desired result: the field would be `null` in the output; but on a second application of a similar patch the field would be rendered as `"null"`.

To avoid this, define a new attribute on `RNode`s that is checked before clearing any node we should keep. The added `TestApplySmPatch_Idempotency` test verifies this behaviour.

See also kubernetes-sigs#5365 for an alternative approach.
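A toy sketch of that keep-flag idea, with entirely hypothetical names (the real change touches kyaml's `RNode` and its merge walker):

```go
package main

import "fmt"

// node is a stand-in for an RNode scalar; keep marks a null that came from
// the original document and should survive merges (hypothetical field name).
type node struct {
	value string
	tag   string
	keep  bool
}

// merge applies the strategic-merge rule for scalars: a null in the patch
// deletes the destination field, unless the destination is flagged to keep.
func merge(patch, dest *node) *node {
	if patch == nil {
		return dest // field untouched by the patch
	}
	if patch.tag == "!!null" {
		if dest != nil && dest.keep {
			return dest // preserve the deliberate null from the source
		}
		return nil // spec behaviour: null in a patch deletes the field
	}
	return patch
}

func main() {
	src := &node{value: "null", tag: "!!null", keep: true}
	fmt.Println(merge(nil, src) != nil) // true: an empty patch leaves the null intact
}
```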
This should be fixed via #5519.

/close
@stormqueen1990: Closing this issue.
What happened?

I was trying to generate manifests for a cluster-api-provider project (tinkerbell/cluster-api-provider) and installed kustomize using the `install_kustomize` script. But the generated manifests had some spurious values for `creationTimestamp`.

Note the quotes around null. This is causing issues when trying to parse the time in a client-go application.

However, I don't see this issue in an older version of kustomize.
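For reference, a minimal sketch of the parse failure, assuming the manifest is decoded with `sigs.k8s.io/yaml` into `metav1.ObjectMeta`: a YAML `null` is a valid zero `metav1.Time`, while the string `"null"` fails RFC3339 parsing.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	good := []byte(`creationTimestamp: null`)
	bad := []byte(`creationTimestamp: "null"`)

	var meta metav1.ObjectMeta
	fmt.Println(yaml.Unmarshal(good, &meta)) // <nil>: YAML null is a valid zero Time
	fmt.Println(yaml.Unmarshal(bad, &meta))  // error: "null" is not an RFC3339 timestamp
}
```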
What did you expect to happen?

I expect the generated manifest to conform to the types of metav1.ObjectMeta, where creationTimestamp is a Time object and not a string, i.e., `null` instead of `"null"`.

How can we reproduce it (as minimally and precisely as possible)?
1. kustomization.yaml under `config/crd` in https://github.com/tinkerbell/cluster-api-provider-tinkerbell, commenting out some of the CRDs to reproduce minimally
2. bases/infrastructure.cluster.x-k8s.io_tinkerbellclusters.yaml under `config/crd` in https://github.com/tinkerbell/cluster-api-provider-tinkerbell

Building `config/crd` then gives the outputs below (a programmatic sketch follows this list).
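A programmatic equivalent of `kustomize build config/crd`, sketched with the `krusty` API (run from the repository root; API names per recent kustomize releases):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/kustomize/api/krusty"
	"sigs.k8s.io/kustomize/kyaml/filesys"
)

func main() {
	// Build the kustomization in config/crd, like `kustomize build config/crd`.
	k := krusty.MakeKustomizer(krusty.MakeDefaultOptions())
	resMap, err := k.Run(filesys.MakeFsOnDisk(), "config/crd")
	if err != nil {
		panic(err)
	}
	out, err := resMap.AsYaml()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // inspect creationTimestamp in the result
}
```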
Expected output

Actual output
Comparing this with the expected output, the only difference is the addition of the creationTimestamp field in `metadata`, with the value of `"null"`.

Kustomize version

v5.0.0

Operating system

macOS