Remove creationTimestamp from generated CRD #402
I did some digging and found out that this comes from the JSON encoding, so I'm not sure this is easily fixable in this repo. (I'm not a maintainer, but I stumbled over the same issue.)
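For context, here is a minimal sketch of the encoding behaviour being described, assuming the standard apimachinery types (the `main` package is only for illustration): `ObjectMeta.CreationTimestamp` is a `metav1.Time` struct value, so `omitempty` cannot drop it, and `metav1.Time`'s custom `MarshalJSON` renders the zero value as `null`.

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// CreationTimestamp is a metav1.Time struct value, not a pointer, so
	// encoding/json's `omitempty` cannot skip it, and metav1.Time's custom
	// MarshalJSON serialises the zero value as JSON null.
	meta := metav1.ObjectMeta{Name: "example"}
	out, err := json.Marshal(meta)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
	// Expected output: {"name":"example","creationTimestamp":null}
}
```

Since the YAML output is produced via JSON marshalling (sigs.k8s.io/yaml), that null presumably carries over into the generated manifests as `creationTimestamp: null`.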
See kubernetes-sigs/controller-tools#402. According to the schema the generated CRDs are invalid, but Kubernetes seems not to care.
Thanks for the nice "digging" @uthark! I am experiencing this issue too, after operator-sdk migrated to kubebuilder. Since this happens to both generated YAML files, the CRD and the operator Role, I think you are right that this is a general issue with the YAML marshalling. Any ideas on how or where this can be resolved? It is a bit annoying to have to modify the generated files.
Probable root cause: kubernetes/kubernetes#86811, and a potential local fix: kubernetes/kubernetes#86811 (comment).
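Until that is resolved upstream, a common local workaround (a sketch only, and an assumption on my part rather than the fix from the linked comment) is to post-process the generated files and drop the offending lines, for example with a small Go helper like the following; the `config` directory is hypothetical:

```go
package main

import (
	"bytes"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// "config" is a hypothetical directory containing the generated CRD/RBAC YAML.
	root := "config"
	err := filepath.Walk(root, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil {
			return walkErr
		}
		if info.IsDir() || !strings.HasSuffix(path, ".yaml") {
			return nil
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept [][]byte
		for _, line := range bytes.Split(data, []byte("\n")) {
			// Drop the metadata line emitted for the zero-valued timestamp.
			if bytes.Equal(bytes.TrimSpace(line), []byte("creationTimestamp: null")) {
				continue
			}
			kept = append(kept, line)
		}
		return os.WriteFile(path, bytes.Join(kept, []byte("\n")), info.Mode())
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Many projects achieve the same effect by piping the generated files through sed after the manifests target in their Makefile.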
@k8s-triage-robot: Closing this issue.
It seems like this is still an issue; how do we feel about re-opening it?
/reopen |
@asilverman: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
This issue still exists. How about removing all of them here: controller-tools/pkg/genall/genall.go, line 181 (at 73391d1)?
I'm using kubebuilder to implement a k8s operator, and the generated manifests contain `creationTimestamp: null` as well.
@pg-yang: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@joelanford: Reopened this issue.
## Description
Bumps [sigs.k8s.io/controller-tools](https://github.com/kubernetes-sigs/controller-tools) from 0.10.0 to 0.14.0.

## Why is this needed
EKS-Anywhere needs to use a newer version of controller-gen that includes this [fix](kubernetes-sigs/controller-tools#402) so that it will not populate `creationTimestamp: null` in the generated manifests.
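For projects that pin controller-gen through their Go module, picking up the fixed version looks roughly like the sketch below. This assumes the common tools.go pattern (kubebuilder-scaffolded Makefiles typically set a CONTROLLER_TOOLS_VERSION variable instead); the module name in go.mod is whatever the operator project uses.

```go
//go:build tools

// tools.go keeps sigs.k8s.io/controller-tools in go.mod so that
// `go run sigs.k8s.io/controller-tools/cmd/controller-gen` uses the pinned
// version; bumping go.mod to v0.14.0 then picks up the fix referenced above.
package tools

import (
	_ "sigs.k8s.io/controller-tools/cmd/controller-gen"
)
```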
controller-gen currently adds the metadata field `creationTimestamp: null`. I think this field should not be included in the CRD YAML, since this value should be added by the server.