
Remove creationTimestamp from generated CRD #402

Closed
pgier opened this issue Feb 10, 2020 · 25 comments · Fixed by #800

@pgier
Contributor

pgier commented Feb 10, 2020

controller-gen currently adds the metadata field creationTimestamp: null. I think this field should not be included in the CRD yaml since this value should be added by the server.
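For illustration, the generated CRD YAML begins with metadata like this (a sketch; the `widgets.example.com` name is hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Emitted by controller-gen even though the server should set it:
  creationTimestamp: null
  name: widgets.example.com
```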

@uthark

uthark commented Mar 10, 2020

I did some digging and found that this comes from the JSON encoding.
yaml.Marshal is called at https://github.com/uthark/controller-tools/blob/master/pkg/genall/genall.go#L106, which in turn calls json.Marshal (https://github.com/kubernetes-sigs/yaml/blob/v1.1.0/yaml.go#L17). The field reaches the `if f.omitEmpty && isEmptyValue(fv)` check (https://golang.org/src/encoding/json/encode.go, line 747), but isEmptyValue returns false for an empty meta/v1.Time (https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/time.go#L33-L35) because it is a struct, so the field is never omitted and its custom marshaler emits null.

So I'm not sure this is easily fixable in this repo. (I'm not a maintainer, but I stumbled over the same issue.)

phil9909 added a commit to phil9909/ytt-lint that referenced this issue May 4, 2020
See kubernetes-sigs/controller-tools#402
According to the schema the generated CRDs are invalid, but kubernetes
seems not to care.
phil9909 added a commit to SAP-archive/ytt-lint that referenced this issue May 22, 2020
See kubernetes-sigs/controller-tools#402
According to the schema the generated CRDs are invalid, but kubernetes
seems not to care.
phil9909 added a commit to phil9909/ytt-lint that referenced this issue May 22, 2020
See kubernetes-sigs/controller-tools#402
According to the schema the generated CRDs are invalid, but kubernetes
seems not to care.
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 8, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 8, 2020
@erikgb
Contributor

erikgb commented Jul 18, 2020

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 18, 2020
@erikgb
Contributor

erikgb commented Jul 18, 2020

Thanks for the nice digging, @uthark! I am experiencing this issue too, after operator-sdk migrated to kubebuilder. Since this happens to both generated YAML files, the CRD and the operator Role, I think you are right that this is a general issue with the YAML marshalling. Any ideas on how/where this can be resolved? It is a bit annoying to have to modify the generated files.

@erikgb
Contributor

erikgb commented Jul 18, 2020

Probable root cause: kubernetes/kubernetes#86811, and a potential local fix: kubernetes/kubernetes#86811 (comment).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2020
@camilamacedo86
Member

/remove-lifecycle stale

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 8, 2021
@erikgb
Contributor

erikgb commented Jan 8, 2021

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 8, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 8, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 8, 2021
@erikgb
Contributor

erikgb commented May 8, 2021

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@shaneutt
Member

shaneutt commented Dec 8, 2021

It seems like this is still an issue, how do we feel about re-opening this?

@asilverman

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 13, 2022
@asilverman

/reopen

@k8s-ci-robot
Contributor

@asilverman: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pg-yang

pg-yang commented Mar 1, 2023

/reopen

It seems like this is still an issue, how do we feel about re-opening this?

This issue still exists. How about removing all nil fields from jsonObj? I think it's safe.

var jsonObj map[string]interface{}

I'm using kubebuilder to implement a Kubernetes operator, and the generated creationTimestamp: null gets transformed to creationTimestamp: "null" by kustomize.

@k8s-ci-robot
Contributor

@pg-yang: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

It seems like this is still an issue, how do we feel about re-opening this?

This issue still exists, How about removing all nil fields from jsonObj , it's safe.

var jsonObj map[string]interface{}

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@joelanford
Member

/reopen
/assign

@k8s-ci-robot
Contributor

@joelanford: Reopened this issue.

In response to this:

/reopen
/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Apr 21, 2023
mergify bot added a commit to tinkerbell/cluster-api-provider-tinkerbell that referenced this issue Mar 7, 2024
## Description

Bumps [sigs.k8s.io/controller-tools](https://github.com/kubernetes-sigs/controller-tools) from 0.10.0 to 0.14.0.

## Why is this needed

EKS-Anywhere needs to use a newer version of controller-gen that includes this [fix](kubernetes-sigs/controller-tools#402) so that it will not populate `creationTimestamp: null` in the generated manifests.