
Metadata field of nested objects inside CR getting erased #1066

Closed
umaralam48 opened this issue Jun 1, 2021 · 5 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@umaralam48

What happened:
I am writing an operator using operator-sdk. One of my resources has a map of another resource type in its spec. The metadata field of the child resources is getting erased from the parent resource object.

What you expected to happen:
The metadata field should be present as provided.

How to reproduce it (as minimally and precisely as possible):

Here are the resource types:

type WorkerGroupSpec struct {
	// Workers is the list of all workers under this group.
	Workers map[string]Worker `json:"workers,omitempty"`
}

// Worker is the Schema for the workers API
type Worker struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkerSpec   `json:"spec,omitempty"`
	Status WorkerStatus `json:"status,omitempty"`
}

type WorkerSpec struct {
	// Image defines the Docker image to use.
	Image string `json:"image,omitempty"`
}
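
As an aside on the types above: an embedded metav1.ObjectMeta tagged `json:"metadata,omitempty"` does serialize as a nested metadata object with plain encoding/json. A minimal standalone sketch for checking that (Spec and Status are trimmed here, and the behaviour noted in the comment follows standard encoding/json rules rather than anything specific to this issue):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Worker mirrors the type above, with Spec and Status trimmed for brevity.
type Worker struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}

func main() {
	w := Worker{ObjectMeta: metav1.ObjectMeta{Name: "worker-2", Namespace: "namespace-1"}}
	b, _ := json.Marshal(w)
	// Prints JSON in which metadata.name and metadata.namespace are populated.
	fmt.Println(string(b))
}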

I'm creating this resource using client.Client.Create():

&v1alpha1.WorkerGroup{
		TypeMeta: v1.TypeMeta{
			Kind:       "WorkerGroup",
			APIVersion: "v1alpha1",
		},
		ObjectMeta: v1.ObjectMeta{
			Name:      "wg-2",
			Namespace: "namespace-1",
		},
		Spec: v1alpha1.WorkerGroupSpec{
			Workers: map[string]v1alpha1.Worker{
				"worker-2": {
					TypeMeta: v1.TypeMeta{
						Kind:       "Worker",
						APIVersion: "v1alpha1",
					},
					ObjectMeta: v1.ObjectMeta{
						Name:      "worker-2",
						Namespace: "namespace-1",
					},
					Spec:   v1alpha1.WorkerSpec{},
					Status: v1alpha1.WorkerStatus{},
				},
			},
		},
	}
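
For reference, a minimal sketch of the surrounding call (the package name, the v1alpha1 import path, and the helper name are illustrative assumptions, not taken from the report):

package controllers

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	v1alpha1 "example.com/my-operator/api/v1alpha1" // placeholder import path
)

// createWorkerGroup is a hypothetical helper; client.Client.Create submits the
// whole object, including the nested Worker entries and their ObjectMeta, to
// the API server in a single request.
func createWorkerGroup(ctx context.Context, c client.Client, wg *v1alpha1.WorkerGroup) error {
	return c.Create(ctx, wg)
}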

When I run kubectl describe -n namespace-1 workergroup wg-2, this is what I get:

Name:         wg-2
Namespace:    namespace-1
Labels:       <none>
Annotations:  <none>
API Version:  core.umar.io/v1alpha1
Kind:         WorkerGroup
Metadata:
  Creation Timestamp:  2021-06-01T14:27:10Z
  Generation:          1
  Managed Fields:
    API Version:  core.umar.io/v1alpha1
    Manager:         main
    Operation:       Update
    Time:            2021-06-01T14:27:10Z
  Resource Version:  57754485
  Self Link:         /apis/core.umar.io/v1alpha1/namespaces/namespace-1/workergroups/wg-2
  UID:               40736dcb-34bc-4830-83c2-d005dca0ed81
Spec:
  Workers:
    worker-2:
      API Version:  v1alpha1
      Kind:         Worker
      Metadata:
      Spec:
      Status:

As you can see, the metadata of the nested worker is empty.

Anything else we need to know?:
I tried writing a YAML manifest and creating the resource with kubectl apply -f; the same issue occurs.

Environment:

  • Kubernetes client and server versions (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
@umaralam48 umaralam48 added the kind/bug Categorizes issue or PR as related to a bug. label Jun 1, 2021
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jun 1, 2021
@eddiezane
Member

Your server version is a bit out of date. Are you able to reproduce this on a supported version?

We're going to need an easier way to reproduce this. Can you please create a repo with a minimal reproducible example?

@soltysh
Contributor

soltysh commented Aug 18, 2021

kubectl describe isn't fully functional against CRDs; it's currently a best-effort attempt at printing the resource. There's a plan to support a mechanism similar to the server-side printing that is used when you invoke kubectl get against your CRD.

/triage accept
/kind feature
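
For context, the server-side printing that kubectl get uses for CRDs is driven by additionalPrinterColumns in the CRD definition; with kubebuilder/operator-sdk these are usually generated from markers on the Go types. A hedged sketch (the printcolumn shown is illustrative only and not part of this issue's CRD):

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The printcolumn marker below is an example; controller-gen turns it into
// additionalPrinterColumns in the generated CRD, which the API server uses to
// build the table that `kubectl get workergroups` prints.

// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"

// WorkerGroup is the Schema for the workergroups API
type WorkerGroup struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec WorkerGroupSpec `json:"spec,omitempty"`
}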

@k8s-ci-robot
Contributor

@soltysh: The label(s) triage/accept cannot be applied, because the repository doesn't have them.

In response to this:

kubectl describe isn't fully functional against CRDs; it's currently a best-effort attempt at printing the resource. There's a plan to support a mechanism similar to the server-side printing that is used when you invoke kubectl get against your CRD.

/triage accept
/kind feature

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Aug 18, 2021
@eddiezane eddiezane added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Aug 18, 2021
@k8s-ci-robot k8s-ci-robot removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Aug 18, 2021
@soltysh soltysh removed the kind/bug Categorizes issue or PR as related to a bug. label Aug 18, 2021
@soltysh
Contributor

soltysh commented Aug 19, 2021

See kubernetes/enhancements#515 for details.
I'm going to close this one; please leave your comments in that other issue, which is tracking this work.
/close

@k8s-ci-robot
Contributor

@soltysh: Closing this issue.

In response to this:

See kubernetes/enhancements#515 for details.
I'm going to close this one; please leave your comments in that other issue, which is tracking this work.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
