
'kubectl rollout restart' on member cluster don't restart deploy #5120

Open
Patrick0308 opened this issue Jul 2, 2024 · 15 comments · May be fixed by #5128
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@Patrick0308

What happened:
kubectl rollout restart deploy xxxx does not work as expected.
The new pod is terminated shortly after it starts.


What you expected to happen:
The deployment restarts successfully.

How to reproduce it (as minimally and precisely as possible):
kubectl rollout restart deploy xxxx

Anything else we need to know?:

Environment:

  • Karmada version: 1.9.1
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:
Patrick0308 added the kind/bug label on Jul 2, 2024
@XiShanYongYe-Chang
Member

Hi @Patrick0308, the rollout restart of a Deployment relies on creating a new ReplicaSet, but in Karmada the deployment-controller is not enabled, so no ReplicaSet will be created.

Additionally, I would like to know where you queried the pod resources.

@Patrick0308
Author

@XiShanYongYe-Chang I ran the rollout restart of the deployment on a member cluster.

@XiShanYongYe-Chang
Member

Thanks for explaining!

There may be a problem here: the changes made to the Deployment on the member cluster may be overwritten by the Karmada control plane.

@Patrick0308
Author

kubectl rollout restart restarts a deployment by adding a "kubectl.kubernetes.io/restartedAt" annotation to the pod template.
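For reference, after the command the Deployment ends up looking roughly like the sketch below (the name and timestamp here are only placeholders for illustration):

# Rough shape of the Deployment after `kubectl rollout restart deploy xxxx`
# on the member cluster; the timestamp shown is only an example value.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxx                         # placeholder name from the reproduce step
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-07-02T10:00:00+08:00"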
So I created the following ResourceInterpreterCustomization to work around the issue temporarily:

apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-restart
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: >
        function Retain(desiredObj, observedObj)
          if desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] == nil then
            desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = observedObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"]
          end
          return desiredObj
        end

Maybe we should revise the default ResourceInterpreterCustomization?

@XiShanYongYe-Chang
Member

I'm not sure it's appropriate to add that logic to the default interpreter.

Let's ask more people to weigh in.

/cc @RainbowMango @chaunceyjiang @whitewindmills

@Patrick0308
Author

A related question: should we support something like the server-side apply feature, to allow other controllers or clients to modify fields?

@XiShanYongYe-Chang
Member

The retain resource interpreter is designed to leave this choice up to the user.

@whitewindmills
Member

Maybe we should revise the default ResourceInterpreterCustomization?

First, I agree that this logic should be supported in the default interpreter, because the administrator has not made any changes to .spec, and Karmada has no reason to prevent the administrator from using the kubectl rollout restart deployments ... command to restart the Deployment.
In addition, I'm not sure whether there are other similar situations we could solve at the same time.

A related question: should we support something like the server-side apply feature, to allow other controllers or clients to modify fields?

Yes, I think we already support that: the retain interpreter.

Patrick0308 changed the title from 'kubectl rollout restart' not restart deploy to 'kubectl rollout restart' on member cluster don't restart deploy on Jul 4, 2024
@XiShanYongYe-Chang
Member

because the administrator has not made any changes to .spec,

In fact, kubectl rollout restart does update the Deployment's pod template content; refer to #5120 (comment). That's why the Karmada control plane overwrites it again.

I'm wondering whether this scenario needs to be handled by the default resource interpreter.

Hi @Patrick0308, do you use this a lot in production?

@Patrick0308
Author

Patrick0308 commented Jul 4, 2024

Do you use this a lot in production?

I think an administrator sometimes wants to restart the deployment in only one cluster, for example when a Secret mounted into the deployment is updated by secret management in that member cluster.

The retain interpreter from #5120 (comment) is not the best way to solve this issue. Please see PR #5128, which lets you run the kubectl rollout restart command wherever you like.

@XiShanYongYe-Chang
Member

Thanks for your user story; I think it represents a meaningful use case.

The retain interpreter from #5120 (comment) is not the best way to solve this issue.

Is it inconvenient to use custom interpreters? I'd like to know from the user's point of view, because we designed it to be easy for users to extend.

@Patrick0308
Author

The retain interpreter from #5120 (comment) is not the best way to solve this issue.

        function Retain(desiredObj, observedObj)
          if desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] == nil then
            desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = observedObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"]
          end
          return desiredObj
        end

I mean that this script makes kubectl rollout restart deploy xxx on the Karmada control plane stop working, so it's not the best way.

@XiShanYongYe-Chang
Member

Oh, the logic of this script is not correct. It should keep the latest value from the member cluster, just like the way you described in #5128.

@RainbowMango
Member

A question: what if both the Karmada side and the member cluster have the kubectl.kubernetes.io/restartedAt annotation?
I mean, a user performs kubectl rollout restart on both Karmada and the member cluster.
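One possible direction, sketched here only as an illustration rather than something decided in this thread: since restartedAt is an RFC3339 timestamp, a retention script could keep whichever side restarted more recently. The customization name below is hypothetical.

apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: deployment-restart-keep-latest   # hypothetical name, for illustration only
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: |
        function Retain(desiredObj, observedObj)
          local key = "kubectl.kubernetes.io/restartedAt"
          local observed = observedObj.spec.template.metadata.annotations
          local desired = desiredObj.spec.template.metadata.annotations
          -- Nothing to retain if the member cluster has no restartedAt annotation.
          if observed == nil or observed[key] == nil then
            return desiredObj
          end
          -- Make sure the desired template has an annotations map to write into.
          if desired == nil then
            desiredObj.spec.template.metadata.annotations = {}
            desired = desiredObj.spec.template.metadata.annotations
          end
          -- Keep the later timestamp; plain string comparison is enough for
          -- RFC3339 values that share the same UTC offset.
          if desired[key] == nil or observed[key] > desired[key] then
            desired[key] = observed[key]
          end
          return desiredObj
        end

Whether comparing timestamps like this is actually desirable is exactly the open question above, so treat it only as a starting point.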

@chaunceyjiang
Member

Should deployment.kubernetes.io/revision also be retained?
