'kubectl rollout restart' on member cluster doesn't restart deploy #5120
Hi @Patrick0308, the rollout restart of a deployment relies on the creation of a new ReplicaSet, but in Karmada the deployment-controller is not enabled, so no ReplicaSet will be created. Additionally, I would like to know: where did you query the pod resources?
@XiShanYongYe-Chang I ran `kubectl rollout restart` on the deployment in the member cluster.
Thanks for explaining! There may be a problem here. You can check whether the changes to the Deployment on the member cluster are being overwritten by the Karmada control plane.
`kubectl rollout restart` restarts a deployment by adding the `kubectl.kubernetes.io/restartedAt` annotation.
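For reference, the effect of the command is roughly to patch the pod template with a timestamp annotation, which is what triggers a new rollout (this is a sketch; the timestamp value shown is illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-06-01T00:00:00Z"
```

Because the annotation lives on the pod template in the member cluster, a control plane that re-applies its own desired spec will wipe it out unless it is explicitly retained.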
Maybe we should revise the default ResourceInterpreterCustomization?
I'm not sure it's appropriate to add that logic to the default interpreter. Let's ask more people for their opinions.
A related question: should we support something like the server-side apply feature, to allow other controllers or clients to modify resources?
The retain resource interpreter is designed to leave this choice up to the user.
First, I agree that this logic should be supported in the default interpreter, because the administrator has not made any changes to
Yes, I think we already support that: the retain interpreter.
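For illustration, a retain rule for this annotation could be expressed with a `ResourceInterpreterCustomization` along these lines. This is a sketch based on Karmada's Lua retention hook; the resource name is made up, and exact field names should be checked against the Karmada docs for your version:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: retain-restarted-at
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: |
        function Retain(desiredObj, observedObj)
          -- Keep the member cluster's restartedAt annotation so a local
          -- `kubectl rollout restart` is not undone by the control plane.
          local meta = observedObj.spec.template.metadata
          if meta ~= nil and meta.annotations ~= nil then
            local ts = meta.annotations["kubectl.kubernetes.io/restartedAt"]
            if ts ~= nil then
              if desiredObj.spec.template.metadata == nil then
                desiredObj.spec.template.metadata = {}
              end
              if desiredObj.spec.template.metadata.annotations == nil then
                desiredObj.spec.template.metadata.annotations = {}
              end
              desiredObj.spec.template.metadata.annotations["kubectl.kubernetes.io/restartedAt"] = ts
            end
          end
          return desiredObj
        end
```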
In fact, I'm wondering if this scenario needs to be handled by the default resource interpreter. Hi @Patrick0308, do you use this a lot in production?
I think sometimes an administrator wants to restart the deployment in only one cluster, for example when a secret mounted into the deployment in one member cluster is updated by secret management. The retain interpreter of #5120 (comment) is not the best way to solve this issue. Please see PR #5128.
Thanks for your user story; I think it represents a meaningful use case.
Is it inconvenient to use custom interpreters? I'd like to know from the user's point of view, because we designed them so that users can extend the behavior.
I mean this code will make `kubectl rollout restart deploy xxx` on the Karmada control plane stop working. It's not the best way.
Oh, the logic of this script is not correct. It should keep the latest field in the member cluster, just like the approach you described in #5128.
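The intended retain semantics can be sketched outside Karmada as a small function. This is a hypothetical illustration, not Karmada code; the function name `retain_restarted_at` and the plain-dict object shapes are assumptions for the sketch:

```python
import copy

# The annotation that `kubectl rollout restart` writes to the pod template.
RESTARTED_AT = "kubectl.kubernetes.io/restartedAt"


def retain_restarted_at(desired: dict, observed: dict) -> dict:
    """Return the object to apply: the control plane's desired spec,
    but keeping the member cluster's restartedAt annotation if present,
    so a local rollout restart is not reverted."""
    retained = copy.deepcopy(desired)
    observed_ann = (
        observed.get("spec", {})
        .get("template", {})
        .get("metadata", {})
        .get("annotations", {})
    )
    if RESTARTED_AT in observed_ann:
        tmpl = retained.setdefault("spec", {}).setdefault("template", {})
        ann = tmpl.setdefault("metadata", {}).setdefault("annotations", {})
        ann[RESTARTED_AT] = observed_ann[RESTARTED_AT]
    return retained
```

The key design point is direction: the member cluster's value wins for this one annotation, while every other field still comes from the desired (control plane) object.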
A question: what if both the
What happened:
`kubectl rollout restart deploy xxxx`
does not work as expected. The new pod is terminated soon after it starts.
What you expected to happen:
The deployment restarts successfully.
How to reproduce it (as minimally and precisely as possible):
`kubectl rollout restart deploy xxxx`
Anything else we need to know?:
Environment:
Karmada version (`kubectl-karmada version` or `karmadactl version`):