Paused behaviour is inconsistent #6966
It looks like the current behaviour of paused is different depending on whether you set the spec field or the annotation. With the annotation you get no reconciliation, but with the spec field we end up calling sync(), which even allows some scaling behaviour 🤔 Is that specific behaviour (sync when paused) used in some of our workflows, maybe a clusterctl move?

/kind cleanup
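For context, a minimal sketch of the spec-field path, with simplified signatures and stub methods standing in for the real logic (not the actual cluster-api source): when spec.paused is set, the rollout is skipped but sync() still runs, and sync() still reconciles MachineSet replica counts.

```go
package sketch

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

type reconciler struct{}

// reconcile: with md.Spec.Paused set, the rollout is skipped but sync()
// still runs, so "paused" does not stop everything on this path.
func (r *reconciler) reconcile(ctx context.Context, md *clusterv1.MachineDeployment, msList []*clusterv1.MachineSet) error {
	if md.Spec.Paused {
		return r.sync(ctx, md, msList) // scaling can still happen here
	}
	return r.rollout(ctx, md, msList)
}

// sync reconciles replica counts: scale() moves MachineSet replicas
// toward md.Spec.Replicas, even when we got here via the paused branch.
func (r *reconciler) sync(ctx context.Context, md *clusterv1.MachineDeployment, msList []*clusterv1.MachineSet) error {
	return r.scale(ctx, md, msList)
}

// Stubs standing in for the real scaling and rollout logic.
func (r *reconciler) scale(ctx context.Context, md *clusterv1.MachineDeployment, msList []*clusterv1.MachineSet) error {
	return nil
}

func (r *reconciler) rollout(ctx context.Context, md *clusterv1.MachineDeployment, msList []*clusterv1.MachineSet) error {
	return nil
}
```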
Q: If we add a paused field to the MachineSet, would we then propagate the value of paused from the MD to the MS? If yes, I assume this shouldn't trigger a rollout? (and we have to specifically take care that it doesn't)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale

/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged. You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted
/priority important-longterm
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Maybe we can fix this with v1beta2

@sbueringer is this issue already tracked in the umbrella issue for v1beta2?

Now, yes
My personal take:
I'm going through the same situation. I set MachineDeployment.Spec.Paused to true to prevent scaling up or down when I increase or decrease replicas, but the scaling still happens. This seems to be because the scale operation is still performed in the sync function, as you can see in the link below.
You should use the paused annotation if you want to prevent reconciliation from happening at all.
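If it helps, here is a hedged sketch of setting that annotation with a controller-runtime client. The "default"/"my-md" names are placeholders, and clusterv1.PausedAnnotation is the cluster.x-k8s.io/paused key:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	scheme := runtime.NewScheme()
	_ = clusterv1.AddToScheme(scheme)

	// Build a client from the current kubeconfig.
	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	// "default"/"my-md" are placeholders for your MachineDeployment.
	md := &clusterv1.MachineDeployment{}
	if err := c.Get(context.TODO(), client.ObjectKey{Namespace: "default", Name: "my-md"}, md); err != nil {
		panic(err)
	}

	// clusterv1.PausedAnnotation is the "cluster.x-k8s.io/paused" key;
	// its presence is what the controllers check.
	if md.Annotations == nil {
		md.Annotations = map[string]string{}
	}
	md.Annotations[clusterv1.PausedAnnotation] = "true"

	if err := c.Update(context.TODO(), md); err != nil {
		panic(err)
	}
}
```

(Equivalently, with the same hypothetical name: kubectl annotate machinedeployment my-md cluster.x-k8s.io/paused=true)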
What steps did you take and what happened:
In a MachineDeployment:
In the predicates we check whether cluster.Spec.Paused is set or the MachineDeployment has the paused annotation; we ignore MachineDeployment.Spec.Paused: https://github.com/kubernetes-sigs/cluster-api/blob/main/internal/controllers/machinedeployment/machinedeployment_controller.go#L75-L97
Then, in the reconciling logic, we first check again whether cluster.Spec.Paused is set or the MachineDeployment has the annotation; we again ignore MachineDeployment.Spec.Paused: https://github.com/kubernetes-sigs/cluster-api/blob/main/internal/controllers/machinedeployment/machinedeployment_controller.go#L126-L130
A few lines below we check only MachineDeployment.Spec.Paused, ignoring cluster.Spec.Paused and the annotation: https://github.com/kubernetes-sigs/cluster-api/blob/main/internal/controllers/machinedeployment/machinedeployment_controller.go#L225-L227
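Condensed into one place (a sketch with simplified signatures and stub methods, not a verbatim copy of the linked code), the three checks disagree like this:

```go
package sketch

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/annotations"
)

type mdReconciler struct{}

// Reconcile condenses the three checks described above.
func (r *mdReconciler) Reconcile(ctx context.Context, cluster *clusterv1.Cluster, md *clusterv1.MachineDeployment) (ctrl.Result, error) {
	// (1) predicates and (2) top of Reconcile: only cluster.Spec.Paused
	// and the cluster.x-k8s.io/paused annotation are consulted;
	// md.Spec.Paused plays no role here.
	if annotations.IsPaused(cluster, md) {
		return ctrl.Result{}, nil
	}

	// (3) further down: only the spec field is consulted, and its
	// "paused" branch still calls sync(), which can scale MachineSets.
	if md.Spec.Paused {
		return ctrl.Result{}, r.sync(ctx, md)
	}
	return ctrl.Result{}, r.rollout(ctx, md)
}

// Stubs for the real sync/rollout logic.
func (r *mdReconciler) sync(ctx context.Context, md *clusterv1.MachineDeployment) error    { return nil }
func (r *mdReconciler) rollout(ctx context.Context, md *clusterv1.MachineDeployment) error { return nil }
```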
What did you expect to happen:
- Always honour .spec.paused and fall back to the annotation for backward compatibility (a sketch of such a check follows this list).
- Introduce .spec.paused in MachineSets.
- Review all CRDs to make the above consistent.
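As a sketch of the first bullet (a hypothetical helper, not a proposed patch), the unified check could be:

```go
package sketch

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/annotations"
)

// isPaused is hypothetical: honour md.Spec.Paused first, then fall back
// to cluster.Spec.Paused and the cluster.x-k8s.io/paused annotation
// (both covered by annotations.IsPaused) for backward compatibility.
func isPaused(cluster *clusterv1.Cluster, md *clusterv1.MachineDeployment) bool {
	return md.Spec.Paused || annotations.IsPaused(cluster, md)
}
```

Using one helper in the predicates, at the top of Reconcile, and in the rollout/sync decision would make all three sites agree, and the same shape would extend naturally to a MachineSet .spec.paused.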
Environment:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):

/kind bug