Using the playbook scale.yml to scale out cluster worker nodes will restart kube-proxy. #11272
Labels
kind/bug, lifecycle/rotten
What happened?
I noticed that using the scale.yml playbook to scale out the cluster's worker nodes also restarts kube-proxy.
The corresponding task: https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/kubeadm/tasks/main.yml#L204
Is it necessary to restart kube-proxy in the scenario where only worker nodes are being added?
What did you expect to happen?
When only worker nodes are being scaled out, I don't think restarting kube-proxy is necessary, so I believe this scenario could be optimized (see the sketch below).
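For illustration, one way to optimize this would be to guard the restart task so it only runs when a control-plane host is part of the current play, i.e. when the kube-proxy configmap could actually have changed. The following is a minimal sketch, not Kubespray's actual task: the task body is paraphrased, and `bin_dir`, the kubeconfig path, and the exact guard condition are assumptions; only the `when:` clause is the point.

```yaml
# Hypothetical guard (sketch, not the real Kubespray task): skip deleting
# kube-proxy pods when the play only touches worker nodes, since the
# kube-proxy configmap cannot have changed in that case.
- name: Restart kube-proxy pods only when control-plane hosts are in the play
  command: >-
    {{ bin_dir }}/kubectl --kubeconfig /etc/kubernetes/admin.conf
    -n kube-system delete pod -l k8s-app=kube-proxy
  delegate_to: "{{ groups['kube_control_plane'] | first }}"
  run_once: true
  when: groups['kube_control_plane'] | intersect(ansible_play_hosts) | length > 0
```

With a run like `scale.yml --limit=<new-worker>`, `ansible_play_hosts` would contain only the new worker node, its intersection with `kube_control_plane` would be empty, and the restart would be skipped.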
How can we reproduce it (as minimally and precisely as possible)?
This can be reproduced by running the scale.yml playbook to scale out the cluster's worker nodes and observing that kube-proxy is restarted.
OS
Version of Ansible
Version of Python
Python 3.10.13
Version of Kubespray (commit)
774d824
Network plugin used
calico
Full inventory with variables
Command used to invoke ansible
ansible-playbook -i /conf/host.yml --become-user root -e "@/conf/group_vars.yml" --private-key /auth/ssh-privatekey /kubespray/scale.yml --limit=dev1-w-10-64-80-147 --forks=10
Output of ansible run
~
Anything else we need to know
No response