Scale kube-dns to multiple nodes #2
Relevant pod template spec parts from the
There don't seem to be any node selectors/affinities that would limit which nodes the DNS pods get scheduled onto... it presumably ends up on the master node because that's the first node that happens to be available. Should just be a matter of
I think it should have self anti-affinity, see: kubernetes/kubernetes#57683
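A self anti-affinity rule of the kind that PR proposed might look roughly like the following sketch (the `k8s-app: kube-dns` label is what kube-dns manifests conventionally use; a *preferred* rule is assumed so scheduling still succeeds on a single-node cluster):

```yaml
# Hedged sketch: soft self anti-affinity on the kube-dns pod template,
# so replicas prefer to land on different nodes.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            k8s-app: kube-dns
        # Spread across distinct nodes.
        topologyKey: kubernetes.io/hostname
```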
FWIW that PR was reverted in kubernetes/kubernetes#59357 due to kubernetes/kubernetes#54164 scaling issues. Also, that PR did not touch the kube-dns manifest used by
An alternative (kubernetes/kubernetes#40063 (comment)) is using the horizontal DNS autoscaling controller with
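The horizontal DNS autoscaler (cluster-proportional-autoscaler) reads its scaling policy from a ConfigMap; a sketch of what that looks like follows, with illustrative parameter values rather than anything taken from this project:

```yaml
# Sketch of the ConfigMap consumed by the DNS horizontal autoscaler.
# The linear-mode numbers below are example values, not this project's.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true,"min":1}
```

With `preventSinglePointFailure` set, the autoscaler keeps at least two replicas whenever more than one node is available, which addresses exactly the "everything on the master" problem discussed here.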
Once the other nodes come back online, I don't see what would end up rescheduling the other pod off the master node.
The pragmatic approach to this issue would be to make the number of DNS replicas a configurable parameter (maybe defaulting to something sensible based on the number of nodes in the config?). Long-term, I think the best idea would be to replace the problematic kubeadm-managed Deployment for the DNS addon with a DaemonSet using node labels, but that would require more work?
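The DaemonSet idea could be sketched as follows; the `dns: "true"` node label is hypothetical, and the container spec is a placeholder standing in for whatever the existing kube-dns Deployment uses:

```yaml
# Hedged sketch of the long-term idea: run the DNS addon as a DaemonSet
# restricted to nodes carrying a (hypothetical) "dns=true" label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      nodeSelector:
        dns: "true"
      containers:
      # Placeholder: reuse the images/args from the existing Deployment.
      - name: kubedns
        image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
```

Scaling then becomes a matter of labeling nodes rather than tuning a replica count, at the cost of kubeadm upgrades no longer managing the addon.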
Yes, I think this is the way to go (for now).
A long-term solution probably requires contributions to kubeadm (to make it less hacky)?