Problematic Multi Node Networking with docker driver and kindnetd CNI #9838
/assign
Here is what I have found so far: because the CNI is broken and podCIDR is not set, kindnetd falls back to the first available IP on the host, which causes multiple pods on different nodes to get the same IP. I am preparing a fix and will create a PR soon.
To add the CIDR, could we pass --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16?
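Concretely, that suggestion would look something like the sketch below. The flag value comes from the comment above; the -n 3 and --driver=docker options are assumptions mirroring the multi-node reproduction setup in this report.

```shell
# Sketch only: pass the pod network CIDR through to kubeadm at cluster creation.
# 10.244.0.0/16 is the range suggested above; adjust if your network overlaps it.
minikube start -n 3 --driver=docker \
  --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16
```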
You are right, the CIDR and the CNI selection are the two problems.
You are right, setting both
Creating a multi-node minikube cluster with the docker driver and the kindnetd CNI seems to result in broken networking inside the pods running on the worker nodes. This networking problem is not present with the calico CNI plugin.
Steps to reproduce the issue:
In the above block the IP 127.17.0.3 is assigned to 3 pods running on 3 different nodes. Exec into the net-test pod running on the first node:
kubectl exec -it net-test-c4f9cfdd4-wxm7c -- /bin/bash
Trying to curl an in-cluster service and google.com: one request to my-nginx.default failed but the next one succeeded, then the next one failed again. No connectivity inside the pod.
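One quick way to surface the intermittent behaviour described above is to repeat the request in a loop from inside the pod. This is a sketch; my-nginx.default is the in-cluster Service name used in this report.

```shell
# Hit the in-cluster Service several times; with duplicated pod IPs across
# nodes, some attempts reach a working endpoint and others time out.
for i in 1 2 3 4 5; do
  curl -s -o /dev/null --max-time 3 -w "attempt $i: %{http_code}\n" http://my-nginx.default
done
```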
This shows that no podCIDR is set, which seems to be a requirement for kindnetd: https://github.com/kubernetes-sigs/kind/blob/master/images/kindnetd/cmd/kindnetd/main.go#L148.
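For reference, the missing podCIDR can be confirmed directly from the node objects. An empty PODCIDR column for a node indicates the condition kindnetd trips over.

```shell
# List each node's assigned pod CIDR; kindnetd requires .spec.podCIDR to be set.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
```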
Not sure if this is related, but using calico via
minikube start -n 3 --enable-default-cni=false --network-plugin=cni --cni='calico'
works.

Full output of the minikube start command used, if not already included:

❗ Multi-node clusters are currently experimental and might exhibit unintended behavior.
📘 To track progress on multi-node clusters, see #7538.
👍 Starting node multi-node-m02 in cluster multi-node
🔥 Creating docker container (CPUs=2, Memory=1987MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.59.2
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
▪ env NO_PROXY=192.168.59.2
🔎 Verifying Kubernetes components...
👍 Starting node multi-node-m03 in cluster multi-node
🔥 Creating docker container (CPUs=2, Memory=1987MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.59.2,192.168.59.3
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
▪ env NO_PROXY=192.168.59.2
▪ env NO_PROXY=192.168.59.2,192.168.59.3
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "multi-node" cluster and "default" namespace by default
Optional: Full output of the minikube logs command: