
CoreDNS fails on minions on multi-node clusters. Can't resolve external DNS from non-master pods. #8055

Closed
aasmall opened this issue May 9, 2020 · 23 comments · Fixed by #8545 or #10985

@aasmall

aasmall commented May 9, 2020

So, I already fixed this and lost some of the logs. But it's pretty straightforward.

  1. Make a cluster
minikube start --vm-driver=kvm2 --cpus=2 --nodes 3 --network-plugin=cni \
--addons registry --enable-default-cni=false \
--insecure-registry "10.0.0.0/24" --insecure-registry "192.168.39.0/24" \
--extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 \
--extra-config=kubelet.network-plugin=cni
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

n.b. I built from HEAD a couple of days ago:

minikube version: v1.10.0-beta.2
commit: 80c3324b6f526911d46033721df844174fe7f597
  2. make a pod on master and a pod on a node
  3. from node pod: curl google.com
  4. from master pod: curl google.com

CoreDNS was crashing per kubernetes/kubernetes#75414

Fixed with

kubectl patch deployment coredns -n kube-system --patch '{"spec":{"template":{"spec":{"volumes":[{"name":"emptydir-tmp","emptyDir":{}}],"containers":[{"name":"coredns","volumeMounts":[{"name":"emptydir-tmp","mountPath":"/tmp"}]}]}}}}' 
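
For readability, here is the same patch body rendered as YAML (just a reformatting of the JSON one-liner above): it adds an emptyDir volume mounted at /tmp to the coredns containers, which is the workaround for the crash referenced above.

# YAML rendering of the JSON patch above
spec:
  template:
    spec:
      volumes:
        - name: emptydir-tmp
          emptyDir: {}
      containers:
        - name: coredns
          volumeMounts:
            - name: emptydir-tmp
              mountPath: /tmp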

Edit: had wrong flannel yaml listed.

@medyagh
Member

medyagh commented May 11, 2020

@aasmall thank you for bringing this to our attention,
interesting!

I have a few questions:
1- does this happen only when you have flannel as the CNI, or does it happen for all CNIs?
2- does this happen only on multi-node clusters?

I assume it doesn't happen for normal docker runtime, no-CNI scenarios?

Multi-node is experimental at the moment, but we have WIP PRs that would remove the need for flannel.

@sharifelgamal

@medyagh medyagh added co/coredns CoreDNS related issues triage/needs-information Indicates an issue needs more information in order to work on it. area/cni CNI support area/networking networking issues labels May 11, 2020
@sharifelgamal
Collaborator

HEAD should no longer need flannel at all; we should automatically apply a CNI for multinode.

@medyagh medyagh added co/multinode Issues related to multinode clusters kind/support Categorizes issue or PR as a support question. labels May 11, 2020
@aasmall
Author

aasmall commented May 12, 2020

@medyagh

  1. It applies to all CNIs, as the bug is in CoreDNS.
  2. The bug inherently only applies to multi-node clusters.

@sharifelgamal - Thank you. I'll validate in a spell. Busy working on the actual app right now, though I AM having a lot of fun playing with minikube.

@kfox1111

Not sure if this is related or not, but I experienced DNS failing on the minions.
Started with:
minikube start --cpus 2 --memory=2096 --disk-size=20000mb -n 3
on minikube version: v1.10.1
CoreDNS seems stable, but it is not accessible outside of the master.
Linux, KVM.

@kfox1111

Tried disabling kindnet so I could add my own driver:

minikube start --cpus 2 --memory=2048 --disk-size=20000mb -n 3 --enable-default-cni=false --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16
$ kubectl get pods -n kube-system | grep kindnet
kindnet-b4qqn                      1/1     Running   0          45s
kindnet-tvlt5                      1/1     Running   0          32s
kindnet-xxmk2                      1/1     Running   0          14s

Not sure how to disable kindnet.

@kfox1111

I checked connectivity between the pods by launching a pod on each node and trying to connect to each other with nc.

The workers work; master connectivity does not.

I deleted the coredns pods and they restarted on the non-master nodes, and DNS started working.

So something is not working with kindnet on the master.
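
For anyone reproducing this, a rough sketch of the per-node connectivity check described above (pod names, image, and port are illustrative, and busybox's nc is assumed):

kubectl run listener --image=busybox --restart=Never \
  --overrides='{"spec":{"nodeName":"minikube"}}' -- nc -l -p 8080
kubectl run client --image=busybox --restart=Never \
  --overrides='{"spec":{"nodeName":"minikube-m02"}}' -- sleep 3600
LISTENER_IP=$(kubectl get pod listener -o jsonpath='{.status.podIP}')
# the exec should return quickly if pod-to-pod traffic works, and hang/time out if it doesn't
kubectl exec client -- sh -c "echo hello | nc -w 2 $LISTENER_IP 8080"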

@kfox1111

There seems to be a difference between the master and the workers. Not sure it's relevant though:

$ diff -u /tmp/ip-worker3 /tmp/ip-master 
--- /tmp/ip-worker3	2020-05-16 10:59:37.064470264 -0700
+++ /tmp/ip-master	2020-05-16 10:59:16.232281763 -0700
@@ -1,9 +1,11 @@
-# Generated by iptables-save v1.8.3 on Sat May 16 17:58:30 2020
+# Generated by iptables-save v1.8.3 on Sat May 16 17:59:03 2020
 *nat
-:PREROUTING ACCEPT [75:5268]
-:INPUT ACCEPT [2:120]
-:OUTPUT ACCEPT [79:4740]
-:POSTROUTING ACCEPT [150:9776]
+:PREROUTING ACCEPT [31:1860]
+:INPUT ACCEPT [29:1740]
+:OUTPUT ACCEPT [215:12962]
+:POSTROUTING ACCEPT [191:11460]
+:CNI-4b264cc7114301b74b8d967a - [0:0]
+:CNI-de8faca36f95f967aca64e60 - [0:0]
 :DOCKER - [0:0]
 :KIND-MASQ-AGENT - [0:0]
 :KUBE-KUBELET-CANARY - [0:0]
@@ -30,7 +32,13 @@
 -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
 -A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
 -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
+-A POSTROUTING -s 10.88.0.2/32 -m comment --comment "name: \"podman\" id: \"9af909fd3a3d3822201c1c4a504ec7baa704b71309872154020ea915b8571d88\"" -j CNI-de8faca36f95f967aca64e60
+-A POSTROUTING -s 10.88.0.3/32 -m comment --comment "name: \"podman\" id: \"84903f24fb88f7a7d040819e0865388cd9112ce365881e1f1284e4a5add42438\"" -j CNI-4b264cc7114301b74b8d967a
 -A POSTROUTING -m addrtype ! --dst-type LOCAL -m comment --comment "kind-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain" -j KIND-MASQ-AGENT
+-A CNI-4b264cc7114301b74b8d967a -d 10.88.0.0/16 -m comment --comment "name: \"podman\" id: \"84903f24fb88f7a7d040819e0865388cd9112ce365881e1f1284e4a5add42438\"" -j ACCEPT
+-A CNI-4b264cc7114301b74b8d967a ! -d 224.0.0.0/4 -m comment --comment "name: \"podman\" id: \"84903f24fb88f7a7d040819e0865388cd9112ce365881e1f1284e4a5add42438\"" -j MASQUERADE
+-A CNI-de8faca36f95f967aca64e60 -d 10.88.0.0/16 -m comment --comment "name: \"podman\" id: \"9af909fd3a3d3822201c1c4a504ec7baa704b71309872154020ea915b8571d88\"" -j ACCEPT
+-A CNI-de8faca36f95f967aca64e60 ! -d 224.0.0.0/4 -m comment --comment "name: \"podman\" id: \"9af909fd3a3d3822201c1c4a504ec7baa704b71309872154020ea915b8571d88\"" -j MASQUERADE
 -A DOCKER -i docker0 -j RETURN
 -A KIND-MASQ-AGENT -d 10.244.0.0/16 -m comment --comment "kind-masq-agent: local traffic is not subject to MASQUERADE" -j RETURN
 -A KIND-MASQ-AGENT -m comment --comment "kind-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain)" -j MASQUERADE
@@ -68,23 +76,25 @@
 -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-EJJ3L23ZA35VLW6X
 -A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-RJHMR3QLYGJVBWVL
 COMMIT
-# Completed on Sat May 16 17:58:30 2020
-# Generated by iptables-save v1.8.3 on Sat May 16 17:58:30 2020
+# Completed on Sat May 16 17:59:03 2020
+# Generated by iptables-save v1.8.3 on Sat May 16 17:59:03 2020
 *mangle
-:PREROUTING ACCEPT [35417:67062632]
-:INPUT ACCEPT [34008:66687141]
-:FORWARD ACCEPT [1389:374215]
-:OUTPUT ACCEPT [20262:1829311]
-:POSTROUTING ACCEPT [21666:2204396]
+:PREROUTING ACCEPT [405427:144956612]
+:INPUT ACCEPT [405404:144954818]
+:FORWARD ACCEPT [14:1090]
+:OUTPUT ACCEPT [400927:112195409]
+:POSTROUTING ACCEPT [400950:112196985]
 :KUBE-KUBELET-CANARY - [0:0]
 :KUBE-PROXY-CANARY - [0:0]
 COMMIT
-# Completed on Sat May 16 17:58:30 2020
-# Generated by iptables-save v1.8.3 on Sat May 16 17:58:30 2020
+# Completed on Sat May 16 17:59:03 2020
+# Generated by iptables-save v1.8.3 on Sat May 16 17:59:03 2020
 *filter
-:INPUT ACCEPT [1837:1495255]
-:FORWARD ACCEPT [143:10102]
-:OUTPUT ACCEPT [1864:289671]
+:INPUT ACCEPT [85938:56432847]
+:FORWARD ACCEPT [10:600]
+:OUTPUT ACCEPT [82148:22332958]
+:CNI-ADMIN - [0:0]
+:CNI-FORWARD - [0:0]
 :DOCKER - [0:0]
 :DOCKER-ISOLATION-STAGE-1 - [0:0]
 :DOCKER-ISOLATION-STAGE-2 - [0:0]
@@ -98,6 +108,8 @@
 -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
 -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
 -A INPUT -j KUBE-FIREWALL
+-A FORWARD -m comment --comment "CNI firewall plugin rules" -j CNI-FORWARD
+-A FORWARD -m comment --comment "CNI firewall plugin rules" -j CNI-FORWARD
 -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
 -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
 -A FORWARD -j DOCKER-USER
@@ -108,6 +120,11 @@
 -A FORWARD -i docker0 -o docker0 -j ACCEPT
 -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
 -A OUTPUT -j KUBE-FIREWALL
+-A CNI-FORWARD -m comment --comment "CNI firewall plugin rules" -j CNI-ADMIN
+-A CNI-FORWARD -d 10.88.0.3/32 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+-A CNI-FORWARD -d 10.88.0.2/32 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
+-A CNI-FORWARD -s 10.88.0.3/32 -j ACCEPT
+-A CNI-FORWARD -s 10.88.0.2/32 -j ACCEPT
 -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
 -A DOCKER-ISOLATION-STAGE-1 -j RETURN
 -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
@@ -119,5 +136,5 @@
 -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 COMMIT
-# Completed on Sat May 16 17:58:30 2020
+# Completed on Sat May 16 17:59:03 2020

@priyawadhwa priyawadhwa added the triage/discuss Items for discussion label May 19, 2020
@medyagh medyagh added kind/bug Categorizes issue or PR as related to a bug. and removed triage/discuss Items for discussion labels May 20, 2020
@priyawadhwa priyawadhwa removed the triage/needs-information Indicates an issue needs more information in order to work on it. label May 20, 2020
@priyawadhwa

Hey @kfox1111, thanks for providing the additional info. Looks like this is a bug and will be tracked in #7538.

@tstromberg tstromberg self-assigned this May 26, 2020
@tstromberg tstromberg added this to the v1.12.0-realc milestone Jun 1, 2020
@tstromberg
Contributor

I'm going to take a look into this today.

@tstromberg
Contributor

tstromberg commented Jun 4, 2020

I think this is a bug, but at the same time, it should be a fairly rare bug to run into, at least with the current state of minikube: minikube only deploys CoreDNS to the master node by default, even if you scale the deployment to 30 replicas.

I do see now that it does not appear to be possible to select CNIs in multi-node (kindnet is applied by default). That will be fixed by #8222, probably by adding a flag like --cni=flannel.
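
(For reference, once that lands, the multi-node invocation would look something like the sketch below; the exact flag name is per #8222 and is assumed here.)

minikube start --nodes=3 --cni=flannel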

@tstromberg
Contributor

I scaled minikube up to 150 DNS replicas in order to get it scaled across the 3 nodes, and had no issue with pods crashing or not resolving records. I wonder if we accidentally fixed this due to applying a default CNI.

$ ./out/minikube start -n=3 --enable-default-cni=false --network-plugin=cni
$ kubectl scale deployment --replicas=150 coredns --namespace=kube-system

I will revisit this once I'm able to disable kindnet as part of #8222

@turnes

turnes commented Jun 9, 2020

My tests were based on https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution.

My env
Ubuntu 19.10
Minikube v1.11.0
Multi-node
KVM2

Scenario 1: minikube start -p dns --cpus=2 --memory=2g --nodes=2 --driver=kvm2 --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  • CoreDNS started normally, with just one event:
    • Event: Warning FailedScheduling 36m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  • DNS did not work internally in a pod:
    • kubectl exec -ti dnsutils -- nslookup kubernetes.default
  • Deleting the DNS pods makes k8s recreate them; then DNS works normally.
Scenario 2: minikube start -p dns --cpus=2 --memory=2g --nodes=2 --driver=kvm2 --enable-default-cni=false --network-plugin=cni
  • CoreDNS started normally, with just one event:
    • Event: Warning FailedScheduling 36m default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  • DNS did not work internally in a pod:
    • kubectl exec -ti dnsutils -- nslookup kubernetes.default
  • Deleting the DNS pods makes k8s recreate them; then DNS works normally in pods.

Conclusion:

  • Initially the DNS pods were hosted on the master node, and DNS was not working in pods.
  • In both scenarios I had to delete the DNS pods. The DNS pods were then spread across the nodes, and DNS worked in pods.
  • I then forced the DNS pods to run only on the master node; it worked normally.
  • It seems to be somehow related to the cluster's startup.
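
For reference, the pod deletion described above is sketched below, assuming the default k8s-app=kube-dns label on the CoreDNS pods:

kubectl -n kube-system delete pods -l k8s-app=kube-dns
# confirm where the recreated pods landed
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide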

@tstromberg tstromberg added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jun 10, 2020
@tstromberg tstromberg changed the title CoreDNS fails on minions on muilt-node clusters. Can't resolve external DNS from non-master pods. CoreDNS fails on minions on multi-node clusters. Can't resolve external DNS from non-master pods. Jun 15, 2020
@tstromberg
Contributor

After testing, I can confirm that resolution of Kubernetes hosts from non-master pods is broken. I was not able to replicate issues with DNS resolution, however.

In a nutshell, I believe that the issue of CoreDNS access from non-master nodes is a sign of a broken CNI configuration. I'll continue to investigate.

@gunterze

Unfortunately the issue is still there:

$ minikube start --driver=virtualbox -n 2
😄  minikube v1.16.0 on Ubuntu 20.10
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting node minikube-m02 in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.99.152
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    ▪ env NO_PROXY=192.168.99.152
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
pod/dnsutils created
$ kubectl exec -i -t dnsutils -- nslookup kubernetes.default
;; connection timed out; no servers could be reached

command terminated with exit code 1
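
A useful follow-up check from the DNS debugging doc linked earlier in this thread is to confirm that the kube-dns Service and its endpoints exist (a sketch; these are the standard names):

kubectl get svc kube-dns -n kube-system
kubectl get endpoints kube-dns -n kube-system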

@leozilla

I also still see problems with multi-node clusters and kvm2. This happens on the first creation of the cluster, but also on restarting the cluster. Here you can see the logs from when I restart a 3-node cluster.

minikube start --cpus=6 --memory=8g --disk-size=18g --driver=kvm2 --kubernetes-version=latest --nodes=3
😄  minikube v1.16.0 on Ubuntu 20.04
✨  Using the kvm2 driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing kvm2 VM for "minikube" ...
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
❗  The cluster minikube already exists which means the --nodes parameter will be ignored. Use "minikube node add" to add nodes to an existing cluster.
👍  Starting node minikube-m02 in cluster minikube
🔄  Restarting existing kvm2 VM for "minikube-m02" ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.39.52
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    ▪ env NO_PROXY=192.168.39.52
🔎  Verifying Kubernetes components...
👍  Starting node minikube-m03 in cluster minikube
🔄  Restarting existing kvm2 VM for "minikube-m03" ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.39.52,192.168.39.217
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    ▪ env NO_PROXY=192.168.39.52
    ▪ env NO_PROXY=192.168.39.52,192.168.39.217
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

The CoreDNS pod is running, but the problem seems to be that it is started too early.
Logs of the CoreDNS pod:

E0117 11:59:13.235020       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
E0117 11:59:13.235021       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
E0117 11:59:13.235026       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

After restarting the CoreDNS pod, there are no more errors visible in the logs and DNS starts working.

.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
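
For reference, restarting the CoreDNS pods as described above can be done like this (a sketch, assuming the standard coredns Deployment name):

kubectl -n kube-system rollout restart deployment coredns

or simply delete the pods and let the Deployment recreate them.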

@tstromberg can we reopen this issue or create a new one for it?

@changsijay

changsijay commented Jan 28, 2021

Same issue here. Why was this issue closed when it is so easy to reproduce?

@YektaLeblebici

This issue seems to have been closed by mistake.

If I understood correctly, @tstromberg wrote "Does not fix #.." in his PR and the issue got closed automatically without the "Does not" part being taken into consideration :)

By the way, I can confirm that the issue persists on the latest macOS with minikube v1.17.1 (latest) when I run it like this: minikube start --nodes 2 --vm-driver=hyperkit

DNS resolves fine inside the minikube nodes, but containers fail to resolve.

@applitect

applitect commented Feb 25, 2021

After some work on this yesterday, I can confirm there was a bug with kube-proxy starting up, and a merge request to fix it has been submitted; see #10581. Now that it is fixed, I can run kubectl scale --replicas=2 -n kube-system deployment coredns and get DNS resolution to sort of succeed. However, it's inconsistent and I can't reach pods between nodes. More research is necessary.

@applitect

Okay, I think I've figured something out. I'm going to open a new ticket. This is all based on problems in the iptables. I'll add a link to the new ticket when I get it put together.

@applitect

I've figured out my problem. I was trying to use Kubernetes 1.16 with multinode. 1.16 ends up putting the Docker IP address range of 172.17.0.1/16 in the kube-proxy-controlled iptables. The 172 range of addresses is not exposed when running with the docker driver. If I upgrade to Kubernetes 1.20.1, the problem goes away, as the iptables then use the 192.168.49.0 addresses.

@applitect

So it appears that the iptables inside the nodes are based on the 172 addresses until 1.20, so multinode will not work with earlier versions without some work.
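
A quick way to see which address range a node's NAT rules were generated with (a sketch; the node name and grep pattern are illustrative, and this assumes minikube ssh accepts a command and a --node flag):

minikube ssh --node=minikube-m02 'sudo iptables-save -t nat | grep KUBE-SEP'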

@kfox1111

Is that problem specific then to the docker driver or would the kvm2 driver have it too? Wondering if there are multiple problems here or not.

@timstoop

Just ran into this issue as well, solved by the kubectl patch from the original post.

$ minikube version
minikube version: v1.28.0
commit: 986b1ebd987211ed16f8cc10aed7d2c42fc8392f
$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.4
Kustomize Version: v4.5.7
Server Version: v1.24.3
