Weave Net Daemonset fails to restart pod due to existing dummy interface #3414
Comments
thanks @nguyenquangminh0711 for reporting this issue. It's strange that your cluster ended up in this situation. As you may have seen in the code, the dummy interface is created and immediately deleted. Do you happen to see any reason in the weave pod logs why the dummy interface failed to get deleted? It's a corner case, but it's good to handle gracefully. Please submit the PR. Maybe you can simplify:

```go
if err = netlink.LinkAdd(dummy); err != syscall.EEXIST {
	return errors.Wrap(err, "creating dummy interface")
}
```
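For reference, a compilable version of that suggestion; this is a minimal sketch only, assuming the vishvananda/netlink and pkg/errors packages used by Weave, and assuming LinkAdd surfaces the raw syscall.EEXIST errno (the helper name createDummy is made up):

```go
package main

import (
	"log"
	"syscall"

	"github.com/pkg/errors"
	"github.com/vishvananda/netlink"
)

// createDummy treats EEXIST as success, so a dummy interface left over
// from a crashed run does not block the next startup.
func createDummy(name string) error {
	dummy := &netlink.Dummy{LinkAttrs: netlink.LinkAttrs{Name: name}}
	if err := netlink.LinkAdd(dummy); err != nil && err != syscall.EEXIST {
		return errors.Wrap(err, "creating dummy interface")
	}
	return nil
}

func main() {
	// Requires CAP_NET_ADMIN, just like the weave pod itself.
	if err := createDummy("vethwedu"); err != nil {
		log.Fatal(err)
	}
}
```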
Hi @murali-reddy, thanks for your quick reply. I have just captured the logs of the previous session:

It looks like it crashes at these lines, right before the dummy interface is deleted. That's why it is left over for the next restart:

```go
if err := netlink.LinkSetMasterByIndex(dummy, b.bridge.Attrs().Index); err != nil {
	return errors.Wrap(err, "setting dummy interface master")
}
```
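One way to rule out that failure mode is to schedule the deletion with defer immediately after the interface is created, so it runs on every return path. A hedged sketch, not the actual bridge.go code; withTempDummy is a hypothetical helper:

```go
package main

import (
	"log"

	"github.com/pkg/errors"
	"github.com/vishvananda/netlink"
)

// withTempDummy creates a dummy interface, attaches it to the bridge, and
// guarantees deletion on every return path via defer, so a failure in
// LinkSetMasterByIndex can no longer leak the interface.
func withTempDummy(bridge netlink.Link) error {
	dummy := &netlink.Dummy{LinkAttrs: netlink.LinkAttrs{Name: "vethwedu"}}
	if err := netlink.LinkAdd(dummy); err != nil {
		return errors.Wrap(err, "creating dummy interface")
	}
	// Runs whether or not the attach below succeeds.
	defer netlink.LinkDel(dummy)

	if err := netlink.LinkSetMasterByIndex(dummy, bridge.Attrs().Index); err != nil {
		return errors.Wrap(err, "setting dummy interface master")
	}
	return nil
}

func main() {
	bridge, err := netlink.LinkByName("weave")
	if err != nil {
		log.Fatal(err)
	}
	if err := withTempDummy(bridge); err != nil {
		log.Fatal(err)
	}
}
```

Note that defer still cannot help when the process is SIGKILLed mid-way, which is why gracefully handling a leftover interface on the next start is useful as well.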
The

Just call
This indicates that there were 1024 connected veth pairs to the `weave` bridge.
@brb Not much, just 41 pods on that node. And most of the pods are stuck in ContainerCreating because Kubernetes cannot schedule them.
@nguyenquangminh0711 Can you run
@brb When running your command, there are 1023 items. It looks like it would hit the limit of 1024 on the next restart. The details of the ip links:

Btw, I'm noticing that pods on other nodes are starting to restart randomly, although those nodes haven't collapsed completely like the node I'm investigating.
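For context, a Linux bridge supports at most 1024 ports, so 1023 attached veth pairs means the very next attach will fail. A small Go sketch of how such a count could be taken programmatically, using the same netlink package as the snippets above (countBridgePorts is a made-up helper):

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

// countBridgePorts returns how many interfaces are currently attached to
// the named bridge, e.g. countBridgePorts("weave") on an affected node.
func countBridgePorts(bridgeName string) (int, error) {
	bridge, err := netlink.LinkByName(bridgeName)
	if err != nil {
		return 0, err
	}
	links, err := netlink.LinkList()
	if err != nil {
		return 0, err
	}
	count := 0
	for _, l := range links {
		if l.Attrs().MasterIndex == bridge.Attrs().Index {
			count++
		}
	}
	return count, nil
}

func main() {
	n, err := countBridgePorts("weave")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d interfaces attached to the weave bridge\n", n)
}
```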
@nguyenquangminh0711 Thanks. Can you paste
@brb I fetched the logs from the nodes, including kube-controller-manager, kube-proxy and kube-scheduler of the failing nodes. That's all I can find in

I observe that there are a lot of failures when the node tries to read from etcd. I don't know whether that relates to the issue?
@nguyenquangminh0711 Do you use systemd? If yes, can you run

I'm interested to see whether there were any errors reported by kubelet when removing the interfaces via CNI.
Hi @brb, sorry for my late response. Our logs are centralized and not indexed properly, so I had to take some extra steps to extract them. Here are the logs of the failed node from startup until it was killed:

It looks like before it gets into the CrashLoop situation, there are tons of logs like this:
@nguyenquangminh0711 Thanks. I was hoping to see more

Anyway, the
@brb We're experiencing this on 2.4.1, although I think we saw it on 2.4.0 as well. From one of the affected worker nodes:
I have attached output for both
@iAnomaly Looking at the output you shared for the

Kubelet logs indicate weave is not running. I suspect the weave pod went into CrashLoopBackOff.

This is only a side-effect. The root cause is the dummy interface, which should have been deleted after it was created by Weave. Do you by any chance have old Weave pod logs which could indicate why the dummy interface failed to get deleted?
As pointed out, the dummy interface deletion should have been handled in https://github.com/weaveworks/weave/blob/v2.5.0/net/bridge.go#L332-L343. Let me know if you are still interested in raising a PR, @nguyenquangminh0711. Otherwise I will raise one.
What you expected to happen?
The Weave Net DaemonSet, which controls the weave-net pod on each node, should be able to restart the pod on a node when it fails or is stopped for any reason.
What happened?
The weave-net pod gets Error and CrashLoopBackOff status and is unable to function again until I terminate that node.
How to reproduce it?
SSH into a node and use a docker command to kill the weave-net container. Of course, this is just to reproduce the issue; on our production cluster, we sometimes hit situations where weave-net crashes on a node and we don't know why.
The logs point out that weave-net fails to create the dummy interface:
I did a small investigation, and it looks like the bug comes from `net/bridge.go`, in the function `initPrep`, at those lines:

Before weave-net starts, it creates a dummy interface object, and when my pod starts, the interface already exists, as checked with the `ip link | grep vethwedu` command:

It looks like the previous session of weave-net failed to delete this dummy interface, or was killed before deleting it. When I delete the dummy manually with `ip link delete vethwedu`, the pod runs smoothly and goes back to normal.

Adding a small check that deletes the dummy if it already exists before creating a new one (sketched below) would solve this problem. Is that a good solution? If that's okay, I'll open a PR.
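For illustration, the proposed check could look roughly like the following; a minimal sketch of the idea rather than an actual patch, reusing the vishvananda/netlink and pkg/errors packages from the snippets above (ensureFreshDummy is a made-up name):

```go
package main

import (
	"log"

	"github.com/pkg/errors"
	"github.com/vishvananda/netlink"
)

// ensureFreshDummy deletes a dummy interface left over from a crashed
// session, if any, before creating a new one.
func ensureFreshDummy(name string) error {
	if stale, err := netlink.LinkByName(name); err == nil {
		if err := netlink.LinkDel(stale); err != nil {
			return errors.Wrap(err, "deleting stale dummy interface")
		}
	}
	dummy := &netlink.Dummy{LinkAttrs: netlink.LinkAttrs{Name: name}}
	if err := netlink.LinkAdd(dummy); err != nil {
		return errors.Wrap(err, "creating dummy interface")
	}
	return nil
}

func main() {
	if err := ensureFreshDummy("vethwedu"); err != nil {
		log.Fatal(err)
	}
}
```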
Anything else we need to know?
I run our Kubernetes cluster on AWS, using KOPS.
Versions:
Logs: