You can isolate a problematic node for further troubleshooting by cordoning it off, which marks it unschedulable without touching running pods. You can also drain it, which cordons the node and then evicts its pods, when preparing for maintenance.
kubectl get pods -o wide
kubectl cordon k8s-node2
node/k8s-node2 cordoned
kubectl drain k8s-node2
kubectl get node
kubectl uncordon k8s-node2
node/k8s-node2 uncordoned
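On a real cluster, a bare drain often fails because of DaemonSet-managed pods or pods using emptyDir volumes. A minimal sketch of a fuller maintenance cycle, assuming a node named k8s-node2 (the flags shown are standard kubectl drain options):

```shell
# Evict all pods from the node before maintenance.
# --ignore-daemonsets: DaemonSet pods cannot be evicted and would otherwise block the drain.
# --delete-emptydir-data: permit evicting pods that use emptyDir volumes (their local data is lost).
# --grace-period=60: give each pod up to 60 seconds to shut down cleanly.
kubectl drain k8s-node2 --ignore-daemonsets --delete-emptydir-data --grace-period=60

# Confirm the node is unschedulable: STATUS shows Ready,SchedulingDisabled.
kubectl get node k8s-node2

# After maintenance, make the node schedulable again.
kubectl uncordon k8s-node2
```

Note that uncordoning does not move pods back; the scheduler simply considers the node again for new pods.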