
Minikube 1.7.0 will not start with TaintNodesByCondition=false feature gate #6516

Closed
jim-barber-he opened this issue Feb 6, 2020 · 3 comments

jim-barber-he commented Feb 6, 2020

We have a script that stands up local development environments and sets the TaintNodesByCondition=false feature gate.
All previous versions of Minikube up to and including 1.6.2 have worked fine with this.
With Minikube 1.7.0, for some reason a taint is applied to the Minikube node, and critical pods in the kube-system namespace such as coredns will not start because of it.

The reason we set this feature gate is described in our Git commit for the setup script as follows:

    Kubernetes automatically taints nodes so that pods can't be scheduled on
    them when it detects things such as low memory or low disk.

    The thresholds it uses for this can be conservative, to suit a production
    environment where you want plenty of headroom and where there are other
    nodes in the cluster that the pods can start on.

    Minikube is a single node in a cluster. If you taint it, everything is
    broken.
    Therefore disable the auto-tainting by disabling the
    TaintNodesByCondition feature flag and let developers use all their disk
    or RAM if they want to.
    Then if something breaks, it'll be clearer to them why.
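
For reference, the condition taints the commit message refers to can be inspected directly on the node (illustrative only; the exact keys depend on which condition the kubelet reports, typically node.kubernetes.io/memory-pressure or node.kubernetes.io/disk-pressure with a NoSchedule effect):

# Illustrative check: list any taints currently set on the single Minikube node.
kubectl get node minikube -o jsonpath='{.spec.taints}'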

It is strange that a flag used to disable taints causes a taint to be applied to the node.

If I remove the --feature-gates TaintNodesByCondition=false start-up option, Minikube 1.7.0 starts without issue, but then we get the undesirable behaviour in our local development environments again.
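
As a possible interim workaround (untested on our side), the taint shown further below could presumably be stripped by hand after start-up, though it may be re-applied while the underlying node condition persists:

# Hypothetical workaround: remove the not-ready taint by hand.
# The trailing '-' deletes the taint; it may come back if the node
# condition that caused it is still being reported.
kubectl taint nodes minikube node.kubernetes.io/not-ready:NoSchedule-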

The exact command to reproduce the issue:

sudo minikube start --feature-gates TaintNodesByCondition=false --vm-driver=none
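
To help narrow this down, this is how I would check whether the feature gate is actually being passed through to the control-plane components (my assumption about where to look, not a confirmed diagnosis):

# Assumed sanity checks: confirm the feature gate reached the API server
# pod spec and the kubelet command line.
kubectl -n kube-system get pod kube-apiserver-minikube -o yaml | grep feature-gates
ps aux | grep '[k]ubelet' | grep -o 'feature-gates=[^ ]*'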

The full output of the command that failed:

$ kubectl get pod --all-namespaces
NAME                               READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-98rhw           0/1     Pending   0          8m41s
coredns-5c98db65d4-rnzbm           0/1     Pending   0          8m41s
etcd-minikube                      1/1     Running   0          7m24s
kube-apiserver-minikube            1/1     Running   0          7m25s
kube-controller-manager-minikube   1/1     Running   0          7m46s
kube-proxy-mzxsm                   1/1     Running   0          8m41s
kube-scheduler-minikube            1/1     Running   0          7m45s
storage-provisioner                0/1     Pending   0          8m46s
$ kubectl get node -o yaml | grep -A2 taints:
    taints:
    - effect: NoSchedule
      key: node.kubernetes.io/not-ready
$ kubectl -n kube-system describe pod coredns-5c98db65d4-98rhw | grep -A10 Events:
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  4s (x9 over 10m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
$ kubectl -n kube-system describe pod storage-provisioner | grep -A10 Events:
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  70s (x8 over 10m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

The output of the minikube logs command:

$ minikube logs
==> Docker <==
-- Logs begin at Tue 2019-12-31 07:45:47 AWST, end at Thu 2020-02-06 10:10:52 AWST. --
Feb 06 09:55:29 sre1 dockerd[848]: time="2020-02-06T09:55:29.353692600+08:00" level=warning msg="6abf1288e745f7934f99cc4c8cf9b831ca775348c83f3f216bd7b16f4bc0a26c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6abf1288e745f7934f99cc4c8cf9b831ca775348c83f3f216bd7b16f4bc0a26c/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 09:55:29 sre1 dockerd[848]: time="2020-02-06T09:55:29.510752684+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:29 sre1 dockerd[848]: time="2020-02-06T09:55:29.654703016+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:29 sre1 dockerd[848]: time="2020-02-06T09:55:29.654960258+08:00" level=warning msg="c3389492a8f927ed696852e3543752d3f0394da6b6aad49266b1f00f8d5649fb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c3389492a8f927ed696852e3543752d3f0394da6b6aad49266b1f00f8d5649fb/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 09:55:29 sre1 dockerd[848]: time="2020-02-06T09:55:29.816717061+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.106848795+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.432380380+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.619419813+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.619502068+08:00" level=warning msg="c417bdab660150b791900fac7ff60f1e20aba56b0f62cbbc96f33889522f846c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c417bdab660150b791900fac7ff60f1e20aba56b0f62cbbc96f33889522f846c/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.793691006+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.793778057+08:00" level=warning msg="a5361b0a6914bea12a9b6c67b18b06b637b822dee045ecee3af09add176394e9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a5361b0a6914bea12a9b6c67b18b06b637b822dee045ecee3af09add176394e9/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.931527218+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:30 sre1 dockerd[848]: time="2020-02-06T09:55:30.931678261+08:00" level=warning msg="3eefc805684c0a2483e0b9cf6231f8c4062cad3693d6a6f67c23fbe6b5cb296c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3eefc805684c0a2483e0b9cf6231f8c4062cad3693d6a6f67c23fbe6b5cb296c/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 09:55:31 sre1 dockerd[848]: time="2020-02-06T09:55:31.103042279+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:31 sre1 dockerd[848]: time="2020-02-06T09:55:31.103149677+08:00" level=warning msg="fefb3e3eaa694bddd615f65431f49197586e12a5e657112726da33511b3d0350 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/fefb3e3eaa694bddd615f65431f49197586e12a5e657112726da33511b3d0350/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 09:55:31 sre1 dockerd[848]: time="2020-02-06T09:55:31.264741900+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:31 sre1 dockerd[848]: time="2020-02-06T09:55:31.458481402+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:31 sre1 dockerd[848]: time="2020-02-06T09:55:31.638582676+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 09:55:31 sre1 dockerd[848]: time="2020-02-06T09:55:31.797400018+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:42 sre1 dockerd[848]: time="2020-02-06T10:01:42.713295576+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:42 sre1 dockerd[848]: time="2020-02-06T10:01:42.713356513+08:00" level=warning msg="8404bdbe71ec76e239576b42a7f585a5547373b5bb5ae2b0f3866dc645afb961 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8404bdbe71ec76e239576b42a7f585a5547373b5bb5ae2b0f3866dc645afb961/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:01:42 sre1 dockerd[848]: time="2020-02-06T10:01:42.882667518+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.063499272+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.063632874+08:00" level=warning msg="501a5d7bcbfcc957042b8f88798d05e2a158abfa49850035415f6c3305302380 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/501a5d7bcbfcc957042b8f88798d05e2a158abfa49850035415f6c3305302380/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.223074496+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.223238442+08:00" level=warning msg="84b699fac1321a877d41c6604beb642607f750ef4aa7a4e96dff2e9ca547a648 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/84b699fac1321a877d41c6604beb642607f750ef4aa7a4e96dff2e9ca547a648/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.367798193+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.367890464+08:00" level=warning msg="16a25195b47efd4fee1ed589fc9c4bc6ae370ee51e00744a042d21e381038e92 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/16a25195b47efd4fee1ed589fc9c4bc6ae370ee51e00744a042d21e381038e92/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.543881762+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.543956345+08:00" level=warning msg="254852cde37c8c9485bfe7c38b77306fac39a2a7f3a3fbffd47b598cf66724ee cleanup: failed to unmount IPC: umount /var/lib/docker/containers/254852cde37c8c9485bfe7c38b77306fac39a2a7f3a3fbffd47b598cf66724ee/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.714058631+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:43 sre1 dockerd[848]: time="2020-02-06T10:01:43.919674802+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:44 sre1 dockerd[848]: time="2020-02-06T10:01:44.114003760+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:01:44 sre1 dockerd[848]: time="2020-02-06T10:01:44.325352492+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:03:10 sre1 dockerd[848]: time="2020-02-06T10:03:10.908285391+08:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Feb 06 10:03:11 sre1 dockerd[848]: time="2020-02-06T10:03:11.066240994+08:00" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.275589296+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.275671715+08:00" level=warning msg="38a580e18eaa46a351db8210e3edb3039d1e2125f70791ed96b628fc5b71aa5a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/38a580e18eaa46a351db8210e3edb3039d1e2125f70791ed96b628fc5b71aa5a/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.415993580+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.416055786+08:00" level=warning msg="65e3e4a72e8aae454a29982f35b958573b4e60d2b6748c84ce022b1c2b352731 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/65e3e4a72e8aae454a29982f35b958573b4e60d2b6748c84ce022b1c2b352731/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.626671238+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.626743655+08:00" level=warning msg="5a18efdf750448b5e7ec7535816f1f13be6205a963c93cbad3a298cfcb757f54 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5a18efdf750448b5e7ec7535816f1f13be6205a963c93cbad3a298cfcb757f54/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.797783600+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.797876170+08:00" level=warning msg="0ac39fcccbfc7458ca7a1935d66639cb43ff86371cdb67ecc845f4d7adbe0548 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0ac39fcccbfc7458ca7a1935d66639cb43ff86371cdb67ecc845f4d7adbe0548/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:28 sre1 dockerd[848]: time="2020-02-06T10:05:28.970756621+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:29 sre1 dockerd[848]: time="2020-02-06T10:05:29.127712784+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:29 sre1 dockerd[848]: time="2020-02-06T10:05:29.424930818+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:29 sre1 dockerd[848]: time="2020-02-06T10:05:29.777184323+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:29 sre1 dockerd[848]: time="2020-02-06T10:05:29.919884607+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:29 sre1 dockerd[848]: time="2020-02-06T10:05:29.919981513+08:00" level=warning msg="922679566a3af0d9e376d28b8ab7be9b49089c460182118769116f34e8010654 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/922679566a3af0d9e376d28b8ab7be9b49089c460182118769116f34e8010654/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.065278599+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.065352344+08:00" level=warning msg="35e915a8bbc4a7e6fce6900aeca80fa75605722768090fb2f3390202e217e219 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/35e915a8bbc4a7e6fce6900aeca80fa75605722768090fb2f3390202e217e219/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.203256743+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.203368116+08:00" level=warning msg="88074c5d5d2934d1245e1920424a3cdd61a3ace2482376137b50a65b655ebfcd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/88074c5d5d2934d1245e1920424a3cdd61a3ace2482376137b50a65b655ebfcd/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.341308309+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.341385212+08:00" level=warning msg="d139ba072bd38585d31900e0e236fd04c70d5d5d4dbb4313fb5e4952b0bad973 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d139ba072bd38585d31900e0e236fd04c70d5d5d4dbb4313fb5e4952b0bad973/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.529993318+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.729945361+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:30 sre1 dockerd[848]: time="2020-02-06T10:05:30.889528061+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 10:05:31 sre1 dockerd[848]: time="2020-02-06T10:05:31.080089846+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                     PORTS               NAMES
baa7d3990e29        d756327a2327           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                                   k8s_kube-proxy_kube-proxy-mzxsm_kube-system_2e281a81-26b3-4e77-9747-bce49aef655b_0
8778f6176303        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                                   k8s_POD_kube-proxy-mzxsm_kube-system_2e281a81-26b3-4e77-9747-bce49aef655b_0
72212e3a12d4        502e54938456           "kube-scheduler --bi…"   3 minutes ago       Up 3 minutes                                   k8s_kube-scheduler_kube-scheduler-minikube_kube-system_e464de791d8ef68d9d1f8708211226ce_0
2481c17f4b2d        83ab61bd43ad           "kube-controller-man…"   3 minutes ago       Up 3 minutes                                   k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_a99a4cb4884a61f6185b470808e3932f_0
b44234917eae        9f612b9e9bbf           "kube-apiserver --ad…"   3 minutes ago       Up 3 minutes                                   k8s_kube-apiserver_kube-apiserver-minikube_kube-system_05d5e4d5912b4b3a41eb064529de4155_0
6267a0a846ad        2c4adeb21b4f           "etcd --advertise-cl…"   3 minutes ago       Up 3 minutes                                   k8s_etcd_etcd-minikube_kube-system_c3c08bff237ac66ecb7ad750aaa3a148_0
8f89e98865de        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                                   k8s_POD_etcd-minikube_kube-system_c3c08bff237ac66ecb7ad750aaa3a148_0
1f760ae538bc        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                                   k8s_POD_kube-scheduler-minikube_kube-system_e464de791d8ef68d9d1f8708211226ce_0
8ef5cf21a304        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                                   k8s_POD_kube-controller-manager-minikube_kube-system_a99a4cb4884a61f6185b470808e3932f_0
08e04d8350c9        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                                   k8s_POD_kube-apiserver-minikube_kube-system_05d5e4d5912b4b3a41eb064529de4155_0
dd82bd7d170e        b8da3f63e9d3           "/bin/sh -c ./instal…"   8 weeks ago         Exited (127) 8 weeks ago                       optimistic_chebyshev

==> dmesg <==
[Feb 6 08:26] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[  +0.000000]  #3
[  +0.003294] pmd_set_huge: Cannot satisfy [mem 0xe0000000-0xe0200000] with a huge-page mapping due to MTRR override.
[  +0.002237] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.569492] usbcore: unknown parameter 'autosuspend_delay_ms' ignored
[  +0.016682] usb: port power management may be unreliable
[  +0.835808] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.379793] tpm_crb MSFT0101:00: can't request region for resource [mem 0x37f65000-0x37f6502f]
[  +0.000081] tpm_crb: probe of MSFT0101:00 failed with error -16
[  +0.822503] thermal thermal_zone3: failed to read out thermal zone (-61)
[  +0.617352] uvcvideo 1-7:1.0: Entity type for entity Extension 4 was not initialized!
[  +0.000001] uvcvideo 1-7:1.0: Entity type for entity Extension 3 was not initialized!
[  +0.000001] uvcvideo 1-7:1.0: Entity type for entity Processing 2 was not initialized!
[  +0.000001] uvcvideo 1-7:1.0: Entity type for entity Camera 1 was not initialized!
[  +0.337976] usb 1-3.3.4.3: firmware: failed to load ti_usb-v0451-p3410.fw (-2)
[  +0.000000] firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
[  +0.000002] usb 1-3.3.4.3: Direct firmware load for ti_usb-v0451-p3410.fw failed with error -2
[  +0.000021] usb 1-3.3.4.3: firmware: failed to load ti_3410.fw (-2)
[  +0.000002] usb 1-3.3.4.3: Direct firmware load for ti_3410.fw failed with error -2
[  +0.000009] ti_usb_3410_5052: probe of 1-3.3.4.3:1.0 failed with error -2
[Feb 6 08:43] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Feb 6 08:44] tee (10457): /proc/9983/oom_adj is deprecated, please use /proc/9983/oom_score_adj instead.

==> kernel <==
 10:10:52 up  1:44,  1 user,  load average: 0.62, 0.89, 0.95
Linux sre1 5.4.0-3-amd64 #1 SMP Debian 5.4.13-1 (2020-01-19) x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 10 (buster)"

==> kube-apiserver [b44234917eae] <==
E0206 02:07:01.018643       1 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018680       1 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018703       1 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018735       1 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018754       1 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018776       1 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018793       1 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018835       1 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018876       1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018909       1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 02:07:01.018932       1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0206 02:07:01.018955       1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0206 02:07:01.018964       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0206 02:07:01.020377       1 client.go:354] parsed scheme: ""
I0206 02:07:01.020390       1 client.go:354] scheme "" not registered, fallback to default scheme
I0206 02:07:01.020424       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0206 02:07:01.020483       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 02:07:01.030126       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 02:07:01.030765       1 client.go:354] parsed scheme: ""
I0206 02:07:01.030780       1 client.go:354] scheme "" not registered, fallback to default scheme
I0206 02:07:01.030813       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0206 02:07:01.030860       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 02:07:01.039283       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 02:07:02.448507       1 secure_serving.go:116] Serving securely on [::]:8443
I0206 02:07:02.448546       1 available_controller.go:376] Starting AvailableConditionController
I0206 02:07:02.448562       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0206 02:07:02.449491       1 controller.go:81] Starting OpenAPI AggregationController
I0206 02:07:02.449546       1 crd_finalizer.go:255] Starting CRDFinalizer
I0206 02:07:02.449588       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0206 02:07:02.449602       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0206 02:07:02.449621       1 autoregister_controller.go:140] Starting autoregister controller
I0206 02:07:02.449627       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0206 02:07:02.451610       1 controller.go:83] Starting OpenAPI controller
I0206 02:07:02.451626       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0206 02:07:02.451645       1 naming_controller.go:288] Starting NamingConditionController
I0206 02:07:02.451662       1 establishing_controller.go:73] Starting EstablishingController
I0206 02:07:02.451679       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0206 02:07:02.451702       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0206 02:07:02.451708       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
E0206 02:07:02.474255       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.10.96, ResourceVersion: 0, AdditionalErrorMsg: 
I0206 02:07:02.548740       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0206 02:07:02.549741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0206 02:07:02.549966       1 cache.go:39] Caches are synced for autoregister controller
I0206 02:07:02.552541       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0206 02:07:02.628422       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0206 02:07:03.447124       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0206 02:07:03.447185       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0206 02:07:03.447230       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0206 02:07:03.459289       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0206 02:07:03.470932       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0206 02:07:03.470987       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0206 02:07:04.148236       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0206 02:07:04.200722       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0206 02:07:04.296625       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.10.96]
I0206 02:07:04.298086       1 controller.go:606] quota admission added evaluator for: endpoints
I0206 02:07:05.021992       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0206 02:07:05.791161       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0206 02:07:06.057490       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0206 02:07:11.583824       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0206 02:07:11.603614       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager [2481c17f4b2d] <==
E0206 02:07:10.914555       1 prometheus.go:176] failed to register latency metric certificate: duplicate metrics collector registration attempted
E0206 02:07:10.914642       1 prometheus.go:188] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
E0206 02:07:10.914740       1 prometheus.go:203] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
E0206 02:07:10.914796       1 prometheus.go:216] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
E0206 02:07:10.914875       1 prometheus.go:139] failed to register retries metric certificate: duplicate metrics collector registration attempted
E0206 02:07:10.914942       1 prometheus.go:228] failed to register retries metric certificate: duplicate metrics collector registration attempted
I0206 02:07:10.915029       1 controllermanager.go:532] Started "csrapproving"
I0206 02:07:10.915126       1 certificate_controller.go:113] Starting certificate controller
I0206 02:07:10.915185       1 controller_utils.go:1029] Waiting for caches to sync for certificate controller
I0206 02:07:11.063770       1 controllermanager.go:532] Started "csrcleaner"
I0206 02:07:11.063876       1 cleaner.go:81] Starting CSR cleaner controller
E0206 02:07:11.315731       1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0206 02:07:11.315785       1 controllermanager.go:524] Skipping "service"
I0206 02:07:11.464270       1 node_lifecycle_controller.go:77] Sending events to api server
E0206 02:07:11.464414       1 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0206 02:07:11.464507       1 controllermanager.go:524] Skipping "cloud-node-lifecycle"
I0206 02:07:11.465553       1 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
I0206 02:07:11.471956       1 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0206 02:07:11.531191       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0206 02:07:11.536742       1 controller_utils.go:1036] Caches are synced for namespace controller
I0206 02:07:11.565155       1 controller_utils.go:1036] Caches are synced for TTL controller
I0206 02:07:11.565855       1 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
I0206 02:07:11.566301       1 controller_utils.go:1036] Caches are synced for service account controller
I0206 02:07:11.577519       1 controller_utils.go:1036] Caches are synced for daemon sets controller
E0206 02:07:11.594752       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0206 02:07:11.602084       1 controller_utils.go:1036] Caches are synced for deployment controller
I0206 02:07:11.605364       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"c4b0148e-316c-4d91-8b70-e46f49db08fe", APIVersion:"apps/v1", ResourceVersion:"218", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mzxsm
I0206 02:07:11.613284       1 controller_utils.go:1036] Caches are synced for bootstrap_signer controller
I0206 02:07:11.613691       1 controller_utils.go:1036] Caches are synced for HPA controller
I0206 02:07:11.615851       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"156e0544-13fc-40e0-9d20-b7db27710062", APIVersion:"apps/v1", ResourceVersion:"191", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5c98db65d4 to 2
I0206 02:07:11.616714       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
I0206 02:07:11.620684       1 controller_utils.go:1036] Caches are synced for stateful set controller
E0206 02:07:11.633444       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0206 02:07:11.638797       1 controller_utils.go:1036] Caches are synced for job controller
I0206 02:07:11.662531       1 controller_utils.go:1036] Caches are synced for PVC protection controller
I0206 02:07:11.665169       1 controller_utils.go:1036] Caches are synced for disruption controller
I0206 02:07:11.665341       1 disruption.go:341] Sending events to api server.
I0206 02:07:11.665626       1 controller_utils.go:1036] Caches are synced for ReplicaSet controller
I0206 02:07:11.665177       1 controller_utils.go:1036] Caches are synced for GC controller
I0206 02:07:11.675096       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5c98db65d4", UID:"5366c83d-c567-4b69-9b1c-abfe1886a715", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5c98db65d4-rnzbm
I0206 02:07:11.682000       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5c98db65d4", UID:"5366c83d-c567-4b69-9b1c-abfe1886a715", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5c98db65d4-98rhw
I0206 02:07:11.915504       1 controller_utils.go:1036] Caches are synced for certificate controller
I0206 02:07:11.916796       1 controller_utils.go:1036] Caches are synced for certificate controller
I0206 02:07:11.982752       1 log.go:172] [INFO] signed certificate with serial number 485505334473217407044724471704879930827644360022
I0206 02:07:12.016758       1 controller_utils.go:1036] Caches are synced for endpoint controller
I0206 02:07:12.165717       1 controller_utils.go:1036] Caches are synced for resource quota controller
I0206 02:07:12.181240       1 controller_utils.go:1036] Caches are synced for resource quota controller
I0206 02:07:12.272169       1 controller_utils.go:1036] Caches are synced for garbage collector controller
I0206 02:07:12.279096       1 controller_utils.go:1036] Caches are synced for persistent volume controller
I0206 02:07:12.315632       1 controller_utils.go:1036] Caches are synced for expand controller
I0206 02:07:12.316743       1 controller_utils.go:1036] Caches are synced for taint controller
I0206 02:07:12.316807       1 taint_manager.go:182] Starting NoExecuteTaintManager
I0206 02:07:12.316813       1 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0206 02:07:12.316889       1 node_lifecycle_controller.go:863] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0206 02:07:12.316915       1 node_lifecycle_controller.go:1089] Controller detected that zone  is now in state Normal.
I0206 02:07:12.316945       1 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"20bb58bc-4d19-4e8c-8564-c90a3ca7aa44", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0206 02:07:12.353365       1 controller_utils.go:1036] Caches are synced for garbage collector controller
I0206 02:07:12.353383       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0206 02:07:12.365419       1 controller_utils.go:1036] Caches are synced for PV protection controller
I0206 02:07:12.366501       1 controller_utils.go:1036] Caches are synced for attach detach controller

==> kube-proxy [baa7d3990e29] <==
W0206 02:07:12.328526       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0206 02:07:12.335811       1 server_others.go:143] Using iptables Proxier.
W0206 02:07:12.335920       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0206 02:07:12.336110       1 server.go:534] Version: v1.15.6
I0206 02:07:12.385406       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0206 02:07:12.386012       1 config.go:96] Starting endpoints config controller
I0206 02:07:12.386081       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0206 02:07:12.386144       1 config.go:187] Starting service config controller
I0206 02:07:12.386266       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0206 02:07:12.486404       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0206 02:07:12.486725       1 controller_utils.go:1036] Caches are synced for service config controller

==> kube-scheduler [72212e3a12d4] <==
I0206 02:06:58.688418       1 serving.go:319] Generated self-signed cert in-memory
W0206 02:06:59.851730       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0206 02:06:59.851746       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0206 02:06:59.851759       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0206 02:06:59.854459       1 server.go:142] Version: v1.15.6
W0206 02:06:59.855591       1 authorization.go:47] Authorization is disabled
W0206 02:06:59.855682       1 authentication.go:55] Authentication is disabled
I0206 02:06:59.855721       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0206 02:06:59.856384       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0206 02:07:02.517074       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0206 02:07:02.537995       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0206 02:07:02.538307       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0206 02:07:02.540766       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0206 02:07:02.540954       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0206 02:07:02.541026       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0206 02:07:02.548116       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0206 02:07:02.548192       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0206 02:07:02.556709       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0206 02:07:02.558423       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0206 02:07:03.520368       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0206 02:07:03.540674       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0206 02:07:03.546280       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0206 02:07:03.550829       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0206 02:07:03.552065       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0206 02:07:03.555134       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0206 02:07:03.559287       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0206 02:07:03.564594       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0206 02:07:03.566468       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0206 02:07:03.572169       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0206 02:07:05.458433       1 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
I0206 02:07:05.471808       1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0206 02:07:06.697416       1 factory.go:702] pod is already present in the activeQ
E0206 02:07:11.689385       1 factory.go:702] pod is already present in the activeQ
E0206 02:07:11.718988       1 factory.go:702] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Tue 2019-12-31 07:45:47 AWST, end at Thu 2020-02-06 10:10:52 AWST. --
Feb 06 10:06:59 sre1 kubelet[31906]: E0206 10:06:59.570186   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:06:59 sre1 kubelet[31906]: E0206 10:06:59.670292   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:06:59 sre1 kubelet[31906]: E0206 10:06:59.770406   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:06:59 sre1 kubelet[31906]: E0206 10:06:59.870518   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:06:59 sre1 kubelet[31906]: E0206 10:06:59.970617   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.070827   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.170922   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.271139   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: I0206 10:07:00.370529   31906 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Feb 06 10:07:00 sre1 kubelet[31906]: I0206 10:07:00.370596   31906 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Feb 06 10:07:00 sre1 kubelet[31906]: I0206 10:07:00.371462   31906 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Feb 06 10:07:00 sre1 kubelet[31906]: I0206 10:07:00.371605   31906 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.372190   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.472447   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.572902   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.673075   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.773246   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.873439   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:00 sre1 kubelet[31906]: E0206 10:07:00.973555   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.073701   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.173845   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.274032   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.374143   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.474301   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.574471   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.674596   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.774716   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.874842   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:01 sre1 kubelet[31906]: E0206 10:07:01.974948   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.075131   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.175265   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.275411   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.375526   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.475597   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: I0206 10:07:02.575001   31906 reconciler.go:150] Reconciler: start to sync state
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.575993   31906 kubelet.go:2252] node "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.577105   31906 controller.go:204] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Feb 06 10:07:02 sre1 kubelet[31906]: I0206 10:07:02.577634   31906 kubelet_node_status.go:75] Successfully registered node minikube
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.636780   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae0798a5bcb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb1033063cb2, ext:20497335757, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb1033063cb2, ext:20497335757, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.691782   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a1523758", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104017ed58, ext:20642857048, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104017ed58, ext:20642857048, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.748626   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a153ec6f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019a26f, ext:20642968939, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019a26f, ext:20642968939, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.802256   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a154080e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019be0e, ext:20642976004, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019be0e, ext:20642976004, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.860847   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a1523758", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104017ed58, ext:20642857048, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb10422e4706, ext:20677876228, loc:(*time.Location)(0x7632720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:02 sre1 kubelet[31906]: W0206 10:07:02.884660   31906 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/cfa0e51b-d735-4cce-a76e-70409eae25f9/volumes" does not exist
Feb 06 10:07:02 sre1 kubelet[31906]: W0206 10:07:02.884754   31906 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/4c9fe2c16888e009cff100467a01a432/volumes" does not exist
Feb 06 10:07:02 sre1 kubelet[31906]: W0206 10:07:02.884812   31906 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/b3e72657-768b-4da1-9897-5a1362ebbb73/volumes" does not exist
Feb 06 10:07:02 sre1 kubelet[31906]: W0206 10:07:02.884863   31906 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/0a915cf2-4c85-4839-b9a7-0639387d7a8b/volumes" does not exist
Feb 06 10:07:02 sre1 kubelet[31906]: W0206 10:07:02.884935   31906 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/a01bcbfab1798a63e12bfbfeaaca83cc/volumes" does not exist
Feb 06 10:07:02 sre1 kubelet[31906]: W0206 10:07:02.884984   31906 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ddf9e0b8-b096-40de-9772-27244096bcd3/volumes" does not exist
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.919987   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a153ec6f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019a26f, ext:20642968939, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb10422e7218, ext:20677887255, loc:(*time.Location)(0x7632720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:02 sre1 kubelet[31906]: E0206 10:07:02.977927   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a154080e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019be0e, ext:20642976004, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb10422e8064, ext:20677890908, loc:(*time.Location)(0x7632720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:03 sre1 kubelet[31906]: E0206 10:07:03.033973   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a37f8eec", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb10424544ec, ext:20679383028, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb10424544ec, ext:20679383028, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:03 sre1 kubelet[31906]: E0206 10:07:03.094446   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a1523758", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104017ed58, ext:20642857048, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104b1914ab, ext:20827482038, loc:(*time.Location)(0x7632720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:03 sre1 kubelet[31906]: E0206 10:07:03.492748   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a153ec6f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019a26f, ext:20642968939, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104b193d7a, ext:20827492475, loc:(*time.Location)(0x7632720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:03 sre1 kubelet[31906]: E0206 10:07:03.893558   31906 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15f0ae07a154080e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104019be0e, ext:20642976004, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf86fb104b194fb2, ext:20827497151, loc:(*time.Location)(0x7632720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 10:07:11 sre1 kubelet[31906]: I0206 10:07:11.699286   31906 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2e281a81-26b3-4e77-9747-bce49aef655b-kube-proxy") pod "kube-proxy-mzxsm" (UID: "2e281a81-26b3-4e77-9747-bce49aef655b")
Feb 06 10:07:11 sre1 kubelet[31906]: I0206 10:07:11.702050   31906 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2e281a81-26b3-4e77-9747-bce49aef655b-lib-modules") pod "kube-proxy-mzxsm" (UID: "2e281a81-26b3-4e77-9747-bce49aef655b")
Feb 06 10:07:11 sre1 kubelet[31906]: I0206 10:07:11.702409   31906 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2e281a81-26b3-4e77-9747-bce49aef655b-xtables-lock") pod "kube-proxy-mzxsm" (UID: "2e281a81-26b3-4e77-9747-bce49aef655b")
Feb 06 10:07:11 sre1 kubelet[31906]: I0206 10:07:11.702741   31906 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-d22pp" (UniqueName: "kubernetes.io/secret/2e281a81-26b3-4e77-9747-bce49aef655b-kube-proxy-token-d22pp") pod "kube-proxy-mzxsm" (UID: "2e281a81-26b3-4e77-9747-bce49aef655b")
Feb 06 10:07:16 sre1 kubelet[31906]: I0206 10:07:16.434317   31906 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials

The operating system version:

$ cat /etc/debian_version 
10.2
@tstromberg (Contributor)

Thank you for the excellent bug report. I tried this with HEAD as well as minikube v1.7.0-beta.1 and Kubernetes v1.17.0, and ran into this error in the kubelet logs:

Feb 06 04:10:25 minikube kubelet[4649]: F0206 04:10:25.511173    4649 server.go:182] cannot set feature gate TaintNodesByCondition to false, feature is locked to true
Feb 06 04:10:25 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION

If I specify --kubernetes-version=v1.16.3, minikube v1.7.0 works just fine on my Mac. I haven't tried it with the none driver.
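
For example, something along these lines (a sketch only; add --vm-driver=none and sudo as in your setup) keeps the gate usable by pinning the older Kubernetes version:

minikube start --kubernetes-version=v1.16.3 --feature-gates TaintNodesByCondition=false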

I noticed from your logs that at least some of your components are v1.15.6, which should work. Could it be that minikube was trying to upgrade your cluster to v1.17.x?

It would be helpful if you could share the output of minikube start --alsologtostderr. Thanks!

@jim-barber-he (Author)

Oh. Sorry.
I usually run Kubernetes v1.15.6 since that matches what we have on our production clusters.
I missed the parameter to keep it at that version.

Here's the run again; this time I've done the following:

  • Used Minikube 1.7.1, which has been released since I raised the bug report.

  • Removed my ~/.minikube directory to make sure I'm starting from a clean slate (a rough sketch of that reset is shown after this list).

  • Added the startup flag to lock the Kubernetes version to v1.15.6, as well as the --alsologtostderr flag.
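
For reference, the clean-slate reset was roughly the two commands below (the minikube delete step is just how I'd normally tear down the previous cluster first; it may not be strictly necessary):

sudo minikube delete
sudo rm -rf ~/.minikube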

Starting it like so:

sudo minikube start --feature-gates TaintNodesByCondition=false --vm-driver=none --kubernetes-version=v1.15.6 --alsologtostderr

The output produced is:

$ sudo minikube start --feature-gates TaintNodesByCondition=false --vm-driver=none --kubernetes-version=v1.15.6 --alsologtostderr
W0206 13:14:28.083411   19115 root.go:244] Error reading config file at /home/jim/.minikube/config/config.json: open /home/jim/.minikube/config/config.json: no such file or directory
I0206 13:14:28.084551   19115 notify.go:125] Checking for updates...
I0206 13:14:28.545842   19115 start.go:257] hostinfo: {"hostname":"sre1","uptime":17277,"bootTime":1580948791,"procs":278,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"10.2","kernelVersion":"5.4.0-3-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"e7d14858-9434-e711-9bd2-fc4596ecffea"}
I0206 13:14:28.546388   19115 start.go:267] virtualization: kvm host
😄  minikube v1.7.1 on Debian 10.2
I0206 13:14:28.546507   19115 driver.go:199] Setting default libvirt URI to qemu:///system
✨  Using the none driver based on user configuration
I0206 13:14:28.546571   19115 start.go:304] selected driver: none
I0206 13:14:28.546579   19115 start.go:608] validating driver "none" against <nil>
I0206 13:14:28.546588   19115 start.go:614] status for none: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0206 13:14:28.546861   19115 profile.go:100] Saving config to /home/jim/.minikube/profiles/minikube/config.json ...
I0206 13:14:28.546948   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/profiles/minikube/config.json: {Name:mk64356110cc7830a12d1c6da12ce564fa30fd20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:28.547413   19115 start.go:213] acquiring machines lock for minikube: {Name:mkc7ca3178cc9d8955964105ba2fc72b8092c525 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0206 13:14:28.547498   19115 start.go:217] acquired machines lock for "minikube" in 63.17µs
I0206 13:14:28.547516   19115 start.go:79] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.7.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.15.6 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates:TaintNodesByCondition=false ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.15.6 ControlPlane:true Worker:true}] Addons:map[]}
I0206 13:14:28.547589   19115 start.go:98] createHost starting for "minikube" (driver="none")
🤹  Running on localhost (CPUs=4, Memory=19948MB, Disk=231773MB) ...
I0206 13:14:28.548301   19115 start.go:131] libmachine.API.Create for "minikube" (driver="none")
I0206 13:14:28.548364   19115 main.go:110] libmachine: Creating CA: /home/jim/.minikube/certs/ca.pem
I0206 13:14:28.692038   19115 main.go:110] libmachine: Creating client certificate: /home/jim/.minikube/certs/cert.pem
I0206 13:14:28.895753   19115 start.go:137] libmachine.API.Create for "minikube" took 347.456764ms
I0206 13:14:28.895775   19115 start.go:151] post-start starting for "minikube" (driver="none")
I0206 13:14:28.895802   19115 start.go:161] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor]
I0206 13:14:28.895821   19115 start.go:191] returning ExecRunner for "none" driver
I0206 13:14:28.902453   19115 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
ℹ️   OS release is Debian GNU/Linux 10 (buster)
I0206 13:14:28.902582   19115 filesync.go:67] Scanning /home/jim/.minikube/addons for local assets ...
I0206 13:14:28.902658   19115 filesync.go:67] Scanning /home/jim/.minikube/files for local assets ...
I0206 13:14:28.902698   19115 start.go:154] post-start completed in 6.907743ms
I0206 13:14:28.902898   19115 start.go:101] createHost completed in 355.299014ms
I0206 13:14:28.902908   19115 start.go:70] releasing machines lock for "minikube", held for 355.398966ms
I0206 13:14:29.617833   19115 profile.go:100] Saving config to /home/jim/.minikube/profiles/minikube/config.json ...
🐳  Preparing Kubernetes v1.15.6 on Docker '19.03.5' ...
I0206 13:14:29.701960   19115 settings.go:123] acquiring lock: {Name:mke8591a63ba7de7b76be49418dd2566cc7ca4d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:29.702079   19115 settings.go:131] Updating kubeconfig:  /home/jim/.kube/config
I0206 13:14:29.707949   19115 lock.go:35] WriteFile acquiring /home/jim/.kube/config: {Name:mk12b9c88112d1b63c1d0abed6f4f97807d3a1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:29.792264   19115 kubeadm.go:448] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.15.6/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver='cgroupfs' --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --feature-gates=TaintNodesByCondition=false --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.10.96 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.15.6 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates:TaintNodesByCondition=false ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false}
W0206 13:14:29.812729   19115 kubeadm.go:453] unable to stop kubelet: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet": exit status 1
stdout:

stderr:
 command: "/bin/bash -c \"pgrep kubelet && sudo systemctl stop kubelet\"" output: ""
💾  Downloading kubectl v1.15.6
💾  Downloading kubelet v1.15.6
💾  Downloading kubeadm v1.15.6
I0206 13:14:49.133421   19115 certs.go:66] Setting up /home/jim/.minikube for IP: 192.168.10.96
I0206 13:14:49.133448   19115 certs.go:75] acquiring lock: {Name:mk45847a33d0f68f18db9ac3cd90a0ee16408a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.252518   19115 crypto.go:157] Writing cert to /home/jim/.minikube/ca.crt ...
I0206 13:14:49.252549   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/ca.crt: {Name:mkddcea55ecd4f2c7eb1945d2cc76c30b3d871a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.252774   19115 crypto.go:165] Writing key to /home/jim/.minikube/ca.key ...
I0206 13:14:49.252786   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/ca.key: {Name:mk5f2335be48f5eb5bb4dccb0904c35375438948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.313732   19115 crypto.go:157] Writing cert to /home/jim/.minikube/proxy-client-ca.crt ...
I0206 13:14:49.313773   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/proxy-client-ca.crt: {Name:mk3420375a7efea7385a1c8d2001b0d908cd1501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.314015   19115 crypto.go:165] Writing key to /home/jim/.minikube/proxy-client-ca.key ...
I0206 13:14:49.314046   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/proxy-client-ca.key: {Name:mk587e01e66feac1e6e214bb6aad26e03f50bc34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.314161   19115 crypto.go:69] Generating cert /home/jim/.minikube/client.crt with IP's: []
I0206 13:14:49.586736   19115 crypto.go:157] Writing cert to /home/jim/.minikube/client.crt ...
I0206 13:14:49.586765   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/client.crt: {Name:mk7d2ff4b012f50c40054d7700d15814be53930d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.586931   19115 crypto.go:165] Writing key to /home/jim/.minikube/client.key ...
I0206 13:14:49.586944   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/client.key: {Name:mkd0af67277547393f90d54aae207f99233cbe51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.587034   19115 crypto.go:69] Generating cert /home/jim/.minikube/apiserver.crt with IP's: [192.168.10.96 10.96.0.1 127.0.0.1 10.0.0.1]
I0206 13:14:49.724712   19115 crypto.go:157] Writing cert to /home/jim/.minikube/apiserver.crt ...
I0206 13:14:49.724736   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/apiserver.crt: {Name:mk0482fe952ee1582640b4a6ae47c17a399cc05b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.724940   19115 crypto.go:165] Writing key to /home/jim/.minikube/apiserver.key ...
I0206 13:14:49.724952   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/apiserver.key: {Name:mk45eb75d029a9885a3e6c598e56c3b277992fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.725063   19115 crypto.go:69] Generating cert /home/jim/.minikube/proxy-client.crt with IP's: []
I0206 13:14:49.971244   19115 crypto.go:157] Writing cert to /home/jim/.minikube/proxy-client.crt ...
I0206 13:14:49.971270   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/proxy-client.crt: {Name:mk16ac4b868f83cae7c3d0c45c1515f2add39d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0206 13:14:49.971470   19115 crypto.go:165] Writing key to /home/jim/.minikube/proxy-client.key ...
I0206 13:14:49.971483   19115 lock.go:35] WriteFile acquiring /home/jim/.minikube/proxy-client.key: {Name:mkeef0fc21ed4141100e75e5d08ccf46daa5bc3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
🚜  Pulling images ...
I0206 13:15:21.800781   19115 exec_runner.go:76] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.6:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml": (31.808785583s)
🚀  Launching Kubernetes ... 
I0206 13:15:21.822131   19115 kubeadm.go:157] existence check: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd: exit status 2
stdout:

stderr:
ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0206 13:15:21.822200   19115 kubeadm.go:160] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.7.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.15.6 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates:TaintNodesByCondition=false ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false} Nodes:[{Name: IP:192.168.10.96 Port:8443 KubernetesVersion:v1.15.6 ControlPlane:true Worker:true}] Addons:map[]}
I0206 13:15:55.294029   19115 exec_runner.go:76] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.15.6:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": (33.471591718s)
I0206 13:15:55.294176   19115 kubeadm.go:220] Configuring cluster permissions ...
I0206 13:15:55.348017   19115 rbac.go:71] duration metric: took 40.082469ms to wait for elevateKubeSystemPrivileges.
I0206 13:15:55.379458   19115 rbac.go:81] apiserver oom_adj: 16
I0206 13:15:55.379495   19115 rbac.go:86] adjusting apiserver oom_adj to -10
I0206 13:15:55.408387   19115 kubeadm.go:162] StartCluster complete in 33.586186137s
I0206 13:15:55.408443   19115 addons.go:272] enableAddons start: toEnable=map[], additional=[]
🌟  Enabling addons: default-storageclass, storage-provisioner
I0206 13:15:55.410033   19115 addons.go:46] Setting default-storageclass=true in profile "minikube"
I0206 13:15:55.410152   19115 addons.go:226] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0206 13:15:55.414786   19115 none.go:130] GetState called
I0206 13:15:55.416949   19115 kverify.go:126] Checking apiserver status ...
I0206 13:15:55.442580   19115 kverify.go:142] apiserver freezer: "3:freezer:/kubepods/burstable/pod05d5e4d5912b4b3a41eb064529de4155/d6b3d65abb67fce5667d694f569661378c3f5b488f3b93d309e121ebf74be828"
I0206 13:15:55.448723   19115 kverify.go:156] freezer state: "THAWED"
I0206 13:15:55.448747   19115 kverify.go:166] Checking apiserver healthz at https://192.168.10.96:8443/healthz ...
I0206 13:15:55.455179   19115 addons.go:106] Setting addon default-storageclass=true in "minikube"
W0206 13:15:55.455331   19115 addons.go:121] addon default-storageclass should already be in state true
I0206 13:15:55.455417   19115 status.go:65] Checking if "minikube" exists ...
I0206 13:15:55.455904   19115 none.go:130] GetState called
I0206 13:15:55.458908   19115 kverify.go:126] Checking apiserver status ...
I0206 13:15:55.479818   19115 kverify.go:142] apiserver freezer: "3:freezer:/kubepods/burstable/pod05d5e4d5912b4b3a41eb064529de4155/d6b3d65abb67fce5667d694f569661378c3f5b488f3b93d309e121ebf74be828"
I0206 13:15:55.487202   19115 kverify.go:156] freezer state: "THAWED"
I0206 13:15:55.487229   19115 kverify.go:166] Checking apiserver healthz at https://192.168.10.96:8443/healthz ...
I0206 13:15:55.491487   19115 addons.go:194] installing /etc/kubernetes/addons/storageclass.yaml
I0206 13:15:55.491787   19115 addons.go:215] Running: /usr/bin/sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.15.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0206 13:15:55.894509   19115 addons.go:220] output:
-- stdout --
storageclass.storage.k8s.io/standard created

-- /stdout --
I0206 13:15:55.894583   19115 addons.go:72] Writing out "minikube" config to set default-storageclass=true...
I0206 13:15:55.895077   19115 addons.go:46] Setting storage-provisioner=true in profile "minikube"
I0206 13:15:55.895418   19115 addons.go:106] Setting addon storage-provisioner=true in "minikube"
W0206 13:15:55.895691   19115 addons.go:121] addon storage-provisioner should already be in state true
I0206 13:15:55.895929   19115 status.go:65] Checking if "minikube" exists ...
I0206 13:15:55.896976   19115 none.go:130] GetState called
I0206 13:15:55.902406   19115 kverify.go:126] Checking apiserver status ...
I0206 13:15:55.937853   19115 kverify.go:142] apiserver freezer: "3:freezer:/kubepods/burstable/pod05d5e4d5912b4b3a41eb064529de4155/d6b3d65abb67fce5667d694f569661378c3f5b488f3b93d309e121ebf74be828"
I0206 13:15:55.943955   19115 kverify.go:156] freezer state: "THAWED"
I0206 13:15:55.943979   19115 kverify.go:166] Checking apiserver healthz at https://192.168.10.96:8443/healthz ...
I0206 13:15:55.948113   19115 addons.go:194] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0206 13:15:55.948367   19115 addons.go:215] Running: /usr/bin/sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.15.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0206 13:15:56.150686   19115 addons.go:220] output:
-- stdout --
serviceaccount/storage-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
pod/storage-provisioner created

-- /stdout --
I0206 13:15:56.150715   19115 addons.go:72] Writing out "minikube" config to set storage-provisioner=true...
I0206 13:15:56.150925   19115 addons.go:274] enableAddons completed in 742.483442ms
🤹  Configuring local host environment ...

⚠️  The 'none' driver provides limited isolation and may reduce system security and reliability.
⚠️  For more information, see:
👉  https://minikube.sigs.k8s.io/docs/reference/drivers/none/

⚠️  kubectl and minikube configuration will be stored in /home/jim
⚠️  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /home/jim/.kube /home/jim/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
⌛  Waiting for cluster to come online ...
I0206 13:15:56.151198   19115 kverify.go:42] waiting for apiserver process to appear ...
I0206 13:15:56.165617   19115 kverify.go:56] duration metric: took 14.430042ms to wait for apiserver process to appear ...
I0206 13:15:56.165641   19115 kverify.go:99] waiting for apiserver healthz status ...
I0206 13:15:56.165663   19115 kverify.go:166] Checking apiserver healthz at https://192.168.10.96:8443/healthz ...
I0206 13:15:56.169808   19115 kverify.go:120] duration metric: took 4.149469ms to wait for apiserver healthz status ...
I0206 13:15:56.169830   19115 kverify.go:72] waiting for kube-system pods to appear ...
I0206 13:15:56.175738   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:56.680440   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:57.180439   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:57.680298   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:58.180777   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:58.677943   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:59.182088   19115 kverify.go:84] 1 kube-system pods found
I0206 13:15:59.681931   19115 kverify.go:84] 1 kube-system pods found
I0206 13:16:00.181122   19115 kverify.go:84] 1 kube-system pods found
I0206 13:16:00.680606   19115 kverify.go:84] 1 kube-system pods found
I0206 13:16:01.181075   19115 kverify.go:84] 1 kube-system pods found
I0206 13:16:01.680623   19115 kverify.go:84] 1 kube-system pods found
I0206 13:16:02.182700   19115 kverify.go:84] 1 kube-system pods found
I0206 13:16:02.683900   19115 kverify.go:84] 4 kube-system pods found
I0206 13:16:02.683979   19115 kverify.go:93] duration metric: took 6.514128527s to wait for pod list to return data ...
🏄  Done! kubectl is now configured to use "minikube"
I0206 13:16:02.863940   19115 start.go:562] kubectl: 1.17.1, cluster: 1.15.6 (minor skew: 2)
⚠️  /usr/local/bin/kubectl is version 1.17.1, and is incompatible with Kubernetes 1.15.6. You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster

I still have the same issue:

$ kubectl get pod --all-namespaces && kubectl get node -o yaml | grep -A2 taints:
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-8rt7f       0/1     Pending   0          5m9s
kube-system   coredns-5c98db65d4-t5rtt       0/1     Pending   0          5m9s
kube-system   etcd-sre1                      1/1     Running   0          4m20s
kube-system   kube-apiserver-sre1            1/1     Running   0          4m3s
kube-system   kube-controller-manager-sre1   1/1     Running   0          4m7s
kube-system   kube-proxy-6wtk6               1/1     Running   0          5m9s
kube-system   kube-scheduler-sre1            1/1     Running   0          4m16s
kube-system   storage-provisioner            0/1     Pending   0          5m15s
    taints:
    - effect: NoSchedule
      key: node.kubernetes.io/not-ready
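
As a possible stopgap (I haven't relied on it, and whatever applies the taint might simply re-apply it), the taint can be removed by hand so the pending pods schedule:

kubectl taint nodes sre1 node.kubernetes.io/not-ready:NoSchedule-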

And just in case, here's the output of minikube logs again.

$ minikube logs
==> Docker <==
-- Logs begin at Tue 2019-12-31 07:45:47 AWST, end at Thu 2020-02-06 13:22:04 AWST. --
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.047121224+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.047215088+08:00" level=warning msg="60251eb048b0374f461efe416ee97fe69155a9c017c181c28e2600ad170b1dba cleanup: failed to unmount IPC: umount /var/lib/docker/containers/60251eb048b0374f461efe416ee97fe69155a9c017c181c28e2600ad170b1dba/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.231506465+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.231587705+08:00" level=warning msg="119cb9aa1e8f3452b2fb75ba8fd0ef2a12192f680c21600f33041e8ead0098f1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/119cb9aa1e8f3452b2fb75ba8fd0ef2a12192f680c21600f33041e8ead0098f1/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.456388575+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.456535380+08:00" level=warning msg="96b4f93facc5793f74ff7b38d52df5ef947dd291f8bf560496125cde5e094d3f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/96b4f93facc5793f74ff7b38d52df5ef947dd291f8bf560496125cde5e094d3f/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:24 sre1 dockerd[848]: time="2020-02-06T13:06:24.703228697+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:25 sre1 dockerd[848]: time="2020-02-06T13:06:25.072035884+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:25 sre1 dockerd[848]: time="2020-02-06T13:06:25.541625985+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:26 sre1 dockerd[848]: time="2020-02-06T13:06:26.153007490+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:26 sre1 dockerd[848]: time="2020-02-06T13:06:26.153105558+08:00" level=warning msg="7806777553a7bf680dfe663ab91ea0f91410b31b32e800ab4ba6707b992b49bd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7806777553a7bf680dfe663ab91ea0f91410b31b32e800ab4ba6707b992b49bd/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:26 sre1 dockerd[848]: time="2020-02-06T13:06:26.451106059+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:26 sre1 dockerd[848]: time="2020-02-06T13:06:26.899373766+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:26 sre1 dockerd[848]: time="2020-02-06T13:06:26.899449058+08:00" level=warning msg="908639bbef4a6ce8c78ea7aef9baee64c0e1f9b791302cef4df2189780aa83ce cleanup: failed to unmount IPC: umount /var/lib/docker/containers/908639bbef4a6ce8c78ea7aef9baee64c0e1f9b791302cef4df2189780aa83ce/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:27 sre1 dockerd[848]: time="2020-02-06T13:06:27.119198947+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:27 sre1 dockerd[848]: time="2020-02-06T13:06:27.119264663+08:00" level=warning msg="9de6dcc72b522b101c30446b34447f79ce7a208d008b0852642e34ed15061eed cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9de6dcc72b522b101c30446b34447f79ce7a208d008b0852642e34ed15061eed/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:27 sre1 dockerd[848]: time="2020-02-06T13:06:27.352170650+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:27 sre1 dockerd[848]: time="2020-02-06T13:06:27.806611704+08:00" level=warning msg="41a6d6876ca5bb89c910951efa454f3828f67f8baae577f0f68d351ede57f74d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/41a6d6876ca5bb89c910951efa454f3828f67f8baae577f0f68d351ede57f74d/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:27 sre1 dockerd[848]: time="2020-02-06T13:06:27.806725658+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:28 sre1 dockerd[848]: time="2020-02-06T13:06:28.052875291+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:28 sre1 dockerd[848]: time="2020-02-06T13:06:28.495712416+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:28 sre1 dockerd[848]: time="2020-02-06T13:06:28.951632036+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:28 sre1 dockerd[848]: time="2020-02-06T13:06:28.951707339+08:00" level=warning msg="c250dc98e19538461ac22ee6f042afa9b7ee88fdca0b1f31372deb94267c9305 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c250dc98e19538461ac22ee6f042afa9b7ee88fdca0b1f31372deb94267c9305/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:29 sre1 dockerd[848]: time="2020-02-06T13:06:29.204714183+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:29 sre1 dockerd[848]: time="2020-02-06T13:06:29.748904977+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:29 sre1 dockerd[848]: time="2020-02-06T13:06:29.748982785+08:00" level=warning msg="b5c6751d7fb4cbcd65c8a5cb53910b51a955eabf3020aafa5b76dacb56ee749b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b5c6751d7fb4cbcd65c8a5cb53910b51a955eabf3020aafa5b76dacb56ee749b/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:30 sre1 dockerd[848]: time="2020-02-06T13:06:30.062434744+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:30 sre1 dockerd[848]: time="2020-02-06T13:06:30.501315177+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:30 sre1 dockerd[848]: time="2020-02-06T13:06:30.501398860+08:00" level=warning msg="103f77387f6fcc4fb5775cf6691a1d06023e74783ef45cc15531fc9e9ef58e97 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/103f77387f6fcc4fb5775cf6691a1d06023e74783ef45cc15531fc9e9ef58e97/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:30 sre1 dockerd[848]: time="2020-02-06T13:06:30.770702684+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.159173327+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.159315076+08:00" level=warning msg="48bd29f86701d7c2a160505f5f1c0f5148e5e5f0bbed4a79186cfca7de41b52b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/48bd29f86701d7c2a160505f5f1c0f5148e5e5f0bbed4a79186cfca7de41b52b/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.353669487+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.353751231+08:00" level=warning msg="9b26481382a71503690f2f7edd6b5a6e325cd05a676aa7841462b5b2cd02f7d5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9b26481382a71503690f2f7edd6b5a6e325cd05a676aa7841462b5b2cd02f7d5/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.590666289+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.867251466+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:31 sre1 dockerd[848]: time="2020-02-06T13:06:31.867325085+08:00" level=warning msg="b7ae55bf013b02503edb96106e22d60c07fbcbc66ec508c2d10503c672a4eee6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b7ae55bf013b02503edb96106e22d60c07fbcbc66ec508c2d10503c672a4eee6/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:32 sre1 dockerd[848]: time="2020-02-06T13:06:32.052942557+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:32 sre1 dockerd[848]: time="2020-02-06T13:06:32.053060177+08:00" level=warning msg="95e9b70e15437beada59369c86e9543ed5da48a60e3be3a11e5f700dc6bdcf88 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/95e9b70e15437beada59369c86e9543ed5da48a60e3be3a11e5f700dc6bdcf88/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:32 sre1 dockerd[848]: time="2020-02-06T13:06:32.304847508+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:32 sre1 dockerd[848]: time="2020-02-06T13:06:32.718589915+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:33 sre1 dockerd[848]: time="2020-02-06T13:06:33.139004654+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:33 sre1 dockerd[848]: time="2020-02-06T13:06:33.584516920+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:33 sre1 dockerd[848]: time="2020-02-06T13:06:33.584557381+08:00" level=warning msg="2ebf5768c8f08192ace9c9bcfbcaa625ed6c3d04469c562c944ab0d66eedb2ae cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2ebf5768c8f08192ace9c9bcfbcaa625ed6c3d04469c562c944ab0d66eedb2ae/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:33 sre1 dockerd[848]: time="2020-02-06T13:06:33.807874668+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.036964890+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.037066235+08:00" level=warning msg="056011d1a38cc634b96a1193f358fcd0d0af4b6f0f907662b24408112f6e2416 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/056011d1a38cc634b96a1193f358fcd0d0af4b6f0f907662b24408112f6e2416/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.274077631+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.274171397+08:00" level=warning msg="6aadcc44cc2434f291189e13f06f48fa3e47321bf47a024e6b9b96e00a537838 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6aadcc44cc2434f291189e13f06f48fa3e47321bf47a024e6b9b96e00a537838/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.484995224+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.485074535+08:00" level=warning msg="e61380d9e83ea1b7858b3d6d5e66d03e1bfeee3ca5ba062f872eb447d0bb4114 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e61380d9e83ea1b7858b3d6d5e66d03e1bfeee3ca5ba062f872eb447d0bb4114/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.702145271+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.702247812+08:00" level=warning msg="afb09b60ccc6e98e662a3c9488e17e97f5fb80d5fc0bdfc4e6ef5499852f7e9e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/afb09b60ccc6e98e662a3c9488e17e97f5fb80d5fc0bdfc4e6ef5499852f7e9e/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.917355901+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:34 sre1 dockerd[848]: time="2020-02-06T13:06:34.917434472+08:00" level=warning msg="25a2bfc72e93a7e3a6a5f07195e73dcc6cf1795e3d8fff1c40966cd3aa1354db cleanup: failed to unmount IPC: umount /var/lib/docker/containers/25a2bfc72e93a7e3a6a5f07195e73dcc6cf1795e3d8fff1c40966cd3aa1354db/mounts/shm, flags: 0x2: no such file or directory"
Feb 06 13:06:35 sre1 dockerd[848]: time="2020-02-06T13:06:35.132877063+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:35 sre1 dockerd[848]: time="2020-02-06T13:06:35.421422221+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:35 sre1 dockerd[848]: time="2020-02-06T13:06:35.669088354+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:35 sre1 dockerd[848]: time="2020-02-06T13:06:35.918265558+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 13:06:36 sre1 dockerd[848]: time="2020-02-06T13:06:36.204048910+08:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                     PORTS               NAMES
b8af1416b363        d756327a2327           "/usr/local/bin/kube…"   6 minutes ago       Up 6 minutes                                   k8s_kube-proxy_kube-proxy-6wtk6_kube-system_acdbf016-f284-4dbd-911a-4add47ec44ae_0
b653625adb69        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                                   k8s_POD_kube-proxy-6wtk6_kube-system_acdbf016-f284-4dbd-911a-4add47ec44ae_0
62bba7ea1541        83ab61bd43ad           "kube-controller-man…"   6 minutes ago       Up 6 minutes                                   k8s_kube-controller-manager_kube-controller-manager-sre1_kube-system_a99a4cb4884a61f6185b470808e3932f_0
fa12ad0b23ee        502e54938456           "kube-scheduler --bi…"   6 minutes ago       Up 6 minutes                                   k8s_kube-scheduler_kube-scheduler-sre1_kube-system_e464de791d8ef68d9d1f8708211226ce_0
d6b3d65abb67        9f612b9e9bbf           "kube-apiserver --ad…"   6 minutes ago       Up 6 minutes                                   k8s_kube-apiserver_kube-apiserver-sre1_kube-system_05d5e4d5912b4b3a41eb064529de4155_0
061525c0821f        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                                   k8s_POD_kube-scheduler-sre1_kube-system_e464de791d8ef68d9d1f8708211226ce_0
f8cec5f4e687        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                                   k8s_POD_kube-controller-manager-sre1_kube-system_a99a4cb4884a61f6185b470808e3932f_0
8768f91378e5        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                                   k8s_POD_kube-apiserver-sre1_kube-system_05d5e4d5912b4b3a41eb064529de4155_0
14d9085b2454        2c4adeb21b4f           "etcd --advertise-cl…"   6 minutes ago       Up 6 minutes                                   k8s_etcd_etcd-sre1_kube-system_61de0bb4ae9b67b1edb8848375087637_0
86404da04168        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes                                   k8s_POD_etcd-sre1_kube-system_61de0bb4ae9b67b1edb8848375087637_0
dd82bd7d170e        b8da3f63e9d3           "/bin/sh -c ./instal…"   8 weeks ago         Exited (127) 8 weeks ago                       optimistic_chebyshev

==> dmesg <==
[Feb 6 08:26] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.379793] tpm_crb MSFT0101:00: can't request region for resource [mem 0x37f65000-0x37f6502f]
[  +0.000081] tpm_crb: probe of MSFT0101:00 failed with error -16
[  +0.822503] thermal thermal_zone3: failed to read out thermal zone (-61)
[  +0.617352] uvcvideo 1-7:1.0: Entity type for entity Extension 4 was not initialized!
[  +0.000001] uvcvideo 1-7:1.0: Entity type for entity Extension 3 was not initialized!
[  +0.000001] uvcvideo 1-7:1.0: Entity type for entity Processing 2 was not initialized!
[  +0.000001] uvcvideo 1-7:1.0: Entity type for entity Camera 1 was not initialized!
[  +0.337976] usb 1-3.3.4.3: firmware: failed to load ti_usb-v0451-p3410.fw (-2)
[  +0.000000] firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
[  +0.000002] usb 1-3.3.4.3: Direct firmware load for ti_usb-v0451-p3410.fw failed with error -2
[  +0.000021] usb 1-3.3.4.3: firmware: failed to load ti_3410.fw (-2)
[  +0.000002] usb 1-3.3.4.3: Direct firmware load for ti_3410.fw failed with error -2
[  +0.000009] ti_usb_3410_5052: probe of 1-3.3.4.3:1.0 failed with error -2
[Feb 6 08:43] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Feb 6 08:44] tee (10457): /proc/9983/oom_adj is deprecated, please use /proc/9983/oom_score_adj instead.
[Feb 6 10:35] acer_wmi: Unknown function number - 6 - 1
[Feb 6 10:36] acer_wmi: Unknown function number - 6 - 1
[Feb 6 11:31] usb 1-1: device descriptor read/64, error -71
[  +0.239989] usb 1-1: device descriptor read/64, error -71
[  +0.364445] usb 1-1: device descriptor read/64, error -71
[  +0.235685] usb 1-1: device descriptor read/64, error -71
[  +0.763975] usb 1-1: Device not responding to setup address.
[  +0.207998] usb 1-1: Device not responding to setup address.
[  +0.211899] usb 1-1: device not accepting address 15, error -71
[  +0.128125] usb 1-1: Device not responding to setup address.
[  +0.208034] usb 1-1: Device not responding to setup address.
[  +0.207858] usb 1-1: device not accepting address 16, error -71
[  +0.000138] usb usb1-port1: unable to enumerate USB device
[Feb 6 12:12] usb 1-1: device descriptor read/64, error -71
[  +0.236021] usb 1-1: device descriptor read/64, error -71
[  +0.371940] usb 1-1: device descriptor read/64, error -71
[  +0.240042] usb 1-1: device descriptor read/64, error -71
[  +0.764217] usb 1-1: Device not responding to setup address.
[  +0.208007] usb 1-1: Device not responding to setup address.
[  +0.207696] usb 1-1: device not accepting address 19, error -71
[  +0.130724] usb 1-1: Device not responding to setup address.
[  +0.205539] usb 1-1: Device not responding to setup address.
[  +0.211801] usb 1-1: device not accepting address 20, error -71
[  +0.000120] usb usb1-port1: unable to enumerate USB device

==> kernel <==
 13:22:04 up  4:55,  1 user,  load average: 0.57, 0.67, 1.16
Linux sre1 5.4.0-3-amd64 #1 SMP Debian 5.4.13-1 (2020-01-19) x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 10 (buster)"

==> kube-apiserver [d6b3d65abb67] <==
E0206 05:15:50.447821       1 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.447859       1 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.447877       1 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.447904       1 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.447923       1 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.447950       1 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.447973       1 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.448017       1 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.448056       1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.448083       1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0206 05:15:50.448109       1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0206 05:15:50.448144       1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0206 05:15:50.448152       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0206 05:15:50.449502       1 client.go:354] parsed scheme: ""
I0206 05:15:50.449511       1 client.go:354] scheme "" not registered, fallback to default scheme
I0206 05:15:50.449563       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0206 05:15:50.449602       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 05:15:50.456232       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 05:15:50.456719       1 client.go:354] parsed scheme: ""
I0206 05:15:50.456733       1 client.go:354] scheme "" not registered, fallback to default scheme
I0206 05:15:50.456762       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0206 05:15:50.456802       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 05:15:50.464045       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0206 05:15:51.858442       1 secure_serving.go:116] Serving securely on [::]:8443
I0206 05:15:51.858480       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0206 05:15:51.858497       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0206 05:15:51.858511       1 available_controller.go:376] Starting AvailableConditionController
I0206 05:15:51.858521       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0206 05:15:51.858547       1 crd_finalizer.go:255] Starting CRDFinalizer
I0206 05:15:51.858569       1 controller.go:81] Starting OpenAPI AggregationController
I0206 05:15:51.858583       1 controller.go:83] Starting OpenAPI controller
I0206 05:15:51.858594       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0206 05:15:51.858620       1 naming_controller.go:288] Starting NamingConditionController
I0206 05:15:51.858655       1 establishing_controller.go:73] Starting EstablishingController
I0206 05:15:51.858750       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0206 05:15:51.859865       1 autoregister_controller.go:140] Starting autoregister controller
I0206 05:15:51.859877       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0206 05:15:51.862070       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0206 05:15:51.862083       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
E0206 05:15:51.865722       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.10.96, ResourceVersion: 0, AdditionalErrorMsg: 
I0206 05:15:51.960209       1 cache.go:39] Caches are synced for autoregister controller
I0206 05:15:51.963881       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0206 05:15:51.963902       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0206 05:15:51.964815       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0206 05:15:52.011696       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0206 05:15:52.857070       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0206 05:15:52.857135       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0206 05:15:52.857426       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0206 05:15:52.869626       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0206 05:15:52.889013       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0206 05:15:52.889077       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0206 05:15:53.583486       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0206 05:15:53.662386       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0206 05:15:53.834678       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.10.96]
I0206 05:15:53.836296       1 controller.go:606] quota admission added evaluator for: endpoints
I0206 05:15:54.957833       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0206 05:15:54.999501       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0206 05:15:55.260670       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0206 05:16:02.495627       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0206 05:16:02.574219       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [62bba7ea1541] <==
I0206 05:16:01.168780       1 controllermanager.go:532] Started "ttl"
I0206 05:16:01.168867       1 ttl_controller.go:116] Starting TTL controller
I0206 05:16:01.168920       1 controller_utils.go:1029] Waiting for caches to sync for TTL controller
I0206 05:16:01.419833       1 controllermanager.go:532] Started "tokencleaner"
I0206 05:16:01.419936       1 tokencleaner.go:116] Starting token cleaner controller
I0206 05:16:01.419985       1 controller_utils.go:1029] Waiting for caches to sync for token_cleaner controller
I0206 05:16:01.520167       1 controller_utils.go:1036] Caches are synced for token_cleaner controller
I0206 05:16:01.669906       1 controllermanager.go:532] Started "persistentvolume-expander"
W0206 05:16:01.669959       1 controllermanager.go:524] Skipping "ttl-after-finished"
I0206 05:16:01.669998       1 expand_controller.go:300] Starting expand controller
I0206 05:16:01.670047       1 controller_utils.go:1029] Waiting for caches to sync for expand controller
I0206 05:16:01.936857       1 controllermanager.go:532] Started "namespace"
I0206 05:16:01.936956       1 namespace_controller.go:186] Starting namespace controller
I0206 05:16:01.937023       1 controller_utils.go:1029] Waiting for caches to sync for namespace controller
I0206 05:16:02.319120       1 controllermanager.go:532] Started "disruption"
I0206 05:16:02.319168       1 disruption.go:333] Starting disruption controller
I0206 05:16:02.319234       1 controller_utils.go:1029] Waiting for caches to sync for disruption controller
I0206 05:16:02.320643       1 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
I0206 05:16:02.340802       1 controller_utils.go:1036] Caches are synced for namespace controller
I0206 05:16:02.358014       1 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
I0206 05:16:02.385466       1 controller_utils.go:1036] Caches are synced for bootstrap_signer controller
I0206 05:16:02.393472       1 controller_utils.go:1036] Caches are synced for certificate controller
I0206 05:16:02.420359       1 controller_utils.go:1036] Caches are synced for service account controller
I0206 05:16:02.420441       1 controller_utils.go:1036] Caches are synced for PV protection controller
I0206 05:16:02.420629       1 controller_utils.go:1036] Caches are synced for certificate controller
I0206 05:16:02.428953       1 controller_utils.go:1036] Caches are synced for HPA controller
I0206 05:16:02.463175       1 log.go:172] [INFO] signed certificate with serial number 496623600423804467655595300119073819435030352222
I0206 05:16:02.467168       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
I0206 05:16:02.469701       1 controller_utils.go:1036] Caches are synced for GC controller
W0206 05:16:02.472078       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="sre1" does not exist
I0206 05:16:02.494447       1 controller_utils.go:1036] Caches are synced for deployment controller
I0206 05:16:02.497479       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"42801cc9-6e88-4672-aaa6-c21432c490f5", APIVersion:"apps/v1", ResourceVersion:"175", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5c98db65d4 to 2
I0206 05:16:02.520342       1 controller_utils.go:1036] Caches are synced for ReplicaSet controller
I0206 05:16:02.523658       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5c98db65d4", UID:"ef463ec0-90c8-4a8c-904b-436871f364a6", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5c98db65d4-8rt7f
I0206 05:16:02.527923       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5c98db65d4", UID:"ef463ec0-90c8-4a8c-904b-436871f364a6", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5c98db65d4-t5rtt
I0206 05:16:02.569145       1 controller_utils.go:1036] Caches are synced for TTL controller
I0206 05:16:02.570460       1 controller_utils.go:1036] Caches are synced for daemon sets controller
I0206 05:16:02.571937       1 controller_utils.go:1036] Caches are synced for taint controller
I0206 05:16:02.572165       1 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0206 05:16:02.572380       1 node_lifecycle_controller.go:863] Missing timestamp for Node sre1. Assuming now as a timestamp.
I0206 05:16:02.572554       1 node_lifecycle_controller.go:1089] Controller detected that zone  is now in state Normal.
I0206 05:16:02.572723       1 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"8e4d85a7-f03f-43bc-a3d5-65fa4f34a4a4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node sre1 event: Registered Node sre1 in Controller
I0206 05:16:02.572768       1 taint_manager.go:182] Starting NoExecuteTaintManager
I0206 05:16:02.581742       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e59aeb9b-8a0a-4872-b3a4-f393863f5636", APIVersion:"apps/v1", ResourceVersion:"181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-6wtk6
E0206 05:16:02.596237       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"e59aeb9b-8a0a-4872-b3a4-f393863f5636", ResourceVersion:"181", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716562955, loc:(*time.Location)(0x731ab60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001db7700), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001e31840), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001db7720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001db7740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.15.6", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001db7780)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e4aaf0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016597e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e32780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000f788)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001659828)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0206 05:16:02.619382       1 controller_utils.go:1036] Caches are synced for disruption controller
I0206 05:16:02.619406       1 disruption.go:341] Sending events to api server.
I0206 05:16:02.969984       1 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
I0206 05:16:02.979029       1 controller_utils.go:1036] Caches are synced for job controller
I0206 05:16:03.071079       1 controller_utils.go:1036] Caches are synced for endpoint controller
I0206 05:16:03.128671       1 controller_utils.go:1036] Caches are synced for resource quota controller
I0206 05:16:03.129665       1 controller_utils.go:1036] Caches are synced for garbage collector controller
I0206 05:16:03.129677       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0206 05:16:03.142625       1 controller_utils.go:1036] Caches are synced for attach detach controller
I0206 05:16:03.158697       1 controller_utils.go:1036] Caches are synced for garbage collector controller
I0206 05:16:03.164990       1 controller_utils.go:1036] Caches are synced for stateful set controller
I0206 05:16:03.170227       1 controller_utils.go:1036] Caches are synced for expand controller
I0206 05:16:03.170687       1 controller_utils.go:1036] Caches are synced for PVC protection controller
I0206 05:16:03.170945       1 controller_utils.go:1036] Caches are synced for persistent volume controller
I0206 05:16:03.220841       1 controller_utils.go:1036] Caches are synced for resource quota controller

==> kube-proxy [b8af1416b363] <==
W0206 05:16:03.377808       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0206 05:16:03.393585       1 server_others.go:143] Using iptables Proxier.
W0206 05:16:03.393794       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0206 05:16:03.394487       1 server.go:534] Version: v1.15.6
I0206 05:16:03.433569       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0206 05:16:03.436202       1 config.go:187] Starting service config controller
I0206 05:16:03.436268       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0206 05:16:03.436730       1 config.go:96] Starting endpoints config controller
I0206 05:16:03.436826       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0206 05:16:03.536944       1 controller_utils.go:1036] Caches are synced for service config controller
I0206 05:16:03.537456       1 controller_utils.go:1036] Caches are synced for endpoints config controller

==> kube-scheduler [fa12ad0b23ee] <==
I0206 05:15:48.307024       1 serving.go:319] Generated self-signed cert in-memory
W0206 05:15:48.733181       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0206 05:15:48.733196       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0206 05:15:48.733238       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0206 05:15:48.741579       1 server.go:142] Version: v1.15.6
W0206 05:15:48.742971       1 authorization.go:47] Authorization is disabled
W0206 05:15:48.742982       1 authentication.go:55] Authentication is disabled
I0206 05:15:48.742996       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0206 05:15:48.744659       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0206 05:15:51.949166       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0206 05:15:51.949423       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0206 05:15:51.949650       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0206 05:15:51.961772       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0206 05:15:51.961997       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0206 05:15:51.962046       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0206 05:15:51.962117       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0206 05:15:51.962210       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0206 05:15:51.962238       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0206 05:15:51.962300       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0206 05:15:52.952442       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0206 05:15:52.955283       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0206 05:15:52.959367       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0206 05:15:52.964945       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0206 05:15:52.965810       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0206 05:15:52.969131       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0206 05:15:52.969194       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0206 05:15:52.972817       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0206 05:15:52.975791       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0206 05:15:52.975998       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0206 05:15:54.847370       1 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
I0206 05:15:54.861508       1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0206 05:15:56.146248       1 factory.go:702] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Tue 2019-12-31 07:45:47 AWST, end at Thu 2020-02-06 13:22:05 AWST. --
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.201149   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.301428   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.401599   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.501774   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.601942   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.702057   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.802232   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:50 sre1 kubelet[19945]: E0206 13:15:50.902408   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.002573   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.102768   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.202941   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.303086   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.403298   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.503474   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.603607   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.703794   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.803942   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.907029   19945 kubelet.go:2252] node "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: E0206 13:15:51.957382   19945 controller.go:204] failed to get node "sre1" when trying to set owner ref to the node lease: nodes "sre1" not found
Feb 06 13:15:51 sre1 kubelet[19945]: I0206 13:15:51.989664   19945 kubelet_node_status.go:75] Successfully registered node sre1
Feb 06 13:15:51 sre1 kubelet[19945]: I0206 13:15:51.997640   19945 reconciler.go:150] Reconciler: start to sync state
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.087491   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8557a5c21d1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf8706209c1c8dd1, ext:20602842863, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf8706209c1c8dd1, ext:20602842863, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.142967   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558234c6b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node sre1 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f532b4, ext:20734481298, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f532b4, ext:20734481298, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.198808   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558234eaef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node sre1 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f556ef, ext:20734490577, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f556ef, ext:20734490577, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.254436   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b85582347cd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node sre1 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f4e8d4, ext:20734462395, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f4e8d4, ext:20734462395, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.315187   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b85582347cd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node sre1 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f4e8d4, ext:20734462395, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a4e4f0b1, ext:20750193037, loc:(*time.Location)(0x7632720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.375944   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558234c6b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node sre1 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f532b4, ext:20734481298, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a4e510a2, ext:20750201214, loc:(*time.Location)(0x7632720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.436308   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558234eaef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node sre1 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f556ef, ext:20734490577, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a4e51c5b, ext:20750204216, loc:(*time.Location)(0x7632720)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.498263   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558331c9de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a4f235de, ext:20751062720, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a4f235de, ext:20751062720, loc:(*time.Location)(0x7632720)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513142   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/b88b90cf-8284-41f0-b90d-7787e5159569/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513229   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/bb3fc426-4a4d-4742-9b86-c4a5e92db422/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513285   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/e1868d88-5b95-448a-b679-0a23729ea01f/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513338   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/0c61696c-89ae-4475-85ca-5e3ee11075dc/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513412   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/76825556-a251-4f95-a1d1-df3e06cfac3c/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513484   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/8a05a447-ddfb-479b-a806-d20530f47069/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513564   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ac4e8cbb-237b-4509-b177-4e58b0070494/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513642   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/cf4e6336a657e16190a3f9de4c87e3c0/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513722   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/f7d3bd9bbbbdd48d97a3437e231fff24/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513798   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/24f131a8-7999-4394-9881-969abebf5b2d/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513869   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/358ddef8-b1e7-4f72-93e8-68983c89d6c8/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.513952   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/45045590-3500-401b-a042-ac661ed38461/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514033   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7a3b44ee-27a5-4d61-9c8c-f5f8e5c551a8/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514108   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/afd1fb08-c704-463e-8ade-a43e2993c99b/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514185   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/bc9820c9-eb29-480e-8c20-0a76c96ffaec/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514253   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/98c09d8c-c391-43d3-a9ad-dce6ed5a7d78/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514331   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ac50f8f26aaf79982c3509eadf2c30bd/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514407   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/baa3e6e1-92fb-4ece-b05d-a903d4c4c543/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514487   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/38a164f7-424d-4237-b08c-b78a9ea53741/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514571   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/ba1c8b7f-cc5e-4d71-9b4c-bb627339765b/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514647   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/4ef1311f-9fbf-413b-bdf8-543324814a30/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: W0206 13:15:52.514727   19945 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/8d93a44a-89ad-4b35-bfae-a96ca0129a8b/volumes" does not exist
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.571516   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b85582347cd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node sre1 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f4e8d4, ext:20734462395, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620b298ae6b, ext:20980076437, loc:(*time.Location)(0x7632720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:52 sre1 kubelet[19945]: E0206 13:15:52.945934   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558234c6b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node sre1 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f532b4, ext:20734481298, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620b2991c1e, ext:20980104517, loc:(*time.Location)(0x7632720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:53 sre1 kubelet[19945]: E0206 13:15:53.347545   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b8558234eaef", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node sre1 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f556ef, ext:20734490577, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620b2994e69, ext:20980117384, loc:(*time.Location)(0x7632720)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:15:53 sre1 kubelet[19945]: E0206 13:15:53.744153   19945 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"sre1.15f0b85582347cd4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"sre1", UID:"sre1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node sre1 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"sre1"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf870620a3f4e8d4, ext:20734462395, loc:(*time.Location)(0x7632720)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf870620b468817e, ext:21010473593, loc:(*time.Location)(0x7632720)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 06 13:16:02 sre1 kubelet[19945]: I0206 13:16:02.629287   19945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9lwgq" (UniqueName: "kubernetes.io/secret/acdbf016-f284-4dbd-911a-4add47ec44ae-kube-proxy-token-9lwgq") pod "kube-proxy-6wtk6" (UID: "acdbf016-f284-4dbd-911a-4add47ec44ae")
Feb 06 13:16:02 sre1 kubelet[19945]: I0206 13:16:02.629453   19945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/acdbf016-f284-4dbd-911a-4add47ec44ae-xtables-lock") pod "kube-proxy-6wtk6" (UID: "acdbf016-f284-4dbd-911a-4add47ec44ae")
Feb 06 13:16:02 sre1 kubelet[19945]: I0206 13:16:02.629512   19945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/acdbf016-f284-4dbd-911a-4add47ec44ae-kube-proxy") pod "kube-proxy-6wtk6" (UID: "acdbf016-f284-4dbd-911a-4add47ec44ae")
Feb 06 13:16:02 sre1 kubelet[19945]: I0206 13:16:02.629603   19945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/acdbf016-f284-4dbd-911a-4add47ec44ae-lib-modules") pod "kube-proxy-6wtk6" (UID: "acdbf016-f284-4dbd-911a-4add47ec44ae")
Feb 06 13:16:05 sre1 kubelet[19945]: I0206 13:16:05.942697   19945 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials

@jim-barber-he (Author) commented:

This seems to be resolved as of Minikube version 1.7.2.
Because this issue hadn't been updated I assumed it was still broken, but on a whim I tried 1.7.3 and it worked, so I went back to 1.7.2 and confirmed that it works as well.
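
For anyone wanting to double-check on their own machine, a rough verification sketch (assuming the same none-driver setup described above and a kubeconfig pointed at the Minikube cluster) is to re-run the original start command on 1.7.2 or later and then confirm that the not-ready taint clears and that coredns gets scheduled:

# List each node with its taints; the taints column should be empty once the node is Ready
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

# The coredns pods (labelled k8s-app=kube-dns by kubeadm) should reach Running instead of sitting in Pending
$ kubectl -n kube-system get pods -l k8s-app=kube-dns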
