minikube multi-node: pod-to-pod network is not connected between nodes #9921

Closed
LY1806620741 opened this issue Dec 10, 2020 · 11 comments
Labels
kind/support: Categorizes issue or PR as a support question.
l/zh-CN: Issues in or relating to Chinese
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
long-term-support: Long-term support issues that can't be fixed in code

Comments

@LY1806620741

LY1806620741 commented Dec 10, 2020

The exact command to reproduce the issue

helm install hue gethue/hue  # deploy any service

The full output of the command that failed


The command itself did not fail, but hue-postgres-kdfvf and hue-j6vbx run on two different nodes and cannot reach each other's TCP ports.
Error reported by hue:

OperationalError: could not connect to server: Connection refused
	Is the server running on host "hue-postgres" (10.98.13.3) and accepting
	TCP/IP connections on port 5432?
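A minimal way to check whether the Service side is wired up at all (the Service name hue-postgres and port 5432 come from the error above; the commands themselves are standard kubectl queries, not output from this cluster):

kubectl get svc hue-postgres
kubectl get endpoints hue-postgres       # should list the postgres pod IP and port 5432
kubectl -n kube-system get pods -o wide  # confirm kube-proxy and the CNI pods are Running on both nodes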

Pod information:

[vagrant@control-plane ~]$ kubectl get po -o wide
NAME                 READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
hue-j6vbx            1/1     Running   0          5m2s   172.17.0.5   test       <none>           <none>
hue-postgres-qvj5z   1/1     Running   0          5m2s   172.17.0.2   test-m02   <none>           <none>
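To separate a Service problem from a cross-node pod network problem, one could probe the pod IP on the other node directly (a rough sketch only; it assumes the hue image ships a shell and nc, which may not be the case):

# from the hue pod on node "test" to the postgres pod IP on node "test-m02"
kubectl exec -it hue-j6vbx -- nc -vz -w 3 172.17.0.2 5432

# same test against the Service name that hue actually uses
kubectl exec -it hue-j6vbx -- nc -vz -w 3 hue-postgres 5432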

The output of the minikube logs command

* ==> Docker <==
* -- Logs begin at Thu 2020-12-10 08:46:43 UTC, end at Thu 2020-12-10 11:59:39 UTC. --
* Dec 10 08:51:44 test dockerd[165]: time="2020-12-10T08:51:44.108467189Z" level=info msg="Container 1f387d7f8b093e017d1e6ff26a3162911813e623d4f4f5f35d17ef2ea0fa3fd8 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:51:57 test dockerd[165]: time="2020-12-10T08:51:57.229247377Z" level=info msg="Container 1f387d7f8b09 failed to exit within 10 seconds of kill - trying direct SIGKILL"
* Dec 10 08:51:59 test dockerd[165]: time="2020-12-10T08:51:59.587921340Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:52:00 test dockerd[165]: time="2020-12-10T08:52:00.810301662Z" level=info msg="Container f30cb9ceb7c14d42f8ffdc81f6c334cd9702dc73900b32d15e6cdd9146294f82 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:52:01 test dockerd[165]: time="2020-12-10T08:52:01.789988389Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:53:43 test dockerd[165]: time="2020-12-10T08:53:43.235347083Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:54:28 test dockerd[165]: time="2020-12-10T08:54:28.567694745Z" level=info msg="Container eb8786bd072822c83ce8fcf01a83a0afaa9809215c4181cbc9e1d566d7150a95 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:54:28 test dockerd[165]: time="2020-12-10T08:54:28.598900348Z" level=info msg="Container e4595d58efbc6a7d69d3128c64ea5bba940e0c50adcc79bfa39c9d0e3a5c60b5 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:54:34 test dockerd[165]: time="2020-12-10T08:54:34.868739722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:54:34 test dockerd[165]: time="2020-12-10T08:54:34.872249759Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:55:43 test dockerd[165]: time="2020-12-10T08:55:43.526785013Z" level=info msg="Container cfcbb76d493265ae9f01e67dedda591a2253184233e52cbda0ee99de3945954f failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:55:44 test dockerd[165]: time="2020-12-10T08:55:44.345504186Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:31 test dockerd[165]: time="2020-12-10T08:58:31.569501808Z" level=info msg="Container 83c13b6bb4dab4bfcd6b30c512b86bd3670eeeb01d33e39b8bfce980d5e7150a failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:58:36 test dockerd[165]: time="2020-12-10T08:58:36.936258295Z" level=error msg="stream copy error: reading from a closed fifo"
* Dec 10 08:58:36 test dockerd[165]: time="2020-12-10T08:58:36.936375102Z" level=error msg="stream copy error: reading from a closed fifo"
* Dec 10 08:58:37 test dockerd[165]: time="2020-12-10T08:58:37.099326343Z" level=error msg="b452324593a9dc51c49e6fc9195f41ed46240cb43d7ee9b04507db3098731e47 cleanup: failed to delete container from containerd: no such container"
* Dec 10 08:58:37 test dockerd[165]: time="2020-12-10T08:58:37.099367115Z" level=error msg="Handler for POST /v1.40/containers/b452324593a9dc51c49e6fc9195f41ed46240cb43d7ee9b04507db3098731e47/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/kubelet/pods/482fd02a-8fef-45c5-bdd6-78502256e60d/volumes/kubernetes.io~empty-dir/dfs\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/4f90ecf644fb7b0f5891f6accc69384a3fc890f1e8b828932bd14c600df461b8/merged\\\\\\\" at \\\\\\\"/dfs\\\\\\\" caused \\\\\\\"stat /var/lib/kubelet/pods/482fd02a-8fef-45c5-bdd6-78502256e60d/volumes/kubernetes.io~empty-dir/dfs: no such file or directory\\\\\\\"\\\"\": unknown"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.141329509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.157001671Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.227006579Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:38 test dockerd[165]: time="2020-12-10T08:58:38.829719753Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:39 test dockerd[165]: time="2020-12-10T08:58:39.326882930Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:45 test dockerd[165]: time="2020-12-10T08:58:45.045050561Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:58:45 test dockerd[165]: time="2020-12-10T08:58:45.148109841Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.492483069Z" level=info msg="Container 75b23ffaa68f4e6b6ae2cb44638f240762f486c8404a461d1ccc4d818cde3c50 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.499012653Z" level=info msg="Container 98ffbbe44120117377bd17c704ebd2f697a101c4a4cd2e39f655aaa1f91a87bd failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.518419629Z" level=info msg="Container c58436d0440da6940af8a802817b59d82e0d11ad0b0403402da428c7fba718b9 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.608732860Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.613139713Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:06 test dockerd[165]: time="2020-12-10T08:59:06.634497769Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.266542850Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.295927636Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.345220550Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.510018113Z" level=info msg="Container e42a4d295ad2396ae303ec82e7f6ce9a55a55ecfd44308cafc321efab13b0533 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.638278831Z" level=info msg="Container 1317e3999b83b00dbdfac2c31a0ba137e634e3f1c3e4e4655308efd09bb400eb failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.944736544Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:07 test dockerd[165]: time="2020-12-10T08:59:07.947834312Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 08:59:08 test dockerd[165]: time="2020-12-10T08:59:08.591941154Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:23:24 test dockerd[165]: time="2020-12-10T09:23:24.751524766Z" level=info msg="Container 83a30c7e9e0700d28fe4c0617b8ce5f029fea535cdf9185461194c102ccb6d51 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:23:26 test dockerd[165]: time="2020-12-10T09:23:26.315887562Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:23:36 test dockerd[165]: time="2020-12-10T09:23:36.316217145Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:24:34 test dockerd[165]: time="2020-12-10T09:24:34.732616056Z" level=info msg="Container 6923aaa794bc0383b4b5450a6d63b481f95cc769fe161d4913466fd0537e06a7 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:24:35 test dockerd[165]: time="2020-12-10T09:24:35.104108563Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:24:54 test dockerd[165]: time="2020-12-10T09:24:54.328544327Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:25:44 test dockerd[165]: time="2020-12-10T09:25:44.921543368Z" level=info msg="Container 684414a16ceba8e2802ad55d5c04e4851317fbc584f3b4a75926bfd8c7048af6 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:25:45 test dockerd[165]: time="2020-12-10T09:25:45.300098103Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:26:20 test dockerd[165]: time="2020-12-10T09:26:20.219958765Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:26:54 test dockerd[165]: time="2020-12-10T09:26:54.702303947Z" level=info msg="Container c6dd4817f57d20f4919c25e40dd5a6adde748c82a9a079e23a926781a461db48 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:26:54 test dockerd[165]: time="2020-12-10T09:26:54.877890402Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:28:01 test dockerd[165]: time="2020-12-10T09:28:01.388206278Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:28:04 test dockerd[165]: time="2020-12-10T09:28:04.950233188Z" level=info msg="Container 1ba8eb50a63ba971f25263961d162ca605e4748f5ee3ab51874fd6a85599a93b failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:28:05 test dockerd[165]: time="2020-12-10T09:28:05.527262777Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:29:15 test dockerd[165]: time="2020-12-10T09:29:15.107722623Z" level=info msg="Container 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:29:15 test dockerd[165]: time="2020-12-10T09:29:15.700774626Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:05 test dockerd[165]: time="2020-12-10T09:30:05.942677904Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:21 test dockerd[165]: time="2020-12-10T09:30:21.769523703Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:21 test dockerd[165]: time="2020-12-10T09:30:21.769553030Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:49 test dockerd[165]: time="2020-12-10T09:30:49.996451006Z" level=info msg="Container a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6 failed to exit within 30 seconds of signal 15 - using the force"
* Dec 10 09:30:50 test dockerd[165]: time="2020-12-10T09:30:50.580285053Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 10 09:30:50 test dockerd[165]: time="2020-12-10T09:30:50.674003483Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* 
* ==> container status <==
* CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID
* d11c49fc234ec       4024184338620                                                                              2 hours ago         Running             hue                         0                   a23a0e9a5d818
* d6a179f3b8e53       bad58561c4be7                                                                              3 hours ago         Running             storage-provisioner         25                  b3bd1560f103d
* d438a591e9f6f       bad58561c4be7                                                                              3 hours ago         Exited              storage-provisioner         24                  b3bd1560f103d
* 8f8c7c3378785       503bc4b7440b9                                                                              3 hours ago         Running             kubernetes-dashboard        12                  a3169f5dbdca8
* 51400c84328ff       2186a1a396deb                                                                              3 hours ago         Running             kindnet-cni                 1                   17b32cddc4e63
* 4c6b38e30704b       bfe3a36ebd252                                                                              3 hours ago         Running             coredns                     5                   f904068d0a345
* 559ec4cdc5d7c       86262685d9abb                                                                              3 hours ago         Running             dashboard-metrics-scraper   5                   6c45b974068d5
* 1a408ee50d62c       503bc4b7440b9                                                                              3 hours ago         Exited              kubernetes-dashboard        11                  a3169f5dbdca8
* 147855e4d4e4e       635b36f4d89f0                                                                              3 hours ago         Running             kube-proxy                  1                   af88e7da2d65d
* eb97bca9e9de6       b15c6247777d7                                                                              3 hours ago         Running             kube-apiserver              4                   238b73f1f0d94
* cbed667768826       0369cf4303ffd                                                                              3 hours ago         Running             etcd                        3                   1223c475e6a0d
* ef1e373ce7683       4830ab6185860                                                                              3 hours ago         Running             kube-controller-manager     0                   10dc9502f09dd
* f1b156662850f       14cd22f7abe78                                                                              3 hours ago         Running             kube-scheduler              3                   9bed364abff80
* 2986147d45c59       kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98   4 hours ago         Exited              kindnet-cni                 0                   98a22854feb42
* fc879734428d5       bfe3a36ebd252                                                                              2 days ago          Exited              coredns                     4                   cff22597be120
* b2c42bfd2c462       86262685d9abb                                                                              2 days ago          Exited              dashboard-metrics-scraper   4                   2ee8f04be2f42
* 9618e44b528d2       b15c6247777d7                                                                              3 days ago          Exited              kube-apiserver              3                   8182bffe3ee2f
* c0580b146a273       0369cf4303ffd                                                                              3 days ago          Exited              etcd                        2                   cc51009e65321
* 476a76f1583b3       14cd22f7abe78                                                                              3 days ago          Exited              kube-scheduler              2                   9524824b6b951
* 1c2a69f238695       635b36f4d89f0                                                                              9 days ago          Exited              kube-proxy                  0                   a0d3f90edb01a
* 
* ==> coredns [4c6b38e30704] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* 
* ==> coredns [fc879734428d] <==
* I1210 08:27:53.921823       1 trace.go:116] Trace[1415033323]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-10 08:27:30.131294738 +0000 UTC m=+194743.927898894) (total time: 23.723342794s):
* Trace[1415033323]: [23.408751256s] [23.408751256s] Objects listed
* I1210 08:30:17.444978       1 trace.go:116] Trace[485945017]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-10 08:27:56.972150944 +0000 UTC m=+194770.768755076) (total time: 2m16.738043209s):
* Trace[485945017]: [2m15.749642532s] [2m15.749642532s] Objects listed
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* 
* ==> describe nodes <==
* Name:               test
* Roles:              master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=test
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=3e098ff146b8502f597849dfda420a2fa4fa43f0
*                     minikube.k8s.io/name=test
*                     minikube.k8s.io/updated_at=2020_12_01T09_48_21_0700
*                     minikube.k8s.io/version=v1.15.0
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Tue, 01 Dec 2020 09:48:17 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  test
*   AcquireTime:     <unset>
*   RenewTime:       Thu, 10 Dec 2020 11:59:30 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Thu, 10 Dec 2020 11:58:19 +0000   Thu, 10 Dec 2020 08:54:17 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.2
*   Hostname:    test
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 4118f6bc99d24394b4ba31544b6db6ce
*   System UUID:                66f20968-7100-4cb7-812c-92ad564ae316
*   Boot ID:                    f70f46b3-a8c8-47b5-a6b9-6f283fa1aeab
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.13
*   Kubelet Version:            v1.19.4
*   Kube-Proxy Version:         v1.19.4
* PodCIDR:                      10.244.0.0/24
* PodCIDRs:                     10.244.0.0/24
* Non-terminated Pods:          (11 in total)
*   Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
*   default                     hue-nhx74                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         149m
*   kube-system                 coredns-f9fd979d6-h6tvx                      100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     9d
*   kube-system                 etcd-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 kindnet-wzqft                                100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      3h48m
*   kube-system                 kube-apiserver-test                          250m (6%)     0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 kube-controller-manager-test                 200m (5%)     0 (0%)      0 (0%)           0 (0%)         3h12m
*   kube-system                 kube-proxy-wlzsk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 kube-scheduler-test                          100m (2%)     0 (0%)      0 (0%)           0 (0%)         9d
*   kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-g4xqz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
*   kubernetes-dashboard        kubernetes-dashboard-584f46694c-7gdhx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9d
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                750m (18%)  100m (2%)
*   memory             120Mi (3%)  220Mi (5%)
*   ephemeral-storage  0 (0%)      0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:              <none>
* 
* 
* Name:               test-m02
* Roles:              <none>
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=test-m02
*                     kubernetes.io/os=linux
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Thu, 10 Dec 2020 09:20:27 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  test-m02
*   AcquireTime:     <unset>
*   RenewTime:       Thu, 10 Dec 2020 11:59:30 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Thu, 10 Dec 2020 11:59:29 +0000   Thu, 10 Dec 2020 09:20:28 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.3
*   Hostname:    test-m02
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 68baf903f57f49438e4668481465f6d5
*   System UUID:                3b1e0fc2-eb9e-4c19-8ad0-baded9c8f84a
*   Boot ID:                    f70f46b3-a8c8-47b5-a6b9-6f283fa1aeab
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.13
*   Kubelet Version:            v1.19.4
*   Kube-Proxy Version:         v1.19.4
* PodCIDR:                      10.244.4.0/24
* PodCIDRs:                     10.244.4.0/24
* Non-terminated Pods:          (3 in total)
*   Namespace                   Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                  ------------  ----------  ---------------  -------------  ---
*   default                     hue-postgres-kdfvf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         149m
*   kube-system                 kindnet-xvbnz         100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      159m
*   kube-system                 kube-proxy-rs74k      0 (0%)        0 (0%)      0 (0%)           0 (0%)         159m
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests   Limits
*   --------           --------   ------
*   cpu                100m (2%)  100m (2%)
*   memory             50Mi (1%)  50Mi (1%)
*   ephemeral-storage  0 (0%)     0 (0%)
*   hugepages-2Mi      0 (0%)     0 (0%)
* Events:              <none>
* 
* ==> dmesg <==
* [Dec10 08:44] NOTE: The elevator= kernel parameter is deprecated.
* [  +0.000000]  #2
* [  +0.001999]  #3
* [  +0.032939] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [  +0.857170] e1000: E1000 MODULE IS NOT SUPPORTED
* [  +1.316793] systemd: 18 output lines suppressed due to ratelimiting
* [  +2.704163] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* [  +0.554386] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* [  +0.357728] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* [Dec10 08:49] hrtimer: interrupt took 5407601 ns
* 
* ==> etcd [c0580b146a27] <==
* 2020-12-10 08:42:31.284211 I | embed: rejected connection from "127.0.0.1:57122" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284245 I | embed: rejected connection from "127.0.0.1:57124" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284254 I | embed: rejected connection from "127.0.0.1:57126" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284265 I | embed: rejected connection from "127.0.0.1:57128" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284272 I | embed: rejected connection from "127.0.0.1:56862" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284596 I | embed: rejected connection from "127.0.0.1:56864" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284610 I | embed: rejected connection from "127.0.0.1:56866" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284620 I | embed: rejected connection from "127.0.0.1:56940" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284633 I | embed: rejected connection from "127.0.0.1:56872" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284643 I | embed: rejected connection from "127.0.0.1:57214" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284657 I | embed: rejected connection from "127.0.0.1:57020" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284731 I | embed: rejected connection from "127.0.0.1:57194" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284745 I | embed: rejected connection from "127.0.0.1:57022" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284753 I | embed: rejected connection from "127.0.0.1:57196" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284760 I | embed: rejected connection from "127.0.0.1:57024" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284766 I | embed: rejected connection from "127.0.0.1:57026" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284773 I | embed: rejected connection from "127.0.0.1:57028" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284780 I | embed: rejected connection from "127.0.0.1:57198" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284787 I | embed: rejected connection from "127.0.0.1:57030" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284793 I | embed: rejected connection from "127.0.0.1:57032" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284800 I | embed: rejected connection from "127.0.0.1:56900" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284808 I | embed: rejected connection from "127.0.0.1:57202" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284815 I | embed: rejected connection from "127.0.0.1:56902" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284821 I | embed: rejected connection from "127.0.0.1:56904" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284828 I | embed: rejected connection from "127.0.0.1:56916" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284834 I | embed: rejected connection from "127.0.0.1:56982" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284842 I | embed: rejected connection from "127.0.0.1:56970" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284849 I | embed: rejected connection from "127.0.0.1:57168" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284857 I | embed: rejected connection from "127.0.0.1:56918" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284863 I | embed: rejected connection from "127.0.0.1:56984" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284870 I | embed: rejected connection from "127.0.0.1:57186" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284877 I | embed: rejected connection from "127.0.0.1:56844" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284887 I | embed: rejected connection from "127.0.0.1:56976" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284894 I | embed: rejected connection from "127.0.0.1:56986" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284901 I | embed: rejected connection from "127.0.0.1:56856" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284909 I | embed: rejected connection from "127.0.0.1:56988" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284915 I | embed: rejected connection from "127.0.0.1:57110" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284923 I | embed: rejected connection from "127.0.0.1:56990" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284932 I | embed: rejected connection from "127.0.0.1:57208" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.284940 I | embed: rejected connection from "127.0.0.1:57222" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.405425 I | embed: rejected connection from "127.0.0.1:57216" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.416021 I | embed: rejected connection from "127.0.0.1:57002" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.493820 I | embed: rejected connection from "127.0.0.1:57130" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.502079 I | embed: rejected connection from "127.0.0.1:57006" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.504159 I | embed: rejected connection from "127.0.0.1:57132" (error "EOF", ServerName "")
* 2020-12-10 08:42:31.507232 I | embed: rejected connection from "127.0.0.1:57010" (error "EOF", ServerName "")
* 2020-12-10 08:42:32.842748 I | embed: rejected connection from "127.0.0.1:57228" (error "EOF", ServerName "")
* 2020-12-10 08:42:32.993044 I | embed: rejected connection from "127.0.0.1:56922" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.035938 I | embed: rejected connection from "127.0.0.1:56924" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.201211 I | embed: rejected connection from "127.0.0.1:57084" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.455334 I | embed: rejected connection from "127.0.0.1:56896" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.455392 I | embed: rejected connection from "127.0.0.1:57200" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.455406 I | embed: rejected connection from "127.0.0.1:56860" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.492780 I | embed: rejected connection from "127.0.0.1:57204" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.495771 I | embed: rejected connection from "127.0.0.1:56848" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.515463 I | embed: rejected connection from "127.0.0.1:57206" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.549769 I | embed: rejected connection from "127.0.0.1:57218" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.586989 I | embed: rejected connection from "127.0.0.1:57224" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.587551 I | embed: rejected connection from "127.0.0.1:57004" (error "EOF", ServerName "")
* 2020-12-10 08:42:33.929736 I | embed: rejected connection from "127.0.0.1:56932" (error "EOF", ServerName "")
* 
* ==> etcd [cbed66776882] <==
* 2020-12-10 11:50:39.931889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:50:49.932555 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:50:59.932280 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:09.933438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:19.933821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:29.932805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:39.932365 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:49.931971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:51:59.932810 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:09.934698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:10.060926 I | mvcc: store.index: compact 562998
* 2020-12-10 11:52:10.062601 I | mvcc: finished scheduled compaction at 562998 (took 1.195036ms)
* 2020-12-10 11:52:19.935104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:30.298813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:39.931981 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:49.932150 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:52:59.932139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:09.934680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:19.933500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:29.934514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:39.932742 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:49.932197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:53:59.933573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:09.934588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:19.932112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:29.934208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:54:39.936795 I | etcdserver/api/etcdhttp: /health OK (status code 200)
E1210 11:59:40.542893  128565 out.go:286] unable to execute * 2020-12-10 11:54:47.349044 W | etcdserver: request "header:<ID:8128001495567104299 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:563365 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1015 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:18" took too long (387.524803ms) to execute
: html/template:* 2020-12-10 11:54:47.349044 W | etcdserver: request "header:<ID:8128001495567104299 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:563365 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1015 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:18" took too long (387.524803ms) to execute
: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
* 2020-12-10 11:54:47.349044 W | etcdserver: request "header:<ID:8128001495567104299 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:563365 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1015 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:18" took too long (387.524803ms) to execute
* 2020-12-10 11:54:49.932027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:00.010020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:09.935178 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:19.932826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:29.932779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:39.931853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:49.934104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:55:59.932525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:09.933648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:19.935441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:29.935586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:39.933743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:49.932109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:56:59.933094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:09.934326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:10.070044 I | mvcc: store.index: compact 563239
* 2020-12-10 11:57:10.070910 I | mvcc: finished scheduled compaction at 563239 (took 366.904µs)
* 2020-12-10 11:57:19.933753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:29.943536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:39.934743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:49.934588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:57:59.932111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:09.932432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:20.196705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:29.932965 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:39.932457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:49.932715 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:58:59.933524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:09.933220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:19.934173 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:29.934052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-10 11:59:39.933375 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 
* ==> kernel <==
*  11:59:40 up  3:15,  0 users,  load average: 0.70, 0.78, 0.65
* Linux test 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04.1 LTS"
* 
* ==> kube-apiserver [9618e44b528d] <==
* W1210 08:43:11.450789       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.450891       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.450934       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.450970       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451007       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451042       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451074       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451156       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451189       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451228       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451260       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451300       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.451331       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:11.801968       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:12.041959       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:12.042027       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:12.689571       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.234201       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.234268       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.234304       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:13.236318       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:15.912800       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:16.357199       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808145       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808634       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808711       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.808795       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.973090       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.973157       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:17.973241       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:18.956067       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:18.956126       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:18.956287       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.273213       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.406156       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.406497       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* W1210 08:43:19.407175       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* [... 21 further near-identical clientconn.go:1223 warnings ("transport: authentication handshake failed: context deadline exceeded / context canceled") between 08:43:19 and 08:43:24 elided ...]
* W1210 08:43:24.636961       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: context deadline exceeded". Reconnecting...
* 
* ==> kube-apiserver [eb97bca9e9de] <==
* I1210 11:49:49.737587       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:50:25.647996       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:50:25.648026       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:50:25.648031       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:51:03.544584       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:51:03.544689       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:51:03.544713       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:51:12.995874       1 trace.go:205] Trace[756723386]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:500,continue: (10-Dec-2020 11:51:12.494) (total time: 500ms):
* Trace[756723386]: [500.885483ms] [500.885483ms] END
* I1210 11:51:12.995963       1 trace.go:205] Trace[122590512]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.19.4 (linux/amd64) kubernetes/d360454/system:serviceaccount:kube-system:cronjob-controller,client:192.168.49.2 (10-Dec-2020 11:51:12.494) (total time: 500ms):
* Trace[122590512]: ---"Listing from storage done" 500ms (11:51:00.995)
* Trace[122590512]: [500.992914ms] [500.992914ms] END
* I1210 11:51:38.596450       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:51:38.596481       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:51:38.596488       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:52:19.802640       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:52:19.802842       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:52:19.802868       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:53:00.154578       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:53:00.154608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:53:00.154614       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:53:43.793049       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:53:43.793394       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:53:43.793449       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:54:14.328104       1 trace.go:205] Trace[1970391022]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2 (10-Dec-2020 11:54:13.827) (total time: 500ms):
* Trace[1970391022]: ---"About to write a response" 500ms (11:54:00.328)
* Trace[1970391022]: [500.936997ms] [500.936997ms] END
* I1210 11:54:16.438348       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:54:16.438932       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:54:16.438967       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:54:51.530963       1 trace.go:205] Trace[86500591]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Dec-2020 11:54:51.027) (total time: 503ms):
* Trace[86500591]: ---"Transaction prepared" 501ms (11:54:00.529)
* Trace[86500591]: [503.653583ms] [503.653583ms] END
* I1210 11:54:55.473177       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:54:55.473206       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:54:55.473214       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:55:28.961420       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:55:28.961443       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:55:28.961449       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:56:01.960202       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:56:01.960454       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:56:01.960482       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:56:39.860902       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:56:39.860929       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:56:39.860936       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:57:20.086398       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:57:20.086475       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:57:20.086494       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:57:51.939546       1 trace.go:205] Trace[1202313898]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (10-Dec-2020 11:57:51.434) (total time: 505ms):
* Trace[1202313898]: ---"Transaction prepared" 503ms (11:57:00.938)
* Trace[1202313898]: [505.372815ms] [505.372815ms] END
* I1210 11:58:02.740440       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:58:02.740487       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:58:02.740518       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:58:41.077505       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:58:41.077539       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:58:41.077546       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1210 11:59:14.827406       1 client.go:360] parsed scheme: "passthrough"
* I1210 11:59:14.827569       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1210 11:59:14.827597       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* 
* ==> kube-controller-manager [ef1e373ce768] <==
* I1210 08:54:50.286393       1 event.go:291] "Event occurred" object="default/hive-hdfs-namenode-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-hdfs-namenode-0"
* I1210 08:54:50.286446       1 event.go:291] "Event occurred" object="default/hive-metastore-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-metastore-0"
* I1210 08:54:50.286457       1 event.go:291] "Event occurred" object="default/hive-postgresql-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-postgresql-0"
* I1210 08:54:50.286468       1 event.go:291] "Event occurred" object="default/hive-server-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-server-0"
* I1210 08:54:50.286477       1 event.go:291] "Event occurred" object="default/hue-postgres-4q92t" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-postgres-4q92t"
* I1210 08:54:50.286484       1 event.go:291] "Event occurred" object="default/hive-hdfs-datanode-0" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-hdfs-datanode-0"
* I1210 08:54:50.286506       1 event.go:291] "Event occurred" object="default/hive-hdfs-httpfs-6cd6bc65d9-q75qj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hive-hdfs-httpfs-6cd6bc65d9-q75qj"
* I1210 08:54:50.286515       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6-h6tvx" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-f9fd979d6-h6tvx"
* I1210 08:58:35.562656       1 stateful_set.go:419] StatefulSet has been deleted default/hive-postgresql
* I1210 08:58:35.562700       1 stateful_set.go:419] StatefulSet has been deleted default/hive-server
* I1210 08:58:35.562879       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-namenode
* I1210 08:58:35.562957       1 stateful_set.go:419] StatefulSet has been deleted default/hive-metastore
* I1210 08:58:35.563112       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-datanode
* I1210 08:59:06.632977       1 event.go:291] "Event occurred" object="test-m04" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node test-m04 event: Removing Node test-m04 from Controller"
* I1210 08:59:41.971556       1 event.go:291] "Event occurred" object="test-m03" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node test-m03 event: Removing Node test-m03 from Controller"
* I1210 08:59:49.827238       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-7qqth
* I1210 08:59:49.839658       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-7qqth succeeded
* I1210 08:59:49.839679       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-tklxj
* E1210 08:59:49.849720       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{...} [full DaemonSet object dump elided]: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* I1210 08:59:49.850239       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-tklxj succeeded
* I1210 09:00:29.861181       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-mp89t
* I1210 09:00:29.868925       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-mp89t succeeded
* I1210 09:00:29.868938       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-wz6qv
* I1210 09:00:29.874833       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-wz6qv succeeded
* I1210 09:17:17.859158       1 event.go:291] "Event occurred" object="test-m02" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node test-m02 event: Removing Node test-m02 from Controller"
* I1210 09:18:10.676370       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kube-proxy-7wzw9
* I1210 09:18:10.688617       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kube-proxy-7wzw9 succeeded
* I1210 09:18:10.688632       1 gc_controller.go:78] PodGC is force deleting Pod: kube-system/kindnet-d4z4f
* I1210 09:18:10.695340       1 gc_controller.go:189] Forced deletion of orphaned Pod kube-system/kindnet-d4z4f succeeded
* W1210 09:20:28.133411       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-m02" does not exist
* I1210 09:20:28.147884       1 range_allocator.go:373] Set node test-m02 PodCIDR to [10.244.4.0/24]
* I1210 09:20:28.158549       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rs74k"
* I1210 09:20:28.163593       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xvbnz"
* E1210 09:20:28.200957       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{...} [full DaemonSet object dump elided]: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* I1210 09:20:33.067352       1 event.go:291] "Event occurred" object="test-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-m02 event: Registered Node test-m02 in Controller"
* I1210 09:22:16.100919       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-f9kcs"
* I1210 09:22:16.137415       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-24vdr"
* I1210 09:22:16.137440       1 event.go:291] "Event occurred" object="default/hive-hdfs-httpfs" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hive-hdfs-httpfs-6cd6bc65d9 to 1"
* I1210 09:22:16.159305       1 event.go:291] "Event occurred" object="default/hive-hdfs-httpfs-6cd6bc65d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hive-hdfs-httpfs-6cd6bc65d9-lrqb2"
* I1210 09:22:16.179523       1 event.go:291] "Event occurred" object="default/hive-metastore" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-metastore-0 in StatefulSet hive-metastore successful"
* I1210 09:22:16.179544       1 event.go:291] "Event occurred" object="default/hive-server" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-server-0 in StatefulSet hive-server successful"
* I1210 09:22:16.179555       1 event.go:291] "Event occurred" object="default/hive-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim data-hive-postgresql-0 Pod hive-postgresql-0 in StatefulSet hive-postgresql success"
* I1210 09:22:16.184394       1 event.go:291] "Event occurred" object="default/hive-hdfs-datanode" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-hdfs-datanode-0 in StatefulSet hive-hdfs-datanode successful"
* I1210 09:22:16.186110       1 event.go:291] "Event occurred" object="default/hive-hdfs-namenode" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-hdfs-namenode-0 in StatefulSet hive-hdfs-namenode successful"
* I1210 09:22:16.189485       1 event.go:291] "Event occurred" object="default/data-hive-postgresql-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
* I1210 09:22:16.189887       1 event.go:291] "Event occurred" object="default/data-hive-postgresql-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
* I1210 09:22:16.198942       1 event.go:291] "Event occurred" object="default/hive-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod hive-postgresql-0 in StatefulSet hive-postgresql successful"
* I1210 09:29:31.083591       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-g6l2f"
* I1210 09:30:18.823878       1 stateful_set.go:419] StatefulSet has been deleted default/hive-metastore
* I1210 09:30:18.824626       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-datanode
* I1210 09:30:18.825037       1 stateful_set.go:419] StatefulSet has been deleted default/hive-hdfs-namenode
* I1210 09:30:18.826020       1 stateful_set.go:419] StatefulSet has been deleted default/hive-postgresql
* I1210 09:30:18.826865       1 stateful_set.go:419] StatefulSet has been deleted default/hive-server
* I1210 09:30:28.537836       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-kdfvf"
* I1210 09:30:28.537852       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-nhx74"
* I1210 09:47:15.422577       1 cleaner.go:181] Cleaning CSR "csr-tstcs" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.426723       1 cleaner.go:181] Cleaning CSR "csr-6fh65" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.434504       1 cleaner.go:181] Cleaning CSR "csr-wjjfc" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.441703       1 cleaner.go:181] Cleaning CSR "csr-2h6fg" as it is more than 1h0m0s old and approved.
* I1210 10:47:15.444107       1 cleaner.go:181] Cleaning CSR "csr-bfd8r" as it is more than 1h0m0s old and approved.
* 
* ==> kube-proxy [147855e4d4e4] <==
* I1210 08:47:14.015675       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
* I1210 08:47:14.015716       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
* W1210 08:47:16.734463       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I1210 08:47:16.745420       1 server_others.go:186] Using iptables Proxier.
* W1210 08:47:16.745436       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I1210 08:47:16.745439       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I1210 08:47:16.745617       1 server.go:650] Version: v1.19.4
* I1210 08:47:16.745878       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1210 08:47:16.745890       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E1210 08:47:16.746653       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I1210 08:47:16.746733       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1210 08:47:16.746758       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1210 08:47:16.747051       1 config.go:315] Starting service config controller
* I1210 08:47:16.747056       1 shared_informer.go:240] Waiting for caches to sync for service config
* I1210 08:47:16.747065       1 config.go:224] Starting endpoint slice config controller
* I1210 08:47:16.747068       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1210 08:47:16.849582       1 shared_informer.go:247] Caches are synced for endpoint slice config 
* I1210 08:47:16.849586       1 shared_informer.go:247] Caches are synced for service config 
* I1210 08:49:50.708763       1 trace.go:205] Trace[1176923502]: "iptables Monitor CANARY check" (10-Dec-2020 08:49:47.428) (total time: 2793ms):
* Trace[1176923502]: [2.793602539s] [2.793602539s] END
* I1210 08:50:20.780453       1 trace.go:205] Trace[1901112382]: "iptables Monitor CANARY check" (10-Dec-2020 08:50:16.773) (total time: 4006ms):
* Trace[1901112382]: [4.006873012s] [4.006873012s] END
* I1210 08:51:57.090526       1 trace.go:205] Trace[385707588]: "iptables Monitor CANARY check" (10-Dec-2020 08:51:46.765) (total time: 8726ms):
* Trace[385707588]: [8.726548904s] [8.726548904s] END
* I1210 08:53:29.809980       1 trace.go:205] Trace[764479901]: "iptables Monitor CANARY check" (10-Dec-2020 08:53:16.764) (total time: 9581ms):
* Trace[764479901]: [9.581968281s] [9.581968281s] END
* I1210 08:53:58.071872       1 trace.go:205] Trace[112474632]: "iptables Monitor CANARY check" (10-Dec-2020 08:53:49.896) (total time: 8039ms):
* Trace[112474632]: [8.039081406s] [8.039081406s] END
* I1210 08:54:25.962695       1 trace.go:205] Trace[39398182]: "iptables Monitor CANARY check" (10-Dec-2020 08:54:18.568) (total time: 6549ms):
* Trace[39398182]: [6.549417389s] [6.549417389s] END
* I1210 08:54:55.389600       1 trace.go:205] Trace[1338509158]: "iptables Monitor CANARY check" (10-Dec-2020 08:54:49.801) (total time: 5455ms):
* Trace[1338509158]: [5.45541041s] [5.45541041s] END
* I1210 08:58:24.665049       1 trace.go:205] Trace[431073268]: "iptables Monitor CANARY check" (10-Dec-2020 08:58:19.270) (total time: 5394ms):
* Trace[431073268]: [5.39404937s] [5.39404937s] END
* 
* ==> kube-proxy [1c2a69f23869] <==
* I1210 08:19:29.254495       1 trace.go:205] Trace[1606941258]: "iptables Monitor CANARY check" (10-Dec-2020 08:19:11.769) (total time: 16133ms):
* Trace[1606941258]: [16.133109154s] [16.133109154s] END
* I1210 08:21:12.253997       1 trace.go:205] Trace[2032763386]: "iptables save" (10-Dec-2020 08:20:08.295) (total time: 59013ms):
* Trace[2032763386]: [59.013708868s] [59.013708868s] END
* I1210 08:21:41.396933       1 trace.go:205] Trace[442713710]: "iptables Monitor CANARY check" (10-Dec-2020 08:21:12.647) (total time: 6548ms):
* Trace[442713710]: [6.548046652s] [6.548046652s] END
* I1210 08:22:20.625150       1 trace.go:205] Trace[1705933123]: "iptables Monitor CANARY check" (10-Dec-2020 08:22:02.412) (total time: 17389ms):
* Trace[1705933123]: [17.389959117s] [17.389959117s] END
* I1210 08:24:32.283396       1 trace.go:205] Trace[1257929763]: "iptables restore" (10-Dec-2020 08:22:21.206) (total time: 77394ms):
* Trace[1257929763]: [1m17.394371742s] [1m17.394371742s] END
* I1210 08:25:14.934948       1 trace.go:205] Trace[82457319]: "iptables Monitor CANARY check" (10-Dec-2020 08:24:34.887) (total time: 39598ms):
* Trace[82457319]: [39.59828664s] [39.59828664s] END
* I1210 08:25:52.826194       1 trace.go:205] Trace[1529003266]: "iptables Monitor CANARY check" (10-Dec-2020 08:25:29.557) (total time: 14986ms):
* Trace[1529003266]: [14.986878467s] [14.986878467s] END
* I1210 08:26:41.833556       1 trace.go:205] Trace[824315921]: "iptables Monitor CANARY check" (10-Dec-2020 08:25:58.312) (total time: 35395ms):
* Trace[824315921]: [35.395955838s] [35.395955838s] END
* I1210 08:27:13.010202       1 trace.go:205] Trace[356046513]: "iptables Monitor CANARY check" (10-Dec-2020 08:27:01.263) (total time: 10073ms):
* Trace[356046513]: [10.073628207s] [10.073628207s] END
* I1210 08:27:36.603564       1 trace.go:205] Trace[1754103961]: "iptables Monitor CANARY check" (10-Dec-2020 08:27:28.323) (total time: 6429ms):
* Trace[1754103961]: [6.429998521s] [6.429998521s] END
* I1210 08:28:02.899016       1 trace.go:205] Trace[2019656647]: "iptables Monitor CANARY check" (10-Dec-2020 08:27:58.313) (total time: 4294ms):
* Trace[2019656647]: [4.294967975s] [4.294967975s] END
* I1210 08:28:37.144254       1 trace.go:205] Trace[1498401531]: "iptables Monitor CANARY check" (10-Dec-2020 08:28:28.312) (total time: 8617ms):
* Trace[1498401531]: [8.617230153s] [8.617230153s] END
* I1210 08:29:32.119745       1 trace.go:205] Trace[1511550882]: "iptables Monitor CANARY check" (10-Dec-2020 08:29:16.995) (total time: 11390ms):
* Trace[1511550882]: [11.390891356s] [11.390891356s] END
* I1210 08:30:08.132644       1 trace.go:205] Trace[1444828810]: "iptables Monitor CANARY check" (10-Dec-2020 08:30:00.140) (total time: 3704ms):
* Trace[1444828810]: [3.704739702s] [3.704739702s] END
* I1210 08:30:54.286260       1 trace.go:205] Trace[143958087]: "iptables Monitor CANARY check" (10-Dec-2020 08:30:29.624) (total time: 24105ms):
* Trace[143958087]: [24.105896041s] [24.105896041s] END
* I1210 08:31:10.426799       1 trace.go:205] Trace[962323657]: "iptables Monitor CANARY check" (10-Dec-2020 08:30:58.318) (total time: 11130ms):
* Trace[962323657]: [11.130432866s] [11.130432866s] END
* I1210 08:31:13.141112       1 trace.go:205] Trace[813852681]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:28:18.935) (total time: 173942ms):
* Trace[813852681]: ---"Objects listed" 171222ms (08:31:00.158)
* Trace[813852681]: ---"Objects extracted" 2028ms (08:31:00.187)
* Trace[813852681]: [2m53.942254293s] [2m53.942254293s] END
* I1210 08:31:35.241036       1 trace.go:205] Trace[638151091]: "iptables Monitor CANARY check" (10-Dec-2020 08:31:28.322) (total time: 6503ms):
* Trace[638151091]: [6.503279497s] [6.503279497s] END
* I1210 08:32:55.337379       1 trace.go:205] Trace[1536844848]: "iptables Monitor CANARY check" (10-Dec-2020 08:32:00.940) (total time: 38653ms):
* Trace[1536844848]: [38.653773609s] [38.653773609s] END
* I1210 08:33:20.885200       1 trace.go:205] Trace[1414107287]: "iptables Monitor CANARY check" (10-Dec-2020 08:33:04.564) (total time: 5442ms):
* Trace[1414107287]: [5.44213901s] [5.44213901s] END
* I1210 08:34:13.548138       1 trace.go:205] Trace[946104448]: "iptables Monitor CANARY check" (10-Dec-2020 08:33:58.329) (total time: 15037ms):
* Trace[946104448]: [15.037169668s] [15.037169668s] END
* I1210 08:34:31.345398       1 trace.go:205] Trace[175663141]: "iptables Monitor CANARY check" (10-Dec-2020 08:34:28.314) (total time: 3030ms):
* Trace[175663141]: [3.030810392s] [3.030810392s] END
* I1210 08:34:47.908016       1 trace.go:205] Trace[56971626]: "iptables restore" (10-Dec-2020 08:34:40.449) (total time: 2107ms):
* Trace[56971626]: [2.107572612s] [2.107572612s] END
* I1210 08:35:05.374031       1 trace.go:205] Trace[1258522673]: "iptables Monitor CANARY check" (10-Dec-2020 08:34:58.313) (total time: 5702ms):
* Trace[1258522673]: [5.702046014s] [5.702046014s] END
* I1210 08:37:42.825944       1 trace.go:205] Trace[1689351721]: "iptables Monitor CANARY check" (10-Dec-2020 08:37:29.818) (total time: 11357ms):
* Trace[1689351721]: [11.357098617s] [11.357098617s] END
* I1210 08:38:06.121769       1 trace.go:205] Trace[1552961587]: "iptables Monitor CANARY check" (10-Dec-2020 08:38:00.265) (total time: 5619ms):
* Trace[1552961587]: [5.619850908s] [5.619850908s] END
* I1210 08:39:24.509743       1 trace.go:205] Trace[934536800]: "iptables Monitor CANARY check" (10-Dec-2020 08:39:14.191) (total time: 7834ms):
* Trace[934536800]: [7.834903692s] [7.834903692s] END
* I1210 08:41:36.851519       1 trace.go:205] Trace[1452415980]: "iptables Monitor CANARY check" (10-Dec-2020 08:40:58.312) (total time: 13366ms):
* Trace[1452415980]: [13.366711746s] [13.366711746s] END
* I1210 08:42:34.270977       1 trace.go:205] Trace[1092740501]: "iptables Monitor CANARY check" (10-Dec-2020 08:41:58.312) (total time: 31405ms):
* Trace[1092740501]: [31.405010423s] [31.405010423s] END
* 
* ==> kube-scheduler [476a76f1583b] <==
* E1207 09:38:35.522011       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
* I1207 09:38:37.822079       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I1207 11:56:44.908379       1 trace.go:205] Trace[666183408]: "Scheduling" namespace:default,name:hue-kfjpj (07-Dec-2020 11:56:44.251) (total time: 654ms):
* Trace[666183408]: ---"Snapshotting scheduler cache and node infos done" 78ms (11:56:00.329)
* Trace[666183408]: ---"Computing predicates done" 314ms (11:56:00.643)
* Trace[666183408]: [654.649558ms] [654.649558ms] END
* I1207 12:16:15.374183       1 trace.go:205] Trace[768065693]: "Scheduling" namespace:default,name:hue-r4ph2 (07-Dec-2020 12:16:15.184) (total time: 158ms):
* Trace[768065693]: [158.498803ms] [158.498803ms] END
* I1207 12:27:14.040714       1 trace.go:205] Trace[1221006435]: "Scheduling" namespace:default,name:hive-server-0 (07-Dec-2020 12:27:13.745) (total time: 294ms):
* Trace[1221006435]: ---"Computing predicates done" 33ms (12:27:00.779)
* Trace[1221006435]: [294.883224ms] [294.883224ms] END
* I1208 12:20:32.444919       1 trace.go:205] Trace[1038095338]: "Scheduling" namespace:default,name:hive-hdfs-httpfs-6cd6bc65d9-wv4r2 (08-Dec-2020 12:20:31.962) (total time: 409ms):
* Trace[1038095338]: ---"Basic checks done" 26ms (12:20:00.988)
* Trace[1038095338]: ---"Computing predicates done" 78ms (12:20:00.067)
* Trace[1038095338]: [409.973001ms] [409.973001ms] END
* I1208 12:39:31.821886       1 trace.go:205] Trace[629732245]: "Scheduling" namespace:default,name:hive-hdfs-httpfs-6cd6bc65d9-fmmxf (08-Dec-2020 12:39:31.526) (total time: 295ms):
* Trace[629732245]: ---"Computing predicates done" 77ms (12:39:00.603)
* Trace[629732245]: [295.777206ms] [295.777206ms] END
* I1210 07:47:12.395846       1 trace.go:205] Trace[529640691]: "Scheduling" namespace:default,name:hive-hdfs-namenode-0 (10-Dec-2020 07:47:12.233) (total time: 162ms):
* Trace[529640691]: ---"Computing predicates done" 102ms (07:47:00.335)
* Trace[529640691]: [162.499393ms] [162.499393ms] END
* I1210 08:11:04.509198       1 trace.go:205] Trace[296532859]: "Scheduling" namespace:kube-system,name:kube-proxy-7wzw9 (10-Dec-2020 08:11:04.236) (total time: 189ms):
* Trace[296532859]: ---"Computing predicates done" 189ms (08:11:00.425)
* Trace[296532859]: [189.486956ms] [189.486956ms] END
* I1210 08:11:33.086818       1 trace.go:205] Trace[348250462]: "Scheduling" namespace:kube-system,name:kindnet-wzqft (10-Dec-2020 08:11:32.627) (total time: 201ms):
* Trace[348250462]: ---"Basic checks done" 185ms (08:11:00.812)
* Trace[348250462]: [201.770106ms] [201.770106ms] END
* I1210 08:29:40.681410       1 trace.go:205] Trace[1834003984]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:28:00.603) (total time: 99651ms):
* Trace[1834003984]: ---"Objects listed" 99489ms (08:29:00.093)
* Trace[1834003984]: [1m39.651842586s] [1m39.651842586s] END
* I1210 08:30:11.289748       1 trace.go:205] Trace[1831143731]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:29:38.676) (total time: 31768ms):
* Trace[1831143731]: ---"Objects listed" 31592ms (08:30:00.268)
* Trace[1831143731]: [31.768949549s] [31.768949549s] END
* I1210 08:31:06.036895       1 trace.go:205] Trace[789880583]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:30:11.815) (total time: 53539ms):
* Trace[789880583]: ---"Objects listed" 53539ms (08:31:00.355)
* Trace[789880583]: [53.539347253s] [53.539347253s] END
* I1210 08:34:12.686107       1 trace.go:205] Trace[561017287]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:31:30.011) (total time: 162674ms):
* Trace[561017287]: [2m42.674637052s] [2m42.674637052s] END
* E1210 08:34:12.686137       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: the server was unable to return a response in the time allotted, but may still be processing the request (get statefulsets.apps)
* I1210 08:34:12.686149       1 trace.go:205] Trace[1404922494]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:31:30.488) (total time: 161994ms):
* Trace[1404922494]: [2m41.994649502s] [2m41.994649502s] END
* E1210 08:34:12.686155       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes)
* I1210 08:34:12.769079       1 trace.go:205] Trace[107198569]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:33:39.696) (total time: 33073ms):
* Trace[107198569]: ---"Objects listed" 33072ms (08:34:00.768)
* Trace[107198569]: [33.073033457s] [33.073033457s] END
* I1210 08:34:29.503249       1 trace.go:205] Trace[852968660]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (10-Dec-2020 08:34:14.673) (total time: 14431ms):
* Trace[852968660]: ---"Objects listed" 14423ms (08:34:00.096)
* Trace[852968660]: [14.431816186s] [14.431816186s] END
* I1210 08:34:29.512056       1 trace.go:205] Trace[530687809]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:14.672) (total time: 14839ms):
* Trace[530687809]: ---"Objects listed" 14426ms (08:34:00.099)
* Trace[530687809]: [14.839135769s] [14.839135769s] END
* I1210 08:34:29.545762       1 trace.go:205] Trace[1947675615]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:13.840) (total time: 15704ms):
* Trace[1947675615]: ---"Objects listed" 15704ms (08:34:00.545)
* Trace[1947675615]: [15.704762479s] [15.704762479s] END
* I1210 08:34:29.550600       1 trace.go:205] Trace[1237261392]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:13.813) (total time: 15736ms):
* Trace[1237261392]: ---"Objects listed" 15736ms (08:34:00.550)
* Trace[1237261392]: [15.736651893s] [15.736651893s] END
* I1210 08:34:29.551005       1 trace.go:205] Trace[1669297276]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (10-Dec-2020 08:34:13.607) (total time: 15943ms):
* Trace[1669297276]: ---"Objects listed" 15943ms (08:34:00.550)
* Trace[1669297276]: [15.943070102s] [15.943070102s] END
* 
* ==> kube-scheduler [f1b156662850] <==
* I1210 08:47:03.439155       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:03.439192       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:04.170813       1 serving.go:331] Generated self-signed cert in-memory
* W1210 08:47:11.200901       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W1210 08:47:11.200949       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W1210 08:47:11.200957       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
* W1210 08:47:11.200961       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I1210 08:47:11.221895       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:11.221915       1 registry.go:173] Registering SelectorSpread plugin
* I1210 08:47:11.224944       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I1210 08:47:11.225412       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1210 08:47:11.225420       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1210 08:47:11.225438       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E1210 08:47:11.228063       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1210 08:47:11.228255       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1210 08:47:11.228327       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1210 08:47:11.228402       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1210 08:47:11.228474       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1210 08:47:11.228540       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1210 08:47:11.228638       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1210 08:47:11.228734       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1210 08:47:11.228804       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1210 08:47:11.229784       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1210 08:47:11.229864       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1210 08:47:11.230247       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1210 08:47:11.231607       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* I1210 08:47:12.550817       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I1210 08:51:21.046666       1 trace.go:205] Trace[759072766]: "Scheduling" namespace:kube-system,name:kube-proxy-mp89t (10-Dec-2020 08:51:20.645) (total time: 389ms):
* Trace[759072766]: ---"Basic checks done" 173ms (08:51:00.819)
* Trace[759072766]: ---"Computing predicates done" 212ms (08:51:00.031)
* Trace[759072766]: [389.49898ms] [389.49898ms] END
* I1210 08:54:16.826915       1 trace.go:205] Trace[1442715761]: "Scheduling" namespace:kube-system,name:kube-proxy-tklxj (10-Dec-2020 08:54:15.035) (total time: 979ms):
* Trace[1442715761]: ---"Computing predicates done" 896ms (08:54:00.931)
* Trace[1442715761]: [979.173237ms] [979.173237ms] END
* I1210 08:54:29.412297       1 request.go:645] Throttling request took 1.709821614s, request: POST:https://192.168.49.2:8443/api/v1/namespaces/kube-system/events
* 
* ==> kubelet <==
* -- Logs begin at Thu 2020-12-10 08:46:43 UTC, end at Thu 2020-12-10 11:59:41 UTC. --
* Dec 10 09:29:59 test kubelet[1151]: I1210 09:29:59.831855    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf
* Dec 10 09:29:59 test kubelet[1151]: E1210 09:29:59.832127    1151 pod_workers.go:191] Error syncing pod 0138dca9-01f3-4f18-8ec1-beccdf0458fd ("hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"), skipping: failed to "StartContainer" for "server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=server pod=hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"
* Dec 10 09:30:06 test kubelet[1151]: W1210 09:30:06.579759    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hive-metastore-0 through plugin: invalid network status for
* Dec 10 09:30:06 test kubelet[1151]: I1210 09:30:06.585659    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 03f4f446bb3a6bb90c13780fccb2e10bbf491fde32ee31567720df13440b45fa
* Dec 10 09:30:06 test kubelet[1151]: I1210 09:30:06.585880    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 541c51b9afb9d8f90efc3b3495b565125211ef27b7e63e82c767410acac2f17c
* Dec 10 09:30:06 test kubelet[1151]: E1210 09:30:06.586040    1151 pod_workers.go:191] Error syncing pod ea1fe809-303d-4119-8baf-780f818ab6ed ("hive-metastore-0_default(ea1fe809-303d-4119-8baf-780f818ab6ed)"), skipping: failed to "StartContainer" for "metastore" with CrashLoopBackOff: "back-off 1m20s restarting failed container=metastore pod=hive-metastore-0_default(ea1fe809-303d-4119-8baf-780f818ab6ed)"
* Dec 10 09:30:07 test kubelet[1151]: W1210 09:30:07.594434    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hive-metastore-0 through plugin: invalid network status for
* Dec 10 09:30:13 test kubelet[1151]: I1210 09:30:13.833176    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf
* Dec 10 09:30:13 test kubelet[1151]: E1210 09:30:13.834649    1151 pod_workers.go:191] Error syncing pod 0138dca9-01f3-4f18-8ec1-beccdf0458fd ("hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"), skipping: failed to "StartContainer" for "server" with CrashLoopBackOff: "back-off 1m20s restarting failed container=server pod=hive-server-0_default(0138dca9-01f3-4f18-8ec1-beccdf0458fd)"
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.230882    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/ea1fe809-303d-4119-8baf-780f818ab6ed-default-token-ldrct") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed")
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.230922    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "hive-config" (UniqueName: "kubernetes.io/configmap/ea1fe809-303d-4119-8baf-780f818ab6ed-hive-config") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed")
* Dec 10 09:30:19 test kubelet[1151]: W1210 09:30:19.231040    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/ea1fe809-303d-4119-8baf-780f818ab6ed/volumes/kubernetes.io~configmap/hive-config: ClearQuota called, but quotas disabled
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.231211    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1fe809-303d-4119-8baf-780f818ab6ed-hive-config" (OuterVolumeSpecName: "hive-config") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed"). InnerVolumeSpecName "hive-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.241174    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1fe809-303d-4119-8baf-780f818ab6ed-default-token-ldrct" (OuterVolumeSpecName: "default-token-ldrct") pod "ea1fe809-303d-4119-8baf-780f818ab6ed" (UID: "ea1fe809-303d-4119-8baf-780f818ab6ed"). InnerVolumeSpecName "default-token-ldrct". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.333209    1151 reconciler.go:319] Volume detached for volume "hive-config" (UniqueName: "kubernetes.io/configmap/ea1fe809-303d-4119-8baf-780f818ab6ed-hive-config") on node "test" DevicePath ""
* Dec 10 09:30:19 test kubelet[1151]: I1210 09:30:19.333235    1151 reconciler.go:319] Volume detached for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/ea1fe809-303d-4119-8baf-780f818ab6ed-default-token-ldrct") on node "test" DevicePath ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808374    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "hadoop-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hadoop-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd")
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808411    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/0138dca9-01f3-4f18-8ec1-beccdf0458fd-default-token-ldrct") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd")
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808432    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "hive-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hive-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd")
* Dec 10 09:30:21 test kubelet[1151]: W1210 09:30:21.808524    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/0138dca9-01f3-4f18-8ec1-beccdf0458fd/volumes/kubernetes.io~configmap/hive-config: ClearQuota called, but quotas disabled
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.808695    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hive-config" (OuterVolumeSpecName: "hive-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd"). InnerVolumeSpecName "hive-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:21 test kubelet[1151]: W1210 09:30:21.808736    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/0138dca9-01f3-4f18-8ec1-beccdf0458fd/volumes/kubernetes.io~configmap/hadoop-config: ClearQuota called, but quotas disabled
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.809021    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hadoop-config" (OuterVolumeSpecName: "hadoop-config") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd"). InnerVolumeSpecName "hadoop-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.820449    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0138dca9-01f3-4f18-8ec1-beccdf0458fd-default-token-ldrct" (OuterVolumeSpecName: "default-token-ldrct") pod "0138dca9-01f3-4f18-8ec1-beccdf0458fd" (UID: "0138dca9-01f3-4f18-8ec1-beccdf0458fd"). InnerVolumeSpecName "default-token-ldrct". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.911565    1151 reconciler.go:319] Volume detached for volume "hadoop-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hadoop-config") on node "test" DevicePath ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.911590    1151 reconciler.go:319] Volume detached for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/0138dca9-01f3-4f18-8ec1-beccdf0458fd-default-token-ldrct") on node "test" DevicePath ""
* Dec 10 09:30:21 test kubelet[1151]: I1210 09:30:21.911597    1151 reconciler.go:319] Volume detached for volume "hive-config" (UniqueName: "kubernetes.io/configmap/0138dca9-01f3-4f18-8ec1-beccdf0458fd-hive-config") on node "test" DevicePath ""
* Dec 10 09:30:23 test kubelet[1151]: I1210 09:30:23.210874    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 541c51b9afb9d8f90efc3b3495b565125211ef27b7e63e82c767410acac2f17c
* Dec 10 09:30:23 test kubelet[1151]: I1210 09:30:23.226423    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5d4c45f6e04020db3755d5dab08d4b367f2f13cc955b02c16f501321d27c60bf
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.547688    1151 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.678012    1151 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170-default-token-ldrct") pod "hue-nhx74" (UID: "b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170")
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.678060    1151 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170-config-volume") pod "hue-nhx74" (UID: "b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170")
* Dec 10 09:30:28 test kubelet[1151]: I1210 09:30:28.678080    1151 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume-extra" (UniqueName: "kubernetes.io/configmap/b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170-config-volume-extra") pod "hue-nhx74" (UID: "b6f0f873-f00c-4f1c-84d3-ffd2eb9e2170")
* Dec 10 09:30:29 test kubelet[1151]: W1210 09:30:29.662575    1151 pod_container_deletor.go:79] Container "a23a0e9a5d81822a5f227882cbbb0aa6668f8a00effec9fc14a7c9d1029899a6" not found in pod's containers
* Dec 10 09:30:29 test kubelet[1151]: W1210 09:30:29.664055    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hue-nhx74 through plugin: invalid network status for
* Dec 10 09:30:30 test kubelet[1151]: W1210 09:30:30.685128    1151 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/hue-nhx74 through plugin: invalid network status for
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.404362    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.425857    1151 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: E1210 09:30:51.426486    1151 remote_runtime.go:329] ContainerStatus "a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: W1210 09:30:51.426523    1151 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6}): failed to get container status "a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6": rpc error: code = Unknown desc = Error: No such container: a822a6539a4dc8bd866efbaf165b9a0a861827515acf45cd5e5fc02f9007cbd6
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.530868    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5")
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.530903    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-default-token-ldrct") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5")
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.530924    1151 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume-extra" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume-extra") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5")
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.536458    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-default-token-ldrct" (OuterVolumeSpecName: "default-token-ldrct") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5"). InnerVolumeSpecName "default-token-ldrct". PluginName "kubernetes.io/secret", VolumeGidValue ""
* Dec 10 09:30:51 test kubelet[1151]: W1210 09:30:51.537338    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5/volumes/kubernetes.io~configmap/config-volume-extra: ClearQuota called, but quotas disabled
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.537448    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume-extra" (OuterVolumeSpecName: "config-volume-extra") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5"). InnerVolumeSpecName "config-volume-extra". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:51 test kubelet[1151]: W1210 09:30:51.540247    1151 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5/volumes/kubernetes.io~configmap/config-volume: ClearQuota called, but quotas disabled
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.540408    1151 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5" (UID: "d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.631030    1151 reconciler.go:319] Volume detached for volume "config-volume-extra" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume-extra") on node "test" DevicePath ""
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.631050    1151 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-config-volume") on node "test" DevicePath ""
* Dec 10 09:30:51 test kubelet[1151]: I1210 09:30:51.631057    1151 reconciler.go:319] Volume detached for volume "default-token-ldrct" (UniqueName: "kubernetes.io/secret/d85dfa5f-9bd5-432e-a9bb-94986c8fb1b5-default-token-ldrct") on node "test" DevicePath ""
* Dec 10 09:30:54 test kubelet[1151]: E1210 09:30:54.279509    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : container not running (81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5)
* Dec 10 09:31:06 test kubelet[1151]: E1210 09:31:06.463036    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:08 test kubelet[1151]: E1210 09:31:08.824356    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:08 test kubelet[1151]: E1210 09:31:08.824356    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:09 test kubelet[1151]: E1210 09:31:09.346393    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:09 test kubelet[1151]: E1210 09:31:09.346668    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:09 test kubelet[1151]: E1210 09:31:09.530802    1151 httpstream.go:251] error forwarding port 8888 to pod 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5, uid : Error: No such container: 81131dbf6f621ce55b86e3c9713e65dec8f77e5af480fc843184c6bff246f7a5
* Dec 10 09:31:40 test kubelet[1151]: E1210 09:31:40.017924    1151 httpstream.go:143] (conn=&{0xc0009eda20 [0xc001668c80] {0 0} 0x1d6e540}, request=16) timed out waiting for streams
* Dec 10 09:31:40 test kubelet[1151]: E1210 09:31:40.412144    1151 httpstream.go:143] (conn=&{0xc0009eda20 [0xc001668c80] {0 0} 0x1d6e540}, request=17) timed out waiting for streams
* 
* ==> kubernetes-dashboard [1a408ee50d62] <==
* 2020/12/10 08:47:15 Using namespace: kubernetes-dashboard
* 2020/12/10 08:47:15 Using in-cluster config to connect to apiserver
* 2020/12/10 08:47:15 Using secret token for csrf signing
* 2020/12/10 08:47:15 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2020/12/10 08:47:15 Starting overwatch
* panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
* 
* goroutine 1 [running]:
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0005243e0)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00019d500)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00019d500)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
* github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
* 	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
* main.main()
* 	/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
* 
* ==> kubernetes-dashboard [8f8c7c337878] <==
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:54:05 [2020-12-10T11:54:05Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/10 11:57:42 Getting list of namespaces
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/10 11:57:42 Getting list of all pods in the cluster
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 Getting pod metrics
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:42 [2020-12-10T11:57:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/10 11:57:47 Getting list of namespaces
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/10 11:57:47 Getting list of all pods in the cluster
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 Getting pod metrics
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:47 [2020-12-10T11:57:47Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/10 11:57:48 Getting list of namespaces
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/10 11:57:48 Getting list of all pods in the cluster
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 Getting pod metrics
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 received 0 resources from sidecar instead of 2
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 Skipping metric because of error: Metric label not set.
* 2020/12/10 11:57:48 [2020-12-10T11:57:48Z] Outcoming response to 192.168.33.1 with 200 status code
* 
* ==> storage-provisioner [d438a591e9f6] <==
* I1210 08:50:27.634802       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
* I1210 08:50:46.345596       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1210 08:50:46.377278       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_test_10558a67-fa69-4df2-a92f-8a742b67e889!
* I1210 08:50:46.399083       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e31e7ce0-0694-419e-a1e9-fbb2b8121025", APIVersion:"v1", ResourceVersion:"553457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test_10558a67-fa69-4df2-a92f-8a742b67e889 became leader
* I1210 08:50:46.478337       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_test_10558a67-fa69-4df2-a92f-8a742b67e889!
* I1210 08:53:25.144639       1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
* I1210 08:53:25.127909       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e31e7ce0-0694-419e-a1e9-fbb2b8121025", APIVersion:"v1", ResourceVersion:"553631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test_10558a67-fa69-4df2-a92f-8a742b67e889 stopped leading
* F1210 08:53:25.785482       1 controller.go:877] leaderelection lost
* 
* ==> storage-provisioner [d6a179f3b8e5] <==
* I1210 08:54:35.741710       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
* I1210 08:54:56.990381       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1210 08:54:56.990732       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_test_78af1074-193b-4c52-92ee-2a823a574973!
* I1210 08:54:56.990676       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e31e7ce0-0694-419e-a1e9-fbb2b8121025", APIVersion:"v1", ResourceVersion:"553771", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test_78af1074-193b-4c52-92ee-2a823a574973 became leader
* I1210 08:54:57.891081       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_test_78af1074-193b-4c52-92ee-2a823a574973!
* I1210 09:22:16.191174       1 controller.go:1284] provision "default/data-hive-postgresql-0" class "standard": started
* I1210 09:22:16.220351       1 controller.go:1392] provision "default/data-hive-postgresql-0" class "standard": volume "pvc-4f2491ec-d503-43f3-9694-89dee83f488e" provisioned
* I1210 09:22:16.220370       1 controller.go:1409] provision "default/data-hive-postgresql-0" class "standard": succeeded
* I1210 09:22:16.220374       1 volume_store.go:212] Trying to save persistentvolume "pvc-4f2491ec-d503-43f3-9694-89dee83f488e"
* I1210 09:22:16.228338       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-hive-postgresql-0", UID:"4f2491ec-d503-43f3-9694-89dee83f488e", APIVersion:"v1", ResourceVersion:"555466", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-hive-postgresql-0"
* I1210 09:22:16.229065       1 volume_store.go:219] persistentvolume "pvc-4f2491ec-d503-43f3-9694-89dee83f488e" saved
* I1210 09:22:16.230433       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-hive-postgresql-0", UID:"4f2491ec-d503-43f3-9694-89dee83f488e", APIVersion:"v1", ResourceVersion:"555466", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4f2491ec-d503-43f3-9694-89dee83f488e

Operating system version used: Windows 10

Other

minikube node add # added a node
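
For reference, a minimal cross-node connectivity check (only a sketch; the busybox image and the placeholder IP are assumptions, the real pod IP comes from kubectl get po -o wide):

kubectl get po -o wide                    # note the IP of the pod scheduled on the other node
kubectl run net-test --rm -it --image=busybox:1.28 --restart=Never -- \
  ping -c 3 <other-pod-ip>                # <other-pod-ip>: placeholder for that pod's IP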
@LY1806620741 LY1806620741 added the l/zh-CN Issues in or relating to Chinese label Dec 10, 2020
@priyawadhwa
Copy link

priyawadhwa commented Dec 23, 2020

Hey @LY1806620741 thank you for opening this issue. I believe it may have been fixed by #9875. Could you please try upgrading to our latest release of minikube, v1.16.0, to see if that resolves this issue?

Latest release: https://github.com/kubernetes/minikube/releases/tag/v1.16.0
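
(A minimal sketch of one way to do that, assuming it is acceptable to recreate the cluster from scratch; the flags below are standard minikube options:)

minikube delete              # remove the existing cluster
minikube start --nodes=2     # start a fresh two-node cluster with the upgraded binary
minikube node list           # confirm both nodes are registered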


@priyawadhwa priyawadhwa added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Dec 23, 2020
@LY1806620741
Copy link
Author

Hey @priyawadhwa, minikube 1.16.0 still has this problem.
I now have one master node named "minikube" plus two worker nodes, "m02" and "m03".
hue is running on m02 and postgres on m03.
Now hue throws:

conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: could not translate host name "hue-postgres" to address: Temporary failure in name resolution
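
(A quick way to narrow this down, as a sketch only: test in-cluster DNS from a throwaway pod and check where CoreDNS is running; hue-postgres is the service name from the error above, and the busybox image is an assumption.)

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup hue-postgres
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide   # CoreDNS pods and the nodes they run on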

Other info:

[vagrant@control-plane ~]$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1
[vagrant@control-plane ~]$ minikube node list
minikube        192.168.49.2
minikube-m02    192.168.49.3
minikube-m03    192.168.49.4
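
(It may also be worth confirming that the CNI pods (kindnet) are Running on every node, since cross-node pod traffic goes through them; a minimal check, using grep to stay independent of exact labels:)

kubectl -n kube-system get pods -o wide | grep kindnet   # expect one Running kindnet pod per node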

Output of the minikube logs command

* ==> Docker <==
* -- Logs begin at Fri 2020-12-25 08:22:51 UTC, end at Fri 2020-12-25 10:20:28 UTC. --
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.399834121Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.410239527Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.412648867Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.412670822Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.412767308Z" level=info msg="Loading containers: start."
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.739587406Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.783568983Z" level=info msg="Loading containers: done."
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.801735982Z" level=info msg="Docker daemon" commit=eeddea2 graphdriver(s)=overlay2 version=20.10.0
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.801781008Z" level=info msg="Daemon has completed initialization"
* Dec 25 08:22:55 minikube systemd[1]: Started Docker Application Container Engine.
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.814374225Z" level=info msg="API listen on [::]:2376"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.823237928Z" level=info msg="API listen on /var/run/docker.sock"
* Dec 25 08:22:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:57:27 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:59:03 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:59:13 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:59:45 minikube dockerd[415]: time="2020-12-25T08:59:45.946407570Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 08:59:45 minikube dockerd[415]: time="2020-12-25T08:59:45.946430521Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:00:01 minikube dockerd[415]: time="2020-12-25T09:00:01.324239615Z" level=info msg="ignoring event" container=63b0555973b1b7a2cc1703888a21ffe3764abdee705f7106ff20da04fe63d6b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 25 09:00:02 minikube dockerd[415]: time="2020-12-25T09:00:02.007922572Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:00:02 minikube dockerd[415]: time="2020-12-25T09:00:02.007947793Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:00:29 minikube dockerd[415]: time="2020-12-25T09:00:29.879231843Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:00:29 minikube dockerd[415]: time="2020-12-25T09:00:29.879274086Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:01:19 minikube dockerd[415]: time="2020-12-25T09:01:19.873499847Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:01:19 minikube dockerd[415]: time="2020-12-25T09:01:19.873532924Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:02:49 minikube dockerd[415]: time="2020-12-25T09:02:49.929314063Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:02:49 minikube dockerd[415]: time="2020-12-25T09:02:49.929363083Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:05:34 minikube dockerd[415]: time="2020-12-25T09:05:34.889785486Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:05:34 minikube dockerd[415]: time="2020-12-25T09:05:34.889814338Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:10:39 minikube dockerd[415]: time="2020-12-25T09:10:39.094206128Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:10:39 minikube dockerd[415]: time="2020-12-25T09:10:39.094235252Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:15:39 minikube dockerd[415]: time="2020-12-25T09:15:39.875274071Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:15:39 minikube dockerd[415]: time="2020-12-25T09:15:39.875323655Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:20:40 minikube dockerd[415]: time="2020-12-25T09:20:40.865380838Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:20:40 minikube dockerd[415]: time="2020-12-25T09:20:40.865421573Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:25:43 minikube dockerd[415]: time="2020-12-25T09:25:43.875940786Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:25:43 minikube dockerd[415]: time="2020-12-25T09:25:43.876042893Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:30:55 minikube dockerd[415]: time="2020-12-25T09:30:55.182845140Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:30:55 minikube dockerd[415]: time="2020-12-25T09:30:55.183273814Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:36:11 minikube dockerd[415]: time="2020-12-25T09:36:11.651939428Z" level=warning msg="Error getting v2 registry: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: net/http: TLS handshake timeout"
* Dec 25 09:36:11 minikube dockerd[415]: time="2020-12-25T09:36:11.651977801Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: net/http: TLS handshake timeout"
* Dec 25 09:36:11 minikube dockerd[415]: time="2020-12-25T09:36:11.654835887Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: net/http: TLS handshake timeout"
* Dec 25 09:37:02 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 09:41:18 minikube dockerd[415]: time="2020-12-25T09:41:18.992311067Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:41:18 minikube dockerd[415]: time="2020-12-25T09:41:18.992487171Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:46:27 minikube dockerd[415]: time="2020-12-25T09:46:27.776277668Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:46:27 minikube dockerd[415]: time="2020-12-25T09:46:27.776305080Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:46:45 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 09:51:35 minikube dockerd[415]: time="2020-12-25T09:51:35.880871726Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:51:35 minikube dockerd[415]: time="2020-12-25T09:51:35.880982509Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:56:55 minikube dockerd[415]: time="2020-12-25T09:56:55.807288459Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:56:55 minikube dockerd[415]: time="2020-12-25T09:56:55.807346164Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:02:04 minikube dockerd[415]: time="2020-12-25T10:02:04.952718088Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:02:04 minikube dockerd[415]: time="2020-12-25T10:02:04.952746820Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:07:13 minikube dockerd[415]: time="2020-12-25T10:07:13.498153562Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:07:13 minikube dockerd[415]: time="2020-12-25T10:07:13.498255638Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:12:17 minikube dockerd[415]: time="2020-12-25T10:12:17.096423306Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:12:17 minikube dockerd[415]: time="2020-12-25T10:12:17.096451834Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:17:24 minikube dockerd[415]: time="2020-12-25T10:17:24.899367546Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:17:24 minikube dockerd[415]: time="2020-12-25T10:17:24.899395713Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* 
* ==> container status <==
* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
* 1879fc1818339       85069258b98ac       About an hour ago   Running             storage-provisioner         1                   943e200199bd1
* d31f0f45948cb       9a07b5b4bfac0       About an hour ago   Running             kubernetes-dashboard        0                   071ffe6aa0955
* a718845cc380f       86262685d9abb       About an hour ago   Running             dashboard-metrics-scraper   0                   754a072dd87ad
* 5a7d6a66dfebc       bfe3a36ebd252       About an hour ago   Running             coredns                     0                   d81480de78a98
* 63b0555973b1b       85069258b98ac       About an hour ago   Exited              storage-provisioner         0                   943e200199bd1
* cd4536fe11fd5       10cc881966cfd       About an hour ago   Running             kube-proxy                  0                   55d7c9e3ade3f
* 4cd0e8f1c3535       3138b6e3d4712       About an hour ago   Running             kube-scheduler              0                   14d045b09f404
* 2e6b808290108       b9fa1895dcaa6       About an hour ago   Running             kube-controller-manager     0                   87fa40cbe5af7
* 5fc501398d4e4       ca9843d3b5454       About an hour ago   Running             kube-apiserver              0                   319691f086281
* cdc94c530673a       0369cf4303ffd       About an hour ago   Running             etcd                        0                   becef1905a44c
* 
* ==> coredns [5a7d6a66dfeb] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
* 
* ==> describe nodes <==
* Name:               minikube
* Roles:              control-plane,master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=9f1e482427589ff8451c4723b6ba53bb9742fbb1
*                     minikube.k8s.io/name=minikube
*                     minikube.k8s.io/updated_at=2020_12_25T08_59_14_0700
*                     minikube.k8s.io/version=v1.16.0
*                     node-role.kubernetes.io/control-plane=
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Fri, 25 Dec 2020 08:59:11 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube
*   AcquireTime:     <unset>
*   RenewTime:       Fri, 25 Dec 2020 10:20:20 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Fri, 25 Dec 2020 10:19:50 +0000   Fri, 25 Dec 2020 08:59:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Fri, 25 Dec 2020 10:19:50 +0000   Fri, 25 Dec 2020 08:59:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Fri, 25 Dec 2020 10:19:50 +0000   Fri, 25 Dec 2020 08:59:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Fri, 25 Dec 2020 10:19:50 +0000   Fri, 25 Dec 2020 08:59:24 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.2
*   Hostname:    minikube
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 553cd13426dc4769a8829227ba19e489
*   System UUID:                fa536e36-071b-4889-b289-f0922b238888
*   Boot ID:                    b0451519-dcbb-4fc9-9cc2-3b7811ecdd5a
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://20.10.0
*   Kubelet Version:            v1.20.0
*   Kube-Proxy Version:         v1.20.0
* PodCIDR:                      10.244.0.0/24
* PodCIDRs:                     10.244.0.0/24
* Non-terminated Pods:          (10 in total)
*   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
*   kube-system                 coredns-54d67798b7-kgncc                    100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     80m
*   kube-system                 etcd-minikube                               100m (2%)     0 (0%)      100Mi (2%)       0 (0%)         81m
*   kube-system                 kindnet-r925s                               100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      80m
*   kube-system                 kube-apiserver-minikube                     250m (6%)     0 (0%)      0 (0%)           0 (0%)         81m
*   kube-system                 kube-controller-manager-minikube            200m (5%)     0 (0%)      0 (0%)           0 (0%)         81m
*   kube-system                 kube-proxy-wq5bt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         80m
*   kube-system                 kube-scheduler-minikube                     100m (2%)     0 (0%)      0 (0%)           0 (0%)         81m
*   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81m
*   kubernetes-dashboard        dashboard-metrics-scraper-c85578d8-26mkb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         80m
*   kubernetes-dashboard        kubernetes-dashboard-7db476d994-dcrqf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         80m
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                850m (21%)  100m (2%)
*   memory             220Mi (5%)  220Mi (5%)
*   ephemeral-storage  100Mi (0%)  0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:              <none>
* 
* 
* Name:               minikube-m02
* Roles:              <none>
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube-m02
*                     kubernetes.io/os=linux
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Fri, 25 Dec 2020 08:59:39 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube-m02
*   AcquireTime:     <unset>
*   RenewTime:       Fri, 25 Dec 2020 10:20:20 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Fri, 25 Dec 2020 10:19:51 +0000   Fri, 25 Dec 2020 09:47:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Fri, 25 Dec 2020 10:19:51 +0000   Fri, 25 Dec 2020 09:47:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Fri, 25 Dec 2020 10:19:51 +0000   Fri, 25 Dec 2020 09:47:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Fri, 25 Dec 2020 10:19:51 +0000   Fri, 25 Dec 2020 09:47:42 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.3
*   Hostname:    minikube-m02
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 1a5bc7f3d2e845b4b6edadec7dec31fe
*   System UUID:                75a80ab2-1a8d-417e-84d1-cfea07407f53
*   Boot ID:                    b0451519-dcbb-4fc9-9cc2-3b7811ecdd5a
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://20.10.0
*   Kubelet Version:            v1.20.0
*   Kube-Proxy Version:         v1.20.0
* PodCIDR:                      10.244.1.0/24
* PodCIDRs:                     10.244.1.0/24
* Non-terminated Pods:          (3 in total)
*   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                ------------  ----------  ---------------  -------------  ---
*   default                     hue-s22bs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
*   kube-system                 kindnet-mxjdb       100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      80m
*   kube-system                 kube-proxy-74bg6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         80m
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests   Limits
*   --------           --------   ------
*   cpu                100m (2%)  100m (2%)
*   memory             50Mi (1%)  50Mi (1%)
*   ephemeral-storage  0 (0%)     0 (0%)
*   hugepages-2Mi      0 (0%)     0 (0%)
* Events:
*   Type     Reason                   Age                From        Message
*   ----     ------                   ----               ----        -------
*   Warning  readOnlySysFS            43m                kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 43m                kube-proxy  Starting kube-proxy.
*   Normal   Starting                 42m                kubelet     Starting kubelet.
*   Normal   NodeAllocatableEnforced  42m                kubelet     Updated Node Allocatable limit across pods
*   Normal   NodeHasSufficientMemory  42m (x2 over 42m)  kubelet     Node minikube-m02 status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    42m (x2 over 42m)  kubelet     Node minikube-m02 status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     42m (x2 over 42m)  kubelet     Node minikube-m02 status is now: NodeHasSufficientPID
*   Warning  readOnlySysFS            42m                kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 42m                kube-proxy  Starting kube-proxy.
*   Normal   NodeReady                42m                kubelet     Node minikube-m02 status is now: NodeReady
*   Normal   Starting                 32m                kubelet     Starting kubelet.
*   Normal   NodeAllocatableEnforced  32m                kubelet     Updated Node Allocatable limit across pods
*   Normal   NodeHasSufficientMemory  32m (x2 over 32m)  kubelet     Node minikube-m02 status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    32m (x2 over 32m)  kubelet     Node minikube-m02 status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     32m (x2 over 32m)  kubelet     Node minikube-m02 status is now: NodeHasSufficientPID
*   Warning  readOnlySysFS            32m                kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 32m                kube-proxy  Starting kube-proxy.
*   Normal   NodeReady                32m                kubelet     Node minikube-m02 status is now: NodeReady
* 
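The readOnlySysFS warnings in the events above come from kube-proxy running with a read-only /sys (typical with the Docker driver), so it cannot raise nf_conntrack_max; that alone should not stop pods on different nodes from reaching each other. To see how kube-proxy is configured for conntrack, something like the following can be used — the ConfigMap name and the conntrack.maxPerCore field are the standard kubeadm/KubeProxyConfiguration ones, not values read from this cluster:

kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A4 'conntrack:'
# kind-style clusters silence this warning by setting conntrack.maxPerCore: 0,
# which makes kube-proxy skip writing /proc/sys/net/netfilter/nf_conntrack_max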
* 
* Name:               minikube-m03
* Roles:              <none>
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube-m03
*                     kubernetes.io/os=linux
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Fri, 25 Dec 2020 10:12:00 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube-m03
*   AcquireTime:     <unset>
*   RenewTime:       Fri, 25 Dec 2020 10:20:20 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Fri, 25 Dec 2020 10:18:02 +0000   Fri, 25 Dec 2020 10:12:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Fri, 25 Dec 2020 10:18:02 +0000   Fri, 25 Dec 2020 10:12:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Fri, 25 Dec 2020 10:18:02 +0000   Fri, 25 Dec 2020 10:12:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Fri, 25 Dec 2020 10:18:02 +0000   Fri, 25 Dec 2020 10:12:10 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  192.168.49.4
*   Hostname:    minikube-m03
* Capacity:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* Allocatable:
*   cpu:                4
*   ephemeral-storage:  52417516Ki
*   hugepages-2Mi:      0
*   memory:             4035080Ki
*   pods:               110
* System Info:
*   Machine ID:                 fddba6aab8d14415add634756904efc6
*   System UUID:                0ceabf4f-6998-4dc5-a5c6-c5a66e622d21
*   Boot ID:                    b0451519-dcbb-4fc9-9cc2-3b7811ecdd5a
*   Kernel Version:             4.18.0-80.el8.x86_64
*   OS Image:                   Ubuntu 20.04.1 LTS
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://20.10.0
*   Kubelet Version:            v1.20.0
*   Kube-Proxy Version:         v1.20.0
* PodCIDR:                      10.244.3.0/24
* PodCIDRs:                     10.244.3.0/24
* Non-terminated Pods:          (3 in total)
*   Namespace                   Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                  ------------  ----------  ---------------  -------------  ---
*   default                     hue-postgres-9ghk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
*   kube-system                 kindnet-j6tnw         100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      8m27s
*   kube-system                 kube-proxy-rpsw7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests   Limits
*   --------           --------   ------
*   cpu                100m (2%)  100m (2%)
*   memory             50Mi (1%)  50Mi (1%)
*   ephemeral-storage  0 (0%)     0 (0%)
*   hugepages-2Mi      0 (0%)     0 (0%)
* Events:
*   Type     Reason                   Age                    From        Message
*   ----     ------                   ----                   ----        -------
*   Normal   Starting                 8m28s                  kubelet     Starting kubelet.
*   Normal   NodeHasSufficientMemory  8m28s (x2 over 8m28s)  kubelet     Node minikube-m03 status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    8m28s (x2 over 8m28s)  kubelet     Node minikube-m03 status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     8m28s (x2 over 8m28s)  kubelet     Node minikube-m03 status is now: NodeHasSufficientPID
*   Normal   NodeAllocatableEnforced  8m28s                  kubelet     Updated Node Allocatable limit across pods
*   Normal   NodeReady                8m18s                  kubelet     Node minikube-m03 status is now: NodeReady
*   Warning  readOnlySysFS            8m10s                  kube-proxy  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 8m10s                  kube-proxy  Starting kube-proxy.
* 
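Both workers above run the kindnet daemonset and have PodCIDRs 10.244.1.0/24 (minikube-m02) and 10.244.3.0/24 (minikube-m03). A quick way to confirm whether pods really get addresses from those ranges and can reach each other across nodes is to pin one throwaway pod to each worker and test between them; a rough sketch, where the pod names, the busybox image and the use of ping are arbitrary choices for illustration:

# run one pod per worker node (busybox chosen only for its ping/nc tools)
kubectl run probe-m02 --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"minikube-m02"}}' -- sleep 3600
kubectl run probe-m03 --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"minikube-m03"}}' -- sleep 3600

# the pod IPs should fall inside the node PodCIDRs shown above
kubectl get pod probe-m02 probe-m03 -o wide

# basic cross-node reachability check; if the IPs are outside 10.244.x.0/24,
# kindnet is probably not the plugin actually assigning pod addresses
kubectl exec probe-m02 -- ping -c 3 "$(kubectl get pod probe-m03 -o jsonpath='{.status.podIP}')"

If ping between the two pod IPs works but a TCP port is still refused, the problem is more likely in the service or application than in inter-node routing.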
* ==> dmesg <==
* [Dec25 07:42] NOTE: The elevator= kernel parameter is deprecated.
* [  +0.000000] APIC calibration not consistent with PM-Timer: 145ms instead of 100ms
* [  +0.026064]  #2
* [  +0.002993]  #3
* [  +0.109949] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [  +2.881559] e1000: E1000 MODULE IS NOT SUPPORTED
* [  +1.452254] systemd: 18 output lines suppressed due to ratelimiting
* [  +7.140587] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
* 
* ==> etcd [cdc94c530673] <==
* 2020-12-25 10:11:50.507921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:11:58.235685 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (397.815116ms) to execute
* 2020-12-25 10:11:58.505268 W | etcdserver: request "header:<ID:8128001827527213568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/minikube-m02\" mod_revision:4565 > success:<request_put:<key:\"/registry/leases/kube-node-lease/minikube-m02\" value_size:548 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/minikube-m02\" > >>" with result "size:16" took too long (129.988925ms) to execute
* 2020-12-25 10:11:58.505504 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (266.596888ms) to execute
* 2020-12-25 10:11:58.505623 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (160.936661ms) to execute
* 2020-12-25 10:12:00.517090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:10.508145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:20.507996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:30.517234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:40.508684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:50.508192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:00.508482 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:10.508356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:20.508299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:30.509289 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:40.510232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:50.508506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:00.508645 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:06.518293 I | mvcc: store.index: compact 4429
* 2020-12-25 10:14:06.526226 I | mvcc: finished scheduled compaction at 4429 (took 7.647619ms)
* 2020-12-25 10:14:10.508572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:20.509112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:30.509072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:40.509455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:50.507929 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:00.509798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:10.507844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:20.508650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:30.510461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:40.509901 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:50.510257 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:00.508752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:10.517386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:20.510447 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:30.508504 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:40.508193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:50.508450 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:00.508625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:10.509423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:20.509176 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:30.508004 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:40.508530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:50.508636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:00.508441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:10.508346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:20.509295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:30.508425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:40.511118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:50.507949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:00.507948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:06.528377 I | mvcc: store.index: compact 4843
* 2020-12-25 10:19:06.536228 I | mvcc: finished scheduled compaction at 4843 (took 7.631845ms)
* 2020-12-25 10:19:10.508707 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:20.508200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:30.508700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:40.508915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:50.511160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:20:00.508288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:20:10.508091 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:20:20.507948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 
* ==> kernel <==
*  10:20:28 up  2:38,  0 users,  load average: 0.52, 0.56, 0.66
* Linux minikube 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04.1 LTS"
* 
* ==> kube-apiserver [5fc501398d4e] <==
* I1225 10:10:21.157120       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:10:21.157150       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:10:21.157185       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:10:53.923410       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:10:53.923472       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:10:53.923480       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:11:24.460670       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:11:24.460700       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:11:24.460706       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:11:55.286710       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:11:55.286741       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:11:55.286747       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:11:58.506206       1 trace.go:205] Trace[805863097]: "GuaranteedUpdate etcd3" type:*coordination.Lease (25-Dec-2020 10:11:57.837) (total time: 668ms):
* Trace[805863097]: ---"Transaction committed" 668ms (10:11:00.506)
* Trace[805863097]: [668.479091ms] [668.479091ms] END
* I1225 10:11:58.506273       1 trace.go:205] Trace[562181312]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube-m02,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.49.3 (25-Dec-2020 10:11:57.837) (total time: 668ms):
* Trace[562181312]: ---"Object stored in database" 668ms (10:11:00.506)
* Trace[562181312]: [668.664945ms] [668.664945ms] END
* I1225 10:12:35.337963       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:12:35.338039       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:12:35.338051       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:13:18.650036       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:13:18.650065       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:13:18.650071       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:13:56.419747       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:13:56.419774       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:13:56.419780       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:14:29.543703       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:14:29.543748       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:14:29.543757       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:15:06.718671       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:15:06.718701       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:15:06.718707       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:15:43.130799       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:15:43.130848       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:15:43.130856       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:16:13.692826       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:16:13.692853       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:16:13.692858       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:16:45.425059       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:16:45.425092       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:16:45.425098       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:17:20.726672       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:17:20.726731       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:17:20.726741       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:17:54.033414       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:17:54.033440       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:17:54.033446       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:18:28.538751       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:18:28.538782       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:18:28.538788       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:18:59.133650       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:18:59.133864       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:18:59.133897       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:19:39.575244       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:19:39.575280       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:19:39.575286       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:20:23.189192       1 client.go:360] parsed scheme: "passthrough"
* I1225 10:20:23.189221       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:20:23.189228       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* 
* ==> kube-controller-manager [2e6b80829010] <==
* I1225 08:59:30.198554       1 shared_informer.go:247] Caches are synced for resource quota 
* I1225 08:59:30.200029       1 shared_informer.go:247] Caches are synced for disruption 
* I1225 08:59:30.200040       1 disruption.go:339] Sending events to api server.
* I1225 08:59:30.306510       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
* I1225 08:59:30.606823       1 shared_informer.go:247] Caches are synced for garbage collector 
* I1225 08:59:30.651046       1 shared_informer.go:247] Caches are synced for garbage collector 
* I1225 08:59:30.651076       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* W1225 08:59:39.422661       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m02" does not exist
* I1225 08:59:39.431480       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-74bg6"
* I1225 08:59:39.431717       1 range_allocator.go:373] Set node minikube-m02 PodCIDR to [10.244.1.0/24]
* E1225 08:59:39.431887       1 range_allocator.go:361] Node minikube-m02 already has a CIDR allocated [10.244.1.0/24]. Releasing the new one.
* W1225 08:59:40.085173       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube-m02. Assuming now as a timestamp.
* I1225 08:59:40.085317       1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m02 event: Registered Node minikube-m02 in Controller"
* I1225 08:59:44.235608       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r925s"
* I1225 08:59:44.244422       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mxjdb"
* E1225 08:59:44.271093       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2de57f33-1d62-4188-91f8-80a5050605fc", ResourceVersion:"491", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63744483584, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018bf1c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018bf1e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018bf200), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018bf220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018bf2a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018bf2c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018bf2e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018bf320)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000eda540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e76b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000bfb730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e9f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000e76be0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* I1225 08:59:46.255859       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c85578d8 to 1"
* I1225 08:59:46.262954       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.268131       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.268397       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-7db476d994 to 1"
* E1225 08:59:46.276627       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.277310       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* I1225 08:59:46.277325       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.280285       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.280339       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.280515       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* E1225 08:59:46.289726       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.289889       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.291383       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.291426       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.293820       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.293863       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.303458       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.303508       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* I1225 08:59:46.340365       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c85578d8-26mkb"
* I1225 08:59:46.353941       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-7db476d994-dcrqf"
* I1225 09:01:15.757748       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-t5hpt"
* I1225 09:01:15.771577       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-mrh94"
* I1225 09:16:45.643240       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-p8wdj"
* I1225 09:16:45.643255       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-m9sxn"
* I1225 09:37:45.980140       1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node minikube-m02 status is now: NodeNotReady"
* I1225 09:37:45.989334       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-74bg6" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:37:45.997269       1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:38:01.006222       1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-postgres-p8wdj"
* I1225 09:38:01.006240       1 event.go:291] "Event occurred" object="default/hue-m9sxn" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-m9sxn"
* I1225 09:47:31.224977       1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node minikube-m02 status is now: NodeNotReady"
* I1225 09:47:31.228704       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-74bg6" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:47:31.234411       1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:47:46.245269       1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-postgres-p8wdj"
* I1225 09:47:46.245304       1 event.go:291] "Event occurred" object="default/hue-m9sxn" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-m9sxn"
* W1225 10:12:00.954056       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m03" does not exist
* I1225 10:12:01.202509       1 range_allocator.go:373] Set node minikube-m03 PodCIDR to [10.244.3.0/24]
* I1225 10:12:01.238310       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rpsw7"
* I1225 10:12:01.243436       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j6tnw"
* E1225 10:12:01.259472       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"175c3adb-dbe1-4f63-9d86-3a77fad8f5b8", ResourceVersion:"3330", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63744483554, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002564d00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002564d20)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc002564d40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002564d60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002564d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002507380), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002564da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002564dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002564e00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002529200), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016e41b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c072d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00247c1c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016e4248)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:2, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:2, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* E1225 10:12:01.353919       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2de57f33-1d62-4188-91f8-80a5050605fc", ResourceVersion:"500", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63744483584, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001501c80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001501ca0)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001501cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001501ce0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001501d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001501d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001501d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001501d60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001501d80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001501dc0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00109f260), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000529e68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00001cd20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0004871a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00011c090)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:0, NumberUnavailable:2, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* W1225 10:12:01.753802       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube-m03. Assuming now as a timestamp.
* I1225 10:12:01.754013       1 event.go:291] "Event occurred" object="minikube-m03" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m03 event: Registered Node minikube-m03 in Controller"
* I1225 10:12:18.475999       1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-s22bs"
* I1225 10:12:18.504479       1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-9ghk6"
* 
* ==> kube-proxy [cd4536fe11fd] <==
* I1225 08:59:31.387118       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
* I1225 08:59:31.387169       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
* W1225 08:59:31.558686       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
* I1225 08:59:31.558740       1 server_others.go:185] Using iptables Proxier.
* I1225 08:59:31.558940       1 server.go:650] Version: v1.20.0
* I1225 08:59:31.559209       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1225 08:59:31.559227       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E1225 08:59:31.559467       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I1225 08:59:31.559518       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1225 08:59:31.559538       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1225 08:59:31.559736       1 config.go:315] Starting service config controller
* I1225 08:59:31.559745       1 shared_informer.go:240] Waiting for caches to sync for service config
* I1225 08:59:31.559756       1 config.go:224] Starting endpoint slice config controller
* I1225 08:59:31.559758       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1225 08:59:31.660245       1 shared_informer.go:247] Caches are synced for endpoint slice config 
* I1225 08:59:31.660310       1 shared_informer.go:247] Caches are synced for service config 
* 
* ==> kube-scheduler [4cd0e8f1c353] <==
* I1225 08:59:06.855927       1 serving.go:331] Generated self-signed cert in-memory
* W1225 08:59:11.236091       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W1225 08:59:11.236122       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W1225 08:59:11.236130       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
* W1225 08:59:11.236137       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I1225 08:59:11.321735       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1225 08:59:11.321779       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1225 08:59:11.322086       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I1225 08:59:11.322119       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E1225 08:59:11.326265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1225 08:59:11.326359       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1225 08:59:11.326421       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1225 08:59:11.326473       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1225 08:59:11.326574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1225 08:59:11.326678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1225 08:59:11.326694       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1225 08:59:11.326832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1225 08:59:11.326892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1225 08:59:11.327015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1225 08:59:11.327101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1225 08:59:11.343168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1225 08:59:12.189683       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1225 08:59:12.205816       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1225 08:59:12.213164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1225 08:59:12.290263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1225 08:59:12.463141       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I1225 08:59:14.721875       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I1225 10:12:01.425872       1 trace.go:205] Trace[405008256]: "Scheduling" namespace:kube-system,name:kindnet-j6tnw (25-Dec-2020 10:12:01.316) (total time: 102ms):
* Trace[405008256]: ---"Snapshotting scheduler cache and node infos done" 49ms (10:12:00.366)
* Trace[405008256]: ---"Computing predicates done" 53ms (10:12:00.419)
* Trace[405008256]: [102.57507ms] [102.57507ms] END
* 
* ==> kubelet <==
* -- Logs begin at Fri 2020-12-25 08:22:51 UTC, end at Fri 2020-12-25 10:20:29 UTC. --
* Dec 25 10:09:16 minikube kubelet[3023]: E1225 10:09:16.611546    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:09:29 minikube kubelet[3023]: E1225 10:09:29.611243    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:09:41 minikube kubelet[3023]: E1225 10:09:41.643887    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:09:53 minikube kubelet[3023]: E1225 10:09:53.611358    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:10:07 minikube kubelet[3023]: E1225 10:10:07.611763    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:10:19 minikube kubelet[3023]: E1225 10:10:19.611856    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:10:31 minikube kubelet[3023]: E1225 10:10:31.611167    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:10:46 minikube kubelet[3023]: E1225 10:10:46.611864    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:10:59 minikube kubelet[3023]: E1225 10:10:59.611957    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:11:11 minikube kubelet[3023]: E1225 10:11:11.612107    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:11:24 minikube kubelet[3023]: E1225 10:11:24.611518    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:11:35 minikube kubelet[3023]: E1225 10:11:35.611827    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:11:48 minikube kubelet[3023]: E1225 10:11:48.615436    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:12:03 minikube kubelet[3023]: E1225 10:12:03.611416    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098599    3023 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098622    3023 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098706    3023 kuberuntime_manager.go:829] container &Container{Name:kindnet-cni,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kindnet-token-gglld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098728    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
* Dec 25 10:12:29 minikube kubelet[3023]: E1225 10:12:29.621243    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:12:41 minikube kubelet[3023]: E1225 10:12:41.618778    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:12:52 minikube kubelet[3023]: E1225 10:12:52.612289    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:13:07 minikube kubelet[3023]: E1225 10:13:07.611259    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:13:19 minikube kubelet[3023]: E1225 10:13:19.611967    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:13:31 minikube kubelet[3023]: E1225 10:13:31.611675    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:13:42 minikube kubelet[3023]: E1225 10:13:42.615332    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:13:53 minikube kubelet[3023]: E1225 10:13:53.613091    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:14:05 minikube kubelet[3023]: E1225 10:14:05.612990    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:14:18 minikube kubelet[3023]: E1225 10:14:18.616127    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:14:33 minikube kubelet[3023]: E1225 10:14:33.616300    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:14:44 minikube kubelet[3023]: E1225 10:14:44.614191    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:14:56 minikube kubelet[3023]: E1225 10:14:56.611432    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:15:07 minikube kubelet[3023]: E1225 10:15:07.612030    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:15:19 minikube kubelet[3023]: E1225 10:15:19.613067    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:15:30 minikube kubelet[3023]: E1225 10:15:30.611022    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:15:42 minikube kubelet[3023]: E1225 10:15:42.611352    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:15:56 minikube kubelet[3023]: E1225 10:15:56.612199    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:16:09 minikube kubelet[3023]: E1225 10:16:09.617449    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:16:23 minikube kubelet[3023]: E1225 10:16:23.612835    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:16:34 minikube kubelet[3023]: E1225 10:16:34.616837    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:16:46 minikube kubelet[3023]: E1225 10:16:46.611752    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:16:58 minikube kubelet[3023]: E1225 10:16:58.616872    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:17:12 minikube kubelet[3023]: E1225 10:17:12.612091    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906478    3023 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906506    3023 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906649    3023 kuberuntime_manager.go:829] container &Container{Name:kindnet-cni,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kindnet-token-gglld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906674    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
* Dec 25 10:17:37 minikube kubelet[3023]: E1225 10:17:37.614648    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:17:51 minikube kubelet[3023]: E1225 10:17:51.611493    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:05 minikube kubelet[3023]: E1225 10:18:05.613256    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:16 minikube kubelet[3023]: E1225 10:18:16.611989    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:31 minikube kubelet[3023]: E1225 10:18:31.611938    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:46 minikube kubelet[3023]: E1225 10:18:46.612531    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:00 minikube kubelet[3023]: E1225 10:19:00.613019    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:12 minikube kubelet[3023]: E1225 10:19:12.616422    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:23 minikube kubelet[3023]: E1225 10:19:23.612637    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:35 minikube kubelet[3023]: E1225 10:19:35.613401    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:49 minikube kubelet[3023]: E1225 10:19:49.611071    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:20:02 minikube kubelet[3023]: E1225 10:20:02.615652    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:20:15 minikube kubelet[3023]: E1225 10:20:15.611303    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:20:26 minikube kubelet[3023]: E1225 10:20:26.611506    3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* 
* ==> kubernetes-dashboard [d31f0f45948c] <==
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 192.168.33.1: 
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1: 
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/systembanner request from 192.168.33.1: 
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1: 
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/25 10:14:35 Getting list of namespaces
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/25 10:14:35 Getting list of all services in the cluster
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 192.168.33.1: 
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/settings/pinner request from 192.168.33.1: 
* 2020/12/25 10:14:37 Getting application global configuration
* 2020/12/25 10:14:37 Application configuration {"serverTime":1608891277199}
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/plugin/config request from 192.168.33.1: 
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 192.168.33.1: 
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1: 
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/systembanner request from 192.168.33.1: 
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1: 
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/25 10:14:37 Getting list of namespaces
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/25 10:14:37 Getting list of all services in the cluster
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/25 10:14:42 Getting list of all services in the cluster
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/25 10:14:42 Getting list of namespaces
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/25 10:14:45 Getting list of namespaces
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/25 10:14:45 Getting list of all services in the cluster
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/25 10:14:58 Getting list of namespaces
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/25 10:14:58 Getting list of all services in the cluster
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1: 
* 2020/12/25 10:15:04 Getting list of namespaces
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1: 
* 2020/12/25 10:15:04 Getting list of all services in the cluster
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Outcoming response to 192.168.33.1 with 200 status code
* 
* ==> storage-provisioner [1879fc181833] <==
* I1225 09:00:01.799437       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
* I1225 09:00:01.809449       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
* I1225 09:00:01.809497       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
* I1225 09:00:01.817949       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1225 09:00:01.818061       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_f155d0ed-e63d-46a1-8815-c3dd12638e20!
* I1225 09:00:01.818229       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb976407-2d34-4eb3-8b0f-47a300cfd32a", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_f155d0ed-e63d-46a1-8815-c3dd12638e20 became leader
* I1225 09:00:01.918529       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_f155d0ed-e63d-46a1-8815-c3dd12638e20!
* 
* ==> storage-provisioner [63b0555973b1] <==
* I1225 08:59:31.300143       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
* F1225 09:00:01.302919       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
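Worth noting in the kubelet log above: the kindnet CNI container never starts because the pull of registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4 is denied, and the kindnet DaemonSet status further up reports NumberReady:0 with both scheduled pods unavailable. Without a running CNI pod on each node, cross-node pod routes are never programmed, which would match the symptom. A minimal check, as a sketch that assumes the default kube-system object names:

# confirm the CNI DaemonSet is the unhealthy piece
kubectl -n kube-system get daemonset kindnet -o wide
kubectl -n kube-system get pods -l app=kindnet -o wide
# reproduce the pull failure directly on a node (use -n m02 for the second node)
minikube ssh
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4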

@priyawadhwa removed the triage/needs-information label Jan 27, 2021
@medyagh
Member

medyagh commented Mar 3, 2021

@LY1806620741 do you have the same problem without multi-node, i.e. on a single node?

@LY1806620741
Author

LY1806620741 commented Mar 6, 2021

@medyagh

On a single physical machine, minikube has this network problem only with multiple nodes; a single node works fine. The problem still existed as of my previous reply, but this issue has been open for a long time, so it may no longer be valid.
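A minimal way to confirm the cross-node symptom, as a sketch that assumes the default profile node names minikube and minikube-m02, that the busybox image can be pulled, and with ping-a/ping-b as throwaway names:

# pin one test pod to each node (spec.nodeName bypasses the scheduler)
kubectl run ping-a --image=busybox --restart=Never --overrides='{"spec":{"nodeName":"minikube"}}' -- sleep 3600
kubectl run ping-b --image=busybox --restart=Never --overrides='{"spec":{"nodeName":"minikube-m02"}}' -- sleep 3600
kubectl get pods -o wide                                # note ping-b's pod IP
kubectl exec ping-a -- ping -c 3 <pod IP of ping-b>

If the ping only fails when the two pods land on different nodes, the problem is in the CNI/route layer rather than in the application.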

@LY1806620741
Author

LY1806620741 commented Mar 10, 2021

I tried again today and the problem still exists. The current versions are:

[vagrant@control-plane ~]$ cat /etc/redhat-release
CentOS Linux release 8.0.1905 (Core) 

uname -a
Linux control-plane.minikube.internal 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

minikube version: v1.18.1
commit: 09ee84d530de4a92f00f1c5dbc34cead092b95bc

docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:03:45 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:02:21 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

@LY1806620741 reopened this Mar 10, 2021
@LY1806620741
Author

LY1806620741 commented Mar 22, 2021

Pod listing:

[vagrant@control-plane ~]$ kubectl get po -o wide
NAME                 READY   STATUS             RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
hue-postgres-9mw74   1/1     Running            0          8m36s   172.17.0.4   minikube-m02   <none>           <none>
hue-qsg6s            0/1     CrashLoopBackOff   5          8m36s   172.17.0.5   minikube-m02   <none>           <none>

Hue pod error log:

  File "/usr/share/hue/build/env/lib/python3.6/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "hue-postgres" to address: Temporary failure in name resolution

[22/Mar/2021 09:22:07 ] supervisor   INFO     Starting process /usr/share/hue/build/env/bin/hue kt_renewer
[22/Mar/2021 09:22:07 ] supervisor   INFO     Starting process /usr/share/hue/build/env/bin/hue runcpserver
[22/Mar/2021 09:22:07 ] supervisor   INFO     Started proceses (pid 16) /usr/share/hue/build/env/bin/hue kt_renewer
[22/Mar/2021 09:22:07 ] supervisor   INFO     Started proceses (pid 18) /usr/share/hue/build/env/bin/hue runcpserver
[22/Mar/2021 09:22:07 ] settings     INFO     Welcome to Hue 4.9.0
[22/Mar/2021 09:22:07 ] settings     INFO     Welcome to Hue 4.9.0

Service listing:

[vagrant@control-plane ~]$ kubectl get svc -o wide
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
hue            NodePort    10.96.144.0      <none>        8888:30473/TCP   9m12s   app=hue
hue-postgres   NodePort    10.107.249.180   <none>        5432:31096/TCP   9m12s   app=hue-postgres
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP          11d     <none>
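
Because the error above is a name-resolution failure rather than a connection refusal, it is worth checking whether the service name resolves from a pod at all and whether the service has endpoints. A minimal sketch, assuming a throwaway busybox pod (the pod name and image are illustrative):

kubectl run dnstest --image=busybox --restart=Never --rm -it -- nslookup hue-postgres.default.svc.cluster.local
kubectl get endpoints hue-postgres   # the service should list the postgres pod IP here

If nslookup times out from pods on one node but works from pods on the node running CoreDNS, the cross-node pod network is the likely cause.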

SSH into m02:

[vagrant@control-plane ~]$ minikube ssh -n m02
docker@minikube-m02:~$ docker version
Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:21 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.3
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       46229ca
  Built:            Fri Jan 29 14:31:32 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker@minikube-m02:~$ docker inspect k8s_POD_hue-qsg6s_default_c0cd8244-c5fb-4a55-af39-9d1698bc593f_0         
[
    {
        "Id": "ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf",
        "Created": "2021-03-22T09:12:17.843694918Z",
        "Path": "/pause",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 31106,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-03-22T09:12:18.413175581Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
        "ResolvConfPath": "/var/lib/docker/containers/ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf/hostname",
        "HostsPath": "/var/lib/docker/containers/ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf/hosts",
        "LogPath": "/var/lib/docker/containers/ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf/ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf-json.log",
        "Name": "/k8s_POD_hue-qsg6s_default_c0cd8244-c5fb-4a55-af39-9d1698bc593f_0",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": -998,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "no-new-privileges"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "/kubepods/besteffort/podc0cd8244-c5fb-4a55-af39-9d1698bc593f",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [
                {
                    "Name": "nofile",
                    "Hard": 1048576,
                    "Soft": 1048576
                }
            ],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/fda243759a7d8c2b4bc6257eaf6850f0a2bca9843031cbdfea9418fb851d3b31-init/diff:/var/lib/docker/overlay2/123f4d043f1a497d2475251fe3f4226bb7842e4f35d2b9ade98b96a5bf30ee03/diff",                "MergedDir": "/var/lib/docker/overlay2/fda243759a7d8c2b4bc6257eaf6850f0a2bca9843031cbdfea9418fb851d3b31/merged",
                "UpperDir": "/var/lib/docker/overlay2/fda243759a7d8c2b4bc6257eaf6850f0a2bca9843031cbdfea9418fb851d3b31/diff",
                "WorkDir": "/var/lib/docker/overlay2/fda243759a7d8c2b4bc6257eaf6850f0a2bca9843031cbdfea9418fb851d3b31/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "hue-qsg6s",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": [
                "/pause"
            ],
            "OnBuild": null,
            "Labels": {
                "annotation.kubernetes.io/config.seen": "2021-03-22T09:12:17.536216852Z",
                "annotation.kubernetes.io/config.source": "api",
                "app": "hue",
                "io.kubernetes.container.name": "POD",
                "io.kubernetes.docker.type": "podsandbox",
                "io.kubernetes.pod.name": "hue-qsg6s",
                "io.kubernetes.pod.namespace": "default",
                "io.kubernetes.pod.uid": "c0cd8244-c5fb-4a55-af39-9d1698bc593f"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "099a919ce725e15bc56e4cb8ee22c1c2c1896d383a578246ef87de020280a4e2",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/099a919ce725",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "28617879bfecb26f8fe6a521b80a286c647548147fdd1162b0875a8482aedaea",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.5",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:05",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "c7e762d0b12e6e79c5f20d77b5d275ceea581993d564171e66985c60754f48d4",
                    "EndpointID": "28617879bfecb26f8fe6a521b80a286c647548147fdd1162b0875a8482aedaea",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.5",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:05",
                    "DriverOpts": null
                }
            }
        }
    }
]

docker@minikube-m02:~$ docker network inspect c7e762d0b12e
[
    {
        "Name": "bridge",
        "Id": "c7e762d0b12e6e79c5f20d77b5d275ceea581993d564171e66985c60754f48d4",
        "Created": "2021-03-22T07:28:18.817351697Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "59e6fe82e3dd0906b8fa4aeab4722e572e8ce587082906395968066ca0bdc22a": {
                "Name": "k8s_POD_dashboard-metrics-scraper-8554f74445-v82xx_kubernetes-dashboard_91a588ff-a13a-4313-9c57-04f98d689087_0",
                "EndpointID": "bf32da1b5eb21af4d4a16442d7a0cdfbe913937368872ef0bf27de7b45ee8975",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "ad0d19e256db05eda57413b33b21c182e71d4c5621ca268224206d4bcc971adf": {
                "Name": "k8s_POD_hue-qsg6s_default_c0cd8244-c5fb-4a55-af39-9d1698bc593f_0",
                "EndpointID": "28617879bfecb26f8fe6a521b80a286c647548147fdd1162b0875a8482aedaea",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "172.17.0.5/16",
                "IPv6Address": ""
            },
            "d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668": {
                "Name": "k8s_POD_hue-postgres-9mw74_default_9565aebb-f1e1-493e-a2b5-7b460c07d815_0",
                "EndpointID": "c7755d4563116d20b4048bb178ca2ed8364ce85efe3e3e3d42c31f9ce6993337",
                "MacAddress": "02:42:ac:11:00:04",
                "IPv4Address": "172.17.0.4/16",
                "IPv6Address": ""
            },
            "edc390add0a882db69285ac188a1d8aa91084817b3896e1178d2cee82f723fbd": {
                "Name": "k8s_POD_kubernetes-dashboard-6c87f58d7c-fvr8x_kubernetes-dashboard_7678ae90-5d3b-4f0e-b44c-37edb79fb3aa_0",
                "EndpointID": "71bc2274bba8d7d8a7923ed1601e01dde655b89667272f346742ee89a8b36f40",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

docker@minikube-m02:~$ docker inspect k8s_POD_hue-postgres-9mw74_default_9565aebb-f1e1-493e-a2b5-7b460c07d815_0           
[
    {
        "Id": "d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668",
        "Created": "2021-03-22T09:12:17.840285442Z",
        "Path": "/pause",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 31063,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-03-22T09:12:18.41211366Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
        "ResolvConfPath": "/var/lib/docker/containers/d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668/hostname",
        "HostsPath": "/var/lib/docker/containers/d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668/hosts",
        "LogPath": "/var/lib/docker/containers/d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668/d6f78c2899b60783ef3e3bcd9140967e35a84a37076160448f1b9ee90caca668-json.log",
        "Name": "/k8s_POD_hue-postgres-9mw74_default_9565aebb-f1e1-493e-a2b5-7b460c07d815_0",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": -998,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "no-new-privileges"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "/kubepods/besteffort/pod9565aebb-f1e1-493e-a2b5-7b460c07d815",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [
                {
                    "Name": "nofile",
                    "Hard": 1048576,
                    "Soft": 1048576
                }
            ],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/540b3c10e698d9eff9e74fe3d0cf8e9f741d9f21d0460372a25adf26ddcfeaea-init/diff:/var/lib/docker/overlay2/123f4d043f1a497d2475251fe3f4226bb7842e4f35d2b9ade98b96a5bf30ee03/diff",                "MergedDir": "/var/lib/docker/overlay2/540b3c10e698d9eff9e74fe3d0cf8e9f741d9f21d0460372a25adf26ddcfeaea/merged",
                "UpperDir": "/var/lib/docker/overlay2/540b3c10e698d9eff9e74fe3d0cf8e9f741d9f21d0460372a25adf26ddcfeaea/diff",
                "WorkDir": "/var/lib/docker/overlay2/540b3c10e698d9eff9e74fe3d0cf8e9f741d9f21d0460372a25adf26ddcfeaea/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "hue-postgres-9mw74",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": [
                "/pause"
            ],
            "OnBuild": null,
            "Labels": {
                "annotation.kubernetes.io/config.seen": "2021-03-22T09:12:17.531357097Z",
                "annotation.kubernetes.io/config.source": "api",
                "app": "hue-postgres",
                "io.kubernetes.container.name": "POD",
                "io.kubernetes.docker.type": "podsandbox",
                "io.kubernetes.pod.name": "hue-postgres-9mw74",
                "io.kubernetes.pod.namespace": "default",
                "io.kubernetes.pod.uid": "9565aebb-f1e1-493e-a2b5-7b460c07d815"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "d356730c0b450ba9a47d69ab68bf43c803fc45785c26564c6ed30f986baf4915",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/d356730c0b45",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "c7755d4563116d20b4048bb178ca2ed8364ce85efe3e3e3d42c31f9ce6993337",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.4",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:04",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "c7e762d0b12e6e79c5f20d77b5d275ceea581993d564171e66985c60754f48d4",
                    "EndpointID": "c7755d4563116d20b4048bb178ca2ed8364ce85efe3e3e3d42c31f9ce6993337",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.4",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:04",
                    "DriverOpts": null
                }
            }
        }
    }
]
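
Both pods sit on the node-local docker0 bridge (172.17.0.0/16) rather than on a cluster-wide pod network, so each node hands out overlapping 172.17.0.x addresses that the other node has no route to. A quick way to confirm the overlap from the host, as a sketch (node selection follows the minikube ssh -n usage above):

minikube ssh "ip -4 addr show docker0"
minikube ssh -n m02 "ip -4 addr show docker0"
# Both nodes reporting 172.17.0.1/16 means pod IPs are node-local and collide across nodes.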

@LY1806620741
Author

The bridge network has no aliases set (the Aliases field in the inspect output is null); the pods only get node-local docker bridge addresses.
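
If the root cause is that the pods are on the default node-local bridge, one possible workaround is to recreate the cluster with an explicit CNI so pod IPs come from a routable cluster-wide range. This is only a sketch (the choice of flannel is an assumption; any CNI supported by minikube should do):

minikube delete
minikube start --nodes=2 --cni=flannel
kubectl get pods -A -o wide   # pod IPs should now come from the CNI pod CIDR instead of 172.17.0.0/16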

@spowelljr added the long-term-support label and removed the triage/long-term-support label May 19, 2021
@sharifelgamal
Collaborator

minikube 1.22 should have a few fixes for multi-node networking; could you check and see if it's still an issue?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Oct 19, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 18, 2021
@spowelljr
Member

Hi @LY1806620741, we haven't heard back from you; if you have a chance, please try this again with the latest version of minikube. Feel free to reopen this issue if it's not fixed, thanks!
