minikube multi-node cluster: pod-to-pod network between nodes is not connected #9921
Comments
Hey @LY1806620741 thank you for opening this issue. I believe it may have been fixed by #9875. Could you please try upgrading to our latest release of minikube, v1.16.0, to see if that resolves this issue? Latest release: https://github.com/kubernetes/minikube/releases/tag/v1.16.0
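For reference, a minimal upgrade path on a Linux host could look like the following (a sketch, assuming the docker driver and a three-node cluster as in this report; recreating the cluster is assumed to be necessary so the CNI change from #9875 actually takes effect):

curl -LO https://github.com/kubernetes/minikube/releases/download/v1.16.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube delete --all                      # drop the old cluster and its network config
minikube start --driver=docker --nodes=3   # recreate the three-node cluster on v1.16.0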
Hey @priyawadhwa, minikube v1.16.0 still has this problem; the hue pod still fails to resolve the hue-postgres host name:

conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: could not translate host name "hue-postgres" to address: Temporary failure in name resolution

Other info (a quick cross-node check is sketched after the node list below):

[vagrant@control-plane ~]$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1
[vagrant@control-plane ~]$ minikube node list
minikube 192.168.49.2
minikube-m02 192.168.49.3
minikube-m03 192.168.49.4
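To confirm the failure is specific to cross-node traffic, a check along these lines can be run (a sketch; the dnstest pod name and busybox image are illustrative, and <hue-postgres-pod-ip> is a placeholder for the address shown by kubectl get pods -o wide):

kubectl get pods -o wide                   # hue-s22bs runs on minikube-m02, hue-postgres-9ghk6 on minikube-m03
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup hue-postgres
kubectl exec hue-s22bs -- ping -c 3 <hue-postgres-pod-ip>   # bypasses DNS; only works if the hue image ships ping

If nslookup fails the same way as the OperationalError above, and ping to the pod IP also fails, that would point at pod-to-pod routing between nodes rather than at CoreDNS itself.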
* ==> Docker <==
* -- Logs begin at Fri 2020-12-25 08:22:51 UTC, end at Fri 2020-12-25 10:20:28 UTC. --
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.399834121Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.410239527Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.412648867Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.412670822Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.412767308Z" level=info msg="Loading containers: start."
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.739587406Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.783568983Z" level=info msg="Loading containers: done."
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.801735982Z" level=info msg="Docker daemon" commit=eeddea2 graphdriver(s)=overlay2 version=20.10.0
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.801781008Z" level=info msg="Daemon has completed initialization"
* Dec 25 08:22:55 minikube systemd[1]: Started Docker Application Container Engine.
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.814374225Z" level=info msg="API listen on [::]:2376"
* Dec 25 08:22:55 minikube dockerd[415]: time="2020-12-25T08:22:55.823237928Z" level=info msg="API listen on /var/run/docker.sock"
* Dec 25 08:22:56 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:57:27 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:59:03 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:59:13 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 08:59:45 minikube dockerd[415]: time="2020-12-25T08:59:45.946407570Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 08:59:45 minikube dockerd[415]: time="2020-12-25T08:59:45.946430521Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:00:01 minikube dockerd[415]: time="2020-12-25T09:00:01.324239615Z" level=info msg="ignoring event" container=63b0555973b1b7a2cc1703888a21ffe3764abdee705f7106ff20da04fe63d6b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Dec 25 09:00:02 minikube dockerd[415]: time="2020-12-25T09:00:02.007922572Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:00:02 minikube dockerd[415]: time="2020-12-25T09:00:02.007947793Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:00:29 minikube dockerd[415]: time="2020-12-25T09:00:29.879231843Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:00:29 minikube dockerd[415]: time="2020-12-25T09:00:29.879274086Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:01:19 minikube dockerd[415]: time="2020-12-25T09:01:19.873499847Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:01:19 minikube dockerd[415]: time="2020-12-25T09:01:19.873532924Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:02:49 minikube dockerd[415]: time="2020-12-25T09:02:49.929314063Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:02:49 minikube dockerd[415]: time="2020-12-25T09:02:49.929363083Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:05:34 minikube dockerd[415]: time="2020-12-25T09:05:34.889785486Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:05:34 minikube dockerd[415]: time="2020-12-25T09:05:34.889814338Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:10:39 minikube dockerd[415]: time="2020-12-25T09:10:39.094206128Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:10:39 minikube dockerd[415]: time="2020-12-25T09:10:39.094235252Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:15:39 minikube dockerd[415]: time="2020-12-25T09:15:39.875274071Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:15:39 minikube dockerd[415]: time="2020-12-25T09:15:39.875323655Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:20:40 minikube dockerd[415]: time="2020-12-25T09:20:40.865380838Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:20:40 minikube dockerd[415]: time="2020-12-25T09:20:40.865421573Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:25:43 minikube dockerd[415]: time="2020-12-25T09:25:43.875940786Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:25:43 minikube dockerd[415]: time="2020-12-25T09:25:43.876042893Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:30:55 minikube dockerd[415]: time="2020-12-25T09:30:55.182845140Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:30:55 minikube dockerd[415]: time="2020-12-25T09:30:55.183273814Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:36:11 minikube dockerd[415]: time="2020-12-25T09:36:11.651939428Z" level=warning msg="Error getting v2 registry: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: net/http: TLS handshake timeout"
* Dec 25 09:36:11 minikube dockerd[415]: time="2020-12-25T09:36:11.651977801Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: net/http: TLS handshake timeout"
* Dec 25 09:36:11 minikube dockerd[415]: time="2020-12-25T09:36:11.654835887Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: net/http: TLS handshake timeout"
* Dec 25 09:37:02 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 09:41:18 minikube dockerd[415]: time="2020-12-25T09:41:18.992311067Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:41:18 minikube dockerd[415]: time="2020-12-25T09:41:18.992487171Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:46:27 minikube dockerd[415]: time="2020-12-25T09:46:27.776277668Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:46:27 minikube dockerd[415]: time="2020-12-25T09:46:27.776305080Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:46:45 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
* Dec 25 09:51:35 minikube dockerd[415]: time="2020-12-25T09:51:35.880871726Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:51:35 minikube dockerd[415]: time="2020-12-25T09:51:35.880982509Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 09:56:55 minikube dockerd[415]: time="2020-12-25T09:56:55.807288459Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 09:56:55 minikube dockerd[415]: time="2020-12-25T09:56:55.807346164Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:02:04 minikube dockerd[415]: time="2020-12-25T10:02:04.952718088Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:02:04 minikube dockerd[415]: time="2020-12-25T10:02:04.952746820Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:07:13 minikube dockerd[415]: time="2020-12-25T10:07:13.498153562Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:07:13 minikube dockerd[415]: time="2020-12-25T10:07:13.498255638Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:12:17 minikube dockerd[415]: time="2020-12-25T10:12:17.096423306Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:12:17 minikube dockerd[415]: time="2020-12-25T10:12:17.096451834Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
* Dec 25 10:17:24 minikube dockerd[415]: time="2020-12-25T10:17:24.899367546Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
* Dec 25 10:17:24 minikube dockerd[415]: time="2020-12-25T10:17:24.899395713Z" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 1879fc1818339 85069258b98ac About an hour ago Running storage-provisioner 1 943e200199bd1
* d31f0f45948cb 9a07b5b4bfac0 About an hour ago Running kubernetes-dashboard 0 071ffe6aa0955
* a718845cc380f 86262685d9abb About an hour ago Running dashboard-metrics-scraper 0 754a072dd87ad
* 5a7d6a66dfebc bfe3a36ebd252 About an hour ago Running coredns 0 d81480de78a98
* 63b0555973b1b 85069258b98ac About an hour ago Exited storage-provisioner 0 943e200199bd1
* cd4536fe11fd5 10cc881966cfd About an hour ago Running kube-proxy 0 55d7c9e3ade3f
* 4cd0e8f1c3535 3138b6e3d4712 About an hour ago Running kube-scheduler 0 14d045b09f404
* 2e6b808290108 b9fa1895dcaa6 About an hour ago Running kube-controller-manager 0 87fa40cbe5af7
* 5fc501398d4e4 ca9843d3b5454 About an hour ago Running kube-apiserver 0 319691f086281
* cdc94c530673a 0369cf4303ffd About an hour ago Running etcd 0 becef1905a44c
*
* ==> coredns [5a7d6a66dfeb] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
* CoreDNS-1.7.0
* linux/amd64, go1.14.4, f59c03d
*
* ==> describe nodes <==
* Name: minikube
* Roles: control-plane,master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube
* kubernetes.io/os=linux
* minikube.k8s.io/commit=9f1e482427589ff8451c4723b6ba53bb9742fbb1
* minikube.k8s.io/name=minikube
* minikube.k8s.io/updated_at=2020_12_25T08_59_14_0700
* minikube.k8s.io/version=v1.16.0
* node-role.kubernetes.io/control-plane=
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Fri, 25 Dec 2020 08:59:11 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: minikube
* AcquireTime: <unset>
* RenewTime: Fri, 25 Dec 2020 10:20:20 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Fri, 25 Dec 2020 10:19:50 +0000 Fri, 25 Dec 2020 08:59:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Fri, 25 Dec 2020 10:19:50 +0000 Fri, 25 Dec 2020 08:59:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Fri, 25 Dec 2020 10:19:50 +0000 Fri, 25 Dec 2020 08:59:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Fri, 25 Dec 2020 10:19:50 +0000 Fri, 25 Dec 2020 08:59:24 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.49.2
* Hostname: minikube
* Capacity:
* cpu: 4
* ephemeral-storage: 52417516Ki
* hugepages-2Mi: 0
* memory: 4035080Ki
* pods: 110
* Allocatable:
* cpu: 4
* ephemeral-storage: 52417516Ki
* hugepages-2Mi: 0
* memory: 4035080Ki
* pods: 110
* System Info:
* Machine ID: 553cd13426dc4769a8829227ba19e489
* System UUID: fa536e36-071b-4889-b289-f0922b238888
* Boot ID: b0451519-dcbb-4fc9-9cc2-3b7811ecdd5a
* Kernel Version: 4.18.0-80.el8.x86_64
* OS Image: Ubuntu 20.04.1 LTS
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://20.10.0
* Kubelet Version: v1.20.0
* Kube-Proxy Version: v1.20.0
* PodCIDR: 10.244.0.0/24
* PodCIDRs: 10.244.0.0/24
* Non-terminated Pods: (10 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-54d67798b7-kgncc 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 80m
* kube-system etcd-minikube 100m (2%) 0 (0%) 100Mi (2%) 0 (0%) 81m
* kube-system kindnet-r925s 100m (2%) 100m (2%) 50Mi (1%) 50Mi (1%) 80m
* kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 81m
* kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 81m
* kube-system kube-proxy-wq5bt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80m
* kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 81m
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 81m
* kubernetes-dashboard dashboard-metrics-scraper-c85578d8-26mkb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80m
* kubernetes-dashboard kubernetes-dashboard-7db476d994-dcrqf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80m
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 850m (21%) 100m (2%)
* memory 220Mi (5%) 220Mi (5%)
* ephemeral-storage 100Mi (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events: <none>
*
*
* Name: minikube-m02
* Roles: <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube-m02
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Fri, 25 Dec 2020 08:59:39 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: minikube-m02
* AcquireTime: <unset>
* RenewTime: Fri, 25 Dec 2020 10:20:20 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Fri, 25 Dec 2020 10:19:51 +0000 Fri, 25 Dec 2020 09:47:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Fri, 25 Dec 2020 10:19:51 +0000 Fri, 25 Dec 2020 09:47:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Fri, 25 Dec 2020 10:19:51 +0000 Fri, 25 Dec 2020 09:47:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Fri, 25 Dec 2020 10:19:51 +0000 Fri, 25 Dec 2020 09:47:42 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.49.3
* Hostname: minikube-m02
* Capacity:
* cpu: 4
* ephemeral-storage: 52417516Ki
* hugepages-2Mi: 0
* memory: 4035080Ki
* pods: 110
* Allocatable:
* cpu: 4
* ephemeral-storage: 52417516Ki
* hugepages-2Mi: 0
* memory: 4035080Ki
* pods: 110
* System Info:
* Machine ID: 1a5bc7f3d2e845b4b6edadec7dec31fe
* System UUID: 75a80ab2-1a8d-417e-84d1-cfea07407f53
* Boot ID: b0451519-dcbb-4fc9-9cc2-3b7811ecdd5a
* Kernel Version: 4.18.0-80.el8.x86_64
* OS Image: Ubuntu 20.04.1 LTS
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://20.10.0
* Kubelet Version: v1.20.0
* Kube-Proxy Version: v1.20.0
* PodCIDR: 10.244.1.0/24
* PodCIDRs: 10.244.1.0/24
* Non-terminated Pods: (3 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* default hue-s22bs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m10s
* kube-system kindnet-mxjdb 100m (2%) 100m (2%) 50Mi (1%) 50Mi (1%) 80m
* kube-system kube-proxy-74bg6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80m
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (2%) 100m (2%)
* memory 50Mi (1%) 50Mi (1%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Warning readOnlySysFS 43m kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
* Normal Starting 43m kube-proxy Starting kube-proxy.
* Normal Starting 42m kubelet Starting kubelet.
* Normal NodeAllocatableEnforced 42m kubelet Updated Node Allocatable limit across pods
* Normal NodeHasSufficientMemory 42m (x2 over 42m) kubelet Node minikube-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 42m (x2 over 42m) kubelet Node minikube-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 42m (x2 over 42m) kubelet Node minikube-m02 status is now: NodeHasSufficientPID
* Warning readOnlySysFS 42m kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
* Normal Starting 42m kube-proxy Starting kube-proxy.
* Normal NodeReady 42m kubelet Node minikube-m02 status is now: NodeReady
* Normal Starting 32m kubelet Starting kubelet.
* Normal NodeAllocatableEnforced 32m kubelet Updated Node Allocatable limit across pods
* Normal NodeHasSufficientMemory 32m (x2 over 32m) kubelet Node minikube-m02 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 32m (x2 over 32m) kubelet Node minikube-m02 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 32m (x2 over 32m) kubelet Node minikube-m02 status is now: NodeHasSufficientPID
* Warning readOnlySysFS 32m kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
* Normal Starting 32m kube-proxy Starting kube-proxy.
* Normal NodeReady 32m kubelet Node minikube-m02 status is now: NodeReady
*
*
* Name: minikube-m03
* Roles: <none>
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube-m03
* kubernetes.io/os=linux
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Fri, 25 Dec 2020 10:12:00 +0000
* Taints: <none>
* Unschedulable: false
* Lease:
* HolderIdentity: minikube-m03
* AcquireTime: <unset>
* RenewTime: Fri, 25 Dec 2020 10:20:20 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Fri, 25 Dec 2020 10:18:02 +0000 Fri, 25 Dec 2020 10:12:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Fri, 25 Dec 2020 10:18:02 +0000 Fri, 25 Dec 2020 10:12:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Fri, 25 Dec 2020 10:18:02 +0000 Fri, 25 Dec 2020 10:12:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Fri, 25 Dec 2020 10:18:02 +0000 Fri, 25 Dec 2020 10:12:10 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.49.4
* Hostname: minikube-m03
* Capacity:
* cpu: 4
* ephemeral-storage: 52417516Ki
* hugepages-2Mi: 0
* memory: 4035080Ki
* pods: 110
* Allocatable:
* cpu: 4
* ephemeral-storage: 52417516Ki
* hugepages-2Mi: 0
* memory: 4035080Ki
* pods: 110
* System Info:
* Machine ID: fddba6aab8d14415add634756904efc6
* System UUID: 0ceabf4f-6998-4dc5-a5c6-c5a66e622d21
* Boot ID: b0451519-dcbb-4fc9-9cc2-3b7811ecdd5a
* Kernel Version: 4.18.0-80.el8.x86_64
* OS Image: Ubuntu 20.04.1 LTS
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://20.10.0
* Kubelet Version: v1.20.0
* Kube-Proxy Version: v1.20.0
* PodCIDR: 10.244.3.0/24
* PodCIDRs: 10.244.3.0/24
* Non-terminated Pods: (3 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* default hue-postgres-9ghk6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m10s
* kube-system kindnet-j6tnw 100m (2%) 100m (2%) 50Mi (1%) 50Mi (1%) 8m27s
* kube-system kube-proxy-rpsw7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m27s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 100m (2%) 100m (2%)
* memory 50Mi (1%) 50Mi (1%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 8m28s kubelet Starting kubelet.
* Normal NodeHasSufficientMemory 8m28s (x2 over 8m28s) kubelet Node minikube-m03 status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 8m28s (x2 over 8m28s) kubelet Node minikube-m03 status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 8m28s (x2 over 8m28s) kubelet Node minikube-m03 status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 8m28s kubelet Updated Node Allocatable limit across pods
* Normal NodeReady 8m18s kubelet Node minikube-m03 status is now: NodeReady
* Warning readOnlySysFS 8m10s kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
* Normal Starting 8m10s kube-proxy Starting kube-proxy.
*
* ==> dmesg <==
* [Dec25 07:42] NOTE: The elevator= kernel parameter is deprecated.
* [ +0.000000] APIC calibration not consistent with PM-Timer: 145ms instead of 100ms
* [ +0.026064] #2
* [ +0.002993] #3
* [ +0.109949] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +2.881559] e1000: E1000 MODULE IS NOT SUPPORTED
* [ +1.452254] systemd: 18 output lines suppressed due to ratelimiting
* [ +7.140587] snd_intel8x0 0000:00:05.0: measure - unreliable DMA position..
*
* ==> etcd [cdc94c530673] <==
* 2020-12-25 10:11:50.507921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:11:58.235685 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (397.815116ms) to execute
* 2020-12-25 10:11:58.505268 W | etcdserver: request "header:<ID:8128001827527213568 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/minikube-m02\" mod_revision:4565 > success:<request_put:<key:\"/registry/leases/kube-node-lease/minikube-m02\" value_size:548 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/minikube-m02\" > >>" with result "size:16" took too long (129.988925ms) to execute
* 2020-12-25 10:11:58.505504 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (266.596888ms) to execute
* 2020-12-25 10:11:58.505623 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (160.936661ms) to execute
* 2020-12-25 10:12:00.517090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:10.508145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:20.507996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:30.517234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:40.508684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:12:50.508192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:00.508482 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:10.508356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:20.508299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:30.509289 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:40.510232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:13:50.508506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:00.508645 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:06.518293 I | mvcc: store.index: compact 4429
* 2020-12-25 10:14:06.526226 I | mvcc: finished scheduled compaction at 4429 (took 7.647619ms)
* 2020-12-25 10:14:10.508572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:20.509112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:30.509072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:40.509455 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:14:50.507929 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:00.509798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:10.507844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:20.508650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:30.510461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:40.509901 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:15:50.510257 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:00.508752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:10.517386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:20.510447 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:30.508504 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:40.508193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:16:50.508450 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:00.508625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:10.509423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:20.509176 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:30.508004 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:40.508530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:17:50.508636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:00.508441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:10.508346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:20.509295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:30.508425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:40.511118 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:18:50.507949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:00.507948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:06.528377 I | mvcc: store.index: compact 4843
* 2020-12-25 10:19:06.536228 I | mvcc: finished scheduled compaction at 4843 (took 7.631845ms)
* 2020-12-25 10:19:10.508707 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:20.508200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:30.508700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:40.508915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:19:50.511160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:20:00.508288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:20:10.508091 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-12-25 10:20:20.507948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
* 10:20:28 up 2:38, 0 users, load average: 0.52, 0.56, 0.66
* Linux minikube 4.18.0-80.el8.x86_64 #1 SMP Tue Jun 4 09:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04.1 LTS"
*
* ==> kube-apiserver [5fc501398d4e] <==
* I1225 10:10:21.157120 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:10:21.157150 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:10:21.157185 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:10:53.923410 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:10:53.923472 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:10:53.923480 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:11:24.460670 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:11:24.460700 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:11:24.460706 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:11:55.286710 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:11:55.286741 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:11:55.286747 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:11:58.506206 1 trace.go:205] Trace[805863097]: "GuaranteedUpdate etcd3" type:*coordination.Lease (25-Dec-2020 10:11:57.837) (total time: 668ms):
* Trace[805863097]: ---"Transaction committed" 668ms (10:11:00.506)
* Trace[805863097]: [668.479091ms] [668.479091ms] END
* I1225 10:11:58.506273 1 trace.go:205] Trace[562181312]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube-m02,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.49.3 (25-Dec-2020 10:11:57.837) (total time: 668ms):
* Trace[562181312]: ---"Object stored in database" 668ms (10:11:00.506)
* Trace[562181312]: [668.664945ms] [668.664945ms] END
* I1225 10:12:35.337963 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:12:35.338039 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:12:35.338051 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:13:18.650036 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:13:18.650065 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:13:18.650071 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:13:56.419747 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:13:56.419774 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:13:56.419780 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:14:29.543703 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:14:29.543748 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:14:29.543757 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:15:06.718671 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:15:06.718701 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:15:06.718707 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:15:43.130799 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:15:43.130848 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:15:43.130856 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:16:13.692826 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:16:13.692853 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:16:13.692858 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:16:45.425059 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:16:45.425092 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:16:45.425098 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:17:20.726672 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:17:20.726731 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:17:20.726741 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:17:54.033414 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:17:54.033440 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:17:54.033446 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:18:28.538751 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:18:28.538782 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:18:28.538788 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:18:59.133650 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:18:59.133864 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:18:59.133897 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:19:39.575244 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:19:39.575280 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:19:39.575286 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I1225 10:20:23.189192 1 client.go:360] parsed scheme: "passthrough"
* I1225 10:20:23.189221 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
* I1225 10:20:23.189228 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
*
* ==> kube-controller-manager [2e6b80829010] <==
* I1225 08:59:30.198554 1 shared_informer.go:247] Caches are synced for resource quota
* I1225 08:59:30.200029 1 shared_informer.go:247] Caches are synced for disruption
* I1225 08:59:30.200040 1 disruption.go:339] Sending events to api server.
* I1225 08:59:30.306510 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
* I1225 08:59:30.606823 1 shared_informer.go:247] Caches are synced for garbage collector
* I1225 08:59:30.651046 1 shared_informer.go:247] Caches are synced for garbage collector
* I1225 08:59:30.651076 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* W1225 08:59:39.422661 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m02" does not exist
* I1225 08:59:39.431480 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-74bg6"
* I1225 08:59:39.431717 1 range_allocator.go:373] Set node minikube-m02 PodCIDR to [10.244.1.0/24]
* E1225 08:59:39.431887 1 range_allocator.go:361] Node minikube-m02 already has a CIDR allocated [10.244.1.0/24]. Releasing the new one.
* W1225 08:59:40.085173 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube-m02. Assuming now as a timestamp.
* I1225 08:59:40.085317 1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m02 event: Registered Node minikube-m02 in Controller"
* I1225 08:59:44.235608 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-r925s"
* I1225 08:59:44.244422 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mxjdb"
* E1225 08:59:44.271093 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2de57f33-1d62-4188-91f8-80a5050605fc", ResourceVersion:"491", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63744483584, loc:(*time.Location)(0x6f2f340)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018bf1c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018bf1e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018bf200), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018bf220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018bf2a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018bf2c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018bf2e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018bf320)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000eda540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e76b98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000bfb730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e9f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000e76be0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* I1225 08:59:46.255859 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c85578d8 to 1"
* I1225 08:59:46.262954 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* E1225 08:59:46.268131 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.268397 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-7db476d994 to 1"
* E1225 08:59:46.276627 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
* I1225 08:59:46.277310 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* I1225 08:59:46.277325 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
* [... the same FailedCreate sync errors repeat for both ReplicaSets from 08:59:46.280 through 08:59:46.303, until the kubernetes-dashboard ServiceAccount exists ...]
* I1225 08:59:46.340365 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c85578d8-26mkb"
* I1225 08:59:46.353941 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-7db476d994-dcrqf"
* I1225 09:01:15.757748 1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-t5hpt"
* I1225 09:01:15.771577 1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-mrh94"
* I1225 09:16:45.643240 1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-p8wdj"
* I1225 09:16:45.643255 1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-m9sxn"
* I1225 09:37:45.980140 1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node minikube-m02 status is now: NodeNotReady"
* I1225 09:37:45.989334 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-74bg6" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:37:45.997269 1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:38:01.006222 1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-postgres-p8wdj"
* I1225 09:38:01.006240 1 event.go:291] "Event occurred" object="default/hue-m9sxn" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-m9sxn"
* I1225 09:47:31.224977 1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node minikube-m02 status is now: NodeNotReady"
* I1225 09:47:31.228704 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-74bg6" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:47:31.234411 1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
* I1225 09:47:46.245269 1 event.go:291] "Event occurred" object="default/hue-postgres-p8wdj" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-postgres-p8wdj"
* I1225 09:47:46.245304 1 event.go:291] "Event occurred" object="default/hue-m9sxn" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/hue-m9sxn"
* W1225 10:12:00.954056 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m03" does not exist
* I1225 10:12:01.202509 1 range_allocator.go:373] Set node minikube-m03 PodCIDR to [10.244.3.0/24]
* I1225 10:12:01.238310 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rpsw7"
* I1225 10:12:01.243436 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j6tnw"
* E1225 10:12:01.259472       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set [... full v1.DaemonSet spec dump elided; it references image registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0 ...]: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* E1225 10:12:01.353919       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set [... full v1.DaemonSet spec dump elided; it references image registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4, POD_SUBNET 10.244.0.0/16, and label app=kindnet ...]: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
* W1225 10:12:01.753802 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube-m03. Assuming now as a timestamp.
* I1225 10:12:01.754013 1 event.go:291] "Event occurred" object="minikube-m03" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m03 event: Registered Node minikube-m03 in Controller"
* I1225 10:12:18.475999 1 event.go:291] "Event occurred" object="default/hue" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-s22bs"
* I1225 10:12:18.504479 1 event.go:291] "Event occurred" object="default/hue-postgres" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hue-postgres-9ghk6"
*
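Two signals in the controller-manager log above matter for this issue: minikube-m03 was assigned PodCIDR 10.244.3.0/24 (out of kindnet's POD_SUBNET 10.244.0.0/16), and minikube-m02 went NodeNotReady twice (09:37 and 09:47). A quick sanity check that every node actually holds a PodCIDR — a minimal sketch, assuming kubectl is pointed at this minikube cluster:

```shell
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
```

A node with an empty PODCIDR cannot have routes programmed for it by kindnet, and pods scheduled there would be unreachable from the other nodes.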
* ==> kube-proxy [cd4536fe11fd] <==
* I1225 08:59:31.387118 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
* I1225 08:59:31.387169 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
* W1225 08:59:31.558686 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
* I1225 08:59:31.558740 1 server_others.go:185] Using iptables Proxier.
* I1225 08:59:31.558940 1 server.go:650] Version: v1.20.0
* I1225 08:59:31.559209 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1225 08:59:31.559227 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E1225 08:59:31.559467 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I1225 08:59:31.559518 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1225 08:59:31.559538 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1225 08:59:31.559736 1 config.go:315] Starting service config controller
* I1225 08:59:31.559745 1 shared_informer.go:240] Waiting for caches to sync for service config
* I1225 08:59:31.559756 1 config.go:224] Starting endpoint slice config controller
* I1225 08:59:31.559758 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1225 08:59:31.660245 1 shared_informer.go:247] Caches are synced for endpoint slice config
* I1225 08:59:31.660310 1 shared_informer.go:247] Caches are synced for service config
*
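kube-proxy fell back to the iptables proxier because the `mode` field in its config is empty ("Unknown proxy mode \"\""). That fallback is normal for minikube; to double-check the mode each node's kube-proxy runs with, the file it loads from /var/lib/kube-proxy/config.conf comes from the kube-proxy ConfigMap — a sketch, assuming the default ConfigMap name in kube-system:

```shell
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.config\.conf}' | grep -E '^mode:'
```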
* ==> kube-scheduler [4cd0e8f1c353] <==
* I1225 08:59:06.855927 1 serving.go:331] Generated self-signed cert in-memory
* W1225 08:59:11.236091 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W1225 08:59:11.236122 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W1225 08:59:11.236130 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
* W1225 08:59:11.236137 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I1225 08:59:11.321735 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1225 08:59:11.321779 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1225 08:59:11.322086 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
* I1225 08:59:11.322119 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E1225 08:59:11.326265 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1225 08:59:11.326359 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1225 08:59:11.326421 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1225 08:59:11.326473 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1225 08:59:11.326574 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1225 08:59:11.326678 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1225 08:59:11.326694 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1225 08:59:11.326832 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1225 08:59:11.326892 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1225 08:59:11.327015 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1225 08:59:11.327101 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1225 08:59:11.343168 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1225 08:59:12.189683 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1225 08:59:12.205816 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1225 08:59:12.213164 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1225 08:59:12.290263 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1225 08:59:12.463141 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* I1225 08:59:14.721875 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I1225 10:12:01.425872 1 trace.go:205] Trace[405008256]: "Scheduling" namespace:kube-system,name:kindnet-j6tnw (25-Dec-2020 10:12:01.316) (total time: 102ms):
* Trace[405008256]: ---"Snapshotting scheduler cache and node infos done" 49ms (10:12:00.366)
* Trace[405008256]: ---"Computing predicates done" 53ms (10:12:00.419)
* Trace[405008256]: [102.57507ms] [102.57507ms] END
*
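The scheduler's `forbidden` errors all fall in the first seconds after startup, before its RBAC bindings were visible; the later "Caches are synced" line and the successful scheduling trace for kindnet-j6tnw show it recovered. The open question is whether that kindnet pod ever became Ready on minikube-m03 — a quick check (pod name taken from the trace above):

```shell
kubectl -n kube-system get pod kindnet-j6tnw -o wide
```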
* ==> kubelet <==
* -- Logs begin at Fri 2020-12-25 08:22:51 UTC, end at Fri 2020-12-25 10:20:29 UTC. --
* Dec 25 10:09:16 minikube kubelet[3023]: E1225 10:09:16.611546 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* [... the identical ImagePullBackOff entry for kindnet-cni repeats roughly every 12 seconds from 10:09:29 through 10:12:03 ...]
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098599 3023 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098622 3023 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098706    3023 kuberuntime_manager.go:829] container &Container{Name:kindnet-cni,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4, [... full container spec elided; ImagePullPolicy:IfNotPresent ...]} start failed in pod kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:12:17 minikube kubelet[3023]: E1225 10:12:17.098728 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
* [... the ImagePullBackOff entry repeats roughly every 12 seconds from 10:12:29 through 10:17:12 ...]
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906478 3023 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906506 3023 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906649    3023 kuberuntime_manager.go:829] container &Container{Name:kindnet-cni,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4, [... full container spec elided; ImagePullPolicy:IfNotPresent ...]} start failed in pod kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
* Dec 25 10:17:24 minikube kubelet[3023]: E1225 10:17:24.906674 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
* Dec 25 10:17:37 minikube kubelet[3023]: E1225 10:17:37.614648 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:17:51 minikube kubelet[3023]: E1225 10:17:51.611493 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:05 minikube kubelet[3023]: E1225 10:18:05.613256 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:16 minikube kubelet[3023]: E1225 10:18:16.611989 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:31 minikube kubelet[3023]: E1225 10:18:31.611938 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:18:46 minikube kubelet[3023]: E1225 10:18:46.612531 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:00 minikube kubelet[3023]: E1225 10:19:00.613019 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:12 minikube kubelet[3023]: E1225 10:19:12.616422 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:23 minikube kubelet[3023]: E1225 10:19:23.612637 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:35 minikube kubelet[3023]: E1225 10:19:35.613401 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:19:49 minikube kubelet[3023]: E1225 10:19:49.611071 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:20:02 minikube kubelet[3023]: E1225 10:20:02.615652 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:20:15 minikube kubelet[3023]: E1225 10:20:15.611303 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
* Dec 25 10:20:26 minikube kubelet[3023]: E1225 10:20:26.611506 3023 pod_workers.go:191] Error syncing pod 3b3159cb-3223-48e8-80c9-f82c01cf1df6 ("kindnet-r925s_kube-system(3b3159cb-3223-48e8-80c9-f82c01cf1df6)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4\""
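The repeated failures above come from pulling kindnetd out of a mirror path that does not host it publicly. A possible workaround (a sketch, not from the original thread; it assumes the upstream kindest/kindnetd:0.5.4 tag is reachable from the host) is to pull the image from its upstream repository, retag it to the name the kindnet DaemonSet references, and load it into the cluster:

# Pull kindnet from its upstream repository (assumes Docker Hub is reachable).
docker pull kindest/kindnetd:0.5.4

# Retag it to the mirror name the DaemonSet expects.
docker tag kindest/kindnetd:0.5.4 registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4

# Load the retagged image into the cluster nodes
# (on older minikube releases, `minikube cache add` plays the same role).
minikube image load registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:0.5.4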
*
* ==> kubernetes-dashboard [d31f0f45948c] <==
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 192.168.33.1:
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1:
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/systembanner request from 192.168.33.1:
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1:
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1:
* 2020/12/25 10:14:35 Getting list of namespaces
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1:
* 2020/12/25 10:14:35 Getting list of all services in the cluster
* 2020/12/25 10:14:35 [2020-12-25T10:14:35Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 192.168.33.1:
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/settings/pinner request from 192.168.33.1:
* 2020/12/25 10:14:37 Getting application global configuration
* 2020/12/25 10:14:37 Application configuration {"serverTime":1608891277199}
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/plugin/config request from 192.168.33.1:
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 192.168.33.1:
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1:
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/systembanner request from 192.168.33.1:
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 192.168.33.1:
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1:
* 2020/12/25 10:14:37 Getting list of namespaces
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1:
* 2020/12/25 10:14:37 Getting list of all services in the cluster
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:37 [2020-12-25T10:14:37Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1:
* 2020/12/25 10:14:42 Getting list of all services in the cluster
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1:
* 2020/12/25 10:14:42 Getting list of namespaces
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:42 [2020-12-25T10:14:42Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1:
* 2020/12/25 10:14:45 Getting list of namespaces
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1:
* 2020/12/25 10:14:45 Getting list of all services in the cluster
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:45 [2020-12-25T10:14:45Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1:
* 2020/12/25 10:14:58 Getting list of namespaces
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1:
* 2020/12/25 10:14:58 Getting list of all services in the cluster
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:14:58 [2020-12-25T10:14:58Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 192.168.33.1:
* 2020/12/25 10:15:04 Getting list of namespaces
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 192.168.33.1:
* 2020/12/25 10:15:04 Getting list of all services in the cluster
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Outcoming response to 192.168.33.1 with 200 status code
* 2020/12/25 10:15:04 [2020-12-25T10:15:04Z] Outcoming response to 192.168.33.1 with 200 status code
*
* ==> storage-provisioner [1879fc181833] <==
* I1225 09:00:01.799437 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
* I1225 09:00:01.809449 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
* I1225 09:00:01.809497 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
* I1225 09:00:01.817949 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
* I1225 09:00:01.818061 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_f155d0ed-e63d-46a1-8815-c3dd12638e20!
* I1225 09:00:01.818229 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb976407-2d34-4eb3-8b0f-47a300cfd32a", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_f155d0ed-e63d-46a1-8815-c3dd12638e20 became leader
* I1225 09:00:01.918529 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_f155d0ed-e63d-46a1-8815-c3dd12638e20!
*
* ==> storage-provisioner [63b0555973b1] <==
* I1225 08:59:31.300143 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
* F1225 09:00:01.302919 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
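The i/o timeout in the last line means this earlier storage-provisioner instance could not reach the API server's service VIP at 10.96.0.1:443. A quick way to probe that VIP from inside the cluster (my own sketch, not from the report; curlimages/curl is just an arbitrary image that ships curl) would be:

kubectl run api-probe --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k --max-time 5 https://10.96.0.1:443/version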
|
@LY1806620741 do you have the same problem without multi-node, i.e. on a single node? |
On a single physical machine, multi-node minikube has network problems; a single node works fine. The problem still existed as of my previous reply, but this issue has been open for a long time, which may mean it is no longer valid. |
I retried this today and the problem still exists. The current version is:
|
List the pods:
Get the error log:
List the services:
Shell into m02:
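The commands behind these four steps were presumably along these lines (a hypothetical reconstruction, not verbatim from the report):

kubectl get pods -o wide         # list pods and the nodes they are scheduled on
kubectl logs hue-j6vbx           # read the failing pod's error log
kubectl get svc                  # list services
minikube ssh -n minikube-m02     # open a shell on the second node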
|
The bridge network does not have aliases set. |
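If that comment refers to container aliases on the shared Docker bridge network (an assumption about the intended meaning; minikube is the default network name the Docker driver creates), one way to verify would be:

docker inspect minikube-m02 --format '{{json .NetworkSettings.Networks.minikube.Aliases}}'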
minikube 1.22 should have a few fixes for multinode networking; could you check and see if it's still an issue? |
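A clean retest on that release would look roughly like this (a sketch; --driver=docker is an assumption based on the 192.168.49.x node addresses earlier in the thread):

minikube delete --all
minikube start --nodes 3 --driver=docker
minikube node list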
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
Hi @LY1806620741, we haven't heard back from you; if you have a chance, please try this again with the latest version of minikube. Feel free to reopen this issue if it's not fixed, thanks! |
Commands needed to reproduce the problem:
helm install hue gethue/hue  # deploy any service
Full output of the failed command:
The command did not fail, but hue-postgres-kdfvf and hue-j6vbx run on two different nodes and cannot reach each other's TCP ports.
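A minimal cross-node check that reproduces the same symptom without helm could look like this (my own sketch; the image choices and nodeName pinning via --overrides are assumptions, and the node names come from minikube node list earlier in the thread):

# Pin a server pod to the second node and a client pod to the third.
kubectl run server --image=nginx --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"minikube-m02"}}'
kubectl run client --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"minikube-m03"}}' -- sleep 3600

# Fetch the server pod's IP and try to reach it from the client pod.
SERVER_IP=$(kubectl get pod server -o jsonpath='{.status.podIP}')
kubectl exec client -- wget -qO- -T 5 "http://$SERVER_IP"   # times out when cross-node traffic is broken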
Error reported by hue:
Pod information:
Output of the minikube logs command:
Operating system version used: Windows 10
Other: