Add warning for --network-plugin=cni (CNI has to be provided, see --cni) #8445
@AurelienGasser is there a reason you provide --network-plugin=cni? The docker runtime does not need a CNI and comes with one by default.
@AurelienGasser I also noticed: you are using a k8s version with upstream problems. Maybe try a newer k8s version?
@medyagh I get the same error if I use either flag, or both, with the same Kubernetes version.
Hm... I have not tested the docker driver with CNI. Could you please try with the KVM driver and see if you have the same issue?
@medyagh No issue with the KVM driver.
OK, thanks for confirming this. This is a bug and we should fix it! Could you please confirm something else: can you try the docker driver with the containerd runtime and see if it works? minikube start --container-runtime=containerd
@medmedchiheb No error with containerd
Thanks for confirming this, @AurelienGasser
@AurelienGasser I am curious, does this error happen in v1.12.0?
Hi @medyagh, the error persists in v1.12.0. |
The TL;DR here is that if you specify --network-plugin=cni, a CNI has to be provided. Please note that this flag is deprecated, but for some reason we hide the deprecation notice in the logs instead of showing it to the user. The new equivalent is --cni.
I can verify the behavior. Renaming the issue to capture the primary remaining issue.
Fixing the title because I was a bit in error here: the deprecated flag is actually --enable-default-cni, which should show a warning. --network-plugin isn't deprecated. That said, this could be unexpected behavior, so we should show a warning.
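For reference, the invocations discussed in this thread can be summarized as below. This is an illustrative sketch, not verified output: the flag names are taken from the issue title and comments, and the KVM driver name (kvm2) and the --driver flag spelling are assumptions that may differ across minikube versions.

```shell
# Fails: docker driver with --network-plugin=cni but no CNI provided;
# coredns pods stay in ContainerCreating forever.
minikube start --driver=docker --network-plugin=cni

# Works per the reporter: same flag with the KVM driver.
minikube start --driver=kvm2 --network-plugin=cni

# Also works: docker driver with the containerd runtime.
minikube start --driver=docker --container-runtime=containerd

# The suggested replacement is the newer --cni flag, e.g.:
minikube start --driver=docker --cni=bridge
```

The proposed fix is not to change this behavior but to surface a warning when --network-plugin=cni is passed without a CNI.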
Steps to reproduce the issue:

Full output of failed command:

coredns pods stay in the ContainerCreating state forever.

Describe coredns pod:

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0759f69a4af6e 4689081edb103 4 minutes ago Running storage-provisioner 1 c6817888babc9
81c45417ced3b 4689081edb103 4 minutes ago Exited storage-provisioner 0 c6817888babc9
16300c620458b 0ee1b8a3ebe00 4 minutes ago Running kube-proxy 0 e268b92dde244
51ec15cc07c28 b4d073a9efda2 4 minutes ago Running kube-scheduler 0 145f9d2603a98
a6b51cde8ba92 441835dd23012 4 minutes ago Running kube-controller-manager 0 f0d726bc763c3
edebbd0259832 fc838b21afbb7 4 minutes ago Running kube-apiserver 0 d2f0e05c11c3e
f2beb8df3fb53 b2756210eeabf 4 minutes ago Running etcd 0 df60f9799cf8d
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_06_10T15_51_39_0700
minikube.k8s.io/version=v1.11.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 10 Jun 2020 19:51:36 +0000
Taints:
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.3
Hostname: minikube
Capacity:
cpu: 16
ephemeral-storage: 321488636Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 31751Mi
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 321488636Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 31751Mi
pods: 110
System Info:
Machine ID: d6e045164ee9466eabde0715cb5c092f
System UUID: 68c5f04d-6006-4544-88ec-1fc44b5d524b
Boot ID: 64937377-cd63-4e9c-ac64-6a2113f9c4cc
Kernel Version: 5.4.0-33-generic
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.16.6-beta.0
Kube-Proxy Version: v1.16.6-beta.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system coredns-5644d7b6d9-bs8j2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4m34s
kube-system coredns-5644d7b6d9-dhzdb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4m34s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m33s
kube-system kube-apiserver-minikube 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3m36s
kube-system kube-controller-manager-minikube 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3m49s
kube-system kube-proxy-mbrr6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m34s
kube-system kube-scheduler-minikube 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3m39s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 750m (4%) 0 (0%)
memory 140Mi (0%) 340Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal Starting 4m58s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 4m57s (x8 over 4m58s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m57s (x8 over 4m58s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m57s (x7 over 4m58s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m57s kubelet, minikube Updated Node Allocatable limit across pods
Warning readOnlySysFS 4m33s kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
Normal Starting 4m33s kube-proxy, minikube Starting kube-proxy.
==> dmesg <==
[ +0.000001] mce: CPU3: Core temperature above threshold, cpu clock throttled (total events = 144050)
[ +0.000040] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU15: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU9: Package temperature above threshold, cpu clock throttled (total events = 437594)
[ +0.000000] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000002] mce: CPU8: Package temperature above threshold, cpu clock throttled (total events = 437595)
[ +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU10: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 437595)
[ +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU12: Package temperature above threshold, cpu clock throttled (total events = 437595)
[ +0.000001] mce: CPU13: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU14: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000000] mce: CPU11: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 437594)
[Jun 5 01:23] vboxdrv: 0000000000000000 VMMR0.r0
[ +0.049031] VBoxNetFlt: attached to 'vboxnet0' / 0a:00:27:00:00:00
[ +0.072080] vboxdrv: 0000000000000000 VBoxDDR0.r0
[ +0.021942] VMMR0InitVM: eflags=246 fKernelFeatures=0x0 (SUPKERNELFEATURES_SMAP=0)
[Jun 5 01:27] [drm:intel_pipe_update_end [i915]] ERROR Atomic update failure on pipe B (start=1459097 end=1459098) time 258 us, min 1590, max 1599, scanline start 1588, end 1612
[Jun 5 01:29] mce: CPU1: Core temperature above threshold, cpu clock throttled (total events = 45333)
[ +0.000001] mce: CPU9: Core temperature above threshold, cpu clock throttled (total events = 45333)
[ +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU14: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000000] mce: CPU9: Package temperature above threshold, cpu clock throttled (total events = 439855)
[ +0.000002] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 439854)
[ +0.000050] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU12: Package temperature above threshold, cpu clock throttled (total events = 439856)
[ +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU10: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000027] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000002] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 439856)
[ +0.000001] mce: CPU11: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU13: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU15: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU8: Package temperature above threshold, cpu clock throttled (total events = 439856)
[ +37.760471] vboxnetflt: 1641 out of 1674 packets were not sent (directed to host)
[Jun 5 01:32] [drm:intel_pipe_update_end [i915]] ERROR Atomic update failure on pipe B (start=1478655 end=1478656) time 232 us, min 1590, max 1599, scanline start 1579, end 1601
[Jun 5 01:36] mce: CPU3: Core temperature above threshold, cpu clock throttled (total events = 144987)
[ +0.000001] mce: CPU11: Core temperature above threshold, cpu clock throttled (total events = 144987)
[ +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 440378)
[ +0.000001] mce: CPU9: Package temperature above threshold, cpu clock throttled (total events = 440379)
[ +0.000000] mce: CPU11: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000054] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU8: Package temperature above threshold, cpu clock throttled (total events = 440380)
[ +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU12: Package temperature above threshold, cpu clock throttled (total events = 440380)
[ +0.000001] mce: CPU10: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU14: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU15: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU13: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 440380)
==> etcd [f2beb8df3fb5] <==
2020-06-10 19:51:33.670683 I | etcdmain: etcd Version: 3.3.15
2020-06-10 19:51:33.670711 I | etcdmain: Git SHA: 94745a4ee
2020-06-10 19:51:33.670713 I | etcdmain: Go Version: go1.12.9
2020-06-10 19:51:33.670715 I | etcdmain: Go OS/Arch: linux/amd64
2020-06-10 19:51:33.670717 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
2020-06-10 19:51:33.670873 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-06-10 19:51:33.671223 I | embed: listening for peers on https://172.17.0.3:2380
2020-06-10 19:51:33.671253 I | embed: listening for client requests on 127.0.0.1:2379
2020-06-10 19:51:33.671271 I | embed: listening for client requests on 172.17.0.3:2379
2020-06-10 19:51:33.674156 I | etcdserver: name = minikube
2020-06-10 19:51:33.674175 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-06-10 19:51:33.674181 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-06-10 19:51:33.674183 I | etcdserver: heartbeat = 100ms
2020-06-10 19:51:33.674185 I | etcdserver: election = 1000ms
2020-06-10 19:51:33.674188 I | etcdserver: snapshot count = 10000
2020-06-10 19:51:33.674201 I | etcdserver: advertise client URLs = https://172.17.0.3:2379
2020-06-10 19:51:33.674204 I | etcdserver: initial advertise peer URLs = https://172.17.0.3:2380
2020-06-10 19:51:33.674209 I | etcdserver: initial cluster = minikube=https://172.17.0.3:2380
2020-06-10 19:51:33.680788 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
2020-06-10 19:51:33.680813 I | raft: b273bc7741bcb020 became follower at term 0
2020-06-10 19:51:33.680821 I | raft: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-06-10 19:51:33.680825 I | raft: b273bc7741bcb020 became follower at term 1
2020-06-10 19:51:33.687231 W | auth: simple token is not cryptographically signed
2020-06-10 19:51:33.690399 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2020-06-10 19:51:33.690467 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-06-10 19:51:33.690693 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2020-06-10 19:51:33.691661 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-06-10 19:51:33.691781 I | embed: listening for metrics on http://127.0.0.1:2381
2020-06-10 19:51:33.691839 I | embed: listening for metrics on http://172.17.0.3:2381
2020-06-10 19:51:34.381109 I | raft: b273bc7741bcb020 is starting a new election at term 1
2020-06-10 19:51:34.381124 I | raft: b273bc7741bcb020 became candidate at term 2
2020-06-10 19:51:34.381160 I | raft: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
2020-06-10 19:51:34.381166 I | raft: b273bc7741bcb020 became leader at term 2
2020-06-10 19:51:34.381192 I | raft: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-06-10 19:51:34.381432 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-06-10 19:51:34.381440 I | embed: ready to serve client requests
2020-06-10 19:51:34.381480 I | embed: ready to serve client requests
2020-06-10 19:51:34.381965 I | etcdserver: setting up the initial cluster version to 3.3
2020-06-10 19:51:34.382722 N | etcdserver/membership: set the initial cluster version to 3.3
2020-06-10 19:51:34.382746 I | etcdserver/api: enabled capabilities for version 3.3
2020-06-10 19:51:34.382944 I | embed: serving client requests on 127.0.0.1:2379
2020-06-10 19:51:34.383101 I | embed: serving client requests on 172.17.0.3:2379
==> kernel <==
19:56:30 up 8 days, 3:37, 0 users, load average: 3.10, 2.81, 2.88
Linux minikube 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kube-apiserver [edebbd025983] <==
I0610 19:51:34.768894 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.768936 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:34.817551 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.817568 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:34.822342 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.822354 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0610 19:51:34.895330 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0610 19:51:34.905566 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.915849 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.917855 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.929601 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.947463 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0610 19:51:34.947475 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0610 19:51:34.954688 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0610 19:51:34.954696 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0610 19:51:34.956345 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.956369 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:34.961626 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.961640 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:35.132074 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:35.132096 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:36.339796 1 secure_serving.go:123] Serving securely on [::]:8443
I0610 19:51:36.339825 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0610 19:51:36.339828 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0610 19:51:36.339952 1 autoregister_controller.go:140] Starting autoregister controller
I0610 19:51:36.339964 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0610 19:51:36.340039 1 crd_finalizer.go:274] Starting CRDFinalizer
I0610 19:51:36.340048 1 naming_controller.go:288] Starting NamingConditionController
I0610 19:51:36.340056 1 establishing_controller.go:73] Starting EstablishingController
I0610 19:51:36.340059 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0610 19:51:36.340072 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0610 19:51:36.340096 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0610 19:51:36.340125 1 controller.go:81] Starting OpenAPI AggregationController
I0610 19:51:36.340138 1 available_controller.go:383] Starting AvailableConditionController
I0610 19:51:36.340150 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0610 19:51:36.340173 1 controller.go:85] Starting OpenAPI controller
I0610 19:51:36.340178 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0610 19:51:36.340182 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E0610 19:51:36.343151 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg:
I0610 19:51:36.408355 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0610 19:51:36.440016 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0610 19:51:36.440071 1 cache.go:39] Caches are synced for autoregister controller
I0610 19:51:36.440263 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0610 19:51:36.440265 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0610 19:51:37.339952 1 controller.go:107] OpenAPI AggregationController: Processing item
I0610 19:51:37.340021 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0610 19:51:37.340027 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0610 19:51:37.342374 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0610 19:51:37.344546 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0610 19:51:37.344556 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0610 19:51:37.501981 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0610 19:51:37.517827 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0610 19:51:37.569775 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I0610 19:51:37.570041 1 controller.go:606] quota admission added evaluator for: endpoints
I0610 19:51:38.636929 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0610 19:51:38.645725 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0610 19:51:39.021210 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0610 19:51:43.778343 1 log.go:172] http: TLS handshake error from 172.17.0.1:49598: EOF
I0610 19:51:56.818662 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0610 19:51:56.890956 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [a6b51cde8ba9] <==
I0610 19:51:55.038318 1 certificate_controller.go:113] Starting certificate controller
I0610 19:51:55.038321 1 shared_informer.go:197] Waiting for caches to sync for certificate
I0610 19:51:55.738279 1 controllermanager.go:534] Started "horizontalpodautoscaling"
I0610 19:51:55.738320 1 horizontal.go:156] Starting HPA controller
I0610 19:51:55.738326 1 shared_informer.go:197] Waiting for caches to sync for HPA
I0610 19:51:55.991365 1 controllermanager.go:534] Started "namespace"
I0610 19:51:55.991410 1 namespace_controller.go:186] Starting namespace controller
I0610 19:51:55.991453 1 shared_informer.go:197] Waiting for caches to sync for namespace
I0610 19:51:56.793108 1 garbagecollector.go:130] Starting garbage collector controller
I0610 19:51:56.793119 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0610 19:51:56.793130 1 graph_builder.go:282] GraphBuilder running
I0610 19:51:56.793232 1 controllermanager.go:534] Started "garbagecollector"
I0610 19:51:56.794532 1 shared_informer.go:197] Waiting for caches to sync for resource quota
W0610 19:51:56.796966 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0610 19:51:56.810367 1 shared_informer.go:204] Caches are synced for GC
I0610 19:51:56.817624 1 shared_informer.go:204] Caches are synced for deployment
I0610 19:51:56.819711 1 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"37a13869-78a5-475e-a8eb-7035902abaae", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I0610 19:51:56.825434 1 shared_informer.go:204] Caches are synced for expand
I0610 19:51:56.838303 1 shared_informer.go:204] Caches are synced for disruption
I0610 19:51:56.838329 1 disruption.go:338] Sending events to api server.
I0610 19:51:56.838400 1 shared_informer.go:204] Caches are synced for certificate
I0610 19:51:56.838478 1 shared_informer.go:204] Caches are synced for job
I0610 19:51:56.838697 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0610 19:51:56.838753 1 shared_informer.go:204] Caches are synced for PVC protection
I0610 19:51:56.838866 1 shared_informer.go:204] Caches are synced for certificate
I0610 19:51:56.838909 1 shared_informer.go:204] Caches are synced for ReplicationController
I0610 19:51:56.838946 1 shared_informer.go:204] Caches are synced for attach detach
I0610 19:51:56.848534 1 log.go:172] [INFO] signed certificate with serial number 448076519053696427626673704938515373589762678104
I0610 19:51:56.859440 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0610 19:51:56.862072 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"ed7bb0ec-c9b8-4d80-ba11-354253bcebed", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-bs8j2
I0610 19:51:56.866460 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"ed7bb0ec-c9b8-4d80-ba11-354253bcebed", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-dhzdb
I0610 19:51:56.867211 1 shared_informer.go:204] Caches are synced for stateful set
I0610 19:51:56.874881 1 shared_informer.go:204] Caches are synced for TTL
I0610 19:51:56.887795 1 shared_informer.go:204] Caches are synced for node
I0610 19:51:56.887811 1 range_allocator.go:172] Starting range CIDR allocator
I0610 19:51:56.887814 1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0610 19:51:56.887817 1 shared_informer.go:204] Caches are synced for cidrallocator
I0610 19:51:56.888715 1 shared_informer.go:204] Caches are synced for daemon sets
I0610 19:51:56.888899 1 shared_informer.go:204] Caches are synced for service account
I0610 19:51:56.888945 1 shared_informer.go:204] Caches are synced for persistent volume
I0610 19:51:56.889170 1 shared_informer.go:204] Caches are synced for PV protection
I0610 19:51:56.889811 1 range_allocator.go:359] Set node minikube PodCIDR to [10.244.0.0/24]
I0610 19:51:56.890187 1 shared_informer.go:204] Caches are synced for taint
I0610 19:51:56.890243 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W0610 19:51:56.890265 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0610 19:51:56.890266 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0610 19:51:56.890289 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I0610 19:51:56.890339 1 event.go:274] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"83cc614b-1fd0-473d-a267-b9ceddc0a501", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0610 19:51:56.891545 1 shared_informer.go:204] Caches are synced for namespace
I0610 19:51:56.894247 1 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"813cabc7-b1ed-4ce0-a1cf-9f260a3aed5a", APIVersion:"apps/v1", ResourceVersion:"191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mbrr6
E0610 19:51:56.905082 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"813cabc7-b1ed-4ce0-a1cf-9f260a3aed5a", ResourceVersion:"191", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727415499, loc:(*time.Location)(0x6c143a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000a15d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00132e640), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a15d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a15d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.5", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000a15d80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a2c500), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001788a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0016bc480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000118158)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001788ad8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0610 19:51:56.988990 1 shared_informer.go:204] Caches are synced for endpoint
I0610 19:51:57.039200 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0610 19:51:57.238504 1 shared_informer.go:204] Caches are synced for HPA
I0610 19:51:57.393281 1 shared_informer.go:204] Caches are synced for garbage collector
I0610 19:51:57.393296 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0610 19:51:57.394708 1 shared_informer.go:204] Caches are synced for resource quota
I0610 19:51:57.440830 1 shared_informer.go:204] Caches are synced for resource quota
I0610 19:51:58.289853 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0610 19:51:58.289954 1 shared_informer.go:204] Caches are synced for garbage collector
==> kube-proxy [16300c620458] <==
W0610 19:51:57.458248 1 server_others.go:330] Flag proxy-mode="" unknown, assuming iptables proxy
I0610 19:51:57.462596 1 node.go:135] Successfully retrieved node IP: 172.17.0.3
I0610 19:51:57.462612 1 server_others.go:150] Using iptables Proxier.
I0610 19:51:57.462866 1 server.go:529] Version: v1.16.6-beta.0
I0610 19:51:57.463183 1 conntrack.go:52] Setting nf_conntrack_max to 524288
E0610 19:51:57.463414 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0610 19:51:57.463536 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0610 19:51:57.463586 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0610 19:51:57.463695 1 config.go:313] Starting service config controller
I0610 19:51:57.463703 1 shared_informer.go:197] Waiting for caches to sync for service config
I0610 19:51:57.463752 1 config.go:131] Starting endpoints config controller
I0610 19:51:57.463979 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0610 19:51:57.563808 1 shared_informer.go:204] Caches are synced for service config
I0610 19:51:57.564080 1 shared_informer.go:204] Caches are synced for endpoints config
==> kube-scheduler [51ec15cc07c2] <==
I0610 19:51:34.187103 1 serving.go:319] Generated self-signed cert in-memory
W0610 19:51:36.351521 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0610 19:51:36.351658 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0610 19:51:36.351748 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W0610 19:51:36.351813 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0610 19:51:36.355338 1 server.go:148] Version: v1.16.6-beta.0
I0610 19:51:36.355397 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0610 19:51:36.361868 1 authorization.go:47] Authorization is disabled
W0610 19:51:36.361880 1 authentication.go:79] Authentication is disabled
I0610 19:51:36.361887 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0610 19:51:36.362232 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E0610 19:51:36.363622 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0610 19:51:36.363724 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0610 19:51:36.363737 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:36.363743 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0610 19:51:36.363764 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0610 19:51:36.363768 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0610 19:51:36.363770 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:36.363767 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0610 19:51:36.363767 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0610 19:51:36.363982 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0610 19:51:36.364007 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0610 19:51:37.364392 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0610 19:51:37.365393 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0610 19:51:37.366199 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:37.367350 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0610 19:51:37.368655 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0610 19:51:37.369684 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:37.370714 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0610 19:51:37.371993 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0610 19:51:37.372975 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0610 19:51:37.374138 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0610 19:51:37.375233 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0610 19:51:38.462467 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I0610 19:51:38.465089 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
E0610 19:51:41.679409 1 factory.go:585] pod is already present in the activeQ
==> kubelet <==
-- Logs begin at Wed 2020-06-10 19:51:17 UTC, end at Wed 2020-06-10 19:56:31 UTC. --
Jun 10 19:56:26 minikube kubelet[1100]: W0610 19:56:26.025781 1100 pod_container_deletor.go:75] Container "c588b942ff9ef0bcb4a04c3f7ff584bb909799c42af5824271d167527be59e77" not found in pod's containers
Jun 10 19:56:26 minikube kubelet[1100]: W0610 19:56:26.026755 1100 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c588b942ff9ef0bcb4a04c3f7ff584bb909799c42af5824271d167527be59e77"
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.525670 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-bs8j2/13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.624934 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-bs8j2/13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-11d8a3d015e6c4572e2e383e':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.706200 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-dhzdb/78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721026 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-11d8a3d015e6c4572e2e383e':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721059 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-11d8a3d015e6c4572e2e383e':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721067 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-11d8a3d015e6c4572e2e383e':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721100 1100 pod_workers.go:191] Error syncing pod 676c28c4-99ce-4e58-9189-ce25a87be14a ("coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-11d8a3d015e6c4572e2e383e':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.773467 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-dhzdb/78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-ab775a86f094eaf654e6e888':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863281 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-ab775a86f094eaf654e6e888':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863309 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-ab775a86f094eaf654e6e888':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863317 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-ab775a86f094eaf654e6e888':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863357 1100 pod_workers.go:191] Error syncing pod f7b5822f-3077-458b-a7ae-cd685b48b09a ("coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-ab775a86f094eaf654e6e888':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.074160 1100 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-5644d7b6d9-bs8j2_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.087167 1100 pod_container_deletor.go:75] Container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" not found in pod's containers
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.088640 1100 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.091954 1100 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-5644d7b6d9-dhzdb_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.101109 1100 pod_container_deletor.go:75] Container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" not found in pod's containers
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.101998 1100 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90"
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.069714 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-dhzdb/00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.157058 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-dhzdb/00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-193654f39dd86ca648a09336':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.189997 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-bs8j2/9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259299 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-193654f39dd86ca648a09336':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259333 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-193654f39dd86ca648a09336':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259341 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-193654f39dd86ca648a09336':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259397 1100 pod_workers.go:191] Error syncing pod f7b5822f-3077-458b-a7ae-cd685b48b09a ("coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-193654f39dd86ca648a09336':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.273235 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-bs8j2/9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-b69b1b39772ef93b00789b1b':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372122 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-b69b1b39772ef93b00789b1b':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372164 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-b69b1b39772ef93b00789b1b':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372175 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-b69b1b39772ef93b00789b1b':No such file or directory
Jun 10 19:56:30 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372220 1100 pod_workers.go:191] Error syncing pod 676c28c4-99ce-4e58-9189-ce25a87be14a ("coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target
`CNI-b69b1b39772ef93b00789b1b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"

==> storage-provisioner [0759f69a4af6] <==
==> storage-provisioner [81c45417ced3] <==
F0610 19:52:27.508017 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout