CoreDNS pods do not start with CNI enabled #7354

Closed
irizzant opened this issue Apr 1, 2020 · 8 comments
Assignees
Labels
area/cni (CNI support) · kind/bug (Categorizes issue or PR as related to a bug.) · priority/important-soon (Must be staffed and worked on either currently, or very soon, ideally in time for the next release.)

Comments


irizzant commented Apr 1, 2020

Steps to reproduce the issue:

  1. start minikube with:
    minikube start --memory 12000 --cpus 8 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 --extra-config=apiserver.authorization-mode=Node,RBAC --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key --extra-config=apiserver.service-account-issuer=kubernetes/serviceaccount --extra-config=apiserver.service-account-api-audiences=api --network-plugin=cni --enable-default-cni --alsologtostderr
    
  2. Check the pod status in kube-system (e.g. with kubectl get pods -n kube-system):
    NAME                               READY   STATUS              RESTARTS   AGE
    coredns-66bff467f8-z74w2           0/1     ContainerCreating   0          45s
    coredns-66bff467f8-zg7r6           0/1     ContainerCreating   0          45s
    etcd-minikube                      1/1     Running             0          42s
    kindnet-p54hr                      1/1     Running             0          46s
    kube-apiserver-minikube            1/1     Running             0          42s
    kube-controller-manager-minikube   1/1     Running             0          42s
    kube-proxy-dgfzp                   1/1     Running             0          46s
    kube-scheduler-minikube            1/1     Running             0          42s
    storage-provisioner                1/1     Running             0          37s
    
  3. Describe one of the CoreDNS pods (e.g. kubectl describe pod coredns-66bff467f8-z74w2 -n kube-system); see the diagnostic sketch after the event log:
      Normal   Scheduled               119s                  default-scheduler  Successfully assigned kube-system/coredns-66bff467f8-z74w2 to minikube
      Warning  FailedCreatePodSandBox  108s                  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "183b7485f29d74fcca0e3e31b367621a4df3b0ac2bf4ff12f8a6c8f874411055" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "183b7485f29d74fcca0e3e31b367621a4df3b0ac2bf4ff12f8a6c8f874411055" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.2 -j CNI-4f9f5426d273ec48648560fb -m comment --comment name: "crio-bridge" id: "183b7485f29d74fcca0e3e31b367621a4df3b0ac2bf4ff12f8a6c8f874411055" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-4f9f5426d273ec48648560fb':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  99s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6f575a367b050c1225ad44a1e7a47fbbfc85d3f92a602fa5e1297e80ff38aa3b" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6f575a367b050c1225ad44a1e7a47fbbfc85d3f92a602fa5e1297e80ff38aa3b" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.5 -j CNI-c08e800aa04a454893c61a57 -m comment --comment name: "crio-bridge" id: "6f575a367b050c1225ad44a1e7a47fbbfc85d3f92a602fa5e1297e80ff38aa3b" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-c08e800aa04a454893c61a57':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  92s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "1b5a75c973b1fea24e63c129644cd2cc59e6bf56479afde9855b55a40a2a1bb1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "1b5a75c973b1fea24e63c129644cd2cc59e6bf56479afde9855b55a40a2a1bb1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.7 -j CNI-e573c6f3d6787116659417a2 -m comment --comment name: "crio-bridge" id: "1b5a75c973b1fea24e63c129644cd2cc59e6bf56479afde9855b55a40a2a1bb1" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-e573c6f3d6787116659417a2':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  84s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "b1fa52d756c9e93c0aca7a058120e8052da1139046153069906936153968ac7c" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "b1fa52d756c9e93c0aca7a058120e8052da1139046153069906936153968ac7c" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.9 -j CNI-5d08a8309232aa1b8848aba2 -m comment --comment name: "crio-bridge" id: "b1fa52d756c9e93c0aca7a058120e8052da1139046153069906936153968ac7c" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-5d08a8309232aa1b8848aba2':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  75s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "2727302b343e3aa10e170e2dbe29a99690a1a505143e557c88ae93a02b4fd126" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "2727302b343e3aa10e170e2dbe29a99690a1a505143e557c88ae93a02b4fd126" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.11 -j CNI-17f10a041095c868705e32aa -m comment --comment name: "crio-bridge" id: "2727302b343e3aa10e170e2dbe29a99690a1a505143e557c88ae93a02b4fd126" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-17f10a041095c868705e32aa':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  69s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "200ef6aa664dd46912f9b65cdda7e32f53cf77ce7fe8431a70c8c0e0cb600b9e" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "200ef6aa664dd46912f9b65cdda7e32f53cf77ce7fe8431a70c8c0e0cb600b9e" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.13 -j CNI-aa77f7ea540afa0c63b8391d -m comment --comment name: "crio-bridge" id: "200ef6aa664dd46912f9b65cdda7e32f53cf77ce7fe8431a70c8c0e0cb600b9e" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-aa77f7ea540afa0c63b8391d':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  62s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "3a78504fb11940e73be1943c8177f5db352820b5b56e9e25ec6ff8fe4812b7ee" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "3a78504fb11940e73be1943c8177f5db352820b5b56e9e25ec6ff8fe4812b7ee" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.15 -j CNI-1c71d365fe080056ebd5b176 -m comment --comment name: "crio-bridge" id: "3a78504fb11940e73be1943c8177f5db352820b5b56e9e25ec6ff8fe4812b7ee" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-1c71d365fe080056ebd5b176':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  56s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "9b4fffdf175906ab007358057a9fdaf8988f7f5812ba78fa89e1734da57cc434" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9b4fffdf175906ab007358057a9fdaf8988f7f5812ba78fa89e1734da57cc434" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.17 -j CNI-079971b10804b39a6ab8bbe8 -m comment --comment name: "crio-bridge" id: "9b4fffdf175906ab007358057a9fdaf8988f7f5812ba78fa89e1734da57cc434" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-079971b10804b39a6ab8bbe8':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Warning  FailedCreatePodSandBox  50s  kubelet, minikube  Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "ecc94c37b51d60d44d53ad1cbfa7a46d06c7e4c0628b988531d8c0175ed9504a" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ecc94c37b51d60d44d53ad1cbfa7a46d06c7e4c0628b988531d8c0175ed9504a" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.19 -j CNI-73f9f4d74b3006aad0c3406c -m comment --comment name: "crio-bridge" id: "ecc94c37b51d60d44d53ad1cbfa7a46d06c7e4c0628b988531d8c0175ed9504a" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-73f9f4d74b3006aad0c3406c':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
      Normal   SandboxChanged          28s (x12 over 107s)  kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
      Warning  FailedCreatePodSandBox  22s (x4 over 42s)    kubelet, minikube  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "5a5685e1d547326e40c0aafde92255587e675b5e6e3cf3120dad8682425d5a94" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "5a5685e1d547326e40c0aafde92255587e675b5e6e3cf3120dad8682425d5a94" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.27 -j CNI-73713a8e9b80438fa8993e9c -m comment --comment name: "crio-bridge" id: "5a5685e1d547326e40c0aafde92255587e675b5e6e3cf3120dad8682425d5a94" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-73713a8e9b80438fa8993e9c':No such file or directory
    
    Try `iptables -h' or 'iptables --help' for more information.
    ]
    
    

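The events above show the sandbox being set up through the bundled crio-bridge CNI config (note the 10.88.0.x pod IPs and the "crio-bridge" name) while the kindnet pod is the one that is Running, and the kubelet is denied when it tries to add an address to cni0. A minimal diagnostic sketch for narrowing this down, assuming the default profile name "minikube" and the docker driver (the exact file names under /etc/cni/net.d depend on the node image):

    # list the CNI configs the kubelet can choose from inside the node
    minikube ssh -- ls -l /etc/cni/net.d

    # inspect the bridge and any leftover CNI NAT chains from failed setups
    minikube ssh -- ip addr show cni0
    minikube ssh -- sudo iptables -t nat -S | grep CNI-

    # check which iptables backend the node uses (the errors mention the legacy backend)
    minikube ssh -- sudo iptables --version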
Full output of failed command:
Minikube start:

I0401 11:46:05.019703 8239 start.go:259] hostinfo: {"hostname":"pclnxdev22","uptime":1131506,"bootTime":1584602859,"procs":453,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-91-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"b12dcb95-5fe8-4950-aede-efd99cccf35a"}
I0401 11:46:05.020228 8239 start.go:269] virtualization: kvm host
😄 minikube v1.9.0 on Ubuntu 18.04
I0401 11:46:05.024186 8239 driver.go:226] Setting default libvirt URI to qemu:///system
I0401 11:46:05.024203 8239 global.go:98] Querying for installed drivers using PATH=/home/local/INTRANET/ivan.rizzante/.minikube/bin:/home/local/INTRANET/ivan.rizzante/.local/bin:/home/local/INTRANET/ivan.rizzante/bin:/home/local/INTRANET/ivan.rizzante/bin:/home/local/INTRANET/ivan.rizzante/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/local/INTRANET/ivan.rizzante/odev/bin
I0401 11:46:05.024248 8239 global.go:106] podman priority: 2, state: {Installed:false Healthy:false Error:exec: "podman": executable file not found in $PATH Fix:Podman is required. Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/podman/}
I0401 11:46:05.265850 8239 global.go:106] virtualbox priority: 5, state: {Installed:true Healthy:true Error: Fix: Doc:}
I0401 11:46:05.265934 8239 global.go:106] vmware priority: 6, state: {Installed:false Healthy:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0401 11:46:05.343337 8239 global.go:106] docker priority: 7, state: {Installed:true Healthy:true Error: Fix: Doc:}
I0401 11:46:05.379722 8239 global.go:106] kvm2 priority: 7, state: {Installed:true Healthy:true Error: Fix: Doc:}
I0401 11:46:05.379771 8239 global.go:106] none priority: 3, state: {Installed:true Healthy:true Error: Fix: Doc:}
I0401 11:46:05.379783 8239 driver.go:191] not recommending "none" due to priority: 3
I0401 11:46:05.379790 8239 driver.go:209] Picked: docker
I0401 11:46:05.379796 8239 driver.go:210] Alternatives: [kvm2 virtualbox none]
✨ Automatically selected the docker driver. Other choices: kvm2, virtualbox, none
I0401 11:46:05.382170 8239 start.go:307] selected driver: docker
I0401 11:46:05.382176 8239 start.go:596] validating driver "docker" against
I0401 11:46:05.382182 8239 start.go:602] status for docker: {Installed:true Healthy:true Error: Fix: Doc:}
🚜 Pulling base image ...
I0401 11:46:05.461442 8239 cache.go:104] Beginning downloading kic artifacts
I0401 11:46:05.461455 8239 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0401 11:46:05.461477 8239 preload.go:97] Found local preload: /home/local/INTRANET/ivan.rizzante/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0401 11:46:05.461484 8239 cache.go:46] Caching tarball of preloaded images
I0401 11:46:05.461495 8239 preload.go:123] Found /home/local/INTRANET/ivan.rizzante/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0401 11:46:05.461501 8239 cache.go:49] Finished downloading the preloaded tar for v1.18.0 on docker
I0401 11:46:05.461530 8239 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0401 11:46:05.461544 8239 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0401 11:46:05.461647 8239 profile.go:138] Saving config to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/config.json ...
I0401 11:46:05.461844 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/config.json: {Name:mkc9f11ed8b9cf9e44687667f02144032960775f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:05.503998 8239 image.go:90] Found gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 in local docker daemon, skipping pull
I0401 11:46:05.504052 8239 cache.go:117] Successfully downloaded all kic artifacts
I0401 11:46:05.504096 8239 start.go:260] acquiring machines lock for minikube: {Name:mk36622d7696ff5b563c9732b62ca26e05aa9acb Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0401 11:46:05.504211 8239 start.go:264] acquired machines lock for "minikube" in 99.363µs
I0401 11:46:05.504253 8239 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:12000 CPUs:8 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[maven-repo.sdb.it:18081 maven-repo.sdb.it:18080] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[registry-mirror=https://maven-repo.sdb.it:18080] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubelet Key:authentication-token-webhook Value:true} {Component:kubelet Key:authorization-mode Value:Webhook} {Component:scheduler Key:address Value:0.0.0.0} {Component:controller-manager Key:address Value:0.0.0.0} {Component:apiserver Key:authorization-mode Value:Node,RBAC} {Component:apiserver Key:service-account-signing-key-file Value:/var/lib/minikube/certs/sa.key} {Component:apiserver Key:service-account-issuer Value:kubernetes/serviceaccount} {Component:apiserver Key:service-account-api-audiences Value:api}] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0401 11:46:05.504332 8239 start.go:107] createHost starting for "m01" (driver="docker")
🔥 Creating Kubernetes in docker container with (CPUs=8) (8 available), Memory=12000MB (31955MB available) ...
I0401 11:46:05.583449 8239 start.go:143] libmachine.API.Create for "minikube" (driver="docker")
I0401 11:46:05.583487 8239 client.go:169] LocalClient.Create starting
I0401 11:46:05.583529 8239 main.go:110] libmachine: Reading certificate data from /home/local/INTRANET/ivan.rizzante/.minikube/certs/ca.pem
I0401 11:46:05.583583 8239 main.go:110] libmachine: Decoding PEM data...
I0401 11:46:05.583611 8239 main.go:110] libmachine: Parsing certificate...
I0401 11:46:05.583744 8239 main.go:110] libmachine: Reading certificate data from /home/local/INTRANET/ivan.rizzante/.minikube/certs/cert.pem
I0401 11:46:05.583775 8239 main.go:110] libmachine: Decoding PEM data...
I0401 11:46:05.583787 8239 main.go:110] libmachine: Parsing certificate...
I0401 11:46:05.628515 8239 volumes.go:118] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0401 11:46:05.665432 8239 oci.go:127] Successfully created a docker volume minikube
I0401 11:46:07.820679 8239 oci.go:159] the created container "minikube" has a running status.
I0401 11:46:07.820713 8239 kic.go:143] Creating ssh key for kic: /home/local/INTRANET/ivan.rizzante/.minikube/machines/minikube/id_rsa...
I0401 11:46:09.024569 8239 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0401 11:46:09.208660 8239 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0401 11:46:09.208720 8239 preload.go:97] Found local preload: /home/local/INTRANET/ivan.rizzante/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0401 11:46:09.208747 8239 kic.go:129] Starting extracting preloaded images to volume
I0401 11:46:09.208866 8239 volumes.go:106] executing: [docker run --rm --entrypoint /usr/bin/tar -v /home/local/INTRANET/ivan.rizzante/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0401 11:46:16.898511 8239 kic.go:134] Took 7.689757 seconds to extract preloaded images to volume
I0401 11:46:16.937575 8239 machine.go:86] provisioning docker machine ...
I0401 11:46:16.937607 8239 ubuntu.go:166] provisioning hostname "minikube"
I0401 11:46:16.978073 8239 main.go:110] libmachine: Using SSH client type: native
I0401 11:46:16.978239 8239 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32782 }
I0401 11:46:16.978265 8239 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0401 11:46:17.781899 8239 main.go:110] libmachine: SSH cmd err, output: : minikube

I0401 11:46:17.839497 8239 main.go:110] libmachine: Using SSH client type: native
I0401 11:46:17.839617 8239 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32782 }
I0401 11:46:17.839635 8239 main.go:110] libmachine: About to run SSH command:

            if ! grep -xq '.*\sminikube' /etc/hosts; then
                    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                    else 
                            echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                    fi
            fi

I0401 11:46:17.989389 8239 main.go:110] libmachine: SSH cmd err, output: :
I0401 11:46:17.989450 8239 ubuntu.go:172] set auth options {CertDir:/home/local/INTRANET/ivan.rizzante/.minikube CaCertPath:/home/local/INTRANET/ivan.rizzante/.minikube/certs/ca.pem CaPrivateKeyPath:/home/local/INTRANET/ivan.rizzante/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/local/INTRANET/ivan.rizzante/.minikube/machines/server.pem ServerKeyPath:/home/local/INTRANET/ivan.rizzante/.minikube/machines/server-key.pem ClientKeyPath:/home/local/INTRANET/ivan.rizzante/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/local/INTRANET/ivan.rizzante/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/local/INTRANET/ivan.rizzante/.minikube}
I0401 11:46:17.989513 8239 ubuntu.go:174] setting up certificates
I0401 11:46:17.989556 8239 provision.go:83] configureAuth start
I0401 11:46:18.060114 8239 provision.go:132] copyHostCerts
I0401 11:46:18.060565 8239 provision.go:106] generating server cert: /home/local/INTRANET/ivan.rizzante/.minikube/machines/server.pem ca-key=/home/local/INTRANET/ivan.rizzante/.minikube/certs/ca.pem private-key=/home/local/INTRANET/ivan.rizzante/.minikube/certs/ca-key.pem org=ivan.rizzante.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0401 11:46:18.221499 8239 provision.go:160] copyRemoteCerts
I0401 11:46:18.292737 8239 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0401 11:46:18.549450 8239 ssh_runner.go:155] Checked if /etc/docker/ca.pem exists, but got error: Process exited with status 1
I0401 11:46:18.549979 8239 ssh_runner.go:174] Transferring 1054 bytes to /etc/docker/ca.pem
I0401 11:46:18.551493 8239 ssh_runner.go:193] ca.pem: copied 1054 bytes
I0401 11:46:18.761278 8239 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: Process exited with status 1
I0401 11:46:18.761694 8239 ssh_runner.go:174] Transferring 1135 bytes to /etc/docker/server.pem
I0401 11:46:18.763027 8239 ssh_runner.go:193] server.pem: copied 1135 bytes
I0401 11:46:18.901226 8239 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: Process exited with status 1
I0401 11:46:18.901335 8239 ssh_runner.go:174] Transferring 1679 bytes to /etc/docker/server-key.pem
I0401 11:46:18.901686 8239 ssh_runner.go:193] server-key.pem: copied 1679 bytes
I0401 11:46:19.112144 8239 provision.go:86] configureAuth took 1.122543478s
I0401 11:46:19.112199 8239 ubuntu.go:190] setting minikube options for container-runtime
I0401 11:46:19.174094 8239 main.go:110] libmachine: Using SSH client type: native
I0401 11:46:19.174329 8239 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32782 }
I0401 11:46:19.174346 8239 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0401 11:46:19.505302 8239 main.go:110] libmachine: SSH cmd err, output: : overlay

I0401 11:46:19.505360 8239 ubuntu.go:71] root file system type: overlay
I0401 11:46:19.505865 8239 provision.go:295] Updating docker unit: /lib/systemd/system/docker.service ...
I0401 11:46:19.566232 8239 main.go:110] libmachine: Using SSH client type: native
I0401 11:46:19.566369 8239 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32782 }
I0401 11:46:19.566458 8239 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --insecure-registry maven-repo.sdb.it:18081 --insecure-registry maven-repo.sdb.it:18080 --registry-mirror=https://maven-repo.sdb.it:18080
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0401 11:46:19.736992 8239 main.go:110] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --insecure-registry maven-repo.sdb.it:18081 --insecure-registry maven-repo.sdb.it:18080 --registry-mirror=https://maven-repo.sdb.it:18080
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0401 11:46:19.809086 8239 main.go:110] libmachine: Using SSH client type: native
I0401 11:46:19.809297 8239 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 [] 0s} 127.0.0.1 32782 }
I0401 11:46:19.809331 8239 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
I0401 11:46:29.024543 8239 main.go:110] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2020-04-01 09:46:19.727474615 +0000
@@ -8,24 +8,22 @@

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --insecure-registry maven-repo.sdb.it:18081 --insecure-registry maven-repo.sdb.it:18080 --registry-mirror=https://maven-repo.sdb.it:18080
+ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -33,9 +31,10 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

I0401 11:46:29.024627 8239 machine.go:89] provisioned docker machine in 12.087028241s
I0401 11:46:29.024639 8239 client.go:172] LocalClient.Create took 23.441146019s
I0401 11:46:29.024653 8239 start.go:148] libmachine.API.Create for "minikube" took 23.441205285s
I0401 11:46:29.024663 8239 start.go:189] post-start starting for "minikube" (driver="docker")
I0401 11:46:29.024670 8239 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0401 11:46:29.024685 8239 start.go:234] Returning KICRunner for "docker" driver
I0401 11:46:29.024761 8239 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0401 11:46:29.605966 8239 filesync.go:118] Scanning /home/local/INTRANET/ivan.rizzante/.minikube/addons for local assets ...
I0401 11:46:29.606011 8239 filesync.go:118] Scanning /home/local/INTRANET/ivan.rizzante/.minikube/files for local assets ...
I0401 11:46:29.606027 8239 start.go:192] post-start completed in 581.357369ms
I0401 11:46:29.606216 8239 start.go:110] createHost completed in 24.101874874s
I0401 11:46:29.606224 8239 start.go:77] releasing machines lock for "minikube", held for 24.101991936s
I0401 11:46:29.643715 8239 kic_runner.go:91] Run: nslookup kubernetes.io -type=ns
I0401 11:46:30.554505 8239 kic_runner.go:91] Run: curl -sS https://k8s.gcr.io/
I0401 11:46:31.913654 8239 kic_runner.go:118] Done: [docker exec --privileged minikube curl -sS https://k8s.gcr.io/]: (1.359128741s)
I0401 11:46:31.913799 8239 profile.go:138] Saving config to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/config.json ...
I0401 11:46:31.914025 8239 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0401 11:46:32.502997 8239 kic_runner.go:91] Run: sudo systemctl stop -f containerd
I0401 11:46:33.038448 8239 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0401 11:46:33.902316 8239 kic_runner.go:91] Run: sudo systemctl is-active --quiet service crio
I0401 11:46:34.451928 8239 kic_runner.go:91] Run: sudo systemctl start docker
I0401 11:46:36.757972 8239 kic_runner.go:118] Done: [docker exec --privileged minikube sudo systemctl start docker]: (2.306022276s)
I0401 11:46:36.758048 8239 kic_runner.go:91] Run: docker version --format {{.Server.Version}}
I0401 11:46:38.402416 8239 kic_runner.go:118] Done: [docker exec --privileged minikube docker version --format {{.Server.Version}}]: (1.644348341s)
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
▪ opt registry-mirror=https://maven-repo.sdb.it:18080
I0401 11:46:38.464925 8239 settings.go:123] acquiring lock: {Name:mka0d5622787b53e3a561313c483a19546534381 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:38.465033 8239 settings.go:131] Updating kubeconfig: /home/local/INTRANET/ivan.rizzante/.kube/config
I0401 11:46:38.468735 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.kube/config: {Name:mk3912b399407fd27aab978296122165d32936b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
▪ kubelet.authentication-token-webhook=true
▪ kubelet.authorization-mode=Webhook
▪ scheduler.address=0.0.0.0
▪ controller-manager.address=0.0.0.0
▪ apiserver.authorization-mode=Node,RBAC
▪ apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key
▪ apiserver.service-account-issuer=kubernetes/serviceaccount
▪ apiserver.service-account-api-audiences=api
I0401 11:46:38.478083 8239 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0401 11:46:38.478117 8239 preload.go:97] Found local preload: /home/local/INTRANET/ivan.rizzante/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0401 11:46:38.478209 8239 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0401 11:46:40.035883 8239 kic_runner.go:118] Done: [docker exec --privileged minikube docker images --format {{.Repository}}:{{.Tag}}]: (1.557658101s)
I0401 11:46:40.035916 8239 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0401 11:46:40.035935 8239 docker.go:305] Images already preloaded, skipping extraction
I0401 11:46:40.035988 8239 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0401 11:46:42.030239 8239 kic_runner.go:118] Done: [docker exec --privileged minikube docker images --format {{.Repository}}:{{.Tag}}]: (1.99423748s)
I0401 11:46:42.030285 8239 docker.go:367] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0401 11:46:42.030310 8239 cache_images.go:68] Images are preloaded, skipping loading
I0401 11:46:42.030348 8239 kubeadm.go:125] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[authorization-mode:Node,RBAC enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota service-account-api-audiences:api service-account-issuer:kubernetes/serviceaccount service-account-signing-key-file:/var/lib/minikube/certs/sa.key] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[address:0.0.0.0] Pairs:map[]} {Component:scheduler ExtraArgs:map[address:0.0.0.0] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:172.17.0.2}
I0401 11:46:42.030441 8239 kubeadm.go:129] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    authorization-mode: "Node,RBAC"
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    service-account-api-audiences: "api"
    service-account-issuer: "kubernetes/serviceaccount"
    service-account-signing-key-file: "/var/lib/minikube/certs/sa.key"
controllerManager:
  extraArgs:
    address: "0.0.0.0"
scheduler:
  extraArgs:
    address: "0.0.0.0"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 172.17.0.2:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12

I0401 11:46:42.030496 8239 extraconfig.go:106] Overwriting default authorization-mode=Webhook with user provided authorization-mode=Webhook for component kubelet
I0401 11:46:42.030550 8239 kic_runner.go:91] Run: docker info --format {{.CgroupDriver}}
I0401 11:46:43.893844 8239 kic_runner.go:118] Done: [docker exec --privileged minikube docker info --format {{.CgroupDriver}}]: (1.863280088s)
I0401 11:46:43.893955 8239 kubeadm.go:659] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authentication-token-webhook=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubelet Key:authentication-token-webhook Value:true} {Component:kubelet Key:authorization-mode Value:Webhook} {Component:scheduler Key:address Value:0.0.0.0} {Component:controller-manager Key:address Value:0.0.0.0} {Component:apiserver Key:authorization-mode Value:Node,RBAC} {Component:apiserver Key:service-account-signing-key-file Value:/var/lib/minikube/certs/sa.key} {Component:apiserver Key:service-account-issuer Value:kubernetes/serviceaccount} {Component:apiserver Key:service-account-api-audiences Value:api}] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:}
I0401 11:46:43.894043 8239 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0401 11:46:44.271830 8239 binaries.go:42] Found k8s binaries, skipping transfer
I0401 11:46:44.271922 8239 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/cni/net.d
I0401 11:46:47.608566 8239 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0401 11:46:48.073293 8239 kic_runner.go:91] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0401 11:46:49.422388 8239 kic_runner.go:118] Done: [docker exec --privileged minikube /bin/bash -c sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet]: (1.349068788s)
I0401 11:46:49.422430 8239 certs.go:51] Setting up /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube for IP: 172.17.0.2
I0401 11:46:49.422489 8239 certs.go:167] skipping minikubeCA CA generation: /home/local/INTRANET/ivan.rizzante/.minikube/ca.key
I0401 11:46:49.422520 8239 certs.go:167] skipping proxyClientCA CA generation: /home/local/INTRANET/ivan.rizzante/.minikube/proxy-client-ca.key
I0401 11:46:49.422558 8239 certs.go:265] generating minikube-user signed cert: /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/client.key
I0401 11:46:49.422568 8239 crypto.go:69] Generating cert /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/client.crt with IP's: []
I0401 11:46:49.543775 8239 crypto.go:157] Writing cert to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/client.crt ...
I0401 11:46:49.543815 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/client.crt: {Name:mk1873dbd355e35bed2dcf9c128cb63d1f6fb30d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:49.544201 8239 crypto.go:165] Writing key to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/client.key ...
I0401 11:46:49.544212 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/client.key: {Name:mka829141d544430e7c19ff8b006e4637cf43834 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:49.544340 8239 certs.go:265] generating minikube signed cert: /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.key.eaa33411
I0401 11:46:49.544347 8239 crypto.go:69] Generating cert /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.crt.eaa33411 with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0401 11:46:49.687901 8239 crypto.go:157] Writing cert to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.crt.eaa33411 ...
I0401 11:46:49.687930 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.crt.eaa33411: {Name:mkd1527005c47709bf146a14d74676ad000015b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:49.688139 8239 crypto.go:165] Writing key to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.key.eaa33411 ...
I0401 11:46:49.688166 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.key.eaa33411: {Name:mkc989586debfed43b3b83b06f2dba65f3e0f81d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:49.688298 8239 certs.go:276] copying /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.crt.eaa33411 -> /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.crt
I0401 11:46:49.688403 8239 certs.go:280] copying /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.key.eaa33411 -> /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/apiserver.key
I0401 11:46:49.688507 8239 certs.go:265] generating aggregator signed cert: /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/proxy-client.key
I0401 11:46:49.688516 8239 crypto.go:69] Generating cert /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0401 11:46:49.732291 8239 crypto.go:157] Writing cert to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/proxy-client.crt ...
I0401 11:46:49.732311 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/proxy-client.crt: {Name:mk4afd5af44493219b35f15bacd8e21303906133 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:49.732506 8239 crypto.go:165] Writing key to /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/proxy-client.key ...
I0401 11:46:49.732515 8239 lock.go:35] WriteFile acquiring /home/local/INTRANET/ivan.rizzante/.minikube/profiles/minikube/proxy-client.key: {Name:mkbf23867ff53138240c1f6adfda456cae3bfe80 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0401 11:46:56.960781 8239 kic_runner.go:91] Run: openssl version
I0401 11:46:57.299983 8239 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0401 11:46:57.794741 8239 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0401 11:46:58.159359 8239 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0401 11:46:58.665832 8239 kubeadm.go:279] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:12000 CPUs:8 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[maven-repo.sdb.it:18081 maven-repo.sdb.it:18080] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[registry-mirror=https://maven-repo.sdb.it:18080] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubelet Key:authentication-token-webhook Value:true} {Component:kubelet Key:authorization-mode Value:Webhook} {Component:scheduler Key:address Value:0.0.0.0} {Component:controller-manager Key:address Value:0.0.0.0} {Component:apiserver Key:authorization-mode Value:Node,RBAC} {Component:apiserver Key:service-account-signing-key-file Value:/var/lib/minikube/certs/sa.key} {Component:apiserver Key:service-account-issuer Value:kubernetes/serviceaccount} {Component:apiserver Key:service-account-api-audiences Value:api}] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[]}
I0401 11:46:58.665983 8239 kic_runner.go:91] Run: docker ps --filter status=paused --filter=name=k8s_.(kube-system) --format={{.ID}}
I0401 11:47:00.142070 8239 kic_runner.go:118] Done: [docker exec --privileged minikube docker ps --filter status=paused --filter=name=k8s_.
(kube-system) --format={{.ID}}]: (1.476069608s)
I0401 11:47:00.142163 8239 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0401 11:47:00.585436 8239 kic_runner.go:91] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0401 11:47:00.976594 8239 kubeadm.go:215] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0401 11:47:00.976664 8239 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0401 11:47:01.457485 8239 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0401 11:47:01.972809 8239 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0401 11:47:02.496379 8239 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://172.17.0.2:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0401 11:47:03.018214 8239 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0401 11:47:32.874887 8239 kic_runner.go:118] Done: [docker exec --privileged minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: (29.856648507s)
I0401 11:47:32.875080 8239 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create --kubeconfig=/var/lib/minikube/kubeconfig -f -
I0401 11:47:35.027600 8239 kic_runner.go:118] Done: [docker exec --privileged -i minikube sudo /var/lib/minikube/binaries/v1.18.0/kubectl create --kubeconfig=/var/lib/minikube/kubeconfig -f -]: (2.152499987s)
I0401 11:47:35.027784 8239 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl label nodes minikube.k8s.io/version=v1.9.0 minikube.k8s.io/commit=48fefd43444d2f8852f527c78f0141b377b1e42a minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_04_01T11_47_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0401 11:47:35.918245 8239 kic_runner.go:91] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0401 11:47:36.231181 8239 ops.go:35] apiserver oom_adj: -16
I0401 11:47:36.231273 8239 kic_runner.go:91] Run: sudo /var/lib/minikube/binaries/v1.18.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0401 11:47:37.006330 8239 kubeadm.go:782] duration metric: took 775.127745ms to wait for elevateKubeSystemPrivileges.
I0401 11:47:37.006350 8239 kubeadm.go:281] StartCluster complete in 38.340526222s
I0401 11:47:37.006401 8239 addons.go:280] enableAddons start: toEnable=map[], additional=[]
🌟 Enabling addons: default-storageclass, storage-provisioner
I0401 11:47:37.008958 8239 addons.go:45] Setting default-storageclass=true in profile "minikube"
I0401 11:47:37.009094 8239 addons.go:230] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0401 11:47:37.061632 8239 addons.go:104] Setting addon default-storageclass=true in "minikube"
W0401 11:47:37.061794 8239 addons.go:119] addon default-storageclass should already be in state true
I0401 11:47:37.061936 8239 host.go:65] Checking if "minikube" exists ...
I0401 11:47:37.100525 8239 addons.go:197] installing /etc/kubernetes/addons/storageclass.yaml
I0401 11:47:38.287136 8239 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0401 11:47:41.321348 8239 kic_runner.go:118] Done: [docker exec --privileged minikube sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml]: (3.034166728s)
I0401 11:47:41.321408 8239 addons.go:70] Writing out "minikube" config to set default-storageclass=true...
I0401 11:47:41.321779 8239 addons.go:45] Setting storage-provisioner=true in profile "minikube"
I0401 11:47:41.321911 8239 addons.go:104] Setting addon storage-provisioner=true in "minikube"
W0401 11:47:41.322002 8239 addons.go:119] addon storage-provisioner should already be in state true
I0401 11:47:41.322013 8239 host.go:65] Checking if "minikube" exists ...
I0401 11:47:41.372253 8239 addons.go:197] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0401 11:47:42.617351 8239 kic_runner.go:91] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0401 11:47:46.573884 8239 kic_runner.go:118] Done: [docker exec --privileged minikube sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml]: (3.956503893s)
I0401 11:47:46.573920 8239 addons.go:70] Writing out "minikube" config to set storage-provisioner=true...
I0401 11:47:46.574136 8239 addons.go:282] enableAddons completed in 9.567739382s
I0401 11:47:46.574163 8239 kverify.go:52] waiting for apiserver process to appear ...
I0401 11:47:46.574223 8239 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0401 11:47:50.848231 8239 kic_runner.go:118] Done: [docker exec --privileged minikube sudo pgrep -xnf kube-apiserver.minikube.]: (4.273994204s)
I0401 11:47:50.848260 8239 kverify.go:72] duration metric: took 4.274097712s to wait for apiserver process to appear ...
I0401 11:47:50.895419 8239 kverify.go:187] waiting for apiserver healthz status ...
I0401 11:47:50.895439 8239 kverify.go:297] Checking apiserver healthz at https://127.0.0.1:32780/healthz ...
I0401 11:47:50.900038 8239 kverify.go:239] control plane version: v1.18.0
I0401 11:47:50.900057 8239 kverify.go:230] duration metric: took 4.622312ms to wait for apiserver health ...
I0401 11:47:50.900076 8239 kverify.go:150] waiting for kube-system pods to appear ...
I0401 11:47:50.906757 8239 kverify.go:168] 9 kube-system pods found
I0401 11:47:50.906784 8239 kverify.go:170] "coredns-66bff467f8-z74w2" [8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0401 11:47:50.906794 8239 kverify.go:170] "coredns-66bff467f8-zg7r6" [4971bb50-edc1-48cc-8d8a-99c7f22f1b89] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0401 11:47:50.906803 8239 kverify.go:170] "etcd-minikube" [77bd7f5b-a4cb-434e-becd-aa0096851112] Pending
I0401 11:47:50.906813 8239 kverify.go:170] "kindnet-p54hr" [c129c33f-a524-4e08-9403-cea282f99fcf] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0401 11:47:50.906820 8239 kverify.go:170] "kube-apiserver-minikube" [880dcb1b-c36a-4fcd-b8ea-a6ad50c5c27f] Pending
I0401 11:47:50.906828 8239 kverify.go:170] "kube-controller-manager-minikube" [17a2bc37-152a-46ac-833c-f027e572a7b7] Running
I0401 11:47:50.906838 8239 kverify.go:170] "kube-proxy-dgfzp" [e5b7fbc0-e347-4ef0-b7fa-3d09fe9afd54] Running
I0401 11:47:50.906845 8239 kverify.go:170] "kube-scheduler-minikube" [061b334-f87e-4e09-9b74-085edff2f27a] Running
I0401 11:47:50.906853 8239 kverify.go:170] "storage-provisioner" [279b9821-a4bf-4c6e-b4a4-bb8c5c379655] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0401 11:47:50.906863 8239 kverify.go:181] duration metric: took 6.77446ms to wait for pod list to return data ...
🏄 Done! kubectl is now configured to use "minikube"
I0401 11:47:51.166596 8239 start.go:457] kubectl: 1.17.4, cluster: 1.18.0 (minor skew: 1)

CoreDNS pod events:
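
The events below come from `kubectl describe` against one of the pending CoreDNS pods; a minimal way to reproduce the collection (pod name taken from the pod listing above):

    kubectl -n kube-system get pods
    kubectl -n kube-system describe pod coredns-66bff467f8-z74w2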

Normal Scheduled 119s default-scheduler Successfully assigned kube-system/coredns-66bff467f8-z74w2 to minikube
Warning FailedCreatePodSandBox 108s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "183b7485f29d74fcca0e3e31b367621a4df3b0ac2bf4ff12f8a6c8f874411055" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "183b7485f29d74fcca0e3e31b367621a4df3b0ac2bf4ff12f8a6c8f874411055" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.2 -j CNI-4f9f5426d273ec48648560fb -m comment --comment name: "crio-bridge" id: "183b7485f29d74fcca0e3e31b367621a4df3b0ac2bf4ff12f8a6c8f874411055" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-4f9f5426d273ec48648560fb':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 99s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6f575a367b050c1225ad44a1e7a47fbbfc85d3f92a602fa5e1297e80ff38aa3b" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "6f575a367b050c1225ad44a1e7a47fbbfc85d3f92a602fa5e1297e80ff38aa3b" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.5 -j CNI-c08e800aa04a454893c61a57 -m comment --comment name: "crio-bridge" id: "6f575a367b050c1225ad44a1e7a47fbbfc85d3f92a602fa5e1297e80ff38aa3b" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-c08e800aa04a454893c61a57':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 92s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "1b5a75c973b1fea24e63c129644cd2cc59e6bf56479afde9855b55a40a2a1bb1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "1b5a75c973b1fea24e63c129644cd2cc59e6bf56479afde9855b55a40a2a1bb1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.7 -j CNI-e573c6f3d6787116659417a2 -m comment --comment name: "crio-bridge" id: "1b5a75c973b1fea24e63c129644cd2cc59e6bf56479afde9855b55a40a2a1bb1" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-e573c6f3d6787116659417a2':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 84s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "b1fa52d756c9e93c0aca7a058120e8052da1139046153069906936153968ac7c" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "b1fa52d756c9e93c0aca7a058120e8052da1139046153069906936153968ac7c" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.9 -j CNI-5d08a8309232aa1b8848aba2 -m comment --comment name: "crio-bridge" id: "b1fa52d756c9e93c0aca7a058120e8052da1139046153069906936153968ac7c" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-5d08a8309232aa1b8848aba2':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 75s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "2727302b343e3aa10e170e2dbe29a99690a1a505143e557c88ae93a02b4fd126" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "2727302b343e3aa10e170e2dbe29a99690a1a505143e557c88ae93a02b4fd126" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.11 -j CNI-17f10a041095c868705e32aa -m comment --comment name: "crio-bridge" id: "2727302b343e3aa10e170e2dbe29a99690a1a505143e557c88ae93a02b4fd126" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-17f10a041095c868705e32aa':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 69s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "200ef6aa664dd46912f9b65cdda7e32f53cf77ce7fe8431a70c8c0e0cb600b9e" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "200ef6aa664dd46912f9b65cdda7e32f53cf77ce7fe8431a70c8c0e0cb600b9e" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.13 -j CNI-aa77f7ea540afa0c63b8391d -m comment --comment name: "crio-bridge" id: "200ef6aa664dd46912f9b65cdda7e32f53cf77ce7fe8431a70c8c0e0cb600b9e" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-aa77f7ea540afa0c63b8391d':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 62s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "3a78504fb11940e73be1943c8177f5db352820b5b56e9e25ec6ff8fe4812b7ee" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "3a78504fb11940e73be1943c8177f5db352820b5b56e9e25ec6ff8fe4812b7ee" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.15 -j CNI-1c71d365fe080056ebd5b176 -m comment --comment name: "crio-bridge" id: "3a78504fb11940e73be1943c8177f5db352820b5b56e9e25ec6ff8fe4812b7ee" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-1c71d365fe080056ebd5b176':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 56s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "9b4fffdf175906ab007358057a9fdaf8988f7f5812ba78fa89e1734da57cc434" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9b4fffdf175906ab007358057a9fdaf8988f7f5812ba78fa89e1734da57cc434" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.17 -j CNI-079971b10804b39a6ab8bbe8 -m comment --comment name: "crio-bridge" id: "9b4fffdf175906ab007358057a9fdaf8988f7f5812ba78fa89e1734da57cc434" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-079971b10804b39a6ab8bbe8':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Warning FailedCreatePodSandBox 50s kubelet, minikube Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "ecc94c37b51d60d44d53ad1cbfa7a46d06c7e4c0628b988531d8c0175ed9504a" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ecc94c37b51d60d44d53ad1cbfa7a46d06c7e4c0628b988531d8c0175ed9504a" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.19 -j CNI-73f9f4d74b3006aad0c3406c -m comment --comment name: "crio-bridge" id: "ecc94c37b51d60d44d53ad1cbfa7a46d06c7e4c0628b988531d8c0175ed9504a" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-73f9f4d74b3006aad0c3406c':No such file or directory

Try iptables -h' or 'iptables --help' for more information. ] Normal SandboxChanged 28s (x12 over 107s) kubelet, minikube Pod sandbox changed, it will be killed and re-created. Warning FailedCreatePodSandBox 22s (x4 over 42s) kubelet, minikube (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "5a5685e1d547326e40c0aafde92255587e675b5e6e3cf3120dad8682425d5a94" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "5a5685e1d547326e40c0aafde92255587e675b5e6e3cf3120dad8682425d5a94" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.27 -j CNI-73713a8e9b80438fa8993e9c -m comment --comment name: "crio-bridge" id: "5a5685e1d547326e40c0aafde92255587e675b5e6e3cf3120dad8682425d5a94" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-73713a8e9b80438fa8993e9c':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
]
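
For context, the "crio-bridge" name in the failed-teardown messages above comes from a bridge CNI config on the node rather than from kindnet, which suggests the kubelet is picking up more than one CNI config. A quick way to check what is actually present (assuming the standard /etc/cni/net.d path and reaching the node with docker exec, as the log above does) is a diagnostic sketch only, not a fix:

    # list and dump the CNI configs visible on the node ("minikube" is the node container)
    docker exec minikube /bin/bash -c "sudo ls -l /etc/cni/net.d/"
    docker exec minikube /bin/bash -c "sudo cat /etc/cni/net.d/*"
    # state of the bridge and NAT rules the errors refer to
    docker exec minikube /bin/bash -c "ip addr show cni0"
    docker exec minikube /bin/bash -c "sudo iptables -t nat -S POSTROUTING"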

Optional: Full output of `minikube logs` command:

==> Docker <==
-- Logs begin at Wed 2020-04-01 09:46:08 UTC, end at Wed 2020-04-01 09:54:23 UTC. --
Apr 01 09:50:59 minikube dockerd[490]: time="2020-04-01T09:50:59.681173698Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:02 minikube dockerd[490]: time="2020-04-01T09:51:02.228046376Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:05 minikube dockerd[490]: time="2020-04-01T09:51:05.673074490Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:08 minikube dockerd[490]: time="2020-04-01T09:51:08.379885826Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:11 minikube dockerd[490]: time="2020-04-01T09:51:11.621047949Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:14 minikube dockerd[490]: time="2020-04-01T09:51:14.334816950Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:17 minikube dockerd[490]: time="2020-04-01T09:51:17.844004852Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:21 minikube dockerd[490]: time="2020-04-01T09:51:21.177131297Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:24 minikube dockerd[490]: time="2020-04-01T09:51:24.982609634Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:28 minikube dockerd[490]: time="2020-04-01T09:51:28.309207620Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:31 minikube dockerd[490]: time="2020-04-01T09:51:31.502924456Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:34 minikube dockerd[490]: time="2020-04-01T09:51:34.912325876Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:39 minikube dockerd[490]: time="2020-04-01T09:51:39.158404812Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:42 minikube dockerd[490]: time="2020-04-01T09:51:42.118060135Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:45 minikube dockerd[490]: time="2020-04-01T09:51:45.712526197Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:49 minikube dockerd[490]: time="2020-04-01T09:51:49.345529143Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:52 minikube dockerd[490]: time="2020-04-01T09:51:52.166253129Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:55 minikube dockerd[490]: time="2020-04-01T09:51:55.276835795Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:51:57 minikube dockerd[490]: time="2020-04-01T09:51:57.699544341Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:01 minikube dockerd[490]: time="2020-04-01T09:52:01.465562143Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:03 minikube dockerd[490]: time="2020-04-01T09:52:03.641486670Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:06 minikube dockerd[490]: time="2020-04-01T09:52:06.285126797Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:09 minikube dockerd[490]: time="2020-04-01T09:52:09.175289346Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:13 minikube dockerd[490]: time="2020-04-01T09:52:13.941298528Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:17 minikube dockerd[490]: time="2020-04-01T09:52:17.236735825Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:20 minikube dockerd[490]: time="2020-04-01T09:52:20.206899509Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:23 minikube dockerd[490]: time="2020-04-01T09:52:23.701839258Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:26 minikube dockerd[490]: time="2020-04-01T09:52:26.926636358Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:29 minikube dockerd[490]: time="2020-04-01T09:52:29.833094069Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:33 minikube dockerd[490]: time="2020-04-01T09:52:33.207042534Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:36 minikube dockerd[490]: time="2020-04-01T09:52:36.379648298Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:40 minikube dockerd[490]: time="2020-04-01T09:52:40.001887414Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:42 minikube dockerd[490]: time="2020-04-01T09:52:42.892793435Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:46 minikube dockerd[490]: time="2020-04-01T09:52:46.299781147Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:49 minikube dockerd[490]: time="2020-04-01T09:52:49.549610895Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:52 minikube dockerd[490]: time="2020-04-01T09:52:52.062696136Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:54 minikube dockerd[490]: time="2020-04-01T09:52:54.583784539Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:52:57 minikube dockerd[490]: time="2020-04-01T09:52:57.207701245Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:01 minikube dockerd[490]: time="2020-04-01T09:53:01.851923856Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:04 minikube dockerd[490]: time="2020-04-01T09:53:04.903374988Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:10 minikube dockerd[490]: time="2020-04-01T09:53:10.377242639Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:12 minikube dockerd[490]: time="2020-04-01T09:53:12.676005901Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:16 minikube dockerd[490]: time="2020-04-01T09:53:16.322485699Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:19 minikube dockerd[490]: time="2020-04-01T09:53:19.992565166Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:22 minikube dockerd[490]: time="2020-04-01T09:53:22.705125103Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:25 minikube dockerd[490]: time="2020-04-01T09:53:25.729876394Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:28 minikube dockerd[490]: time="2020-04-01T09:53:28.773678873Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:31 minikube dockerd[490]: time="2020-04-01T09:53:31.896658010Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:35 minikube dockerd[490]: time="2020-04-01T09:53:35.836483723Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:38 minikube dockerd[490]: time="2020-04-01T09:53:38.769040837Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:41 minikube dockerd[490]: time="2020-04-01T09:53:41.738606281Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:44 minikube dockerd[490]: time="2020-04-01T09:53:44.893206265Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:48 minikube dockerd[490]: time="2020-04-01T09:53:48.816310907Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:52 minikube dockerd[490]: time="2020-04-01T09:53:52.877527392Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:56 minikube dockerd[490]: time="2020-04-01T09:53:56.841520637Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:53:59 minikube dockerd[490]: time="2020-04-01T09:53:59.311523463Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:54:03 minikube dockerd[490]: time="2020-04-01T09:54:03.170362141Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:54:11 minikube dockerd[490]: time="2020-04-01T09:54:11.321776022Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:54:15 minikube dockerd[490]: time="2020-04-01T09:54:15.172695882Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 01 09:54:19 minikube dockerd[490]: time="2020-04-01T09:54:19.930621480Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
8fc9cc1bc15ce 4689081edb103 6 minutes ago Running storage-provisioner 0 30a0c5c32385d
bea3e72d5638d aa67fec7d7ef7 6 minutes ago Running kindnet-cni 0 83fd2295aee50
0c8c6357ad05e 43940c34f24f3 6 minutes ago Running kube-proxy 0 dee789156ed52
e63096f3eaf43 303ce5db0e90d 7 minutes ago Running etcd 0 f00c9487ea1bf
6410bc0230f16 74060cea7f704 7 minutes ago Running kube-apiserver 0 7ead18e669662
359b0b724d7df d3e55153f52fb 7 minutes ago Running kube-controller-manager 0 d7bc7d8b68721
3fc33c53f9fca a31f78c7c8ce1 7 minutes ago Running kube-scheduler 0 4d11b2b54d9a5

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=48fefd43444d2f8852f527c78f0141b377b1e42a
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_04_01T11_47_35_0700
minikube.k8s.io/version=v1.9.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 01 Apr 2020 09:47:30 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Wed, 01 Apr 2020 09:54:20 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Wed, 01 Apr 2020 09:52:54 +0000 Wed, 01 Apr 2020 09:47:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 01 Apr 2020 09:52:54 +0000 Wed, 01 Apr 2020 09:47:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 01 Apr 2020 09:52:54 +0000 Wed, 01 Apr 2020 09:47:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 01 Apr 2020 09:52:54 +0000 Wed, 01 Apr 2020 09:47:51 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 8
ephemeral-storage: 231488068Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32721936Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 213339403116
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32619536Ki
pods: 110
System Info:
Machine ID: c9bb73676eea461685b74966f6baefb8
System UUID: be483f4b-4e9c-446c-a3d5-af8108c8f1c5
Boot ID: bfd613ae-bcd4-43c8-8ba9-c5a9f621c1b1
Kernel Version: 4.15.0-91-generic
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.18.0
Kube-Proxy Version: v1.18.0
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system coredns-66bff467f8-z74w2 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 6m48s
kube-system coredns-66bff467f8-zg7r6 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 6m48s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m45s
kube-system kindnet-p54hr 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 6m49s
kube-system kube-apiserver-minikube 250m (3%) 0 (0%) 0 (0%) 0 (0%) 6m45s
kube-system kube-controller-manager-minikube 200m (2%) 0 (0%) 0 (0%) 0 (0%) 6m45s
kube-system kube-proxy-dgfzp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m49s
kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 6m45s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m40s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 850m (10%) 100m (1%)
memory 190Mi (0%) 390Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal Starting 6m46s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 6m46s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m46s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m46s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m46s kubelet, minikube Updated Node Allocatable limit across pods
Normal NodeReady 6m35s kubelet, minikube Node minikube status is now: NodeReady
Normal Starting 6m35s kube-proxy, minikube Starting kube-proxy.

==> dmesg <==
[Mar31 15:50] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.

==> etcd [e63096f3eaf4] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-01 09:47:26.383944 I | etcdmain: etcd Version: 3.4.3
2020-04-01 09:47:26.383972 I | etcdmain: Git SHA: 3cf2f69b5
2020-04-01 09:47:26.383976 I | etcdmain: Go Version: go1.12.12
2020-04-01 09:47:26.383979 I | etcdmain: Go OS/Arch: linux/amd64
2020-04-01 09:47:26.383985 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-01 09:47:26.384052 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-01 09:47:26.391615 I | embed: name = minikube
2020-04-01 09:47:26.391625 I | embed: data dir = /var/lib/minikube/etcd
2020-04-01 09:47:26.391629 I | embed: member dir = /var/lib/minikube/etcd/member
2020-04-01 09:47:26.391632 I | embed: heartbeat = 100ms
2020-04-01 09:47:26.391635 I | embed: election = 1000ms
2020-04-01 09:47:26.391640 I | embed: snapshot count = 10000
2020-04-01 09:47:26.391651 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-04-01 09:47:26.402744 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
raft2020/04/01 09:47:26 INFO: b8e14bda2255bc24 switched to configuration voters=()
raft2020/04/01 09:47:26 INFO: b8e14bda2255bc24 became follower at term 0
raft2020/04/01 09:47:26 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/04/01 09:47:26 INFO: b8e14bda2255bc24 became follower at term 1
raft2020/04/01 09:47:26 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-04-01 09:47:26.415395 W | auth: simple token is not cryptographically signed
2020-04-01 09:47:26.498107 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-04-01 09:47:26.498565 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/04/01 09:47:26 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-04-01 09:47:26.498932 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-04-01 09:47:26.501093 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-01 09:47:26.501153 I | embed: listening for peers on 172.17.0.2:2380
2020-04-01 09:47:26.501223 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/04/01 09:47:27 INFO: b8e14bda2255bc24 is starting a new election at term 1
raft2020/04/01 09:47:27 INFO: b8e14bda2255bc24 became candidate at term 2
raft2020/04/01 09:47:27 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
raft2020/04/01 09:47:27 INFO: b8e14bda2255bc24 became leader at term 2
raft2020/04/01 09:47:27 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-04-01 09:47:27.403865 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-04-01 09:47:27.403880 I | embed: ready to serve client requests
2020-04-01 09:47:27.403887 I | etcdserver: setting up the initial cluster version to 3.4
2020-04-01 09:47:27.403905 I | embed: ready to serve client requests
2020-04-01 09:47:27.407486 N | etcdserver/membership: set the initial cluster version to 3.4
2020-04-01 09:47:27.407547 I | etcdserver/api: enabled capabilities for version 3.4
2020-04-01 09:47:27.559006 I | embed: serving client requests on 127.0.0.1:2379
2020-04-01 09:47:27.559053 I | embed: serving client requests on 172.17.0.2:2379
2020-04-01 09:49:21.811765 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-controller-manager" " with result "range_response_count:1 size:507" took too long (129.233387ms) to execute
2020-04-01 09:49:48.173177 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-scheduler" " with result "range_response_count:1 size:575" took too long (108.454287ms) to execute
2020-04-01 09:49:48.173261 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-controller-manager" " with result "range_response_count:1 size:505" took too long (137.867128ms) to execute
2020-04-01 09:49:54.367694 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-controller-manager" " with result "range_response_count:1 size:506" took too long (121.293908ms) to execute
2020-04-01 09:49:54.367776 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-scheduler" " with result "range_response_count:1 size:575" took too long (103.71238ms) to execute

==> kernel <==
09:54:29 up 13 days, 2:26, 0 users, load average: 6.30, 5.35, 3.71
Linux minikube 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [6410bc0230f1] <==
W0401 09:47:28.450728 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0401 09:47:28.461427 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0401 09:47:28.463643 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0401 09:47:28.473742 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0401 09:47:28.487001 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0401 09:47:28.487014 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0401 09:47:28.493612 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0401 09:47:28.493631 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0401 09:47:28.500200 1 client.go:361] parsed scheme: "endpoint"
I0401 09:47:28.500232 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0401 09:47:28.505611 1 client.go:361] parsed scheme: "endpoint"
I0401 09:47:28.505626 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0401 09:47:29.899977 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0401 09:47:29.899980 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0401 09:47:29.900450 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0401 09:47:29.900749 1 secure_serving.go:178] Serving securely on [::]:8443
I0401 09:47:29.900774 1 controller.go:81] Starting OpenAPI AggregationController
I0401 09:47:29.900791 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0401 09:47:29.900830 1 autoregister_controller.go:141] Starting autoregister controller
I0401 09:47:29.900838 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0401 09:47:29.900858 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0401 09:47:29.900864 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0401 09:47:29.900877 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0401 09:47:29.900882 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0401 09:47:29.900949 1 crd_finalizer.go:266] Starting CRDFinalizer
I0401 09:47:29.900965 1 controller.go:86] Starting OpenAPI controller
I0401 09:47:29.900977 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0401 09:47:29.900989 1 naming_controller.go:291] Starting NamingConditionController
I0401 09:47:29.901004 1 establishing_controller.go:76] Starting EstablishingController
I0401 09:47:29.901011 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0401 09:47:29.901023 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0401 09:47:29.901242 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0401 09:47:29.901252 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0401 09:47:29.901286 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0401 09:47:29.901312 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0401 09:47:29.901543 1 available_controller.go:387] Starting AvailableConditionController
I0401 09:47:29.901551 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E0401 09:47:29.914095 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0401 09:47:30.000982 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0401 09:47:30.001029 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0401 09:47:30.001037 1 cache.go:39] Caches are synced for autoregister controller
I0401 09:47:30.001666 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0401 09:47:30.001675 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0401 09:47:30.849841 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0401 09:47:30.849979 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0401 09:47:30.903359 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0401 09:47:30.907103 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0401 09:47:30.907116 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0401 09:47:31.104144 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0401 09:47:31.122756 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0401 09:47:31.154683 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0401 09:47:31.155286 1 controller.go:606] quota admission added evaluator for: endpoints
I0401 09:47:31.157196 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0401 09:47:32.766466 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0401 09:47:32.777647 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0401 09:47:32.799345 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0401 09:47:33.212983 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0401 09:47:37.975472 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0401 09:47:37.975472 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0401 09:47:38.163698 1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-controller-manager [359b0b724d7d] <==
I0401 09:47:36.568440 1 daemon_controller.go:257] Starting daemon sets controller
I0401 09:47:36.568454 1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0401 09:47:36.661456 1 controllermanager.go:533] Started "job"
I0401 09:47:36.661504 1 job_controller.go:144] Starting job controller
I0401 09:47:36.661511 1 shared_informer.go:223] Waiting for caches to sync for job
E0401 09:47:36.810977 1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0401 09:47:36.810993 1 controllermanager.go:525] Skipping "service"
I0401 09:47:37.565673 1 garbagecollector.go:133] Starting garbage collector controller
I0401 09:47:37.565689 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0401 09:47:37.565704 1 graph_builder.go:282] GraphBuilder running
I0401 09:47:37.565707 1 controllermanager.go:533] Started "garbagecollector"
I0401 09:47:37.569358 1 controllermanager.go:533] Started "pv-protection"
I0401 09:47:37.570194 1 pv_protection_controller.go:83] Starting PV protection controller
I0401 09:47:37.570291 1 shared_informer.go:223] Waiting for caches to sync for PV protection
I0401 09:47:37.580369 1 shared_informer.go:230] Caches are synced for PVC protection
I0401 09:47:37.583790 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0401 09:47:37.587167 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0401 09:47:37.590652 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0401 09:47:37.594580 1 shared_informer.go:230] Caches are synced for ReplicationController
I0401 09:47:37.610654 1 shared_informer.go:230] Caches are synced for HPA
I0401 09:47:37.610890 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0401 09:47:37.661814 1 shared_informer.go:230] Caches are synced for endpoint
I0401 09:47:37.666121 1 shared_informer.go:230] Caches are synced for stateful set
I0401 09:47:37.670450 1 shared_informer.go:230] Caches are synced for PV protection
I0401 09:47:37.671387 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0401 09:47:37.713046 1 request.go:621] Throttling request took 1.048767055s, request: GET:https://172.17.0.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0401 09:47:37.761929 1 shared_informer.go:230] Caches are synced for expand
I0401 09:47:37.861624 1 shared_informer.go:230] Caches are synced for job
W0401 09:47:37.920659 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0401 09:47:37.960952 1 shared_informer.go:230] Caches are synced for GC
I0401 09:47:37.968599 1 shared_informer.go:230] Caches are synced for daemon sets
I0401 09:47:37.976524 1 shared_informer.go:230] Caches are synced for taint
I0401 09:47:37.976621 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0401 09:47:37.976705 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0401 09:47:37.976759 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0401 09:47:37.976766 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"bf59aa5f-9abe-4567-bb3f-f60e7dd76743", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0401 09:47:37.976785 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0401 09:47:37.983651 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"722f7b71-aa7d-43f5-8d2d-9f48e878b40c", APIVersion:"apps/v1", ResourceVersion:"183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-dgfzp
I0401 09:47:37.986247 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"b7daa473-1506-4813-bd70-f18fe26d6a1c", APIVersion:"apps/v1", ResourceVersion:"251", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-p54hr
E0401 09:47:37.992900 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"722f7b71-aa7d-43f5-8d2d-9f48e878b40c", ResourceVersion:"183", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721331252, loc:(*time.Location)(0x6d021e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018ac100), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018ac120)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018ac140), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0010ac8c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018ac160), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018ac180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018ac1c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00150cbe0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000755918), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000456230), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00171c108)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000755978)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0401 09:47:38.010975 1 shared_informer.go:230] Caches are synced for TTL
I0401 09:47:38.010975 1 shared_informer.go:230] Caches are synced for persistent volume
I0401 09:47:38.012745 1 shared_informer.go:230] Caches are synced for attach detach
I0401 09:47:38.110672 1 shared_informer.go:230] Caches are synced for disruption
I0401 09:47:38.110691 1 disruption.go:339] Sending events to api server.
I0401 09:47:38.111319 1 shared_informer.go:230] Caches are synced for deployment
I0401 09:47:38.160755 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0401 09:47:38.164629 1 shared_informer.go:230] Caches are synced for resource quota
I0401 09:47:38.165032 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"47882f76-66a8-46bc-8bec-fea4824d733b", APIVersion:"apps/v1", ResourceVersion:"178", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0401 09:47:38.211247 1 shared_informer.go:230] Caches are synced for service account
I0401 09:47:38.217853 1 shared_informer.go:230] Caches are synced for namespace
I0401 09:47:38.265347 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d0af12d1-c047-437d-94e7-f456bc9bd4e0", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-zg7r6
I0401 09:47:38.265874 1 shared_informer.go:230] Caches are synced for garbage collector
I0401 09:47:38.265971 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0401 09:47:38.270105 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d0af12d1-c047-437d-94e7-f456bc9bd4e0", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-z74w2
I0401 09:47:38.313765 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0401 09:47:38.313811 1 shared_informer.go:230] Caches are synced for resource quota
I0401 09:47:39.065044 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0401 09:47:39.065091 1 shared_informer.go:230] Caches are synced for garbage collector
I0401 09:47:52.977389 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [0c8c6357ad05] <==
W0401 09:47:51.416796 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0401 09:47:51.421864 1 node.go:136] Successfully retrieved node IP: 172.17.0.2
I0401 09:47:51.421891 1 server_others.go:186] Using iptables Proxier.
W0401 09:47:51.421898 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0401 09:47:51.421902 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0401 09:47:51.422109 1 server.go:583] Version: v1.18.0
I0401 09:47:51.422449 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0401 09:47:51.422579 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0401 09:47:51.422622 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0401 09:47:51.422753 1 config.go:133] Starting endpoints config controller
I0401 09:47:51.422762 1 config.go:315] Starting service config controller
I0401 09:47:51.422768 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0401 09:47:51.422769 1 shared_informer.go:223] Waiting for caches to sync for service config
I0401 09:47:51.522942 1 shared_informer.go:230] Caches are synced for service config
I0401 09:47:51.522952 1 shared_informer.go:230] Caches are synced for endpoints config

==> kube-scheduler [3fc33c53f9fc] <==
I0401 09:47:26.314830 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0401 09:47:26.314904 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0401 09:47:26.655206 1 serving.go:313] Generated self-signed cert in-memory
W0401 09:47:29.922780 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0401 09:47:29.922950 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0401 09:47:29.923029 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0401 09:47:29.923090 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0401 09:47:29.958764 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0401 09:47:29.958777 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0401 09:47:29.959656 1 authorization.go:47] Authorization is disabled
W0401 09:47:29.959664 1 authentication.go:40] Authentication is disabled
I0401 09:47:29.959672 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0401 09:47:29.960379 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 09:47:29.960397 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 09:47:29.960546 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0401 09:47:29.960562 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0401 09:47:29.961540 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 09:47:29.961657 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0401 09:47:29.961769 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0401 09:47:29.961826 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 09:47:29.961860 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0401 09:47:29.961888 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 09:47:29.961935 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 09:47:29.962078 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0401 09:47:29.962151 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0401 09:47:29.962164 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0401 09:47:29.963025 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0401 09:47:29.964065 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0401 09:47:29.965151 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0401 09:47:29.966210 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0401 09:47:29.967266 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0401 09:47:29.968297 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0401 09:47:29.969523 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0401 09:47:29.970538 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0401 09:47:33.060603 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0401 09:47:33.261070 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0401 09:47:33.265614 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2020-04-01 09:46:08 UTC, end at Wed 2020-04-01 09:54:42 UTC. --
Apr 01 09:54:26 minikube kubelet[2655]: ]
Apr 01 09:54:26 minikube kubelet[2655]: E0401 09:54:26.163063 2655 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.113 -j CNI-5279f2785e63c780d0732515 -m comment --comment name: "crio-bridge" id: "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-5279f2785e63c780d0732515':No such file or directory Apr 01 09:54:26 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:26 minikube kubelet[2655]: ]
Apr 01 09:54:26 minikube kubelet[2655]: E0401 09:54:26.163123 2655 pod_workers.go:191] Error syncing pod 8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17 ("coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.113 -j CNI-5279f2785e63c780d0732515 -m comment --comment name: "crio-bridge" id: "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-5279f2785e63c780d0732515':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Apr 01 09:54:26 minikube kubelet[2655]: W0401 09:54:26.839683 2655 pod_container_deletor.go:77] Container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" not found in pod's containers
Apr 01 09:54:28 minikube kubelet[2655]: W0401 09:54:28.259379 2655 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-66bff467f8-z74w2_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1"
Apr 01 09:54:28 minikube kubelet[2655]: W0401 09:54:28.442908 2655 pod_container_deletor.go:77] Container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1" not found in pod's containers
Apr 01 09:54:28 minikube kubelet[2655]: W0401 09:54:28.444636 2655 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "a3d127884cab0e40db372b7fb6719bd45c90f47e8b478a9771d6f324b999bad1"
Apr 01 09:54:30 minikube kubelet[2655]: E0401 09:54:30.578092 2655 cni.go:364] Error adding kube-system_coredns-66bff467f8-zg7r6/f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Apr 01 09:54:31 minikube kubelet[2655]: E0401 09:54:31.382567 2655 cni.go:385] Error deleting kube-system_coredns-66bff467f8-zg7r6/f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.114 -j CNI-ea3ab019824f40082824bf95 -m comment --comment name: "crio-bridge" id: "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ea3ab019824f40082824bf95':No such file or directory Apr 01 09:54:31 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:31 minikube kubelet[2655]: W0401 09:54:31.553702 2655 pod_container_deletor.go:77] Container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" not found in pod's containers
Apr 01 09:54:32 minikube kubelet[2655]: E0401 09:54:32.093322 2655 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.114 -j CNI-ea3ab019824f40082824bf95 -m comment --comment name: "crio-bridge" id: "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ea3ab019824f40082824bf95':No such file or directory Apr 01 09:54:32 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:32 minikube kubelet[2655]: ]
Apr 01 09:54:32 minikube kubelet[2655]: E0401 09:54:32.093378 2655 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.114 -j CNI-ea3ab019824f40082824bf95 -m comment --comment name: "crio-bridge" id: "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ea3ab019824f40082824bf95':No such file or directory Apr 01 09:54:32 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:32 minikube kubelet[2655]: ]
Apr 01 09:54:32 minikube kubelet[2655]: E0401 09:54:32.093392 2655 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.114 -j CNI-ea3ab019824f40082824bf95 -m comment --comment name: "crio-bridge" id: "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ea3ab019824f40082824bf95':No such file or directory Apr 01 09:54:32 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:32 minikube kubelet[2655]: ]
Apr 01 09:54:32 minikube kubelet[2655]: E0401 09:54:32.093462 2655 pod_workers.go:191] Error syncing pod 4971bb50-edc1-48cc-8d8a-99c7f22f1b89 ("coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.114 -j CNI-ea3ab019824f40082824bf95 -m comment --comment name: "crio-bridge" id: "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ea3ab019824f40082824bf95':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Apr 01 09:54:32 minikube kubelet[2655]: W0401 09:54:32.696563 2655 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-66bff467f8-zg7r6_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4"
Apr 01 09:54:32 minikube kubelet[2655]: W0401 09:54:32.702199 2655 pod_container_deletor.go:77] Container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4" not found in pod's containers
Apr 01 09:54:32 minikube kubelet[2655]: W0401 09:54:32.703526 2655 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f44486313af6605b6b81e6dc657ebcbe3c291cc38b1de35b3efd910605398cb4"
Apr 01 09:54:33 minikube kubelet[2655]: E0401 09:54:33.274694 2655 cni.go:364] Error adding kube-system_coredns-66bff467f8-z74w2/9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Apr 01 09:54:33 minikube kubelet[2655]: E0401 09:54:33.461895 2655 cni.go:385] Error deleting kube-system_coredns-66bff467f8-z74w2/9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.115 -j CNI-a908ecc9a1ac57dfcd5e4b8f -m comment --comment name: "crio-bridge" id: "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-a908ecc9a1ac57dfcd5e4b8f':No such file or directory Apr 01 09:54:33 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:35 minikube kubelet[2655]: E0401 09:54:35.419973 2655 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.115 -j CNI-a908ecc9a1ac57dfcd5e4b8f -m comment --comment name: "crio-bridge" id: "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-a908ecc9a1ac57dfcd5e4b8f':No such file or directory Apr 01 09:54:35 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:35 minikube kubelet[2655]: ]
Apr 01 09:54:35 minikube kubelet[2655]: E0401 09:54:35.420002 2655 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.115 -j CNI-a908ecc9a1ac57dfcd5e4b8f -m comment --comment name: "crio-bridge" id: "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-a908ecc9a1ac57dfcd5e4b8f':No such file or directory Apr 01 09:54:35 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:35 minikube kubelet[2655]: ]
Apr 01 09:54:35 minikube kubelet[2655]: E0401 09:54:35.420012 2655 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.115 -j CNI-a908ecc9a1ac57dfcd5e4b8f -m comment --comment name: "crio-bridge" id: "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-a908ecc9a1ac57dfcd5e4b8f':No such file or directory Apr 01 09:54:35 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:35 minikube kubelet[2655]: ]
Apr 01 09:54:35 minikube kubelet[2655]: E0401 09:54:35.420049 2655 pod_workers.go:191] Error syncing pod 8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17 ("coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-66bff467f8-z74w2_kube-system(8ed551bf-dd2d-4715-ba1b-cc1ce5bd3c17)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to set up pod "coredns-66bff467f8-z74w2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" network for pod "coredns-66bff467f8-z74w2": networkPlugin cni failed to teardown pod "coredns-66bff467f8-z74w2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.115 -j CNI-a908ecc9a1ac57dfcd5e4b8f -m comment --comment name: "crio-bridge" id: "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-a908ecc9a1ac57dfcd5e4b8f':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Apr 01 09:54:36 minikube kubelet[2655]: W0401 09:54:36.597386 2655 pod_container_deletor.go:77] Container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" not found in pod's containers
Apr 01 09:54:37 minikube kubelet[2655]: W0401 09:54:37.819204 2655 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-66bff467f8-z74w2_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae"
Apr 01 09:54:37 minikube kubelet[2655]: W0401 09:54:37.824610 2655 pod_container_deletor.go:77] Container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae" not found in pod's containers
Apr 01 09:54:37 minikube kubelet[2655]: W0401 09:54:37.825979 2655 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9b11ceb1779c55f3ec66f9a7de8c1902a6119060855a4f7663ea014bf3cee5ae"
Apr 01 09:54:39 minikube kubelet[2655]: E0401 09:54:39.976888 2655 cni.go:364] Error adding kube-system_coredns-66bff467f8-zg7r6/7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Apr 01 09:54:40 minikube kubelet[2655]: E0401 09:54:40.180350 2655 cni.go:385] Error deleting kube-system_coredns-66bff467f8-zg7r6/7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.116 -j CNI-9858966d43acd0c70a2202fc -m comment --comment name: "crio-bridge" id: "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-9858966d43acd0c70a2202fc':No such file or directory Apr 01 09:54:40 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:41 minikube kubelet[2655]: W0401 09:54:41.188421 2655 pod_container_deletor.go:77] Container "d6920acfcacc1b50e5e4b12cc289af6b63f6744332aaae38e4854db74d914888" not found in pod's containers
Apr 01 09:54:41 minikube kubelet[2655]: E0401 09:54:41.328649 2655 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.116 -j CNI-9858966d43acd0c70a2202fc -m comment --comment name: "crio-bridge" id: "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-9858966d43acd0c70a2202fc':No such file or directory Apr 01 09:54:41 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:41 minikube kubelet[2655]: ]
Apr 01 09:54:41 minikube kubelet[2655]: E0401 09:54:41.328690 2655 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.116 -j CNI-9858966d43acd0c70a2202fc -m comment --comment name: "crio-bridge" id: "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-9858966d43acd0c70a2202fc':No such file or directory Apr 01 09:54:41 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:41 minikube kubelet[2655]: ]
Apr 01 09:54:41 minikube kubelet[2655]: E0401 09:54:41.328705 2655 kuberuntime_manager.go:727] createPodSandbox for pod "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.116 -j CNI-9858966d43acd0c70a2202fc -m comment --comment name: "crio-bridge" id: "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-9858966d43acd0c70a2202fc':No such file or directory Apr 01 09:54:41 minikube kubelet[2655]: Try iptables -h' or 'iptables --help' for more information.
Apr 01 09:54:41 minikube kubelet[2655]: ]
Apr 01 09:54:41 minikube kubelet[2655]: E0401 09:54:41.328762 2655 pod_workers.go:191] Error syncing pod 4971bb50-edc1-48cc-8d8a-99c7f22f1b89 ("coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)"), skipping: failed to "CreatePodSandbox" for "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-66bff467f8-zg7r6_kube-system(4971bb50-edc1-48cc-8d8a-99c7f22f1b89)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to set up pod "coredns-66bff467f8-zg7r6_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" network for pod "coredns-66bff467f8-zg7r6": networkPlugin cni failed to teardown pod "coredns-66bff467f8-zg7r6_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.116 -j CNI-9858966d43acd0c70a2202fc -m comment --comment name: "crio-bridge" id: "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-9858966d43acd0c70a2202fc':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Apr 01 09:54:42 minikube kubelet[2655]: W0401 09:54:42.497416 2655 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-66bff467f8-zg7r6_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774"
Apr 01 09:54:42 minikube kubelet[2655]: W0401 09:54:42.502982 2655 pod_container_deletor.go:77] Container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774" not found in pod's containers
Apr 01 09:54:42 minikube kubelet[2655]: W0401 09:54:42.504523 2655 cni.go:331] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "7e855a968fe76a86225a1e9c2c8f7a5473cf812944642bdcc50f238902abd774"

==> storage-provisioner [8fc9cc1bc15c] <==

@tstromberg added the kind/support label Apr 6, 2020
@tstromberg

If I recall correctly, this issue happens because CNI is enabled but no CNI plugin has been loaded into Kubernetes yet, and that needs to happen before the CoreDNS deployment can come up.

The workaround I've seen others use is side-loading a CNI while minikube is starting, but that isn't very friendly. This is certainly something we'll need to address for multi-node.
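For anyone hitting this in the meantime, a rough sketch of that workaround; the flannel manifest is just one example of a CNI to side-load, and the URL and flags are illustrative rather than an endorsement of a specific plugin:

# Start with CNI enabled as in the report, then apply a CNI plugin manifest
# so the CoreDNS pod sandboxes can actually be created.
minikube start --network-plugin=cni --enable-default-cni
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# CoreDNS should move past ContainerCreating once the CNI pods are Running:
kubectl -n kube-system get pods -w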

@tstromberg added kind/bug, area/cni, priority/important-soon and removed kind/support labels Apr 6, 2020
@tstromberg

tstromberg commented Apr 7, 2020

Related: #3852

@tstromberg

Also related: #7459

@medyagh

medyagh commented Apr 7, 2020

I have a feeling this would be solved if we installed a CNI by default whenever the runtime is not Docker, similar to what we already do for the Docker and Podman drivers:

https://github.com/kubernetes/minikube/blob/master/pkg/minikube/bootstrapper/kubeadm/kubeadm.go#L235

related or possible dupe: #7428

@medyagh

medyagh commented Apr 8, 2020

I believe I found the root cause of this issue: if extra-opts is set, it replaces minikube's own extra args for the kic overlay.
I tried locally just by adding:

--extra-config=kubeadm.ignore-preflight-

and both containerd and crio break.

This should be an easy fix.

@irizzant do you mind sharing which runtime you use? Are you using docker, containerd, or crio?

@medyagh self-assigned this Apr 8, 2020
@medyagh

medyagh commented Apr 8, 2020

One observation: when we specify extra-options, the required kic overlay extra options get overwritten. The profile config below shows the result (a way to reproduce and inspect this is sketched after the config dump).

minikube start -p p2 --memory=2200 --alsologtostderr -v=3 --wait=true --container-runtime=containerd --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7
{
    "Name": "p2",
    "KeepContext": false,
    "EmbedCerts": false,
    "MinikubeISO": "",
    "Memory": 2200,
    "CPUs": 2,
    "DiskSize": 20000,
    "Driver": "docker",
    "HyperkitVpnKitSock": "",
    "HyperkitVSockPorts": [],
    "DockerEnv": null,
    "InsecureRegistry": null,
    "RegistryMirror": null,
    "HostOnlyCIDR": "192.168.99.1/24",
    "HypervVirtualSwitch": "",
    "HypervUseExternalSwitch": false,
    "HypervExternalAdapter": "",
    "KVMNetwork": "default",
    "KVMQemuURI": "qemu:///system",
    "KVMGPU": false,
    "KVMHidden": false,
    "DockerOpt": null,
    "DisableDriverMounts": true,
    "NFSShare": [],
    "NFSSharesRoot": "/nfsshares",
    "UUID": "",
    "NoVTXCheck": false,
    "DNSProxy": false,
    "HostDNSResolver": true,
    "HostOnlyNicType": "virtio",
    "NatNicType": "virtio",
    "KubernetesConfig": {
        "KubernetesVersion": "v1.15.7",
        "ClusterName": "p2",
        "APIServerName": "minikubeCA",
        "APIServerNames": null,
        "APIServerIPs": null,
        "DNSDomain": "cluster.local",
        "ContainerRuntime": "containerd",
        "CRISocket": "",
        "NetworkPlugin": "cni",
        "FeatureGates": "",
        "ServiceCIDR": "10.96.0.0/12",
        "ImageRepository": "",
        "ExtraOptions": [
            {
                "Component": "kubeadm",
                "Key": "ignore-preflight-errors",
                "Value": "SystemVerification"
            }
        ],
        "ShouldLoadCachedImages": true,
        "EnableDefaultCNI": true,
        "NodeIP": "",
        "NodePort": 0,
        "NodeName": ""
    },
    "Nodes": [
        {
            "Name": "m01",
            "IP": "172.17.0.2",
            "Port": 8443,
            "KubernetesVersion": "v1.15.7",
            "ControlPlane": true,
            "Worker": true
        }
    ],
    "Addons": null,
    "VerifyComponents": {
        "apiserver": true,
        "default_sa": true,
        "system_pods": true
    }
}

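To make that observation concrete, a rough way to reproduce and inspect it; the profile name and config path are assumptions based on default minikube locations, not verified against this exact setup:

# Start a profile with a user-supplied --extra-config flag, as in the repro above.
minikube start -p p2 --driver=docker --container-runtime=containerd \
  --extra-config=kubeadm.ignore-preflight-errors=SystemVerification
# Then check whether the kic-overlay defaults survived next to the user option
# in the stored cluster config:
grep -A 8 '"ExtraOptions"' ~/.minikube/profiles/p2/config.json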
@priyawadhwa

Hey @irizzant -- are you still seeing this issue? If so, could you let us know which container runtime (docker (default)/containerd/crio) you are using?

@irizzant

irizzant commented May 29, 2020

Hi @priyawadhwa
I'm on:

minikube version: v1.10.1
commit: 63ab801ac27e5742ae442ce36dff7877dcccb278

and starting with --network-plugin=cni --enable-default-cni no longer makes minikube crash.
By the way, I'm using docker as the container runtime.
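A minimal re-check along those lines, assuming the flags from the original report and the standard k8s-app=kube-dns label that CoreDNS pods carry:

minikube start --network-plugin=cni --enable-default-cni
# CoreDNS should now reach Running instead of being stuck in ContainerCreating:
kubectl -n kube-system get pods -l k8s-app=kube-dns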
