
Minikube 1.16.0 on Fedora 33 (podman + cri-o) doesn't start #10182

Closed
@mrizzi

Description

Steps to reproduce the issue:

  1. $ minikube start --driver=podman --container-runtime=cri-o --alsologtostderr

Full output of failed command:

I0120 10:10:07.692919   34725 out.go:221] Setting OutFile to fd 1 ...
I0120 10:10:07.693207   34725 out.go:273] isatty.IsTerminal(1) = true
I0120 10:10:07.693217   34725 out.go:234] Setting ErrFile to fd 2...
I0120 10:10:07.693224   34725 out.go:273] isatty.IsTerminal(2) = true
I0120 10:10:07.693305   34725 root.go:280] Updating PATH: /home/mrizzi/.minikube/bin
W0120 10:10:07.693390   34725 root.go:255] Error reading config file at /home/mrizzi/.minikube/config/config.json: open /home/mrizzi/.minikube/config/config.json: no such file or directory
I0120 10:10:07.693726   34725 out.go:228] Setting JSON to false
I0120 10:10:07.706938   34725 start.go:104] hostinfo: {"hostname":"fedora-p1","uptime":50503,"bootTime":1611083304,"procs":443,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.7-200.fc33.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"2a0ffbe8-79f8-479f-b627-66a4d7b9718b"}
I0120 10:10:07.707432   34725 start.go:114] virtualization:  
I0120 10:10:07.707738   34725 out.go:119] 😄  minikube v1.16.0 on Fedora 33
😄  minikube v1.16.0 on Fedora 33
I0120 10:10:07.707846   34725 driver.go:303] Setting default libvirt URI to qemu:///system
I0120 10:10:07.707906   34725 notify.go:126] Checking for updates...
I0120 10:10:07.781589   34725 podman.go:118] podman version: 2.2.1
I0120 10:10:07.781701   34725 out.go:119] ✨  Using the podman (experimental) driver based on user configuration
✨  Using the podman (experimental) driver based on user configuration
I0120 10:10:07.781716   34725 start.go:277] selected driver: podman
I0120 10:10:07.781722   34725 start.go:686] validating driver "podman" against <nil>
I0120 10:10:07.781737   34725 start.go:697] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0120 10:10:07.781879   34725 cli_runner.go:111] Run: sudo -n podman system info --format json
I0120 10:10:07.873838   34725 info.go:273] podman info: {Host:{BuildahVersion:1.18.0 CgroupVersion:v2 Conmon:{Package:conmon-2.0.21-3.fc33.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab} Distribution:{Distribution:fedora Version:33} MemFree:22332567552 MemTotal:33410228224 OCIRuntime:{Name:crun Package:crun-0.16-3.fc33.x86_64 Path:/usr/bin/crun Version:crun version 0.16
commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:4294963200 SwapTotal:4294963200 Arch:amd64 Cpus:12 Eventlogger:journald Hostname:fedora-p1 Kernel:5.10.7-200.fc33.x86_64 Os:linux Rootless:false Uptime:14h 1m 43.28s (Approximately 0.58 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com registry.centos.org docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:2} RunRoot:/var/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0120 10:10:07.873928   34725 start_flags.go:235] no existing cluster config was found, will generate one from the flags 
I0120 10:10:07.874581   34725 start_flags.go:253] Using suggested 7900MB memory alloc based on sys=31862MB, container=31862MB
I0120 10:10:07.874682   34725 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I0120 10:10:07.874707   34725 cni.go:74] Creating CNI manager for ""
I0120 10:10:07.874713   34725 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 10:10:07.874725   34725 start_flags.go:362] Found "CNI" CNI - setting NetworkPlugin=cni
I0120 10:10:07.874733   34725 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 10:10:07.874844   34725 out.go:119] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0120 10:10:07.874858   34725 cache.go:112] Driver isn't docker, skipping base image download
I0120 10:10:07.874864   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 10:10:08.103660   34725 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:10:08.103728   34725 cache.go:54] Caching tarball of preloaded images
I0120 10:10:08.103796   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 10:10:08.308135   34725 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:10:08.308471   34725 out.go:119] 💾  Downloading Kubernetes v1.20.0 preload ...
💾  Downloading Kubernetes v1.20.0 preload ...
I0120 10:10:08.308741   34725 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 -> /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
    > preloaded-images-k8s-v8-v1....: 555.86 MiB / 555.86 MiB  100.00% 8.23 MiB
I0120 10:11:16.885251   34725 preload.go:160] saving checksum for preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 10:11:17.123192   34725 preload.go:177] verifying checksumm of /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 10:11:18.110836   34725 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.0 on crio
I0120 10:11:18.111034   34725 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 10:11:18.111055   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/config.json: {Name:mk473a46e0a7385fc7b1c17eee8567719c4a2678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:18.111277   34725 cache.go:185] Successfully downloaded all kic artifacts
I0120 10:11:18.111300   34725 start.go:314] acquiring machines lock for minikube: {Name:mk6d494bfb92177bc8505684a7c42000ca387cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 10:11:18.111346   34725 start.go:318] acquired machines lock for "minikube" in 32.849µs
I0120 10:11:18.111365   34725 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0120 10:11:18.111409   34725 start.go:127] createHost starting for "" (driver="podman")
I0120 10:11:18.111516   34725 out.go:119] 🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
I0120 10:11:18.111629   34725 start.go:164] libmachine.API.Create for "minikube" (driver="podman")
I0120 10:11:18.111648   34725 client.go:165] LocalClient.Create starting
I0120 10:11:18.111670   34725 main.go:119] libmachine: Creating CA: /home/mrizzi/.minikube/certs/ca.pem
I0120 10:11:18.201203   34725 main.go:119] libmachine: Creating client certificate: /home/mrizzi/.minikube/certs/cert.pem
I0120 10:11:18.386075   34725 cli_runner.go:111] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0120 10:11:18.462551   34725 network_create.go:59] Found existing network {name:minikube subnet:0xc0002d8480 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0120 10:11:18.462587   34725 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I0120 10:11:18.462659   34725 cli_runner.go:111] Run: sudo -n podman ps -a --format {{.Names}}
I0120 10:11:18.534680   34725 cli_runner.go:111] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0120 10:11:18.622635   34725 oci.go:102] Successfully created a podman volume minikube
I0120 10:11:18.622695   34725 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -d /var/lib
I0120 10:11:19.142364   34725 oci.go:106] Successfully prepared a podman volume minikube
I0120 10:11:19.142404   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
W0120 10:11:19.142406   34725 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0120 10:11:19.142428   34725 oci.go:201] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0120 10:11:19.142580   34725 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:11:19.142593   34725 kic.go:159] Starting extracting preloaded images to volume ...
I0120 10:11:19.142697   34725 cli_runner.go:111] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
I0120 10:11:19.142699   34725 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir
W0120 10:11:19.237592   34725 cli_runner.go:149] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0120 10:11:19.237790   34725 cli_runner.go:111] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4
I0120 10:11:19.763960   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Running}}
I0120 10:11:19.852462   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 10:11:19.934538   34725 cli_runner.go:111] Run: sudo -n podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0120 10:11:20.256688   34725 oci.go:246] the created container "minikube" has a running status.
I0120 10:11:20.256710   34725 kic.go:190] Creating ssh key for kic: /home/mrizzi/.minikube/machines/minikube/id_rsa...
I0120 10:11:20.388662   34725 kic_runner.go:187] podman (temp): /home/mrizzi/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0120 10:11:20.388890   34725 kic_runner.go:217] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset879966068 minikube:/home/docker/.ssh/authorized_keys
I0120 10:11:20.693112   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 10:11:20.773164   34725 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0120 10:11:20.773216   34725 kic_runner.go:114] Args: [sudo -n podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0120 10:11:22.411940   34725 cli_runner.go:155] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.269208312s)
I0120 10:11:22.411973   34725 kic.go:168] duration metric: took 3.269382 seconds to extract preloaded images to volume
I0120 10:11:22.412052   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 10:11:22.489611   34725 machine.go:88] provisioning docker machine ...
I0120 10:11:22.489645   34725 ubuntu.go:169] provisioning hostname "minikube"
I0120 10:11:22.489762   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:22.559720   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:22.634695   34725 main.go:119] libmachine: Using SSH client type: native
I0120 10:11:22.634857   34725 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 38549 <nil> <nil>}
I0120 10:11:22.634873   34725 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0120 10:11:22.635051   34725 main.go:119] libmachine: Error dialing TCP: dial tcp 127.0.0.1:38549: connect: connection refused
I0120 10:11:25.767898   34725 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0120 10:11:25.768114   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:25.843664   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:25.918664   34725 main.go:119] libmachine: Using SSH client type: native
I0120 10:11:25.918860   34725 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 38549 <nil> <nil>}
I0120 10:11:25.918881   34725 main.go:119] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0120 10:11:26.045737   34725 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0120 10:11:26.045810   34725 ubuntu.go:175] set auth options {CertDir:/home/mrizzi/.minikube CaCertPath:/home/mrizzi/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mrizzi/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mrizzi/.minikube/machines/server.pem ServerKeyPath:/home/mrizzi/.minikube/machines/server-key.pem ClientKeyPath:/home/mrizzi/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mrizzi/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mrizzi/.minikube}
I0120 10:11:26.045888   34725 ubuntu.go:177] setting up certificates
I0120 10:11:26.045910   34725 provision.go:83] configureAuth start
I0120 10:11:26.046065   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 10:11:26.128641   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 10:11:26.202588   34725 provision.go:137] copyHostCerts
I0120 10:11:26.202652   34725 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/ca.pem --> /home/mrizzi/.minikube/ca.pem (1078 bytes)
I0120 10:11:26.202761   34725 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/cert.pem --> /home/mrizzi/.minikube/cert.pem (1119 bytes)
I0120 10:11:26.202838   34725 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/key.pem --> /home/mrizzi/.minikube/key.pem (1679 bytes)
I0120 10:11:26.202889   34725 provision.go:111] generating server cert: /home/mrizzi/.minikube/machines/server.pem ca-key=/home/mrizzi/.minikube/certs/ca.pem private-key=/home/mrizzi/.minikube/certs/ca-key.pem org=mrizzi.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0120 10:11:26.301469   34725 provision.go:165] copyRemoteCerts
I0120 10:11:26.301515   34725 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 10:11:26.301576   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:26.371741   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:26.446591   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:26.546541   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 10:11:26.594719   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0120 10:11:26.627907   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 10:11:26.643211   34725 provision.go:86] duration metric: configureAuth took 597.279487ms
I0120 10:11:26.643286   34725 ubuntu.go:193] setting minikube options for container-runtime
I0120 10:11:26.643662   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:26.718687   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:26.792613   34725 main.go:119] libmachine: Using SSH client type: native
I0120 10:11:26.792742   34725 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 38549 <nil> <nil>}
I0120 10:11:26.792757   34725 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0120 10:11:26.939546   34725 main.go:119] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0120 10:11:26.939672   34725 machine.go:91] provisioned docker machine in 4.450039663s
I0120 10:11:26.939708   34725 client.go:168] LocalClient.Create took 8.828047593s
I0120 10:11:26.939746   34725 start.go:172] duration metric: libmachine.API.Create for "minikube" took 8.82811025s
I0120 10:11:26.939770   34725 start.go:268] post-start starting for "minikube" (driver="podman")
I0120 10:11:26.939787   34725 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 10:11:26.939907   34725 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 10:11:26.940050   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.010578   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.086604   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:27.185515   34725 ssh_runner.go:149] Run: cat /etc/os-release
I0120 10:11:27.192618   34725 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 10:11:27.192694   34725 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 10:11:27.192733   34725 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 10:11:27.192755   34725 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I0120 10:11:27.192783   34725 filesync.go:118] Scanning /home/mrizzi/.minikube/addons for local assets ...
I0120 10:11:27.192929   34725 filesync.go:118] Scanning /home/mrizzi/.minikube/files for local assets ...
I0120 10:11:27.193018   34725 start.go:271] post-start completed in 253.229663ms
I0120 10:11:27.193855   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 10:11:27.270695   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 10:11:27.344643   34725 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 10:11:27.344899   34725 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 10:11:27.344948   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.412744   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.488611   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:27.576580   34725 start.go:130] duration metric: createHost completed in 9.465149655s
I0120 10:11:27.576636   34725 start.go:81] releasing machines lock for "minikube", held for 9.465274756s
I0120 10:11:27.576921   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 10:11:27.660607   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 10:11:27.736755   34725 ssh_runner.go:149] Run: systemctl --version
I0120 10:11:27.736813   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.736755   34725 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0120 10:11:27.736880   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.810626   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.862682   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.891524   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:27.939638   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:28.122052   34725 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 10:11:28.153457   34725 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
I0120 10:11:28.189199   34725 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 10:11:28.196452   34725 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0120 10:11:28.203119   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0120 10:11:28.211968   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
I0120 10:11:28.218728   34725 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 10:11:28.223138   34725 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 10:11:28.227372   34725 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0120 10:11:28.301867   34725 ssh_runner.go:149] Run: sudo systemctl start crio
I0120 10:11:28.465403   34725 ssh_runner.go:149] Run: crio --version
I0120 10:11:28.505287   34725 out.go:119] 🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
I0120 10:11:28.505351   34725 cli_runner.go:111] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} minikube
I0120 10:11:28.583617   34725 ssh_runner.go:149] Run: grep <nil>	host.minikube.internal$ /etc/hosts
I0120 10:11:28.585831   34725 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "<nil>	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 10:11:28.592190   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 10:11:28.592224   34725 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:11:28.592262   34725 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 10:11:28.625212   34725 crio.go:345] all images are preloaded for cri-o runtime.
I0120 10:11:28.625230   34725 crio.go:260] Images already preloaded, skipping extraction
I0120 10:11:28.625280   34725 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 10:11:28.635701   34725 crio.go:345] all images are preloaded for cri-o runtime.
I0120 10:11:28.635719   34725 cache_images.go:74] Images are preloaded, skipping loading
I0120 10:11:28.635768   34725 ssh_runner.go:149] Run: crio config
I0120 10:11:28.676303   34725 cni.go:74] Creating CNI manager for ""
I0120 10:11:28.676321   34725 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 10:11:28.676333   34725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 10:11:28.676345   34725 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0120 10:11:28.676438   34725 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 192.168.49.2:10249

I0120 10:11:28.676559   34725 kubeadm.go:862] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0120 10:11:28.676617   34725 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 10:11:28.681597   34725 binaries.go:44] Found k8s binaries, skipping transfer
I0120 10:11:28.681640   34725 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 10:11:28.686507   34725 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
I0120 10:11:28.696472   34725 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0120 10:11:28.705973   34725 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1843 bytes)
I0120 10:11:28.715510   34725 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0120 10:11:28.717371   34725 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 10:11:28.723448   34725 certs.go:52] Setting up /home/mrizzi/.minikube/profiles/minikube for IP: 192.168.49.2
I0120 10:11:28.723494   34725 certs.go:173] generating minikubeCA CA: /home/mrizzi/.minikube/ca.key
I0120 10:11:28.968184   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/ca.crt ...
I0120 10:11:28.968203   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.crt: {Name:mke03e9a1920afba460c060be5f4b6769ef644b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:28.968462   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/ca.key ...
I0120 10:11:28.968472   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.key: {Name:mkb240f7f8e6f82e4d610aab52b47468a1329330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:28.968559   34725 certs.go:173] generating proxyClientCA CA: /home/mrizzi/.minikube/proxy-client-ca.key
I0120 10:11:29.156962   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/proxy-client-ca.crt ...
I0120 10:11:29.156981   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.crt: {Name:mk4174df0f1b4beaf8e5a275fbdf42244be71f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.157136   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/proxy-client-ca.key ...
I0120 10:11:29.157146   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.key: {Name:mk5e6950da80fd9764adae2b6dd79810410ec3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.157252   34725 certs.go:277] generating minikube-user signed cert: /home/mrizzi/.minikube/profiles/minikube/client.key
I0120 10:11:29.157260   34725 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/client.crt with IP's: []
I0120 10:11:29.238721   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/client.crt ...
I0120 10:11:29.238744   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.crt: {Name:mk2ff7788ac9d0de0cd174f0617feb2f1dd707c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.238881   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/client.key ...
I0120 10:11:29.238891   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.key: {Name:mkedf501c0d6a07a0aa78a08660f8e8e7cc0c918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.238986   34725 certs.go:277] generating minikube signed cert: /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0120 10:11:29.238994   34725 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0120 10:11:29.341821   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0120 10:11:29.341842   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk422858b15bd0eaea2b6fcba46c45cc115c0286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.341983   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0120 10:11:29.341997   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk0658a97766b6658717586fb5056c92e38378bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.342087   34725 certs.go:288] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.crt
I0120 10:11:29.342171   34725 certs.go:292] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.key
I0120 10:11:29.342239   34725 certs.go:277] generating aggregator signed cert: /home/mrizzi/.minikube/profiles/minikube/proxy-client.key
I0120 10:11:29.342248   34725 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0120 10:11:29.442250   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt ...
I0120 10:11:29.442270   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt: {Name:mka2338a78f50214ee1948cd9bf268c531eaa3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.442402   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/proxy-client.key ...
I0120 10:11:29.442410   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.key: {Name:mk969b8bdb9a7c95302616c350453daaad785fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.442554   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca-key.pem (1679 bytes)
I0120 10:11:29.442581   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca.pem (1078 bytes)
I0120 10:11:29.442597   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/cert.pem (1119 bytes)
I0120 10:11:29.442615   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/key.pem (1679 bytes)
I0120 10:11:29.443251   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0120 10:11:29.457143   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 10:11:29.470424   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 10:11:29.483690   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 10:11:29.495539   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 10:11:29.508794   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0120 10:11:29.520759   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 10:11:29.533025   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0120 10:11:29.546061   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 10:11:29.558122   34725 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0120 10:11:29.567914   34725 ssh_runner.go:149] Run: openssl version
I0120 10:11:29.571701   34725 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 10:11:29.576915   34725 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 10:11:29.578926   34725 certs.go:393] hashing: -rw-r--r--. 1 root root 1111 Jan 20 09:11 /usr/share/ca-certificates/minikubeCA.pem
I0120 10:11:29.578961   34725 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 10:11:29.582173   34725 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 10:11:29.586946   34725 kubeadm.go:364] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 10:11:29.586994   34725 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0120 10:11:29.587036   34725 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 10:11:29.597079   34725 cri.go:76] found id: ""
I0120 10:11:29.597161   34725 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 10:11:29.603001   34725 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 10:11:29.607835   34725 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 10:11:29.607877   34725 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 10:11:29.612767   34725 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 10:11:29.612799   34725 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 10:11:29.790705   34725 out.go:140]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...| I0120 10:11:31.863066   34725 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...\ W0120 10:13:26.882652   34725 out.go:181] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

I0120 10:13:26.882820   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
| I0120 10:13:28.235433   34725 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.352592269s)
I0120 10:13:28.235493   34725 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
/ I0120 10:13:28.244390   34725 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
I0120 10:13:28.244451   34725 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 10:13:28.256278   34725 cri.go:76] found id: ""
I0120 10:13:28.256313   34725 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 10:13:28.256389   34725 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 10:13:28.262111   34725 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 10:13:28.262144   34725 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 10:13:28.435841   34725 out.go:140]     ▪ Generating certificates and keys ...

    ▪ Generating certificates and keys ...
I0120 10:13:29.019428   34725 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...
I0120 10:15:24.039004   34725 kubeadm.go:366] StartCluster complete in 3m54.452045986s
I0120 10:15:24.039040   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0120 10:15:24.039148   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 10:15:24.051246   34725 cri.go:76] found id: ""
I0120 10:15:24.051265   34725 logs.go:206] 0 containers: []
W0120 10:15:24.051277   34725 logs.go:208] No container was found matching "kube-apiserver"
I0120 10:15:24.051291   34725 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0120 10:15:24.051339   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0120 10:15:24.063070   34725 cri.go:76] found id: ""
I0120 10:15:24.063091   34725 logs.go:206] 0 containers: []
W0120 10:15:24.063103   34725 logs.go:208] No container was found matching "etcd"
I0120 10:15:24.063113   34725 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0120 10:15:24.063162   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0120 10:15:24.073915   34725 cri.go:76] found id: ""
I0120 10:15:24.073933   34725 logs.go:206] 0 containers: []
W0120 10:15:24.073944   34725 logs.go:208] No container was found matching "coredns"
I0120 10:15:24.073955   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0120 10:15:24.074003   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 10:15:24.084882   34725 cri.go:76] found id: ""
I0120 10:15:24.084904   34725 logs.go:206] 0 containers: []
W0120 10:15:24.084915   34725 logs.go:208] No container was found matching "kube-scheduler"
I0120 10:15:24.084930   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0120 10:15:24.084973   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 10:15:24.102385   34725 cri.go:76] found id: ""
I0120 10:15:24.102464   34725 logs.go:206] 0 containers: []
W0120 10:15:24.102476   34725 logs.go:208] No container was found matching "kube-proxy"
I0120 10:15:24.102500   34725 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 10:15:24.102574   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 10:15:24.122481   34725 cri.go:76] found id: ""
I0120 10:15:24.122536   34725 logs.go:206] 0 containers: []
W0120 10:15:24.122553   34725 logs.go:208] No container was found matching "kubernetes-dashboard"
I0120 10:15:24.122572   34725 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0120 10:15:24.122681   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 10:15:24.142397   34725 cri.go:76] found id: ""
I0120 10:15:24.142422   34725 logs.go:206] 0 containers: []
W0120 10:15:24.142435   34725 logs.go:208] No container was found matching "storage-provisioner"
I0120 10:15:24.142444   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0120 10:15:24.142554   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 10:15:24.168968   34725 cri.go:76] found id: ""
I0120 10:15:24.169070   34725 logs.go:206] 0 containers: []
W0120 10:15:24.169143   34725 logs.go:208] No container was found matching "kube-controller-manager"
I0120 10:15:24.169194   34725 logs.go:120] Gathering logs for kubelet ...
I0120 10:15:24.169277   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 10:15:24.236790   34725 logs.go:120] Gathering logs for dmesg ...
I0120 10:15:24.236837   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 10:15:24.256094   34725 logs.go:120] Gathering logs for describe nodes ...
I0120 10:15:24.256127   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0120 10:15:24.345646   34725 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0120 10:15:24.345681   34725 logs.go:120] Gathering logs for CRI-O ...
I0120 10:15:24.345702   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0120 10:15:24.416703   34725 logs.go:120] Gathering logs for container status ...
I0120 10:15:24.416772   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0120 10:15:24.444499   34725 out.go:294] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
W0120 10:15:24.444840   34725 out.go:181] 

W0120 10:15:24.445143   34725 out.go:181] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

W0120 10:15:24.445372   34725 out.go:181] 

W0120 10:15:24.445418   34725 out.go:181] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W0120 10:15:24.445481   34725 out.go:181] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0120 10:15:24.447800   34725 out.go:119] 


W0120 10:15:24.448113   34725 out.go:181] ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

W0120 10:15:24.452245   34725 out.go:181] 💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0120 10:15:24.452310   34725 out.go:181] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0120 10:15:24.452338   34725 out.go:119] 
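
For reference, the kubeadm troubleshooting hints and minikube's own suggestion above boil down to roughly the following next steps. This is only a sketch of what could be tried; wrapping the node-level commands in minikube ssh is an assumption on my part, since the "node" here is the podman container:

# check whether the kubelet ever came up inside the minikube container
minikube ssh "sudo systemctl status kubelet --no-pager"
minikube ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"

# list any control-plane containers cri-o managed to start
minikube ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

# retry with the kubelet pinned to the systemd cgroup driver, as minikube suggests
minikube delete
minikube start --driver=podman --container-runtime=cri-o --extra-config=kubelet.cgroup-driver=systemd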

Full output of minikube logs command:

==> CRI-O <==
-- Logs begin at Wed 2021-01-20 09:11:25 UTC, end at Wed 2021-01-20 09:16:46 UTC. --
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.419778143Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=c143b1a7-7629-41cf-ae35-6604b4661000 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.420843854Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/pause\"],\"WorkingDir\":\"/\"},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770\"]},\"history\":[{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ARG ARCH\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ADD bin/pause-amd64 /pause # buildkit\",\"comment\":\"buildkit.dockerfile.v0\"},{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ENTRYPOINT [\\\"/pause\\\"]\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true}]}},},}" id=c143b1a7-7629-41cf-ae35-6604b4661000 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.425544638Z" level=info msg="Checking image status: k8s.gcr.io/etcd:3.4.13-0" id=82276258-4fb9-46f8-8add-4a1f85e32393 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.428568383Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,RepoTags:[k8s.gcr.io/etcd:3.4.13-0],RepoDigests:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a],Size_:254662613,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-08-27T13:47:36.718716443Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"ExposedPorts\":{\"2379/tcp\":{},\"2380/tcp\":{},\"4001/tcp\":{},\"7001/tcp\":{}},\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\"SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt\"],\"WorkingDir\":\"/\"},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d72a74c56330b347f7d18b64d2effd93edd695fde25dc301d52c37efbcf4844e\",\"sha256:d61c79b2929916dd31e6d4aa48d30587f63a3192ab0418db8e7fcbea1ad654b9\",\"sha256:1a4e46412eb09db65f559c3921e4b39ab2dfb059482ebe416bcb740c10769ab3\",\"sha256:bfa5849f3d098e8f222dacc4d682250340a9cab32590d052b6922f0956ccaa04\",\"sha256:bb63b9467928d4b064be1ccbb88d0f4ec868ce4aa4a7dd44338090528838b79e\"]},\"history\":[{\"created\":\"1970-01-01T00:00:00Z\",\"created_by\":\"bazel build ...\",\"author\":\"Bazel\"},{\"created\":\"2020-08-27T13:47:31.271664261Z\",\"created_by\":\"/bin/sh -c #(nop) WORKDIR /\",\"empty_layer\":true},{\"created\":\"2020-08-27T13:47:31.436965941Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:93201c93ac7e6e5b3976190c2d70671eb6576373537fda9ac1bd50d90e342ed1 in /bin/ \"},{\"created\":\"2020-08-27T13:47:31.550192267Z\",\"created_by\":\"/bin/sh -c #(nop)  EXPOSE 2379 2380 4001 7001\",\"empty_layer\":true},{\"created\":\"2020-08-27T13:47:34.464243112Z\",\"created_by\":\"/bin/sh -c #(nop) COPY multi:db2195e6dcec23938ed1dcaf030f0ec72e3ae97af5ef0c8a74c72a2a097ec8fd in /usr/local/bin/ \"},{\"created\":\"2020-08-27T13:47:36.357785715Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:cf93caea4c1e5a0eaaa9cf9147de2dd27a8545620caa35f0a592e42099d44ed0 in /bin/ \"},{\"created\":\"2020-08-27T13:47:36.718716443Z\",\"created_by\":\"/bin/sh -c #(nop) COPY multi:a1881dd50cdbd92225791143eb662674b0a4155ae2577453cd6fae7dab43f859 in /usr/local/bin/ \"}]}},},}" id=82276258-4fb9-46f8-8add-4a1f85e32393 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.433103303Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=db0139da-c693-41ce-8f15-bf5318c06e6d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.434567427Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-06-18T00:55:59.462921357Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"ExposedPorts\":{\"53/udp\":{},\"53/tcp\":{}},\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/coredns\"]},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:225df95e717ceb672de0e45aa49f352eace21512240205972aca0fccc9612722\",\"sha256:96d17b0b58a73f2d35707e37e5911f65cca8b4467dc54420b811d07784caee64\"]},\"history\":[{\"created\":\"2019-07-28T20:18:27.224802511Z\",\"created_by\":\"/bin/sh -c #(nop) COPY dir:0284c6efacdcf29cb632136811b7130fbe84998aefe3d1c36a0570424c7a2c92 in /etc/ssl/certs \"},{\"created\":\"2020-06-18T00:55:58.768320531Z\",\"created_by\":\"/bin/sh -c #(nop) ADD file:a39148838cdb612e6ae2cfd5672098607e86503673395922b6521249a1edbf6a in /coredns \"},{\"created\":\"2020-06-18T00:55:59.195850503Z\",\"created_by\":\"/bin/sh -c #(nop)  EXPOSE 53 53/udp\",\"empty_layer\":true},{\"created\":\"2020-06-18T00:55:59.462921357Z\",\"created_by\":\"/bin/sh -c #(nop)  ENTRYPOINT [\\\"/coredns\\\"]\",\"empty_layer\":true}]}},},}" id=db0139da-c693-41ce-8f15-bf5318c06e6d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:35 minikube crio[351]: time="2021-01-20 09:13:35.741500466Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=72927076-80e3-4624-8f8b-b451607dd3bc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:35 minikube crio[351]: time="2021-01-20 09:13:35.743212723Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=72927076-80e3-4624-8f8b-b451607dd3bc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:42 minikube crio[351]: time="2021-01-20 09:13:42.944566021Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2e7ff016-583b-4103-8f80-d2bc458c8a83 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:42 minikube crio[351]: time="2021-01-20 09:13:42.947114859Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e7ff016-583b-4103-8f80-d2bc458c8a83 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:50 minikube crio[351]: time="2021-01-20 09:13:50.222266459Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e4df4d5f-00f5-41c2-8816-17aa3cdcf80d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:50 minikube crio[351]: time="2021-01-20 09:13:50.223955381Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e4df4d5f-00f5-41c2-8816-17aa3cdcf80d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:57 minikube crio[351]: time="2021-01-20 09:13:57.463740576Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8e9d76d0-a1e2-4e7b-baf8-f1349e938cdd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:57 minikube crio[351]: time="2021-01-20 09:13:57.465470724Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8e9d76d0-a1e2-4e7b-baf8-f1349e938cdd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:04 minikube crio[351]: time="2021-01-20 09:14:04.692850595Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2cba5fc0-63c3-42db-be75-e56af2274c48 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:04 minikube crio[351]: time="2021-01-20 09:14:04.695926108Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2cba5fc0-63c3-42db-be75-e56af2274c48 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:11 minikube crio[351]: time="2021-01-20 09:14:11.963037978Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=5b158a6d-7c7d-4254-9d15-6baa602e220f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:11 minikube crio[351]: time="2021-01-20 09:14:11.965113740Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5b158a6d-7c7d-4254-9d15-6baa602e220f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:19 minikube crio[351]: time="2021-01-20 09:14:19.173060475Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=ef61a749-afa4-4a05-aa29-dec885496617 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:19 minikube crio[351]: time="2021-01-20 09:14:19.174695245Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ef61a749-afa4-4a05-aa29-dec885496617 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:26 minikube crio[351]: time="2021-01-20 09:14:26.479249436Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=335fe976-7171-4578-820f-0324341cda71 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:26 minikube crio[351]: time="2021-01-20 09:14:26.482346510Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=335fe976-7171-4578-820f-0324341cda71 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:33 minikube crio[351]: time="2021-01-20 09:14:33.723104028Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2686a37e-a142-439c-b558-77f9a3b65329 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:33 minikube crio[351]: time="2021-01-20 09:14:33.724867701Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2686a37e-a142-439c-b558-77f9a3b65329 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:40 minikube crio[351]: time="2021-01-20 09:14:40.865064180Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e2f08879-ee4c-458f-aa50-1ce96cda3a34 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:40 minikube crio[351]: time="2021-01-20 09:14:40.866856487Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e2f08879-ee4c-458f-aa50-1ce96cda3a34 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:48 minikube crio[351]: time="2021-01-20 09:14:48.197141018Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d5aae54f-58b6-4f7e-9c50-a855c182e83b name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:48 minikube crio[351]: time="2021-01-20 09:14:48.199036098Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d5aae54f-58b6-4f7e-9c50-a855c182e83b name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:55 minikube crio[351]: time="2021-01-20 09:14:55.440276964Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=c27ca990-3475-4289-985a-f27553b49281 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:55 minikube crio[351]: time="2021-01-20 09:14:55.442227891Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c27ca990-3475-4289-985a-f27553b49281 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:02 minikube crio[351]: time="2021-01-20 09:15:02.674854399Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2080cecc-d7f5-4229-9556-297ed924b970 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:02 minikube crio[351]: time="2021-01-20 09:15:02.676503253Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2080cecc-d7f5-4229-9556-297ed924b970 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:09 minikube crio[351]: time="2021-01-20 09:15:09.964515922Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8106490d-ec3b-4511-86ce-8be4cfe280dc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:09 minikube crio[351]: time="2021-01-20 09:15:09.966450337Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8106490d-ec3b-4511-86ce-8be4cfe280dc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:17 minikube crio[351]: time="2021-01-20 09:15:17.187216639Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=1020507f-6a59-41fb-b6bb-8f1df6a2d08c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:17 minikube crio[351]: time="2021-01-20 09:15:17.189133592Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1020507f-6a59-41fb-b6bb-8f1df6a2d08c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:24 minikube crio[351]: time="2021-01-20 09:15:24.450267532Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=9726db19-bdac-4108-a04c-eca8d27c3cd5 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:24 minikube crio[351]: time="2021-01-20 09:15:24.453788633Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9726db19-bdac-4108-a04c-eca8d27c3cd5 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:31 minikube crio[351]: time="2021-01-20 09:15:31.733362540Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d36661b8-e730-4e0d-a131-b491d2190902 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:31 minikube crio[351]: time="2021-01-20 09:15:31.735310911Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d36661b8-e730-4e0d-a131-b491d2190902 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:38 minikube crio[351]: time="2021-01-20 09:15:38.914004287Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=0d3da55e-f22b-43de-94cc-dc48c1951cac name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:38 minikube crio[351]: time="2021-01-20 09:15:38.915797186Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0d3da55e-f22b-43de-94cc-dc48c1951cac name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:46 minikube crio[351]: time="2021-01-20 09:15:46.189616711Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=138d2fb3-8ddb-4aa0-aa72-8aba0755a271 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:46 minikube crio[351]: time="2021-01-20 09:15:46.191544853Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=138d2fb3-8ddb-4aa0-aa72-8aba0755a271 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:53 minikube crio[351]: time="2021-01-20 09:15:53.422787267Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=bddffcbe-20b8-413d-b870-45c1801b03ca name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:53 minikube crio[351]: time="2021-01-20 09:15:53.424587191Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bddffcbe-20b8-413d-b870-45c1801b03ca name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:00 minikube crio[351]: time="2021-01-20 09:16:00.674148788Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=44f9501a-19ea-4886-b1ab-be476eb5c551 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:00 minikube crio[351]: time="2021-01-20 09:16:00.676378040Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=44f9501a-19ea-4886-b1ab-be476eb5c551 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:07 minikube crio[351]: time="2021-01-20 09:16:07.918510351Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2367c594-5a12-47f7-b0b6-45d0185b5d8a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:07 minikube crio[351]: time="2021-01-20 09:16:07.921542749Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2367c594-5a12-47f7-b0b6-45d0185b5d8a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:15 minikube crio[351]: time="2021-01-20 09:16:15.220587918Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=7230d8d4-ca5b-414c-ba5e-050d5edff0c9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:15 minikube crio[351]: time="2021-01-20 09:16:15.222502700Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7230d8d4-ca5b-414c-ba5e-050d5edff0c9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:22 minikube crio[351]: time="2021-01-20 09:16:22.464335672Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8f407728-5591-439f-8386-479d066c225f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:22 minikube crio[351]: time="2021-01-20 09:16:22.466506601Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8f407728-5591-439f-8386-479d066c225f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:29 minikube crio[351]: time="2021-01-20 09:16:29.728772336Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=4a4a7852-8aa9-45fe-8439-9056101df44d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:29 minikube crio[351]: time="2021-01-20 09:16:29.730742153Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4a4a7852-8aa9-45fe-8439-9056101df44d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:36 minikube crio[351]: time="2021-01-20 09:16:36.979665190Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=da76ac1b-a75c-4136-9080-87d92e09984c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:36 minikube crio[351]: time="2021-01-20 09:16:36.981318464Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=da76ac1b-a75c-4136-9080-87d92e09984c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:44 minikube crio[351]: time="2021-01-20 09:16:44.187176071Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=07b89e5a-6adf-4bee-b1b6-8245281d1049 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:44 minikube crio[351]: time="2021-01-20 09:16:44.188969236Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=07b89e5a-6adf-4bee-b1b6-8245281d1049 name=/runtime.v1alpha2.ImageService/ImageStatus

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

==> describe nodes <==
E0120 10:16:46.103955   45918 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
[Jan19 19:08] x86/cpu: VMX (outside TXT) disabled by BIOS
[  +0.023630] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.792548] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.208865] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000038] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000109] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000071] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000048] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000063] acpi PNP0C14:07: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000086] acpi PNP0C14:08: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.016163] usb: port power management may be unreliable
[  +0.110762] nvme nvme0: missing or invalid SUBNQN field.
[ +14.894224] kauditd_printk_skb: 18 callbacks suppressed
[  +0.817494] systemd-sysv-generator[995]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000049] systemd-sysv-generator[995]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.069329] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.342651] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.074614] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
[  +0.112511] resource sanity check: requesting [mem 0xfed10000-0xfed15fff], which spans more than pnp 00:07 [mem 0xfed10000-0xfed13fff]
[  +0.000009] caller snb_uncore_imc_init_box+0x6a/0xa0 [intel_uncore] mapping multiple BARs
[  +0.034138] r8152 4-2.1.2:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr
[  +0.331260] thermal thermal_zone13: failed to read out thermal zone (-61)
[  +0.179322] sof-audio-pci 0000:00:1f.3: ASoC: Parent card not yet available, widget card binding deferred
[  +0.257875] snd_hda_codec_realtek ehdaudio0D0: ASoC: sink widget AIF1TX overwritten
[  +0.000005] snd_hda_codec_realtek ehdaudio0D0: ASoC: source widget AIF1RX overwritten
[  +0.000180] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi3 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi2 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi1 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Codec Output Pin1 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Codec Input Pin1 overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Analog Codec Playback overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Digital Codec Playback overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Alt Analog Codec Playback overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Analog Codec Capture overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Digital Codec Capture overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Alt Analog Codec Capture overwritten
[  +0.005502] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[  +0.005862] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[ +16.142052] usb 3-2.1.1.2: 1:1: cannot get freq at ep 0x81
[Jan19 19:09] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 0000000005a5d522 port 1: DPCD read on addr 0x4b0 for 1 bytes NAKed
[  +0.030189] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 0000000005a5d522 port 3: DPCD read on addr 0x4b0 for 1 bytes NAKed
[Jan19 19:23] IRQ 166: no longer affine to CPU1
[  +0.004626] IRQ 167: no longer affine to CPU2
[  +0.005180] IRQ 168: no longer affine to CPU3
[  +0.004011] IRQ 169: no longer affine to CPU4
[  +0.004128] IRQ 170: no longer affine to CPU5
[  +0.004502] IRQ 171: no longer affine to CPU6
[  +0.002426] IRQ 172: no longer affine to CPU7
[  +0.001989] IRQ 173: no longer affine to CPU8
[  +0.002057] IRQ 174: no longer affine to CPU9
[  +0.002182] IRQ 175: no longer affine to CPU10
[  +0.007428] smpboot: Scheduler frequency invariance went wobbly, disabling!
[  +1.710710] usb 4-2: Disable of device-initiated U1 failed.
[  +0.000011] usb 4-2: Disable of device-initiated U2 failed.
[  +0.874553] usb 4-2.1: Disable of device-initiated U1 failed.
[  +0.010342] usb 4-2.1: Disable of device-initiated U2 failed.
[  +4.489934] done.
[  +0.879613] r8152 4-2.1.2:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr

==> kernel <==
 09:16:46 up 14:08,  0 users,  load average: 0.68, 0.56, 0.64
Linux minikube 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kubelet <==
-- Logs begin at Wed 2021-01-20 09:11:25 UTC, end at Wed 2021-01-20 09:16:46 UTC. --
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 398 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0xc0003466c0, 0xc00077bb00, 0x5f5e100, 0xc000714200)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:536 +0x127
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc0003466c0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:494 +0x25a
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 628 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start.func1(0xc0011ba3c0, 0xc000d72ea0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:91 +0x125
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:89 +0x477
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 629 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers.func1(0xc000e00a00, 0xc000a8a910, 0xc000642ae0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1164 +0xe5
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1162 +0x21d
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 630 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc000e00a00, 0xc000c0d560)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:385 +0x145
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x585
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 631 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000e00a00, 0xc000c0d5c0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
Jan 20 09:16:45 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27.
Jan 20 09:16:45 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 20 09:16:45 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 20 09:16:45 minikube kubelet[7084]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:16:45 minikube kubelet[7084]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.089876    7084 server.go:416] Version: v1.20.0
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.090080    7084 server.go:837] Client rotation is on, will bootstrap in background
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.091552    7084 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.092214    7084 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.092221    7084 manager.go:159] Cannot detect current cgroup on cgroup v2
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.138965    7084 fs.go:208] stat failed on /dev/mapper/luks-04d26ab7-d155-44f4-906f-c64d950aa812 with error: no such file or directory
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154757    7084 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154880    7084 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154895    7084 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154937    7084 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154942    7084 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154946    7084 container_manager_linux.go:315] Creating device plugin manager: true
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.154992    7084 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155012    7084 remote_runtime.go:62] parsed scheme: ""
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155018    7084 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155035    7084 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155040    7084 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.155081    7084 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155091    7084 remote_image.go:50] parsed scheme: ""
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155095    7084 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155101    7084 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155106    7084 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155129    7084 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155148    7084 kubelet.go:273] Watching apiserver
Jan 20 09:16:45 minikube kubelet[7084]: E0120 09:16:45.155806    7084 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 09:16:45 minikube kubelet[7084]: E0120 09:16:45.155820    7084 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 09:16:45 minikube kubelet[7084]: E0120 09:16:45.155855    7084 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.159969    7084 kuberuntime_manager.go:216] Container runtime cri-o initialized, version: 1.19.0, apiVersion: v1alpha1

❗  unable to fetch logs for: describe nodes
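Since kubectl describe nodes is refused on localhost:8443 and the kubelet is already on its 27th restart in the log above, the control-plane containers apparently never come up under cri-o. As a next diagnostic step, here is a sketch of what could be run inside the node (a suggestion only, not output I have collected; it assumes minikube ssh works with the podman driver and that crictl and journalctl are available in the kicbase node image):

$ minikube ssh
# inside the node container:
$ sudo crictl pods                                      # were any control-plane sandboxes created at all?
$ sudo crictl ps -a                                     # containers in every state, including exited ones
$ sudo journalctl -u kubelet --no-pager | tail -n 50    # why the kubelet keeps restarting

I can attach that output if it helps.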

Tools versions

$ podman version
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 15:37:50 2020
OS/Arch:      linux/amd64

$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1

$ cat /etc/redhat-release 
Fedora release 33 (Thirty Three)

$ uname -a
Linux fedora-p1 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Notes
The test above was run with cgroups v2 (the Fedora 33 default). I also tried with cgroups v1, but minikube fails to start in the same way; the cgroups switch is sketched below.
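For reference, a minimal sketch of how cgroups v1 can be selected on Fedora 33 for that second test, assuming the stock setup where grubby manages the kernel command line (the exact commands are my assumption about the standard Fedora procedure, not something minikube prints):

$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"   # boot with the legacy (v1) hierarchy
$ sudo reboot
$ stat -fc %T /sys/fs/cgroup   # reports tmpfs under cgroups v1, cgroup2fs under v2
# to revert to cgroups v2 afterwards:
$ sudo grubby --update-kernel=ALL --remove-args="systemd.unified_cgroup_hierarchy=0"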

Thanks

Metadata

Labels

    co/podman-driver, co/runtime/crio, kind/bug, os/linux, priority/awaiting-more-evidence
