docker driver: add support for btrfs #7923

Closed
solarnz opened this issue Apr 28, 2020 · 33 comments
Labels
co/docker-driver: Issues related to kubernetes in container
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/documentation: Categorizes issue or PR as related to documentation.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
needs-faq-entry: Things that could use documentation in a FAQ
needs-problem-regex
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments


solarnz commented Apr 28, 2020

Steps to reproduce the issue:

  1. Install minikube and kubeadm
  2. Run minikube start --driver=docker --v=5 --alsologtostderr

I'm at a loss as to how to proceed further. I'm not sure whether this is caused by my system configuration or by a bug in minikube.
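For context, the root filesystem on this machine is btrfs on a dm-crypt volume (/dev/mapper/cryptroot), which is what minikube detects in the output below. For anyone trying to reproduce this, a rough equivalent of the filesystem check minikube runs over SSH is:

# Print the filesystem type backing / (minikube runs a similar df check inside the kic container)
df --output=fstype /

# Alternative using findmnt
findmnt -no FSTYPE /

Here this reports btrfs, matching the "root file system type: btrfs" line in the log below.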

Full output of failed command:

% minikube start --driver=docker  --v=5 --alsologtostderr
I0428 17:10:22.855774   33446 start.go:100] hostinfo: {"hostname":"chris-trotman-laptop","uptime":13754,"bootTime":1588044068,"procs":307,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"","kernelVersion":"5.6.7-arch1-1","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a4ab05cf-cb22-4c07-88c2-9bf79092f646"}
I0428 17:10:22.856368   33446 start.go:110] virtualization: kvm host
😄  minikube v1.10.0-beta.1 on Arch 
I0428 17:10:22.856514   33446 driver.go:255] Setting default libvirt URI to qemu:///system
I0428 17:10:22.856593   33446 notify.go:125] Checking for updates...
✨  Using the docker driver based on user configuration
I0428 17:10:22.909172   33446 start.go:207] selected driver: docker
I0428 17:10:22.909191   33446 start.go:580] validating driver "docker" against <nil>
I0428 17:10:22.909211   33446 start.go:586] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0428 17:10:22.909233   33446 start.go:899] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0428 17:10:22.909287   33446 start_flags.go:215] no existing cluster config was found, will generate one from the flags 
I0428 17:10:22.995065   33446 start_flags.go:229] Using suggested 3900MB memory alloc based on sys=15898MB, container=15898MB
I0428 17:10:22.995202   33446 start_flags.go:551] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0428 17:10:22.995308   33446 cache.go:103] Beginning downloading kic artifacts
I0428 17:10:23.122024   33446 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.9@sha256:82a826cc03c3e59ead5969b8020ca138de98f366c1907293df91fc57205dbb53 in local docker daemon, skipping pull
I0428 17:10:23.122081   33446 preload.go:82] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0428 17:10:23.122130   33446 preload.go:97] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0428 17:10:23.122140   33446 cache.go:47] Caching tarball of preloaded images
I0428 17:10:23.122163   33446 preload.go:123] Found /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0428 17:10:23.122173   33446 cache.go:50] Finished verifying existence of preloaded tar for  v1.18.0 on docker
I0428 17:10:23.122395   33446 profile.go:149] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0428 17:10:23.122477   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/config.json: {Name:mk450fd4eda337c7ddd64ef0cf55f5d70f3fb5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:23.122754   33446 cache.go:120] Successfully downloaded all kic artifacts
I0428 17:10:23.122795   33446 start.go:221] acquiring machines lock for minikube: {Name:mkec809913d626154fe8c3badcd878ae0c8a6125 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0428 17:10:23.122847   33446 start.go:225] acquired machines lock for "minikube" in 38.051µs
I0428 17:10:23.122882   33446 start.go:81] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0428 17:10:23.122920   33446 start.go:102] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
I0428 17:10:23.123177   33446 start.go:138] libmachine.API.Create for "minikube" (driver="docker")
I0428 17:10:23.123201   33446 client.go:161] LocalClient.Create starting
I0428 17:10:23.123254   33446 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/ca.pem
I0428 17:10:23.123287   33446 main.go:110] libmachine: Decoding PEM data...
I0428 17:10:23.123312   33446 main.go:110] libmachine: Parsing certificate...
I0428 17:10:23.123460   33446 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/cert.pem
I0428 17:10:23.123493   33446 main.go:110] libmachine: Decoding PEM data...
I0428 17:10:23.123547   33446 main.go:110] libmachine: Parsing certificate...
I0428 17:10:23.123842   33446 oci.go:268] executing with [docker ps -a --format {{.Names}}] timeout: 30s
I0428 17:10:23.180487   33446 volumes.go:97] executing: [docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true]
I0428 17:10:23.266914   33446 oci.go:103] Successfully created a docker volume minikube
I0428 17:10:23.267183   33446 preload.go:82] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0428 17:10:23.267306   33446 preload.go:97] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0428 17:10:23.267338   33446 kic.go:133] Starting extracting preloaded images to volume ...
I0428 17:10:23.267464   33446 volumes.go:85] executing: [docker run --rm --entrypoint /usr/bin/tar -v /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.9@sha256:82a826cc03c3e59ead5969b8020ca138de98f366c1907293df91fc57205dbb53 -I lz4 -xvf /preloaded.tar -C /extractDir]
I0428 17:10:27.651134   33446 oci.go:268] executing with [docker inspect minikube --format={{.State.Status}}] timeout: 19s
I0428 17:10:27.741315   33446 oci.go:178] the created container "minikube" has a running status.
I0428 17:10:27.741379   33446 kic.go:157] Creating ssh key for kic: /home/chris/.minikube/machines/minikube/id_rsa...
I0428 17:10:27.994559   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0428 17:10:27.994623   33446 kic_runner.go:174] docker (temp): /home/chris/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0428 17:10:28.236449   33446 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0428 17:10:32.508265   33446 kic.go:138] duration metric: took 9.240928 seconds to extract preloaded images to volume
I0428 17:10:32.508326   33446 oci.go:268] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0428 17:10:32.558839   33446 machine.go:86] provisioning docker machine ...
I0428 17:10:32.558887   33446 ubuntu.go:166] provisioning hostname "minikube"
I0428 17:10:32.609446   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:32.609830   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:32.609857   33446 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0428 17:10:32.762640   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0428 17:10:32.816327   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:32.816516   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:32.816551   33446 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0428 17:10:32.948688   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0428 17:10:32.948810   33446 ubuntu.go:172] set auth options {CertDir:/home/chris/.minikube CaCertPath:/home/chris/.minikube/certs/ca.pem CaPrivateKeyPath:/home/chris/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/chris/.minikube/machines/server.pem ServerKeyPath:/home/chris/.minikube/machines/server-key.pem ClientKeyPath:/home/chris/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/chris/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/chris/.minikube}
I0428 17:10:32.948948   33446 ubuntu.go:174] setting up certificates
I0428 17:10:32.948995   33446 provision.go:82] configureAuth start
I0428 17:10:33.014245   33446 provision.go:131] copyHostCerts
I0428 17:10:33.014282   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /home/chris/.minikube/ca.pem
I0428 17:10:33.014312   33446 exec_runner.go:91] found /home/chris/.minikube/ca.pem, removing ...
I0428 17:10:33.014435   33446 exec_runner.go:98] cp: /home/chris/.minikube/certs/ca.pem --> /home/chris/.minikube/ca.pem (1034 bytes)
I0428 17:10:33.014515   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/cert.pem -> /home/chris/.minikube/cert.pem
I0428 17:10:33.014542   33446 exec_runner.go:91] found /home/chris/.minikube/cert.pem, removing ...
I0428 17:10:33.014590   33446 exec_runner.go:98] cp: /home/chris/.minikube/certs/cert.pem --> /home/chris/.minikube/cert.pem (1074 bytes)
I0428 17:10:33.014653   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/key.pem -> /home/chris/.minikube/key.pem
I0428 17:10:33.014678   33446 exec_runner.go:91] found /home/chris/.minikube/key.pem, removing ...
I0428 17:10:33.014722   33446 exec_runner.go:98] cp: /home/chris/.minikube/certs/key.pem --> /home/chris/.minikube/key.pem (1679 bytes)
I0428 17:10:33.014801   33446 provision.go:105] generating server cert: /home/chris/.minikube/machines/server.pem ca-key=/home/chris/.minikube/certs/ca.pem private-key=/home/chris/.minikube/certs/ca-key.pem org=chris.minikube san=[10.255.0.3 localhost 127.0.0.1]
I0428 17:10:33.084003   33446 provision.go:159] copyRemoteCerts
I0428 17:10:33.084085   33446 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0428 17:10:33.130999   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:33.244967   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server.pem -> /etc/docker/server.pem
I0428 17:10:33.245140   33446 ssh_runner.go:215] scp /home/chris/.minikube/machines/server.pem --> /etc/docker/server.pem (1115 bytes)
I0428 17:10:33.276875   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0428 17:10:33.276931   33446 ssh_runner.go:215] scp /home/chris/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0428 17:10:33.298085   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0428 17:10:33.298161   33446 ssh_runner.go:215] scp /home/chris/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1034 bytes)
I0428 17:10:33.320892   33446 provision.go:85] duration metric: configureAuth took 371.858734ms
I0428 17:10:33.320922   33446 ubuntu.go:190] setting minikube options for container-runtime
I0428 17:10:33.369324   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:33.369497   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:33.369518   33446 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0428 17:10:33.527605   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: btrfs

I0428 17:10:33.527695   33446 ubuntu.go:71] root file system type: btrfs
I0428 17:10:33.528096   33446 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0428 17:10:33.596778   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:33.596949   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:33.597039   33446 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0428 17:10:33.750658   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0428 17:10:33.810214   33446 main.go:110] libmachine: Using SSH client type: native
I0428 17:10:33.810394   33446 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0428 17:10:33.810430   33446 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0428 17:10:36.519422   33446 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-04-28 07:10:33.742393098 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0428 17:10:36.519585   33446 machine.go:89] provisioned docker machine in 3.96071047s
I0428 17:10:36.519601   33446 client.go:164] LocalClient.Create took 13.396378414s
I0428 17:10:36.519619   33446 start.go:143] duration metric: libmachine.API.Create for "minikube" took 13.396440623s
I0428 17:10:36.519630   33446 start.go:184] post-start starting for "minikube" (driver="docker")
I0428 17:10:36.519641   33446 start.go:194] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0428 17:10:36.519702   33446 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0428 17:10:36.581121   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:36.672172   33446 ssh_runner.go:148] Run: cat /etc/os-release
I0428 17:10:36.674892   33446 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0428 17:10:36.674932   33446 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0428 17:10:36.674953   33446 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0428 17:10:36.674964   33446 info.go:96] Remote host: Ubuntu 19.10
I0428 17:10:36.674978   33446 filesync.go:118] Scanning /home/chris/.minikube/addons for local assets ...
I0428 17:10:36.675034   33446 filesync.go:118] Scanning /home/chris/.minikube/files for local assets ...
I0428 17:10:36.675068   33446 start.go:187] post-start completed in 155.42656ms
I0428 17:10:36.675444   33446 start.go:105] duration metric: createHost completed in 13.552513924s
I0428 17:10:36.675462   33446 start.go:72] releasing machines lock for "minikube", held for 13.552587491s
I0428 17:10:36.731777   33446 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0428 17:10:36.731776   33446 profile.go:149] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0428 17:10:36.732291   33446 ssh_runner.go:148] Run: systemctl --version
I0428 17:10:36.797057   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:36.800151   33446 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0428 17:10:36.882685   33446 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0428 17:10:36.896218   33446 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0428 17:10:36.896303   33446 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0428 17:10:36.918579   33446 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0428 17:10:37.018551   33446 ssh_runner.go:148] Run: sudo systemctl start docker
I0428 17:10:37.029555   33446 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0428 17:10:37.370768   33446 preload.go:82] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0428 17:10:37.370902   33446 preload.go:97] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
I0428 17:10:37.371054   33446 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0428 17:10:37.490675   33446 docker.go:356] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0428 17:10:37.490775   33446 docker.go:294] Images already preloaded, skipping extraction
I0428 17:10:37.490879   33446 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0428 17:10:37.592905   33446 docker.go:356] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
kubernetesui/dashboard:v2.0.0-rc6
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.3
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0428 17:10:37.592944   33446 cache_images.go:69] Images are preloaded, skipping loading
I0428 17:10:37.592991   33446 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.255.0.3 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.255.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.255.0.3 ControlPlaneAddress:10.255.0.3 KubeProxyOptions:map[]}
I0428 17:10:37.593145   33446 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.255.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 10.255.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.255.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 10.255.0.3:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 10.255.0.3:10249

I0428 17:10:37.593581   33446 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0428 17:10:37.682632   33446 kubeadm.go:723] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.255.0.3 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0428 17:10:37.682719   33446 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0428 17:10:37.709125   33446 binaries.go:43] Found k8s binaries, skipping transfer
I0428 17:10:37.709270   33446 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0428 17:10:37.731348   33446 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1437 bytes)
I0428 17:10:37.764160   33446 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new (532 bytes)
I0428 17:10:37.803926   33446 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service.new (349 bytes)
I0428 17:10:37.822539   33446 ssh_runner.go:148] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0428 17:10:37.837202   33446 ssh_runner.go:148] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
I0428 17:10:37.870770   33446 ssh_runner.go:148] Run: sudo systemctl enable kubelet
I0428 17:10:37.963437   33446 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0428 17:10:38.055208   33446 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0428 17:10:38.075272   33446 kubeadm.go:784] reloadKubelet took 252.74769ms
I0428 17:10:38.075301   33446 kubeadm.go:705] reloadKubelet took 482.342893ms
I0428 17:10:38.075318   33446 certs.go:51] Setting up /home/chris/.minikube/profiles/minikube for IP: 10.255.0.3
I0428 17:10:38.075368   33446 certs.go:168] skipping minikubeCA CA generation: /home/chris/.minikube/ca.key
I0428 17:10:38.075392   33446 certs.go:168] skipping proxyClientCA CA generation: /home/chris/.minikube/proxy-client-ca.key
I0428 17:10:38.075447   33446 certs.go:266] generating minikube-user signed cert: /home/chris/.minikube/profiles/minikube/client.key
I0428 17:10:38.075457   33446 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/client.crt with IP's: []
I0428 17:10:38.315996   33446 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/client.crt ...
I0428 17:10:38.316087   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.crt: {Name:mka07a58dd5663c2670aeceac28b6f674efc8b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.316292   33446 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/client.key ...
I0428 17:10:38.316326   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.key: {Name:mkf7666bb385a6e9ae21189ba35d84d3b807484f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.316449   33446 certs.go:266] generating minikube signed cert: /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e
I0428 17:10:38.316479   33446 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e with IP's: [10.255.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0428 17:10:38.402676   33446 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e ...
I0428 17:10:38.402795   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e: {Name:mk5b4e9f64c589982974f227ddd5bafae57aa503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.403475   33446 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e ...
I0428 17:10:38.403552   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e: {Name:mk93e08310566686118ac5f0cc01b60808c6de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.403931   33446 certs.go:277] copying /home/chris/.minikube/profiles/minikube/apiserver.crt.c6d6ce8e -> /home/chris/.minikube/profiles/minikube/apiserver.crt
I0428 17:10:38.404265   33446 certs.go:281] copying /home/chris/.minikube/profiles/minikube/apiserver.key.c6d6ce8e -> /home/chris/.minikube/profiles/minikube/apiserver.key
I0428 17:10:38.404499   33446 certs.go:266] generating aggregator signed cert: /home/chris/.minikube/profiles/minikube/proxy-client.key
I0428 17:10:38.404531   33446 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0428 17:10:38.618956   33446 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/proxy-client.crt ...
I0428 17:10:38.619015   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.crt: {Name:mkdc81e217efe8a180042cd6a4ac0d23a55e96c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.619279   33446 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/proxy-client.key ...
I0428 17:10:38.619292   33446 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.key: {Name:mk49b6ed585271800d057c65a7f077c5e7fbddc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0428 17:10:38.619435   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0428 17:10:38.619497   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0428 17:10:38.619524   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0428 17:10:38.619543   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0428 17:10:38.619563   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0428 17:10:38.619581   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0428 17:10:38.619598   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0428 17:10:38.619634   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0428 17:10:38.619724   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca-key.pem (1679 bytes)
I0428 17:10:38.619774   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca.pem (1034 bytes)
I0428 17:10:38.619832   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/cert.pem (1074 bytes)
I0428 17:10:38.619870   33446 certs.go:341] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/key.pem (1679 bytes)
I0428 17:10:38.619907   33446 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.621050   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1306 bytes)
I0428 17:10:38.640526   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0428 17:10:38.683988   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0428 17:10:38.702470   33446 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0428 17:10:38.748673   33446 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0428 17:10:38.767540   33446 ssh_runner.go:215] scp /home/chris/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0428 17:10:38.806133   33446 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0428 17:10:38.843861   33446 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0428 17:10:38.866603   33446 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0428 17:10:38.903598   33446 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0428 17:10:38.931250   33446 ssh_runner.go:148] Run: openssl version
I0428 17:10:38.962925   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0428 17:10:38.977444   33446 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.981780   33446 certs.go:382] hashing: -rw-r--r-- 1 root root 1066 Apr 27 00:27 /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.981888   33446 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0428 17:10:38.990760   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0428 17:10:39.013916   33446 kubeadm.go:279] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.255.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0428 17:10:39.014229   33446 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0428 17:10:39.121422   33446 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0428 17:10:39.136956   33446 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0428 17:10:39.148834   33446 kubeadm.go:197] ignoring SystemVerification for kubeadm because of docker driver
I0428 17:10:39.148891   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0428 17:10:39.168402   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0428 17:10:39.194156   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0428 17:10:39.222745   33446 ssh_runner.go:148] Run: sudo /bin/bash -c "grep https://10.255.0.3:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0428 17:10:39.249783   33446 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0428 17:12:37.756946   33446 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m58.507101959s)
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0428 07:10:39.344716     733 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
W0428 07:10:42.737921     733 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0428 07:10:42.739363     733 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

...
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/4172

Optional: Full output of minikube logs command:

chris@chris-trotman-laptop ~ % minikube logs

==> Docker <==
-- Logs begin at Tue 2020-04-28 07:10:28 UTC, end at Tue 2020-04-28 07:25:06 UTC. --
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955447170Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955483783Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955512743Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955526815Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.955627504Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006210a0, CONNECTING" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.956005527Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006210a0, READY" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957012697Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957042027Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957060042Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957070956Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957123360Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006e44b0, CONNECTING" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.957411136Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006e44b0, READY" module=grpc
Apr 28 07:10:28 minikube dockerd[117]: time="2020-04-28T07:10:28.960835439Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011890230Z" level=warning msg="Your kernel does not support cgroup rt period"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011918626Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011930897Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.011939679Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.012247268Z" level=info msg="Loading containers: start."
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.144167586Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 28 07:10:29 minikube dockerd[117]: time="2020-04-28T07:10:29.225809008Z" level=info msg="Loading containers: done."
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.487891961Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.488161860Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.488236914Z" level=info msg="Daemon has completed initialization"
Apr 28 07:10:30 minikube dockerd[117]: time="2020-04-28T07:10:30.529699427Z" level=info msg="API listen on /run/docker.sock"
Apr 28 07:10:30 minikube systemd[1]: Started Docker Application Container Engine.
Apr 28 07:10:34 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Apr 28 07:10:34 minikube systemd[1]: Stopping Docker Application Container Engine...
Apr 28 07:10:34 minikube dockerd[117]: time="2020-04-28T07:10:34.129070096Z" level=info msg="Processing signal 'terminated'"
Apr 28 07:10:34 minikube dockerd[117]: time="2020-04-28T07:10:34.129996176Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Apr 28 07:10:34 minikube dockerd[117]: time="2020-04-28T07:10:34.130636295Z" level=info msg="Daemon shutdown complete"
Apr 28 07:10:34 minikube systemd[1]: docker.service: Succeeded.
Apr 28 07:10:34 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 28 07:10:34 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.186156320Z" level=info msg="Starting up"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.187977039Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188003557Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188043755Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188062194Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188193523Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00074fe80, CONNECTING" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.188555491Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00074fe80, READY" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189221615Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189240699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189256546Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189269570Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189312280Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007e65b0, CONNECTING" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.189526352Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007e65b0, READY" module=grpc
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.191769901Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236664613Z" level=warning msg="Your kernel does not support cgroup rt period"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236702122Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236714677Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236725144Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.236916885Z" level=info msg="Loading containers: start."
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.381287962Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 28 07:10:34 minikube dockerd[343]: time="2020-04-28T07:10:34.442888341Z" level=info msg="Loading containers: done."
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.486587080Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.486828678Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.486872058Z" level=info msg="Daemon has completed initialization"
Apr 28 07:10:36 minikube systemd[1]: Started Docker Application Container Engine.
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.515089778Z" level=info msg="API listen on /var/run/docker.sock"
Apr 28 07:10:36 minikube dockerd[343]: time="2020-04-28T07:10:36.515328023Z" level=info msg="API listen on [::]:2376"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

==> describe nodes <==
E0428 17:25:06.962887 60236 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
...

==> kernel <==
07:25:06 up 4:03, 0 users, load average: 1.57, 1.39, 1.16
Linux minikube 5.6.7-arch1-1 #1 SMP PREEMPT Thu, 23 Apr 2020 09:13:56 +0000 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kubelet <==
-- Logs begin at Tue 2020-04-28 07:10:28 UTC, end at Tue 2020-04-28 07:25:07 UTC. --
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360443 25347 state_mem.go:88] [cpumanager] updated default cpuset: ""
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360460 25347 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360477 25347 policy_none.go:43] [cpumanager] none policy: Start
Apr 28 07:25:03 minikube kubelet[25347]: W0428 07:25:03.360515 25347 fs.go:540] stat failed on /dev/mapper/cryptroot with error: no such file or directory
Apr 28 07:25:03 minikube kubelet[25347]: F0428 07:25:03.360543 25347 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 28 in cached partitions map
Apr 28 07:25:03 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Apr 28 07:25:03 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
...

Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.312374 25556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 28 07:25:04 minikube kubelet[25556]: W0428 07:25:04.344205 25556 fs.go:206] stat failed on /dev/mapper/cryptroot with error: no such file or directory
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355387 25556 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355796 25556 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355815 25556 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355895 25556 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355903 25556 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355908 25556 container_manager_linux.go:306] Creating device plugin manager: true
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355995 25556 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.356007 25556 client.go:92] Start docker client with request timeout=2m0s
Apr 28 07:25:04 minikube kubelet[25556]: W0428 07:25:04.360846 25556 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.360869 25556 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.365791 25556 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371571 25556 docker_service.go:258] Docker Info: &{ID:JJU7:OSC4:67QH:5P6G:ZRID:BJZK:5B3A:SRU5:K4BX:YQBV:2H22:MGXF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2020-04-28T07:25:04.366616394Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.6.7-arch1-1 OperatingSystem:Ubuntu 19.10 (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001b2fc0 NCPU:4 MemTotal:16670576640 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:449e926990f8539fd00844b26c07e2f1e306c760 Expected:449e926990f8539fd00844b26c07e2f1e306c760} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371647 25556 docker_service.go:271] Setting cgroupDriver to cgroupfs
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378056 25556 remote_runtime.go:59] parsed scheme: ""
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378073 25556 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378104 25556 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378112 25556 clientconn.go:933] ClientConn switching balancer to "pick_first"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378163 25556 remote_image.go:50] parsed scheme: ""
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378173 25556 remote_image.go:50] scheme "" not registered, fallback to default scheme
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378184 25556 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378192 25556 clientconn.go:933] ClientConn switching balancer to "pick_first"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378223 25556 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378276 25556 kubelet.go:317] Watching apiserver
...

@tstromberg tstromberg changed the title from "Unable to start minikube with docker driver on Arch Linux" to "arch+docker: [kubelet-check] Initial timeout of 40s passed." on Apr 28, 2020
@tstromberg tstromberg added the co/docker-driver (Issues related to kubernetes in container) and kind/support (Categorizes issue or PR as a support question.) labels on Apr 28, 2020
@tstromberg
Contributor

tstromberg commented May 1, 2020

Apr 28 07:25:03 minikube kubelet[25347]: F0428 07:25:03.360543 25347 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 28 in cached partitions map

Any idea what that's about? I've never seen this error before. Any chance that you are using btrfs? I ask because this may be related to kubernetes/kubernetes#65204
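
For a quick check on the host side, the filesystem backing Docker's data directory can be read straight from the mount table. This is only a suggested probe using standard util-linux/coreutils tools, nothing minikube-specific:

% findmnt -no FSTYPE --target /var/lib/docker
% stat -f -c %T /var/lib/docker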

@tstromberg
Contributor

tstromberg commented May 1, 2020

To help debug, do you mind sharing the result of:

minikube ssh "sudo grep '/var ' /proc/mounts"

I suspect that we have an issue with btrfs here:

Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371571 25556 docker_service.go:258] Docker Info: &{ID:JJU7:OSC4:67QH:5P6G:ZRID:BJZK:5B3A:SRU5:K4BX:YQBV:2H22:MGXF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2020-04-28T07:25:04.366616394Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.6.7-arch1-1 OperatingSys
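
If only the storage-driver details are needed, a shorter probe along these lines should also work (the same docker info Go-template fields, just queried inside the node container):

% minikube ssh "docker info --format '{{.Driver}} {{json .DriverStatus}}'"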

@tstromberg tstromberg changed the title from "arch+docker: [kubelet-check] Initial timeout of 40s passed." to "docker on btrfs: [kubelet-check] Initial timeout of 40s passed." on May 1, 2020
@tstromberg tstromberg added the triage/needs-information (Indicates an issue needs more information in order to work on it.) and needs-problem-regex labels on May 1, 2020
@solarnz
Author

solarnz commented May 1, 2020

Yep - I am using btrfs.
My setup uses dm-crypt + LUKS on the physical drive, and I mount a btrfs subvolume at /.

% minikube ssh "sudo grep '/var ' /proc/mounts"
/dev/mapper/cryptroot /var btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/root/var/lib/docker/volumes/minikube/_data 0 0

For reference, the btrfs mounts directly on my system are

% cat /proc/mounts | grep btrfs
/dev/mapper/cryptroot / btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/root 0 0
/dev/mapper/cryptroot /mnt/system btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0
/dev/mapper/cryptroot /var/lib/docker/btrfs btrfs rw,relatime,ssd,space_cache,subvolid=257,subvol=/root/var/lib/docker/btrfs 0 0
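
Since the kic node mounts the minikube Docker volume at /var, its backing filesystem is simply whatever the host directory under /var/lib/docker/volumes sits on. One way to confirm that from the host (the volume name is taken from the logs above; offered only as a suggestion):

% sudo stat -f -c %T "$(docker volume inspect minikube --format '{{.Mountpoint}}')"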

@medyagh
Member

medyagh commented May 3, 2020

@solarnz I noticed you are using Arch Linux
("minikube v1.10.0-beta.1 on Arch"). Do you mind checking whether the overlayfs module has been loaded in the kernel?
Even though your own Docker is installed on btrfs, minikube's inner Docker uses overlay2 (the default), because kubeadm does NOT like btrfs and fails its system verification.
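
If it turns out not to be loaded, a minimal sketch for loading and persisting it (standard kmod/systemd tooling, not minikube commands):

% lsmod | grep overlay                                       # no output means the module is not loaded
% sudo modprobe overlay                                      # load it for the current boot
% echo overlay | sudo tee /etc/modules-load.d/overlay.conf   # persist across reboots on systemd systems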

@solarnz
Author

solarnz commented May 3, 2020

@medyagh sure,

% lsmod  | grep overlay
overlay               135168  0

It looks like it has been loaded into the kernel.

I also tried forcing docker to use the overlay2 storage driver, including removing the /var/lib/docker directory, but there was no change in the result.

@medyagh
Member

medyagh commented May 11, 2020

@solarnz could you please paste the output of

cat /etc/docker/daemon.json

You would need to change the Docker daemon settings on your system to use overlay2.
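
For reference, a minimal sketch of that change (merge the key into any existing /etc/docker/daemon.json rather than overwriting the file, and note that restarting the daemon interrupts running containers):

% cat /etc/docker/daemon.json
{
    "storage-driver": "overlay2"
}
% sudo systemctl restart docker
% docker info --format '{{.Driver}}'   # should report overlay2 afterwards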

@solarnz
Author

solarnz commented May 19, 2020

I have modified my docker daemon.json file to include the setting to use overlay2:

{
    "bip": "10.255.0.1/17",
    "fixed-cidr": "10.255.0.0/17",
    "default-address-pools" : [
        {
            "base" : "10.255.128.0/17",
            "size" : 24
        }
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "storage-driver": "overlay2"
}
# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.8-ce
 Storage Driver: overlay2
  Backing Filesystem: <unknown>
  Supports d_type: true
  Native Overlay Diff: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d76c121f76a5fc8a462dc64594aea72fe18e1178.m
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.6.13-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 31.25GiB
 Name: chris-trotman-laptop
 ID: HRXK:HE53:NN4E:I5MZ:XTSQ:6VPD:YNGZ:XX7U:2JY6:SCSP:4OIC:3YL7
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Minikube still couldn't start.
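
One variable worth ruling out when retrying after a storage-driver change is leftover state from the earlier attempt; deleting the profile recreates the node container and its minikube volume from scratch (a standard minikube command, offered only as a suggestion):

% minikube delete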

% minikube start --driver=docker  --v=5 --alsologtostderr                                             
I0520 09:14:05.607018   31269 start.go:99] hostinfo: {"hostname":"chris-trotman-laptop","uptime":57738,"bootTime":1589872307,"procs":308,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"","kernelVersion":"5.6.13-arch1-1","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a4ab05cf-cb22-4c07-88c2-9bf79092f646"}
I0520 09:14:05.607479   31269 start.go:109] virtualization: kvm host
😄  minikube v1.10.1 on Arch 
I0520 09:14:05.607629   31269 driver.go:253] Setting default libvirt URI to qemu:///system
I0520 09:14:05.654899   31269 docker.go:95] docker version: linux-19.03.8-ce
✨  Using the docker driver based on user configuration
I0520 09:14:05.655029   31269 start.go:215] selected driver: docker
I0520 09:14:05.655040   31269 start.go:594] validating driver "docker" against <nil>
I0520 09:14:05.655052   31269 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0520 09:14:05.655071   31269 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0520 09:14:05.655145   31269 start_flags.go:217] no existing cluster config was found, will generate one from the flags 
I0520 09:14:05.655361   31269 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0520 09:14:05.750523   31269 start_flags.go:231] Using suggested 8000MB memory alloc based on sys=32002MB, container=32002MB
I0520 09:14:05.750709   31269 start_flags.go:558] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0520 09:14:05.750848   31269 cache.go:104] Beginning downloading kic artifacts for docker with docker
🚜  Pulling base image ...
I0520 09:14:05.794120   31269 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0520 09:14:05.794159   31269 cache.go:110] Downloading gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 to local daemon
I0520 09:14:05.794179   31269 preload.go:96] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0520 09:14:05.794189   31269 image.go:98] Writing gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 to local daemon
I0520 09:14:05.794191   31269 cache.go:48] Caching tarball of preloaded images
I0520 09:14:05.794216   31269 preload.go:122] Found /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0520 09:14:05.794226   31269 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0520 09:14:05.794226   31269 image.go:103] Getting image gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0520 09:14:05.794451   31269 profile.go:156] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0520 09:14:05.794577   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/config.json: {Name:mk450fd4eda337c7ddd64ef0cf55f5d70f3fb5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:14:07.543763   31269 image.go:112] Writing image gcr.io/k8s-minikube/kicbase:v0.0.10
I0520 09:15:48.765594   31269 image.go:123] Pulling image gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0520 09:15:51.417704   31269 cache.go:132] Successfully downloaded all kic artifacts
I0520 09:15:51.417832   31269 start.go:223] acquiring machines lock for minikube: {Name:mkec809913d626154fe8c3badcd878ae0c8a6125 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0520 09:15:51.418100   31269 start.go:227] acquired machines lock for "minikube" in 203.847µs
I0520 09:15:51.418187   31269 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}
I0520 09:15:51.418447   31269 start.go:104] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=2, Memory=8000MB) ...
I0520 09:15:51.419532   31269 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0520 09:15:51.419640   31269 client.go:161] LocalClient.Create starting
I0520 09:15:51.419751   31269 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/ca.pem
I0520 09:15:51.419874   31269 main.go:110] libmachine: Decoding PEM data...
I0520 09:15:51.419960   31269 main.go:110] libmachine: Parsing certificate...
I0520 09:15:51.420470   31269 main.go:110] libmachine: Reading certificate data from /home/chris/.minikube/certs/cert.pem
I0520 09:15:51.420567   31269 main.go:110] libmachine: Decoding PEM data...
I0520 09:15:51.420633   31269 main.go:110] libmachine: Parsing certificate...
I0520 09:15:51.422165   31269 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0520 09:15:51.477156   31269 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0520 09:15:51.537453   31269 oci.go:98] Successfully created a docker volume minikube
W0520 09:15:51.537515   31269 oci.go:158] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0520 09:15:51.537770   31269 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0520 09:15:51.537562   31269 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0520 09:15:51.537857   31269 preload.go:96] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0520 09:15:51.537870   31269 kic.go:134] Starting extracting preloaded images to volume ...
I0520 09:15:51.537919   31269 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0520 09:15:51.868168   31269 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0520 09:15:54.630666   31269 cli_runner.go:150] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: (2.762422589s)
I0520 09:15:54.630742   31269 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0520 09:15:54.696283   31269 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0520 09:15:54.762298   31269 oci.go:212] the created container "minikube" has a running status.
I0520 09:15:54.762331   31269 kic.go:162] Creating ssh key for kic: /home/chris/.minikube/machines/minikube/id_rsa...
I0520 09:15:54.865978   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0520 09:15:54.866048   31269 kic_runner.go:179] docker (temp): /home/chris/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0520 09:15:55.071540   31269 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0520 09:15:55.071566   31269 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0520 09:16:01.909935   31269 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (10.371913284s)
I0520 09:16:01.910010   31269 kic.go:139] duration metric: took 10.372128 seconds to extract preloaded images to volume
I0520 09:16:01.910267   31269 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0520 09:16:01.977143   31269 machine.go:86] provisioning docker machine ...
I0520 09:16:01.977217   31269 ubuntu.go:166] provisioning hostname "minikube"
I0520 09:16:01.977269   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.023760   31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:02.023986   31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:02.024012   31269 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0520 09:16:02.179030   31269 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0520 09:16:02.179165   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.247783   31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:02.247966   31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:02.248000   31269 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0520 09:16:02.381849   31269 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0520 09:16:02.381926   31269 ubuntu.go:172] set auth options {CertDir:/home/chris/.minikube CaCertPath:/home/chris/.minikube/certs/ca.pem CaPrivateKeyPath:/home/chris/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/chris/.minikube/machines/server.pem ServerKeyPath:/home/chris/.minikube/machines/server-key.pem ClientKeyPath:/home/chris/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/chris/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/chris/.minikube}
I0520 09:16:02.381982   31269 ubuntu.go:174] setting up certificates
I0520 09:16:02.382127   31269 provision.go:82] configureAuth start
I0520 09:16:02.382228   31269 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0520 09:16:02.479016   31269 provision.go:131] copyHostCerts
I0520 09:16:02.479072   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /home/chris/.minikube/ca.pem
I0520 09:16:02.479107   31269 exec_runner.go:91] found /home/chris/.minikube/ca.pem, removing ...
I0520 09:16:02.479266   31269 exec_runner.go:98] cp: /home/chris/.minikube/certs/ca.pem --> /home/chris/.minikube/ca.pem (1034 bytes)
I0520 09:16:02.479352   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/cert.pem -> /home/chris/.minikube/cert.pem
I0520 09:16:02.479380   31269 exec_runner.go:91] found /home/chris/.minikube/cert.pem, removing ...
I0520 09:16:02.479435   31269 exec_runner.go:98] cp: /home/chris/.minikube/certs/cert.pem --> /home/chris/.minikube/cert.pem (1074 bytes)
I0520 09:16:02.479499   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/key.pem -> /home/chris/.minikube/key.pem
I0520 09:16:02.479527   31269 exec_runner.go:91] found /home/chris/.minikube/key.pem, removing ...
I0520 09:16:02.479571   31269 exec_runner.go:98] cp: /home/chris/.minikube/certs/key.pem --> /home/chris/.minikube/key.pem (1679 bytes)
I0520 09:16:02.479637   31269 provision.go:105] generating server cert: /home/chris/.minikube/machines/server.pem ca-key=/home/chris/.minikube/certs/ca.pem private-key=/home/chris/.minikube/certs/ca-key.pem org=chris.minikube san=[10.255.0.3 localhost 127.0.0.1]
I0520 09:16:02.695001   31269 provision.go:159] copyRemoteCerts
I0520 09:16:02.695070   31269 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0520 09:16:02.695136   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.739944   31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:02.838402   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0520 09:16:02.838568   31269 ssh_runner.go:215] scp /home/chris/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1034 bytes)
I0520 09:16:02.871941   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server.pem -> /etc/docker/server.pem
I0520 09:16:02.872012   31269 ssh_runner.go:215] scp /home/chris/.minikube/machines/server.pem --> /etc/docker/server.pem (1115 bytes)
I0520 09:16:02.890317   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0520 09:16:02.890369   31269 ssh_runner.go:215] scp /home/chris/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0520 09:16:02.919053   31269 provision.go:85] duration metric: configureAuth took 536.867488ms
I0520 09:16:02.919084   31269 ubuntu.go:190] setting minikube options for container-runtime
I0520 09:16:02.919303   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:02.963663   31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:02.963855   31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:02.963874   31269 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0520 09:16:03.095784   31269 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0520 09:16:03.095907   31269 ubuntu.go:71] root file system type: overlay
I0520 09:16:03.096365   31269 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0520 09:16:03.096529   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:03.183131   31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:03.183327   31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:03.183490   31269 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0520 09:16:03.325533   31269 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0520 09:16:03.325709   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:03.380286   31269 main.go:110] libmachine: Using SSH client type: native
I0520 09:16:03.380511   31269 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0520 09:16:03.380557   31269 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0520 09:16:04.254668   31269 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-05-19 23:16:03.321615588 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0520 09:16:04.254770   31269 machine.go:89] provisioned docker machine in 2.277583293s
I0520 09:16:04.254782   31269 client.go:164] LocalClient.Create took 12.835107386s
I0520 09:16:04.254802   31269 start.go:145] duration metric: libmachine.API.Create for "minikube" took 12.835283642s
I0520 09:16:04.254813   31269 start.go:186] post-start starting for "minikube" (driver="docker")
I0520 09:16:04.254824   31269 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0520 09:16:04.254892   31269 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0520 09:16:04.254931   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:04.301793   31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:04.411769   31269 ssh_runner.go:148] Run: cat /etc/os-release
I0520 09:16:04.416204   31269 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0520 09:16:04.416249   31269 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0520 09:16:04.416284   31269 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0520 09:16:04.416302   31269 info.go:96] Remote host: Ubuntu 19.10
I0520 09:16:04.416333   31269 filesync.go:118] Scanning /home/chris/.minikube/addons for local assets ...
I0520 09:16:04.416420   31269 filesync.go:118] Scanning /home/chris/.minikube/files for local assets ...
I0520 09:16:04.416480   31269 start.go:189] post-start completed in 161.654948ms
I0520 09:16:04.416882   31269 start.go:107] duration metric: createHost completed in 12.998370713s
I0520 09:16:04.416899   31269 start.go:74] releasing machines lock for "minikube", held for 12.998755005s
I0520 09:16:04.416976   31269 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0520 09:16:04.465697   31269 profile.go:156] Saving config to /home/chris/.minikube/profiles/minikube/config.json ...
I0520 09:16:04.465708   31269 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0520 09:16:04.465770   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:04.465993   31269 ssh_runner.go:148] Run: systemctl --version
I0520 09:16:04.466035   31269 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0520 09:16:04.514169   31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:04.515778   31269 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:/home/chris/.minikube/machines/minikube/id_rsa Username:docker}
I0520 09:16:04.609972   31269 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0520 09:16:04.658251   31269 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0520 09:16:04.658460   31269 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0520 09:16:04.696971   31269 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0520 09:16:04.917972   31269 ssh_runner.go:148] Run: sudo systemctl start docker
I0520 09:16:04.930668   31269 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
I0520 09:16:04.982822   31269 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0520 09:16:05.041490   31269 cli_runner.go:108] Run: docker inspect --format "{{(index .IPAM.Config 0).Gateway}}" 68d467ba56af
I0520 09:16:05.106394   31269 network.go:77] got host ip for mount in container by inspect docker network: 10.255.0.1
I0520 09:16:05.106471   31269 start.go:251] checking
I0520 09:16:05.106655   31269 ssh_runner.go:148] Run: grep 10.255.0.1	host.minikube.internal$ /etc/hosts
I0520 09:16:05.111611   31269 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "10.255.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0520 09:16:05.121644   31269 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0520 09:16:05.121675   31269 preload.go:96] Found local preload: /home/chris/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0520 09:16:05.121716   31269 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0520 09:16:05.173742   31269 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0520 09:16:05.173784   31269 docker.go:317] Images already preloaded, skipping extraction
I0520 09:16:05.173836   31269 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0520 09:16:05.242959   31269 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0520 09:16:05.242991   31269 cache_images.go:69] Images are preloaded, skipping loading
I0520 09:16:05.243037   31269 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.255.0.3 APIServerPort:8443 KubernetesVersion:v1.18.2 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.255.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.255.0.3 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0520 09:16:05.243137   31269 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.255.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 10.255.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.255.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 10.255.0.3:10249

I0520 09:16:05.243248   31269 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0520 09:16:05.324882   31269 kubeadm.go:737] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.255.0.3 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0520 09:16:05.324966   31269 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.2
I0520 09:16:05.332697   31269 binaries.go:43] Found k8s binaries, skipping transfer
I0520 09:16:05.332761   31269 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0520 09:16:05.348705   31269 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1458 bytes)
I0520 09:16:05.372021   31269 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (532 bytes)
I0520 09:16:05.393360   31269 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0520 09:16:05.413667   31269 start.go:251] checking
I0520 09:16:05.413754   31269 ssh_runner.go:148] Run: grep 10.255.0.3	control-plane.minikube.internal$ /etc/hosts
I0520 09:16:05.417168   31269 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "10.255.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0520 09:16:05.427435   31269 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0520 09:16:05.501418   31269 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0520 09:16:05.513809   31269 certs.go:52] Setting up /home/chris/.minikube/profiles/minikube for IP: 10.255.0.3
I0520 09:16:05.513875   31269 certs.go:169] skipping minikubeCA CA generation: /home/chris/.minikube/ca.key
I0520 09:16:05.513908   31269 certs.go:169] skipping proxyClientCA CA generation: /home/chris/.minikube/proxy-client-ca.key
I0520 09:16:05.513976   31269 certs.go:267] generating minikube-user signed cert: /home/chris/.minikube/profiles/minikube/client.key
I0520 09:16:05.513987   31269 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/client.crt with IP's: []
I0520 09:16:05.682270   31269 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/client.crt ...
I0520 09:16:05.682299   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.crt: {Name:mka07a58dd5663c2670aeceac28b6f674efc8b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.682547   31269 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/client.key ...
I0520 09:16:05.682558   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/client.key: {Name:mkf7666bb385a6e9ae21189ba35d84d3b807484f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.682683   31269 certs.go:267] generating minikube signed cert: /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497
I0520 09:16:05.682692   31269 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497 with IP's: [10.255.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0520 09:16:05.810389   31269 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497 ...
I0520 09:16:05.810413   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497: {Name:mk6f1a42b5c3dc17333e7b9453fb03df3d061b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.810735   31269 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497 ...
I0520 09:16:05.810750   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497: {Name:mkc24f38a4bd64f8ff1af66d64fadd9799e384fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.810908   31269 certs.go:278] copying /home/chris/.minikube/profiles/minikube/apiserver.crt.eb746497 -> /home/chris/.minikube/profiles/minikube/apiserver.crt
I0520 09:16:05.811011   31269 certs.go:282] copying /home/chris/.minikube/profiles/minikube/apiserver.key.eb746497 -> /home/chris/.minikube/profiles/minikube/apiserver.key
I0520 09:16:05.811078   31269 certs.go:267] generating aggregator signed cert: /home/chris/.minikube/profiles/minikube/proxy-client.key
I0520 09:16:05.811087   31269 crypto.go:69] Generating cert /home/chris/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0520 09:16:05.884279   31269 crypto.go:157] Writing cert to /home/chris/.minikube/profiles/minikube/proxy-client.crt ...
I0520 09:16:05.884307   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.crt: {Name:mkdc81e217efe8a180042cd6a4ac0d23a55e96c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.884589   31269 crypto.go:165] Writing key to /home/chris/.minikube/profiles/minikube/proxy-client.key ...
I0520 09:16:05.884601   31269 lock.go:35] WriteFile acquiring /home/chris/.minikube/profiles/minikube/proxy-client.key: {Name:mk49b6ed585271800d057c65a7f077c5e7fbddc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0520 09:16:05.884738   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0520 09:16:05.884757   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0520 09:16:05.884767   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0520 09:16:05.884778   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0520 09:16:05.884800   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0520 09:16:05.884815   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0520 09:16:05.884824   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0520 09:16:05.884834   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0520 09:16:05.884889   31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca-key.pem (1679 bytes)
I0520 09:16:05.884953   31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/ca.pem (1034 bytes)
I0520 09:16:05.885019   31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/cert.pem (1074 bytes)
I0520 09:16:05.885055   31269 certs.go:342] found cert: /home/chris/.minikube/certs/home/chris/.minikube/certs/key.pem (1679 bytes)
I0520 09:16:05.885096   31269 vm_assets.go:95] NewFileAsset: /home/chris/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:05.885779   31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0520 09:16:05.904914   31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0520 09:16:05.926662   31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0520 09:16:05.946542   31269 ssh_runner.go:215] scp /home/chris/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0520 09:16:05.967015   31269 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0520 09:16:05.987855   31269 ssh_runner.go:215] scp /home/chris/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0520 09:16:06.007271   31269 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0520 09:16:06.028665   31269 ssh_runner.go:215] scp /home/chris/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0520 09:16:06.050704   31269 ssh_runner.go:215] scp /home/chris/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0520 09:16:06.082922   31269 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0520 09:16:06.103789   31269 ssh_runner.go:148] Run: openssl version
I0520 09:16:06.110910   31269 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0520 09:16:06.118544   31269 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:06.121690   31269 certs.go:383] hashing: -rw-r--r-- 1 root root 1066 Apr 27 00:27 /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:06.121750   31269 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0520 09:16:06.126884   31269 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0520 09:16:06.135075   31269 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.255.0.3 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0520 09:16:06.135264   31269 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0520 09:16:06.192050   31269 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0520 09:16:06.200042   31269 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0520 09:16:06.209841   31269 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0520 09:16:06.209917   31269 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0520 09:16:06.216880   31269 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0520 09:16:06.216934   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0520 09:18:05.040143   31269 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (1m58.823132347s)
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0519 23:16:06.276458     702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:16:10.008863     702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:16:10.010762     702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0520 09:18:05.041356   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0520 09:18:06.437551   31269 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.396114953s)
I0520 09:18:06.437633   31269 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0520 09:18:06.448879   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0520 09:18:06.501067   31269 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0520 09:18:06.501135   31269 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0520 09:18:06.508102   31269 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0520 09:18:06.508150   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0520 09:22:07.871833   31269 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m1.363602055s)
I0520 09:22:07.872009   31269 kubeadm.go:295] StartCluster complete in 6m1.736946372s
I0520 09:22:07.872184   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0520 09:22:07.977395   31269 logs.go:203] 0 containers: []
W0520 09:22:07.977425   31269 logs.go:205] No container was found matching "kube-apiserver"
I0520 09:22:07.977483   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0520 09:22:08.033266   31269 logs.go:203] 0 containers: []
W0520 09:22:08.033293   31269 logs.go:205] No container was found matching "etcd"
I0520 09:22:08.033354   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0520 09:22:08.080712   31269 logs.go:203] 0 containers: []
W0520 09:22:08.080737   31269 logs.go:205] No container was found matching "coredns"
I0520 09:22:08.080786   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0520 09:22:08.126296   31269 logs.go:203] 0 containers: []
W0520 09:22:08.126322   31269 logs.go:205] No container was found matching "kube-scheduler"
I0520 09:22:08.126384   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0520 09:22:08.173131   31269 logs.go:203] 0 containers: []
W0520 09:22:08.173160   31269 logs.go:205] No container was found matching "kube-proxy"
I0520 09:22:08.173223   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0520 09:22:08.219810   31269 logs.go:203] 0 containers: []
W0520 09:22:08.219835   31269 logs.go:205] No container was found matching "kubernetes-dashboard"
I0520 09:22:08.219887   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0520 09:22:08.267037   31269 logs.go:203] 0 containers: []
W0520 09:22:08.267080   31269 logs.go:205] No container was found matching "storage-provisioner"
I0520 09:22:08.267134   31269 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0520 09:22:08.316399   31269 logs.go:203] 0 containers: []
W0520 09:22:08.316423   31269 logs.go:205] No container was found matching "kube-controller-manager"
I0520 09:22:08.316436   31269 logs.go:117] Gathering logs for Docker ...
I0520 09:22:08.316452   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0520 09:22:08.331981   31269 logs.go:117] Gathering logs for container status ...
I0520 09:22:08.332005   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0520 09:22:08.351479   31269 logs.go:117] Gathering logs for kubelet ...
I0520 09:22:08.351504   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0520 09:22:08.434044   31269 logs.go:117] Gathering logs for dmesg ...
I0520 09:22:08.434078   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0520 09:22:08.451904   31269 logs.go:117] Gathering logs for describe nodes ...
I0520 09:22:08.451932   31269 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0520 09:22:08.551087   31269 logs.go:124] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
W0520 09:22:08.551132   31269 out.go:201] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0519 23:18:06.554295    5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💣  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0519 23:18:06.554295    5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0520 09:22:08.551471   31269 exit.go:58] WithError(failed to start node)=startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0519 23:18:06.554295    5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1adae43, 0x14, 0x1d9bf60, 0xc0007199c0)
	/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2ae78c0, 0xc000355800, 0x0, 0x3)
	/app/cmd/minikube/cmd/start.go:204 +0x7f7
github.com/spf13/cobra.(*Command).execute(0x2ae78c0, 0xc0003557d0, 0x3, 0x3, 0x2ae78c0, 0xc0003557d0)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2ae6900, 0x0, 0x1, 0xc000044480)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
	/app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
	/app/cmd/minikube/main.go:66 +0xea

❌  [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: read tcp 127.0.0.1:46806->127.0.0.1:10248: read: connection reset by peer.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0519 23:18:06.554295    5829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0519 23:18:07.856341    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0519 23:18:07.857340    5829 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/4172
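For what it's worth, the invocation that minikube's own suggestion above points at would look roughly like the line below; I haven't verified that it actually fixes anything here, it is just the suggested flag spelled out:

% minikube start --driver=docker --extra-config=kubelet.cgroup-driver=systemd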

I grabbed the output of docker inspect minikube while it was running as well:

% docker inspect minikube 
[
    {
        "Id": "9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76",
        "Created": "2020-05-19T23:15:52.108544092Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 31958,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-05-19T23:15:54.607483251Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:e6bc41c39dc48b2b472936db36aedb28527ce0f675ed1bc20d029125c9ccf578",
        "ResolvConfPath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/hostname",
        "HostsPath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/hosts",
        "LogPath": "/var/lib/docker/containers/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76/9ae896e2d7353160d850b1ab1cb8d276c99ddd5692e9b676998468232cd1cf76-json.log",
        "Name": "/minikube",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro",
                "minikube:/var"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Capabilities": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "apparmor=unconfined"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 8388608000,
            "NanoCpus": 2000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 16777216000,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": null,
            "ReadonlyPaths": null
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10-init/diff:/var/lib/docker/overlay2/97001f1726d3aa4d8faa2791ab60afc80013efd05ce43b653fa8f778c04be09b/diff:/var/lib/docker/overlay2/825c8843c005b044e6cbe80b6f2fa1072ce45e07bd70a956af0c790fdcb54403/diff:/var/lib/docker/overlay2/29da775ccc9c6c136279f449ad732ec1b0e70e8245ca514c17eb8d800734ac86/diff:/var/lib/docker/overlay2/67f6c8e036869da61915d6e65e3764e9a4f42aabf864ba1f4750e50f9a0a2b5e/diff:/var/lib/docker/overlay2/73f77edfc6525c22482758136be0c6ba6276b3b07e18fce486dd0794085c4d7f/diff:/var/lib/docker/overlay2/5e62407e45ebcae5afdedddba4daeabfd82b3f7a21e52479585642511aa010d7/diff:/var/lib/docker/overlay2/1eab65627407c25724d041b5832571577ed3b46cc89166bf3fac9c38c54ba993/diff:/var/lib/docker/overlay2/3cdf1937c2639fa8ac54730be38af5f9ffc33c62941d36c3eba700be533f81fa/diff:/var/lib/docker/overlay2/7ea2109e2eed83eead30384979e04774f9cff2d53aeb453c89cb7d19f7a81e73/diff:/var/lib/docker/overlay2/c3901ca0d6396e8261e05c3dbaa17cc6587fe49af684915bd611c09d7eb75b65/diff:/var/lib/docker/overlay2/9f9497aac94cabb29f8db9f956172f0e1389d7beca548b8146a0dc409a08b6a6/diff:/var/lib/docker/overlay2/d0f26800b7b92db24e144c9c968d60470e41047ffd2c34e1f1652153a65fb287/diff:/var/lib/docker/overlay2/bcb502be953c08c7c698ffe0bb8bcc3b81144a65d2246b26d57e1cb05889e504/diff:/var/lib/docker/overlay2/4a62de26d9833bf297669ac36fa2f55f4035a68c46905c001f4c4d4fe9be2ef4/diff:/var/lib/docker/overlay2/e5264314d7045586164cf3d94ac62caed3ca69d65568054522f8a3a4c93157e7/diff:/var/lib/docker/overlay2/f042b33a7039e4769ea068f8279457d08d1ff0b2a57f403a40d87b5d54bc24b4/diff:/var/lib/docker/overlay2/bc6af7651a08e06b82fabba53021b1dffff4f744055571601a1fb6d3d4ebf900/diff:/var/lib/docker/overlay2/d7fdce89c13587dbcc2fb8f778fc1e76d2698d6bf6aca14884c6dce0dd769f8f/diff:/var/lib/docker/overlay2/e598b3e6d891b9e08921d600a3f5a0e2a59bf0c1629058a9eb3cf16ba15d5683/diff:/var/lib/docker/overlay2/fa206236361099372568dc154f812906577a6ec9c3addca581fdf8c0abe077cf/diff:/var/lib/docker/overlay2/db05f4ddb7a3daec133bef6f64e778bc22aa721b6def8f6556c445dc2934f34b/diff:/var/lib/docker/overlay2/ec1c0879089261d5dd75303b368afb0aac3c368355498621637e8a4020ef67c6/diff",
                "MergedDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10/merged",
                "UpperDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10/diff",
                "WorkDir": "/var/lib/docker/overlay2/109202af1d5c4f3b9ec67e5b85035432870eff5ff2e473fd32e00f04a4bc7e10/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "minikube",
                "Source": "/var/lib/docker/volumes/minikube/_data",
                "Destination": "/var",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "minikube",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "22/tcp": {},
                "2376/tcp": {},
                "5000/tcp": {},
                "8443/tcp": {}
            },
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "container=docker",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": {
                "created_by.minikube.sigs.k8s.io": "true",
                "mode.minikube.sigs.k8s.io": "minikube",
                "name.minikube.sigs.k8s.io": "minikube",
                "role.minikube.sigs.k8s.io": ""
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "35292450a73751900a855e4e6fcf4e186b88708fbd64a575387b6dbe86e5f3ed",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32771"
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32770"
                    }
                ],
                "5000/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32769"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "32768"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/35292450a737",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "eae87075bf1bf18bd457ab4b8bd9e41830740b5cc0cf745606848557fa44a25d",
            "Gateway": "10.255.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "10.255.0.3",
            "IPPrefixLen": 17,
            "IPv6Gateway": "",
            "MacAddress": "02:42:0a:ff:00:03",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "68d467ba56af2e7b140b8239c644cf0c04a63a9c317e0a9505203beb4da7fec2",
                    "EndpointID": "eae87075bf1bf18bd457ab4b8bd9e41830740b5cc0cf745606848557fa44a25d",
                    "Gateway": "10.255.0.1",
                    "IPAddress": "10.255.0.3",
                    "IPPrefixLen": 17,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:ff:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]

and from that output it does appear that the container is using the overlay2 storage driver.
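In case it is useful to anyone reproducing this, a quick way to compare the storage driver on the host with the one used by the Docker daemon inside the node container is roughly the following (assuming the node container is named minikube, as in the inspect output above):

% docker info --format '{{.Driver}}'                    # storage driver of the host daemon
% minikube ssh -- docker info --format '{{.Driver}}'    # storage driver of the daemon inside the node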

@priyawadhwa

Hey @solarnz, I noticed your local Docker is running systemd as the cgroup manager, while the Docker inside minikube is running cgroupfs.

could you try running:

minikube start --driver docker --force-systemd

which will force the Docker inside minikube to use systemd. (Sometimes conflicting cgroup managers can cause issues.)
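If it helps to confirm the mismatch first, the two cgroup managers can be compared with something like this (assuming the minikube node container is running):

% docker info --format '{{.CgroupDriver}}'                    # cgroup manager of the host daemon
% minikube ssh -- docker info --format '{{.CgroupDriver}}'    # cgroup manager inside the node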

@solarnz

solarnz commented May 28, 2020

% minikube start --driver docker --force-systemd --v=5
😄  minikube v1.10.1 on Arch 
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=8000MB) ...
🐳  Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.255.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0528 01:23:45.818744     847 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0528 01:23:49.767756     847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0528 01:23:49.769578     847 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


💣  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0528 01:25:46.047664    6109 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0528 01:25:48.008828    6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0528 01:25:48.010652    6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

❌  [NONE_KUBELET] failed to start node startup failed: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0528 01:25:46.047664    6109 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0528 01:25:48.008828    6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0528 01:25:48.010652    6109 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/4172

@tstromberg
Contributor

@solarnz - can you add the output of minikube logs as well?

@marcominetti

Hi there, I'm hitting the same issue. The relevant part of the log, from the docker systemd output, is this one (other errors are recovered by further attempts to start the kubelet service):

Jun 05 09:14:11 test kubelet[1675]: W0605 09:14:11.887944    1675 fs.go:540] stat failed on /dev/mapper/nvme0n1p3_crypt with error: no such file or directory
Jun 05 09:14:11 test kubelet[1675]: F0605 09:14:11.887951    1675 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 29 in cached partitions map

I opened a shell in the docker container and /dev/mapper/ only contains the control device, not the mapped partition from the host... could this be a cgroup or volume-binding issue?
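
A quick way to confirm what the node container can see (a minimal sketch; it assumes the minikube node container is literally named minikube and that the root filesystem sits on a dm-crypt mapping):

# On the host: the btrfs root is backed by a device-mapper node
ls -l /dev/mapper/
# Inside the minikube container: usually only the "control" node is present
docker exec minikube ls -l /dev/mapper/
# The major:minor that kubelet/cAdvisor tries to resolve comes from the mount table
docker exec minikube grep btrfs /proc/self/mountinfo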

@marcominetti

The issue is generated by the "workaround" code in google/cadvisor at https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L203 and https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L540

		// btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
		// instead of using values from /proc/self/mountinfo we use stat to get Ids from btrfs mount point
		if mount.FsType == "btrfs" && mount.Major == 0 && strings.HasPrefix(mount.Source, "/dev/") {
			major, minor, err := getBtrfsMajorMinorIds(&mount)
			if err != nil {
				klog.Warningf("%s", err)

which executes a stat call on the non-existent /dev/mapper/xxx path inside the docker container, at
https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L736

// Get major and minor Ids for a mount point using btrfs as filesystem.
func getBtrfsMajorMinorIds(mount *mount.MountInfo) (int, int, error) {
	// btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
	// instead of using values from /proc/self/mountinfo we use stat to get Ids from btrfs mount point

	buf := new(syscall.Stat_t)
	err := syscall.Stat(mount.Source, buf)
	if err != nil {
		err = fmt.Errorf("stat failed on %s with error: %s", mount.Source, err)
		return 0, 0, err
	}

@medyagh
Member

medyagh commented Jul 1, 2020

btrfs is not currently supported by minikube; we test against the overlayfs driver. I would be happy to accept PRs that add btrfs support to minikube's inner docker setup.

@priyawadhwa priyawadhwa changed the title docker on btrfs: [kubelet-check] Initial timeout of 40s passed. docker driver: add support for btrfs Jul 22, 2020
@priyawadhwa priyawadhwa added kind/feature Categorizes issue or PR as related to a new feature. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Jul 22, 2020
@sharifelgamal sharifelgamal added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jul 27, 2020
@kppullin
Contributor

Using the hints from @marcominetti above, I observed that the device /dev/mapper/cryptroot did not exist inside the minikube container.

Comparing with my host, /dev/mapper/cryptroot is symlinked to /dev/dm-0, and dm-0 does exist inside the container. After creating this symlink inside the minikube container, I was able to start successfully.

@marcominetti

Thanks @kppullin... The following commands worked for me:

export MISSING_MOUNT_BIND=nvme0n1p3_crypt
docker exec -ti lss /bin/bash -c "ln -s /dev/dm-0 /dev/mapper/$MISSING_MOUNT_BIND"

I run it immediately after the logged step "Creating docker container (CPUs=4, Memory=16384MB) ..." has finished (a scripted version follows the output below):

minettim@nuc:~$ minikube start -p lss --cpus 4 --memory 16384
😄  [lss] minikube v1.13.1 on Ubuntu 20.04
✨  Automatically selected the docker driver
❗  docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍  Starting control plane node lss in cluster lss
🔥  Creating docker container (CPUs=4, Memory=16384MB) ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "lss" by default
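
Timing this by hand is fiddly, so the same workaround can be scripted (a rough sketch, assuming the profile is named lss and the missing mapping is nvme0n1p3_crypt as above):

# Start minikube in the background, then add the symlink as soon as the node container accepts exec
minikube start -p lss --cpus 4 --memory 16384 &
until docker exec lss true 2>/dev/null; do sleep 1; done
docker exec lss /bin/bash -c "ln -s /dev/dm-0 /dev/mapper/nvme0n1p3_crypt"
wait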

@medyagh
Member

medyagh commented May 17, 2021

this task is available for anyone who would like to pick it up

@spowelljr spowelljr modified the milestones: v1.21.0, 1.22.0-candidate May 27, 2021
@sharifelgamal sharifelgamal added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Jun 14, 2021
@sharifelgamal sharifelgamal removed this from the 1.22.0-candidate milestone Jun 14, 2021
@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Jul 14, 2021
@freebased

OS: openSUSE Leap 15.3 x86_64
Kernel: 5.3.18
Filesystem: Btrfs
Encryption: LUKS

I had two problems with minikube:

  1. "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 70 in cached partitions map"
  2. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Here's what I did to fix it:

  1. I set the overlay2 storage driver for docker (https://docs.docker.com/storage/storagedriver/overlayfs-driver/); see the daemon.json sketch below
  2. I made docker the default driver for minikube (minikube config set driver docker)
  3. I started minikube with the docker driver passed explicitly as an argument (minikube start --driver=docker)
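
For step 1, switching docker to overlay2 is typically done via /etc/docker/daemon.json (a minimal sketch; merge it with any existing settings, and note that changing the storage driver hides previously built images and containers until you switch back):

# Set the storage driver and restart docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
# Verify the active driver
docker info --format '{{.Driver}}'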

@ghost

ghost commented Sep 15, 2021

I just wanted to say that the suggestions by @kppullin, @medyagh and @marcoceppi worked for me. I can either symlink my volume:

minikube ssh "sudo ln -s /dev/dm-1 /dev/mapper/cryptroot"

Or run with:

minikube start --feature-gates="LocalStorageCapacityIsolation=false"

Based on comments by @marcoceppi, it appears this would need to be fixed in cAdvisor?

@marcominetti

@braderhart if you mean me by "marcoceppi", then yes. I think a good place to start would be cAdvisor, at least to avoid the exception; they were open to receiving a PR... I don't know whether the tentative workaround code for btrfs is still there now...

In any case, because our workaround is based on making the mapped devices visible inside the minikube container, the real fix might belong in minikube's own docker initialization code (creating the mapped symlinks to the devices, or better, bind-mounting the devices).
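
To illustrate the bind-mount idea, something along these lines could happen when the node container is created (only a sketch of the concept, not actual minikube code; the container name and image are placeholders):

# Hypothetical: expose the host's device-mapper nodes so kubelet/cAdvisor can stat the btrfs backing device
docker run -d --name minikube-node --privileged \
  -v /dev/mapper:/dev/mapper:ro \
  <kicbase-image>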

@ghost

ghost commented Sep 21, 2021

@marcominetti Do you have time to assist with this? I can confirm the solution you mentioned fixes the issue for me, where I have overlay2 set up on top of btrfs:

 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false

@marcominetti

Yes, sure, I'll try to delve into the minikube and cAdvisor code. Can someone here review and accept a potential PR against minikube?

@sharifelgamal
Collaborator

I'd be happy to review any PR that fixes this issue.

@spowelljr
Member

Kubernetes 1.23 will support btrfs

kubernetes/system-validators#26

@LeoniePhiline

It appears that Kubernetes 1.23 has been released: https://www.kubernetes.dev/resources/release/

As I run minikube v1.24.0, it seems that Kubernetes 1.22 is used. Is there a way to use Kubernetes 1.23 so I can use docker with btrfs? Or should I wait for minikube 1.25 to run with Kubernetes 1.23?

@ghost

ghost commented Dec 19, 2021

It appears that Kubernetes 1.23 has been released: https://www.kubernetes.dev/resources/release/

As I run minikube v1.24.0, it seems that Kubernetes 1.22 is used. Is there a way to use Kubernetes 1.23 so I can use docker with btrfs? Or should I wait for minikube 1.25 to run with Kubernetes 1.23?

minikube start --kubernetes-version=v1.23.0

@LeoniePhiline

I can confirm this works! (openSUSE tumbleweed, full disk encryption with cryptsetup / dm-crypt, btrfs)

❯ minikube start --extra-config=kubelet.cgroup-driver=systemd -v=5 --kubernetes-version=1.23.0
😄  minikube v1.24.0 on Opensuse-Tumbleweed 
✨  Using the docker driver based on existing profile
❗  docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...

🧯  Docker is nearly out of disk space, which may cause deployments to fail! (94% of capacity)
💡  Suggestion: 

    Try one or more of the following to free up space on the device:
    
    1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
    2. Increase the storage allocated to Docker for Desktop by clicking on:
    Docker icon > Preferences > Resources > Disk Image Size
    3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
🍿  Related issue: https://github.com/kubernetes/minikube/issues/9024

🐳  Preparing Kubernetes v1.23.0 on Docker 20.10.8...
    ▪ kubelet.cgroup-driver=systemd
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 44.42 MiB / 44.42 MiB [---------------] 100.00% 1.11 MiB p/s 40s
    > kubeadm: 43.11 MiB / 43.11 MiB [-----------] 100.00% 464.77 KiB p/s 1m35s
    > kubelet: 118.73 MiB / 118.73 MiB [---------] 100.00% 755.90 KiB p/s 2m41s
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

❤️

@spowelljr
Member

Glad to hear @LeoniePhiline, thanks for testing!

@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels May 4, 2022
@spowelljr
Member

I believe this has been fixed with k8s 1.23, so I'm going to close this. If it's not resolved, feel free to respond and I'll reopen the issue, thanks!
