Enable Ingress fails on Amazon Linux 2 #11400

Closed · UnknownGnome opened this issue May 13, 2021 · 2 comments

Comments

UnknownGnome commented May 13, 2021

Enabling the ingress addon fails on an Amazon Linux 2 EC2 instance running minikube v1.20.0.

Steps to reproduce the issue:

  1. minikube start
  2. minikube addons enable ingress
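
The two steps above can be captured in one pass (a minimal sketch; it assumes minikube v1.20.0 with the docker driver, which the log below shows being auto-selected):

  # fresh Amazon Linux 2 EC2 instance with Docker installed and running
  minikube start                    # auto-selects the docker driver
  minikube addons enable ingress    # this is the step that fails
  minikube logs > minikube.log      # full output attached below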

Full output of the minikube logs command:

* 
* ==> Audit <==
* |---------|------|----------|----------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile  |   User   | Version |          Start Time           |           End Time            |
|---------|------|----------|----------|---------|-------------------------------|-------------------------------|
| start   |      | minikube | ec2-user | v1.20.0 | Wed, 12 May 2021 22:58:19 UTC | Wed, 12 May 2021 23:00:19 UTC |
| logs    |      | minikube | ec2-user | v1.20.0 | Thu, 13 May 2021 14:36:23 UTC | Thu, 13 May 2021 14:36:25 UTC |
|---------|------|----------|----------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/05/12 22:58:19
Running on machine: ip-10-4-20-69
Binary: Built with gc go1.16.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0512 22:58:19.914560    4074 out.go:291] Setting OutFile to fd 1 ...
I0512 22:58:19.914780    4074 out.go:338] TERM=xterm,COLORTERM=, which probably does not support color
I0512 22:58:19.914785    4074 out.go:304] Setting ErrFile to fd 2...
I0512 22:58:19.914790    4074 out.go:338] TERM=xterm,COLORTERM=, which probably does not support color
I0512 22:58:19.914943    4074 root.go:316] Updating PATH: /home/ec2-user/.minikube/bin
W0512 22:58:19.915044    4074 root.go:291] Error reading config file at /home/ec2-user/.minikube/config/config.json: open /home/ec2-user/.minikube/config/config.json: no such file or directory
I0512 22:58:19.915268    4074 out.go:298] Setting JSON to false
I0512 22:58:19.916069    4074 start.go:108] hostinfo: {"hostname":"ip-10-4-20-69.test.local","uptime":121,"bootTime":1620860179,"procs":117,"os":"linux","platform":"amazon","platformFamily":"rhel","platformVersion":"2","kernelVersion":"4.14.231-173.361.amzn2.x86_64","kernelArch":"x86_64","virtualizationSystem":"xen","virtualizationRole":"guest","hostId":"ec2d0f46-985d-6ec5-013a-525e909aa54e"}
I0512 22:58:19.916136    4074 start.go:118] virtualization: xen guest
I0512 22:58:19.918408    4074 out.go:170] * minikube v1.20.0 on Amazon 2 (xen/amd64)
I0512 22:58:19.918565    4074 driver.go:322] Setting default libvirt URI to qemu:///system
I0512 22:58:19.918588    4074 global.go:103] Querying for installed drivers using PATH=/home/ec2-user/.minikube/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
I0512 22:58:19.918680    4074 notify.go:169] Checking for updates...
I0512 22:58:19.964612    4074 docker.go:119] docker version: linux-20.10.4
I0512 22:58:19.964675    4074 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0512 22:58:20.013094    4074 info.go:261] docker info: {ID:KVPK:QZ6C:R7GS:DSPA:NZZL:WBM4:ESEB:I6G6:NHP2:N2HD:F5N2:X3XC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-05-12 22:58:19.999275933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.14.231-173.361.amzn2.x86_64 OperatingSystem:Amazon Linux 2 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16818016256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-10-4-20-69.test.local Labels:[] ExperimentalBuild:false ServerVersion:20.10.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0512 22:58:20.013172    4074 docker.go:225] overlay module found
I0512 22:58:20.013184    4074 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0512 22:58:20.013248    4074 global.go:111] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I0512 22:58:20.023126    4074 global.go:111] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0512 22:58:20.023185    4074 global.go:111] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0512 22:58:20.023198    4074 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0512 22:58:20.023235    4074 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0512 22:58:20.023263    4074 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0512 22:58:20.023280    4074 driver.go:258] not recommending "none" due to default: false
I0512 22:58:20.023287    4074 driver.go:258] not recommending "ssh" due to default: false
I0512 22:58:20.023299    4074 driver.go:292] Picked: docker
I0512 22:58:20.023306    4074 driver.go:293] Alternatives: [none ssh]
I0512 22:58:20.023312    4074 driver.go:294] Rejects: [kvm2 podman virtualbox vmware]
I0512 22:58:20.025162    4074 out.go:170] * Automatically selected the docker driver. Other choices: none, ssh
I0512 22:58:20.025201    4074 start.go:276] selected driver: docker
I0512 22:58:20.025207    4074 start.go:718] validating driver "docker" against <nil>
I0512 22:58:20.025223    4074 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0512 22:58:20.025298    4074 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0512 22:58:20.072564    4074 info.go:261] docker info: {ID:KVPK:QZ6C:R7GS:DSPA:NZZL:WBM4:ESEB:I6G6:NHP2:N2HD:F5N2:X3XC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-05-12 22:58:20.060052209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.14.231-173.361.amzn2.x86_64 OperatingSystem:Amazon Linux 2 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16818016256 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-10-4-20-69.test.local Labels:[] ExperimentalBuild:false ServerVersion:20.10.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0512 22:58:20.072698    4074 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
I0512 22:58:20.073021    4074 start_flags.go:314] Using suggested 4000MB memory alloc based on sys=16038MB, container=16038MB
I0512 22:58:20.073164    4074 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true]
I0512 22:58:20.073180    4074 cni.go:93] Creating CNI manager for ""
I0512 22:58:20.073188    4074 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0512 22:58:20.073195    4074 start_flags.go:273] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0512 22:58:20.075049    4074 out.go:170] * Starting control plane node minikube in cluster minikube
I0512 22:58:20.075092    4074 cache.go:111] Beginning downloading kic base image for docker with docker
W0512 22:58:20.075100    4074 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string
W0512 22:58:20.075118    4074 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string
I0512 22:58:20.076418    4074 out.go:170] * Pulling base image ...
I0512 22:58:20.076454    4074 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0512 22:58:20.076564    4074 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
I0512 22:58:20.076577    4074 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache
I0512 22:58:20.076628    4074 image.go:192] Writing gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache
I0512 22:58:22.066976    4074 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0512 22:58:22.066992    4074 cache.go:54] Caching tarball of preloaded images
I0512 22:58:22.067016    4074 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0512 22:58:22.131159    4074 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0512 22:58:22.133252    4074 out.go:170] * Downloading Kubernetes v1.20.2 preload ...
I0512 22:58:22.133286    4074 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
I0512 22:58:24.250676    4074 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4?checksum=md5:91e6984243eafcd2b938c7edbc7b7ef6 -> /home/ec2-user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0512 22:58:32.300171    4074 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e as a tarball
I0512 22:58:32.300306    4074 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
I0512 22:58:32.336787    4074 cache.go:160] Downloading gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local daemon
I0512 22:58:32.336818    4074 image.go:250] Writing gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local daemon
I0512 22:58:36.779401    4074 preload.go:206] saving checksum for preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
I0512 22:58:38.897241    4074 preload.go:218] verifying checksumm of /home/ec2-user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 ...
I0512 22:58:40.084342    4074 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
I0512 22:58:40.084646    4074 profile.go:148] Saving config to /home/ec2-user/.minikube/profiles/minikube/config.json ...
I0512 22:58:40.084671    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/config.json: {Name:mk5f377c42c55cd70f48bdaa1ac96608178ba470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:15.269983    4074 cache.go:163] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
I0512 22:59:15.269999    4074 cache.go:194] Successfully downloaded all kic artifacts
I0512 22:59:15.270026    4074 start.go:313] acquiring machines lock for minikube: {Name:mke39e9c84c97a4cc79f46eddd6372cb206cf268 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0512 22:59:15.270133    4074 start.go:317] acquired machines lock for "minikube" in 88.66µs
I0512 22:59:15.270153    4074 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0512 22:59:15.270229    4074 start.go:126] createHost starting for "" (driver="docker")
I0512 22:59:15.271910    4074 out.go:197] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0512 22:59:15.272170    4074 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0512 22:59:15.272200    4074 client.go:168] LocalClient.Create starting
I0512 22:59:15.272276    4074 main.go:128] libmachine: Creating CA: /home/ec2-user/.minikube/certs/ca.pem
I0512 22:59:15.724756    4074 main.go:128] libmachine: Creating client certificate: /home/ec2-user/.minikube/certs/cert.pem
I0512 22:59:16.052352    4074 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0512 22:59:16.089634    4074 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0512 22:59:16.089686    4074 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0512 22:59:16.089707    4074 cli_runner.go:115] Run: docker network inspect minikube
W0512 22:59:16.125695    4074 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0512 22:59:16.125716    4074 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0512 22:59:16.125729    4074 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0512 22:59:16.125769    4074 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 22:59:16.162744    4074 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000658580] misses:0}
I0512 22:59:16.162795    4074 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0512 22:59:16.162844    4074 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0512 22:59:16.162891    4074 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0512 22:59:16.244334    4074 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0512 22:59:16.244352    4074 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0512 22:59:16.244405    4074 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0512 22:59:16.280973    4074 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0512 22:59:16.319651    4074 oci.go:102] Successfully created a docker volume minikube
I0512 22:59:16.319707    4074 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
I0512 22:59:23.085384    4074 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: (6.765633448s)
I0512 22:59:23.085403    4074 oci.go:106] Successfully prepared a docker volume minikube
W0512 22:59:23.085437    4074 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0512 22:59:23.085445    4074 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0512 22:59:23.085453    4074 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0512 22:59:23.085489    4074 preload.go:106] Found local preload: /home/ec2-user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0512 22:59:23.085496    4074 kic.go:179] Starting extracting preloaded images to volume ...
I0512 22:59:23.085500    4074 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0512 22:59:23.085534    4074 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/ec2-user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
I0512 22:59:23.139307    4074 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
I0512 22:59:24.190066    4074 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e: (1.050704098s)
I0512 22:59:24.190122    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0512 22:59:24.233601    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0512 22:59:24.276430    4074 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0512 22:59:24.386929    4074 oci.go:278] the created container "minikube" has a running status.
I0512 22:59:24.386948    4074 kic.go:210] Creating ssh key for kic: /home/ec2-user/.minikube/machines/minikube/id_rsa...
I0512 22:59:24.659883    4074 kic_runner.go:188] docker (temp): /home/ec2-user/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0512 22:59:25.040788    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0512 22:59:25.078863    4074 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0512 22:59:25.078875    4074 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0512 22:59:35.084164    4074 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/ec2-user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (11.998590446s)
I0512 22:59:35.084184    4074 kic.go:188] duration metric: took 11.998685 seconds to extract preloaded images to volume
I0512 22:59:35.084242    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0512 22:59:35.127491    4074 machine.go:88] provisioning docker machine ...
I0512 22:59:35.127522    4074 ubuntu.go:169] provisioning hostname "minikube"
I0512 22:59:35.127571    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:35.170229    4074 main.go:128] libmachine: Using SSH client type: native
I0512 22:59:35.170457    4074 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
I0512 22:59:35.170470    4074 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0512 22:59:35.299707    4074 main.go:128] libmachine: SSH cmd err, output: <nil>: minikube

I0512 22:59:35.299774    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:35.338201    4074 main.go:128] libmachine: Using SSH client type: native
I0512 22:59:35.338382    4074 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
I0512 22:59:35.338401    4074 main.go:128] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0512 22:59:35.454331    4074 main.go:128] libmachine: SSH cmd err, output: <nil>: 
I0512 22:59:35.454349    4074 ubuntu.go:175] set auth options {CertDir:/home/ec2-user/.minikube CaCertPath:/home/ec2-user/.minikube/certs/ca.pem CaPrivateKeyPath:/home/ec2-user/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/ec2-user/.minikube/machines/server.pem ServerKeyPath:/home/ec2-user/.minikube/machines/server-key.pem ClientKeyPath:/home/ec2-user/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/ec2-user/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/ec2-user/.minikube}
I0512 22:59:35.454370    4074 ubuntu.go:177] setting up certificates
I0512 22:59:35.454386    4074 provision.go:83] configureAuth start
I0512 22:59:35.454437    4074 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0512 22:59:35.492320    4074 provision.go:137] copyHostCerts
I0512 22:59:35.492372    4074 exec_runner.go:152] cp: /home/ec2-user/.minikube/certs/key.pem --> /home/ec2-user/.minikube/key.pem (1679 bytes)
I0512 22:59:35.492494    4074 exec_runner.go:152] cp: /home/ec2-user/.minikube/certs/ca.pem --> /home/ec2-user/.minikube/ca.pem (1082 bytes)
I0512 22:59:35.492577    4074 exec_runner.go:152] cp: /home/ec2-user/.minikube/certs/cert.pem --> /home/ec2-user/.minikube/cert.pem (1127 bytes)
I0512 22:59:35.492658    4074 provision.go:111] generating server cert: /home/ec2-user/.minikube/machines/server.pem ca-key=/home/ec2-user/.minikube/certs/ca.pem private-key=/home/ec2-user/.minikube/certs/ca-key.pem org=ec2-user.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0512 22:59:36.035652    4074 provision.go:165] copyRemoteCerts
I0512 22:59:36.035697    4074 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0512 22:59:36.035742    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:36.073146    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 22:59:36.157965    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0512 22:59:36.176192    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0512 22:59:36.194009    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0512 22:59:36.212167    4074 provision.go:86] duration metric: configureAuth took 757.769397ms
I0512 22:59:36.212181    4074 ubuntu.go:193] setting minikube options for container-runtime
I0512 22:59:36.212368    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:36.250188    4074 main.go:128] libmachine: Using SSH client type: native
I0512 22:59:36.250382    4074 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
I0512 22:59:36.250393    4074 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0512 22:59:36.366293    4074 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay

I0512 22:59:36.366325    4074 ubuntu.go:71] root file system type: overlay
I0512 22:59:36.366554    4074 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0512 22:59:36.366610    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:36.402750    4074 main.go:128] libmachine: Using SSH client type: native
I0512 22:59:36.402952    4074 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
I0512 22:59:36.403039    4074 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0512 22:59:36.527659    4074 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0512 22:59:36.527714    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:36.565202    4074 main.go:128] libmachine: Using SSH client type: native
I0512 22:59:36.565403    4074 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 49157 <nil> <nil>}
I0512 22:59:36.565423    4074 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0512 22:59:37.316167    4074 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-04-09 22:45:28.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2021-05-12 22:59:36.520100530 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0512 22:59:37.316193    4074 machine.go:91] provisioned docker machine in 2.188687623s
I0512 22:59:37.316204    4074 client.go:171] LocalClient.Create took 22.043997583s
I0512 22:59:37.316215    4074 start.go:168] duration metric: libmachine.API.Create for "minikube" took 22.044045069s
I0512 22:59:37.316222    4074 start.go:267] post-start starting for "minikube" (driver="docker")
I0512 22:59:37.316228    4074 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0512 22:59:37.316270    4074 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0512 22:59:37.316309    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:37.360863    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 22:59:37.446276    4074 ssh_runner.go:149] Run: cat /etc/os-release
I0512 22:59:37.448950    4074 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0512 22:59:37.448966    4074 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0512 22:59:37.448977    4074 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0512 22:59:37.448984    4074 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0512 22:59:37.448993    4074 filesync.go:118] Scanning /home/ec2-user/.minikube/addons for local assets ...
I0512 22:59:37.449033    4074 filesync.go:118] Scanning /home/ec2-user/.minikube/files for local assets ...
I0512 22:59:37.449054    4074 start.go:270] post-start completed in 132.824617ms
I0512 22:59:37.449329    4074 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0512 22:59:37.486031    4074 profile.go:148] Saving config to /home/ec2-user/.minikube/profiles/minikube/config.json ...
I0512 22:59:37.486237    4074 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0512 22:59:37.486277    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:37.523090    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 22:59:37.607025    4074 start.go:129] duration metric: createHost completed in 22.336781308s
I0512 22:59:37.607041    4074 start.go:80] releasing machines lock for "minikube", held for 22.336897913s
I0512 22:59:37.607112    4074 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0512 22:59:37.644024    4074 ssh_runner.go:149] Run: systemctl --version
I0512 22:59:37.644075    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:37.644133    4074 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0512 22:59:37.644183    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 22:59:37.683046    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 22:59:37.684357    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 22:59:41.789484    4074 ssh_runner.go:189] Completed: systemctl --version: (4.145425366s)
I0512 22:59:41.789536    4074 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0512 22:59:41.789600    4074 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (4.145446171s)
W0512 22:59:41.789624    4074 start.go:637] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Resolving timed out after 2000 milliseconds
W0512 22:59:41.789750    4074 out.go:235] ! This container is having trouble accessing https://k8s.gcr.io
W0512 22:59:41.789779    4074 out.go:424] no arguments passed for "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/\n" - returning raw string
W0512 22:59:41.789813    4074 out.go:235] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
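
The curl timeout above suggests name resolution is failing inside the minikube container, which would also block pulling the ingress controller image. As a sketch of how to confirm this from the host (the proxy endpoint below is a hypothetical placeholder for whatever this VPC actually provides):

  minikube ssh -- curl -sS -m 5 https://k8s.gcr.io/     # should reproduce the resolution timeout
  export HTTPS_PROXY=http://proxy.example.internal:3128 # hypothetical proxy, per the doc link above
  minikube delete && minikube start                     # retry with the proxy exported
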
I0512 22:59:41.799066    4074 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0512 22:59:41.808093    4074 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0512 22:59:41.808130    4074 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0512 22:59:41.817309    4074 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0512 22:59:41.829679    4074 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0512 22:59:41.920039    4074 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0512 22:59:42.013166    4074 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0512 22:59:42.022620    4074 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0512 22:59:42.108345    4074 ssh_runner.go:149] Run: sudo systemctl start docker
I0512 22:59:42.118023    4074 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0512 22:59:42.172060    4074 out.go:197] * Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
I0512 22:59:42.172143    4074 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0512 22:59:42.208317    4074 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
I0512 22:59:42.211555    4074 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0512 22:59:42.221081    4074 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0512 22:59:42.221103    4074 preload.go:106] Found local preload: /home/ec2-user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0512 22:59:42.221140    4074 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0512 22:59:42.264510    4074 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0512 22:59:42.264526    4074 docker.go:465] Images already preloaded, skipping extraction
I0512 22:59:42.264563    4074 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0512 22:59:42.307952    4074 docker.go:528] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0512 22:59:42.307968    4074 cache_images.go:74] Images are preloaded, skipping loading
I0512 22:59:42.308009    4074 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0512 22:59:42.398245    4074 cni.go:93] Creating CNI manager for ""
I0512 22:59:42.398259    4074 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0512 22:59:42.398267    4074 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0512 22:59:42.398280    4074 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0512 22:59:42.398417    4074 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0512 22:59:42.398509    4074 kubeadm.go:901] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0512 22:59:42.398553    4074 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0512 22:59:42.405843    4074 binaries.go:44] Found k8s binaries, skipping transfer
I0512 22:59:42.405877    4074 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0512 22:59:42.412665    4074 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0512 22:59:42.426145    4074 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0512 22:59:42.439635    4074 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0512 22:59:42.453051    4074 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0512 22:59:42.455746    4074 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0512 22:59:42.464634    4074 certs.go:52] Setting up /home/ec2-user/.minikube/profiles/minikube for IP: 192.168.49.2
I0512 22:59:42.464674    4074 certs.go:175] generating minikubeCA CA: /home/ec2-user/.minikube/ca.key
I0512 22:59:42.892888    4074 crypto.go:157] Writing cert to /home/ec2-user/.minikube/ca.crt ...
I0512 22:59:42.892905    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/ca.crt: {Name:mkb08f59e516b0d424ad13a91ccb2dacf7c7277f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:42.893135    4074 crypto.go:165] Writing key to /home/ec2-user/.minikube/ca.key ...
I0512 22:59:42.893145    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/ca.key: {Name:mk23eace0344495582ff3ffc34ef64c5e2823c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:42.893305    4074 certs.go:175] generating proxyClientCA CA: /home/ec2-user/.minikube/proxy-client-ca.key
I0512 22:59:43.106443    4074 crypto.go:157] Writing cert to /home/ec2-user/.minikube/proxy-client-ca.crt ...
I0512 22:59:43.106460    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/proxy-client-ca.crt: {Name:mk9a8c725ee9c1b19a2220eb76ac32d84e5f9db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:43.106676    4074 crypto.go:165] Writing key to /home/ec2-user/.minikube/proxy-client-ca.key ...
I0512 22:59:43.106685    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/proxy-client-ca.key: {Name:mk48b97e177eed8b45b2497b4a5355eb8b10afbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:43.106875    4074 certs.go:286] generating minikube-user signed cert: /home/ec2-user/.minikube/profiles/minikube/client.key
I0512 22:59:43.106882    4074 crypto.go:69] Generating cert /home/ec2-user/.minikube/profiles/minikube/client.crt with IP's: []
I0512 22:59:43.343250    4074 crypto.go:157] Writing cert to /home/ec2-user/.minikube/profiles/minikube/client.crt ...
I0512 22:59:43.343267    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/client.crt: {Name:mk31d488c08ab9e06d36b37d93edbcda9c61de3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:43.343503    4074 crypto.go:165] Writing key to /home/ec2-user/.minikube/profiles/minikube/client.key ...
I0512 22:59:43.343512    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/client.key: {Name:mk88f1e613013ac73655cb16d65fe871e5b8acbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:43.343650    4074 certs.go:286] generating minikube signed cert: /home/ec2-user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0512 22:59:43.343657    4074 crypto.go:69] Generating cert /home/ec2-user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0512 22:59:44.119044    4074 crypto.go:157] Writing cert to /home/ec2-user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0512 22:59:44.119063    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk24f35a223fa4cb37c924b7d68ca0b2c9ed3e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:44.119290    4074 crypto.go:165] Writing key to /home/ec2-user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0512 22:59:44.119300    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk0469b69ac98628e968cb887fea293fdd344fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:44.119448    4074 certs.go:297] copying /home/ec2-user/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/ec2-user/.minikube/profiles/minikube/apiserver.crt
I0512 22:59:44.119525    4074 certs.go:301] copying /home/ec2-user/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/ec2-user/.minikube/profiles/minikube/apiserver.key
I0512 22:59:44.119600    4074 certs.go:286] generating aggregator signed cert: /home/ec2-user/.minikube/profiles/minikube/proxy-client.key
I0512 22:59:44.119609    4074 crypto.go:69] Generating cert /home/ec2-user/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0512 22:59:44.249233    4074 crypto.go:157] Writing cert to /home/ec2-user/.minikube/profiles/minikube/proxy-client.crt ...
I0512 22:59:44.249251    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/proxy-client.crt: {Name:mkf0813785b02e8c2f81710723758be1ce3495c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:44.249487    4074 crypto.go:165] Writing key to /home/ec2-user/.minikube/profiles/minikube/proxy-client.key ...
I0512 22:59:44.249496    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.minikube/profiles/minikube/proxy-client.key: {Name:mk6371d46da52852a10e5d36c47a6130f62459d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 22:59:44.249722    4074 certs.go:361] found cert: /home/ec2-user/.minikube/certs/home/ec2-user/.minikube/certs/ca-key.pem (1675 bytes)
I0512 22:59:44.249760    4074 certs.go:361] found cert: /home/ec2-user/.minikube/certs/home/ec2-user/.minikube/certs/ca.pem (1082 bytes)
I0512 22:59:44.249792    4074 certs.go:361] found cert: /home/ec2-user/.minikube/certs/home/ec2-user/.minikube/certs/cert.pem (1127 bytes)
I0512 22:59:44.249824    4074 certs.go:361] found cert: /home/ec2-user/.minikube/certs/home/ec2-user/.minikube/certs/key.pem (1679 bytes)
I0512 22:59:44.251015    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0512 22:59:44.270418    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0512 22:59:44.288416    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0512 22:59:44.307083    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0512 22:59:44.325309    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0512 22:59:44.343967    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0512 22:59:44.362217    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0512 22:59:44.380220    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0512 22:59:44.397935    4074 ssh_runner.go:316] scp /home/ec2-user/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0512 22:59:44.415903    4074 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0512 22:59:44.429067    4074 ssh_runner.go:149] Run: openssl version
I0512 22:59:44.433845    4074 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0512 22:59:44.441219    4074 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0512 22:59:44.444151    4074 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 12 22:59 /usr/share/ca-certificates/minikubeCA.pem
I0512 22:59:44.444179    4074 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0512 22:59:44.448940    4074 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0512 22:59:44.455980    4074 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0512 22:59:44.456060    4074 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0512 22:59:44.496637    4074 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0512 22:59:44.503851    4074 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0512 22:59:44.510591    4074 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0512 22:59:44.510617    4074 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0512 22:59:44.517187    4074 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0512 22:59:44.517213    4074 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
W0512 23:00:08.228550    4074 out.go:424] no arguments passed for "  - Generating certificates and keys ..." - returning raw string
W0512 23:00:08.228572    4074 out.go:424] no arguments passed for "  - Generating certificates and keys ..." - returning raw string
I0512 23:00:08.230129    4074 out.go:197]   - Generating certificates and keys ...
W0512 23:00:08.232645    4074 out.go:424] no arguments passed for "  - Booting up control plane ..." - returning raw string
W0512 23:00:08.232657    4074 out.go:424] no arguments passed for "  - Booting up control plane ..." - returning raw string
I0512 23:00:08.234161    4074 out.go:197]   - Booting up control plane ...
W0512 23:00:08.236723    4074 out.go:424] no arguments passed for "  - Configuring RBAC rules ..." - returning raw string
W0512 23:00:08.236734    4074 out.go:424] no arguments passed for "  - Configuring RBAC rules ..." - returning raw string
I0512 23:00:08.238352    4074 out.go:197]   - Configuring RBAC rules ...
I0512 23:00:08.241604    4074 cni.go:93] Creating CNI manager for ""
I0512 23:00:08.241614    4074 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0512 23:00:08.241640    4074 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0512 23:00:08.241699    4074 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0512 23:00:08.241706    4074 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_05_12T23_00_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0512 23:00:08.262508    4074 ops.go:34] apiserver oom_adj: -16
I0512 23:00:08.328894    4074 kubeadm.go:977] duration metric: took 87.233131ms to wait for elevateKubeSystemPrivileges.
I0512 23:00:08.495113    4074 kubeadm.go:383] StartCluster complete in 24.039130492s
I0512 23:00:08.495135    4074 settings.go:142] acquiring lock: {Name:mk9bab7919fc13e4b960d13c91e47d3fae6c575e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 23:00:08.495256    4074 settings.go:150] Updating kubeconfig:  /home/ec2-user/.kube/config
I0512 23:00:08.495806    4074 lock.go:36] WriteFile acquiring /home/ec2-user/.kube/config: {Name:mk1d30f2bcdedcd5224458206a60e0bb002d6810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 23:00:09.013680    4074 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0512 23:00:09.013711    4074 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
W0512 23:00:09.013745    4074 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
W0512 23:00:09.013756    4074 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
I0512 23:00:09.015290    4074 out.go:170] * Verifying Kubernetes components...
I0512 23:00:09.013890    4074 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0512 23:00:09.015362    4074 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0512 23:00:09.015386    4074 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0512 23:00:09.015401    4074 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0512 23:00:09.015407    4074 addons.go:140] addon storage-provisioner should already be in state true
I0512 23:00:09.015420    4074 host.go:66] Checking if "minikube" exists ...
I0512 23:00:09.015444    4074 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0512 23:00:09.015456    4074 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0512 23:00:09.015756    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0512 23:00:09.015933    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0512 23:00:09.028438    4074 api_server.go:50] waiting for apiserver process to appear ...
I0512 23:00:09.028471    4074 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0512 23:00:09.049907    4074 api_server.go:70] duration metric: took 36.159926ms to wait for apiserver process to appear ...
I0512 23:00:09.049922    4074 api_server.go:86] waiting for apiserver healthz status ...
I0512 23:00:09.049935    4074 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0512 23:00:09.063141    4074 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
ok
I0512 23:00:09.065456    4074 api_server.go:139] control plane version: v1.20.2
I0512 23:00:09.065470    4074 api_server.go:129] duration metric: took 15.539962ms to wait for apiserver health ...
I0512 23:00:09.065481    4074 system_pods.go:43] waiting for kube-system pods to appear ...
I0512 23:00:09.074622    4074 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0512 23:00:09.074740    4074 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0512 23:00:09.074749    4074 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0512 23:00:09.073826    4074 system_pods.go:59] 0 kube-system pods found
I0512 23:00:09.074796    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 23:00:09.074806    4074 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0512 23:00:09.078564    4074 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0512 23:00:09.078573    4074 addons.go:140] addon default-storageclass should already be in state true
I0512 23:00:09.078586    4074 host.go:66] Checking if "minikube" exists ...
I0512 23:00:09.078991    4074 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0512 23:00:09.117880    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 23:00:09.119328    4074 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0512 23:00:09.119339    4074 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0512 23:00:09.119380    4074 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0512 23:00:09.159633    4074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/ec2-user/.minikube/machines/minikube/id_rsa Username:docker}
I0512 23:00:09.209820    4074 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0512 23:00:09.255576    4074 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0512 23:00:09.340659    4074 system_pods.go:59] 0 kube-system pods found
I0512 23:00:09.340678    4074 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0512 23:00:09.553669    4074 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0512 23:00:09.553697    4074 addons.go:330] enableAddons completed in 539.817357ms
I0512 23:00:09.725828    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:09.725854    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:09.725865    4074 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
I0512 23:00:10.151872    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:10.151894    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:10.151905    4074 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0512 23:00:10.627985    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:10.628007    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:10.628018    4074 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0512 23:00:11.218545    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:11.218567    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:11.218577    4074 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0512 23:00:12.055979    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:12.056000    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:12.056011    4074 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0512 23:00:12.806239    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:12.806261    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:12.806272    4074 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0512 23:00:13.797010    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:13.797031    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:13.797045    4074 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0512 23:00:14.990679    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:14.990701    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:14.990715    4074 retry.go:31] will retry after 1.677229867s: only 1 pod(s) have shown up
I0512 23:00:16.671590    4074 system_pods.go:59] 1 kube-system pods found
I0512 23:00:16.671612    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:16.671622    4074 retry.go:31] will retry after 2.346016261s: only 1 pod(s) have shown up
I0512 23:00:19.021885    4074 system_pods.go:59] 5 kube-system pods found
I0512 23:00:19.021903    4074 system_pods.go:61] "etcd-minikube" [c3393489-527f-4dbc-86ed-95b317ebfa58] Pending
I0512 23:00:19.021909    4074 system_pods.go:61] "kube-apiserver-minikube" [18608649-dcda-4e5e-aeda-38419882f8a1] Pending
I0512 23:00:19.021918    4074 system_pods.go:61] "kube-controller-manager-minikube" [67439043-6234-4549-a1d7-0dc28c44223e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0512 23:00:19.021927    4074 system_pods.go:61] "kube-scheduler-minikube" [81be8be6-75be-41f9-bc39-47b3f7c19f86] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0512 23:00:19.021937    4074 system_pods.go:61] "storage-provisioner" [129649ea-fc34-42c3-943f-71b083401bfc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0512 23:00:19.021945    4074 system_pods.go:74] duration metric: took 9.956457451s to wait for pod list to return data ...
I0512 23:00:19.021955    4074 kubeadm.go:538] duration metric: took 10.008213095s to wait for : map[apiserver:true system_pods:true] ...
I0512 23:00:19.021969    4074 node_conditions.go:102] verifying NodePressure condition ...
I0512 23:00:19.025043    4074 node_conditions.go:122] node storage ephemeral capacity is 8376300Ki
I0512 23:00:19.025057    4074 node_conditions.go:123] node cpu capacity is 4
I0512 23:00:19.025069    4074 node_conditions.go:105] duration metric: took 3.094249ms to run NodePressure ...
I0512 23:00:19.025077    4074 start.go:206] waiting for startup goroutines ...
I0512 23:00:19.078947    4074 start.go:460] kubectl: 1.21.1, cluster: 1.20.2 (minor skew: 1)
I0512 23:00:19.080922    4074 out.go:170] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
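
The start itself completes cleanly, so the failure is isolated to the addon step. If it helps, the failing command can be re-run with minikube's standard verbosity flags to capture more context around the failure (a sketch; both flags are global minikube flags):

  # re-run the failing step with logs mirrored to stderr
  minikube addons enable ingress --alsologtostderr -v=5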

* 
* ==> Docker <==
* -- Logs begin at Wed 2021-05-12 22:59:32 UTC, end at Thu 2021-05-13 14:36:40 UTC. --
May 13 13:46:40 minikube dockerd[457]: time="2021-05-13T13:46:40.857799418Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:46:40 minikube dockerd[457]: time="2021-05-13T13:46:40.857858380Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:46:40 minikube dockerd[457]: time="2021-05-13T13:46:40.860153959Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:46:45 minikube dockerd[457]: time="2021-05-13T13:46:45.866949790Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:50788->192.168.49.1:53: i/o timeout"
May 13 13:46:45 minikube dockerd[457]: time="2021-05-13T13:46:45.866994448Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:50788->192.168.49.1:53: i/o timeout"
May 13 13:46:45 minikube dockerd[457]: time="2021-05-13T13:46:45.869441152Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:50788->192.168.49.1:53: i/o timeout"
May 13 13:52:01 minikube dockerd[457]: time="2021-05-13T13:52:01.858225882Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:52:01 minikube dockerd[457]: time="2021-05-13T13:52:01.858267358Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:52:01 minikube dockerd[457]: time="2021-05-13T13:52:01.860611757Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:52:06 minikube dockerd[457]: time="2021-05-13T13:52:06.860856961Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:52921->192.168.49.1:53: i/o timeout"
May 13 13:52:06 minikube dockerd[457]: time="2021-05-13T13:52:06.860910841Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:52921->192.168.49.1:53: i/o timeout"
May 13 13:52:06 minikube dockerd[457]: time="2021-05-13T13:52:06.863886872Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:52921->192.168.49.1:53: i/o timeout"
May 13 13:57:28 minikube dockerd[457]: time="2021-05-13T13:57:28.859121664Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:57:28 minikube dockerd[457]: time="2021-05-13T13:57:28.859173466Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:57:28 minikube dockerd[457]: time="2021-05-13T13:57:28.861413200Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 13:57:33 minikube dockerd[457]: time="2021-05-13T13:57:33.868859411Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:42865->192.168.49.1:53: i/o timeout"
May 13 13:57:33 minikube dockerd[457]: time="2021-05-13T13:57:33.868901528Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:42865->192.168.49.1:53: i/o timeout"
May 13 13:57:33 minikube dockerd[457]: time="2021-05-13T13:57:33.871390145Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:42865->192.168.49.1:53: i/o timeout"
May 13 14:02:53 minikube dockerd[457]: time="2021-05-13T14:02:53.858008041Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:02:53 minikube dockerd[457]: time="2021-05-13T14:02:53.858054603Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:02:53 minikube dockerd[457]: time="2021-05-13T14:02:53.860415153Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:02:58 minikube dockerd[457]: time="2021-05-13T14:02:58.860814947Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:36448->192.168.49.1:53: i/o timeout"
May 13 14:02:58 minikube dockerd[457]: time="2021-05-13T14:02:58.860863520Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:36448->192.168.49.1:53: i/o timeout"
May 13 14:02:58 minikube dockerd[457]: time="2021-05-13T14:02:58.863181367Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:36448->192.168.49.1:53: i/o timeout"
May 13 14:08:14 minikube dockerd[457]: time="2021-05-13T14:08:14.858318113Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:08:14 minikube dockerd[457]: time="2021-05-13T14:08:14.858374360Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:08:14 minikube dockerd[457]: time="2021-05-13T14:08:14.860491160Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:08:19 minikube dockerd[457]: time="2021-05-13T14:08:19.861308257Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:42301->192.168.49.1:53: i/o timeout"
May 13 14:08:19 minikube dockerd[457]: time="2021-05-13T14:08:19.861356044Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:42301->192.168.49.1:53: i/o timeout"
May 13 14:08:19 minikube dockerd[457]: time="2021-05-13T14:08:19.864746668Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:42301->192.168.49.1:53: i/o timeout"
May 13 14:13:34 minikube dockerd[457]: time="2021-05-13T14:13:34.858359000Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:13:34 minikube dockerd[457]: time="2021-05-13T14:13:34.858415034Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:13:34 minikube dockerd[457]: time="2021-05-13T14:13:34.860784068Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:13:39 minikube dockerd[457]: time="2021-05-13T14:13:39.861513336Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:58782->192.168.49.1:53: i/o timeout"
May 13 14:13:39 minikube dockerd[457]: time="2021-05-13T14:13:39.861554228Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:58782->192.168.49.1:53: i/o timeout"
May 13 14:13:39 minikube dockerd[457]: time="2021-05-13T14:13:39.864101189Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:58782->192.168.49.1:53: i/o timeout"
May 13 14:19:00 minikube dockerd[457]: time="2021-05-13T14:19:00.858227590Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:19:00 minikube dockerd[457]: time="2021-05-13T14:19:00.858282071Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:19:00 minikube dockerd[457]: time="2021-05-13T14:19:00.861749633Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:19:05 minikube dockerd[457]: time="2021-05-13T14:19:05.861161699Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:40045->192.168.49.1:53: i/o timeout"
May 13 14:19:05 minikube dockerd[457]: time="2021-05-13T14:19:05.861209040Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:40045->192.168.49.1:53: i/o timeout"
May 13 14:19:05 minikube dockerd[457]: time="2021-05-13T14:19:05.863611505Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:40045->192.168.49.1:53: i/o timeout"
May 13 14:24:22 minikube dockerd[457]: time="2021-05-13T14:24:22.858485333Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:24:22 minikube dockerd[457]: time="2021-05-13T14:24:22.858535668Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:24:22 minikube dockerd[457]: time="2021-05-13T14:24:22.860738717Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:24:27 minikube dockerd[457]: time="2021-05-13T14:24:27.860851916Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:53844->192.168.49.1:53: i/o timeout"
May 13 14:24:27 minikube dockerd[457]: time="2021-05-13T14:24:27.860891800Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:53844->192.168.49.1:53: i/o timeout"
May 13 14:24:27 minikube dockerd[457]: time="2021-05-13T14:24:27.863718080Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:53844->192.168.49.1:53: i/o timeout"
May 13 14:29:47 minikube dockerd[457]: time="2021-05-13T14:29:47.858316707Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:29:47 minikube dockerd[457]: time="2021-05-13T14:29:47.858358778Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:29:47 minikube dockerd[457]: time="2021-05-13T14:29:47.863729046Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:29:52 minikube dockerd[457]: time="2021-05-13T14:29:52.860818092Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35666->192.168.49.1:53: i/o timeout"
May 13 14:29:52 minikube dockerd[457]: time="2021-05-13T14:29:52.860899829Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35666->192.168.49.1:53: i/o timeout"
May 13 14:29:52 minikube dockerd[457]: time="2021-05-13T14:29:52.863076864Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:35666->192.168.49.1:53: i/o timeout"
May 13 14:35:14 minikube dockerd[457]: time="2021-05-13T14:35:14.859246466Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:35:14 minikube dockerd[457]: time="2021-05-13T14:35:14.859299516Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:35:14 minikube dockerd[457]: time="2021-05-13T14:35:14.861558468Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:35:19 minikube dockerd[457]: time="2021-05-13T14:35:19.861808029Z" level=warning msg="Error getting v2 registry: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout"
May 13 14:35:19 minikube dockerd[457]: time="2021-05-13T14:35:19.861850472Z" level=info msg="Attempting next endpoint for pull after error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout"
May 13 14:35:19 minikube dockerd[457]: time="2021-05-13T14:35:19.864557883Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout"
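
Every pull attempt above fails the same way: the DNS lookup of registry-1.docker.io against 192.168.49.1:53 (the host-side gateway of the minikube docker network) times out, so the ingress images can never be fetched. A quick way to confirm the resolver is unreachable from inside the node (a sketch, run from the EC2 host; nslookup may not be installed in the kicbase image, in which case getent is a reasonable substitute):

  # resolve from inside the minikube container
  docker exec minikube nslookup registry-1.docker.io
  # fallback if nslookup is missing in the image
  docker exec minikube getent hosts registry-1.docker.io
  # compare against resolution on the host itself
  nslookup registry-1.docker.io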

* 
* ==> container status <==
* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
cf9c61ad247df       6e38f40d628db       16 hours ago        Running             storage-provisioner       0                   802deb90eef85
d60a64a0a3a7f       bfe3a36ebd252       16 hours ago        Running             coredns                   0                   8b91eb9aa1cbe
f09001bf79788       43154ddb57a83       16 hours ago        Running             kube-proxy                0                   e15a777c1dffb
24f495ac23a9c       a27166429d98e       16 hours ago        Running             kube-controller-manager   0                   516e608862ca9
ab84d1d8cb7d3       ed2c44fbdd78b       16 hours ago        Running             kube-scheduler            0                   46a2fd5416690
980fe0f0fc15b       0369cf4303ffd       16 hours ago        Running             etcd                      0                   519280a9abb27
f084e18700a5c       a8c2fdb8bf76e       16 hours ago        Running             kube-apiserver            0                   550bac379fdb8
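
All of the core containers are up, but there is no ingress-nginx container at all, which matches the pull failures above. Whether the controller image ever made it onto the node can be checked in one line (a sketch; the grep filter is just illustrative):

  minikube ssh -- docker images | grep ingress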

* 
* ==> coredns [d60a64a0a3a7] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:43694->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:51175->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:60174->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:42272->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:48276->192.168.49.1:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:39213->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:41307->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:51033->192.168.49.1:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:34927->192.168.49.1:53: i/o timeout
[ERROR] plugin/errors: 2 3861669324229711465.2133438455307272983. HINFO: read udp 172.17.0.2:35878->192.168.49.1:53: i/o timeout
I0512 23:00:55.382229       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-12 23:00:25.381671231 +0000 UTC m=+0.024038390) (total time: 30.000469033s):
Trace[1427131847]: [30.000469033s] [30.000469033s] END
E0512 23:00:55.382264       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0512 23:00:55.382526       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-12 23:00:25.382196864 +0000 UTC m=+0.024564079) (total time: 30.000314752s):
Trace[911902081]: [30.000314752s] [30.000314752s] END
E0512 23:00:55.382544       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0512 23:00:55.382750       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-05-12 23:00:25.382217531 +0000 UTC m=+0.024584706) (total time: 30.00050595s):
Trace[1474941318]: [30.00050595s] [30.00050595s] END
E0512 23:00:55.382768       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
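
CoreDNS sees the same pattern from inside the cluster: upstream queries to 192.168.49.1:53 time out, and early on it cannot reach the apiserver service IP (10.96.0.1:443) either. That points at traffic between the docker bridge and the host being dropped, rather than at any single component. On an Amazon Linux 2 host, the forwarding and bridge-netfilter settings are worth ruling out first (a sketch of read-only checks):

  # is the host forwarding packets for the docker networks?
  sysctl net.ipv4.ip_forward
  # are bridged packets passed through iptables?
  # (errors if br_netfilter is not loaded, which is itself a signal)
  sysctl net.bridge.bridge-nf-call-iptables
  # any DOCKER-USER rules that could drop the traffic?
  sudo iptables -S DOCKER-USER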

* 
* ==> describe nodes <==
* Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=c61663e942ec43b20e8e70839dcca52e44cd85ae
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_05_12T23_00_08_0700
                    minikube.k8s.io/version=v1.20.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 12 May 2021 23:00:05 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Thu, 13 May 2021 14:36:34 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 13 May 2021 14:36:14 +0000   Wed, 12 May 2021 23:00:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 13 May 2021 14:36:14 +0000   Wed, 12 May 2021 23:00:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 13 May 2021 14:36:14 +0000   Wed, 12 May 2021 23:00:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 13 May 2021 14:36:14 +0000   Wed, 12 May 2021 23:00:24 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  8376300Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16423844Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  8376300Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16423844Ki
  pods:               110
System Info:
  Machine ID:                 822f5ed6656e44929f6c2cc5d6881453
  System UUID:                efefb2e2-8a64-47f0-b124-ed883e89bf5d
  Boot ID:                    26ccb40b-135e-4d38-a263-4670c8606d95
  Kernel Version:             4.14.231-173.361.amzn2.x86_64
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.6
  Kubelet Version:            v1.20.2
  Kube-Proxy Version:         v1.20.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  ingress-nginx               ingress-nginx-admission-create-r8rdv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15h
  ingress-nginx               ingress-nginx-admission-patch-28cjv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15h
  ingress-nginx               ingress-nginx-controller-5d88495688-59pr4    100m (2%)     0 (0%)      90Mi (0%)        0 (0%)         15h
  kube-system                 coredns-74ff55c5b-gc98g                      100m (2%)     0 (0%)      70Mi (0%)        170Mi (1%)     15h
  kube-system                 etcd-minikube                                100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         15h
  kube-system                 kube-apiserver-minikube                      250m (6%)     0 (0%)      0 (0%)           0 (0%)         15h
  kube-system                 kube-controller-manager-minikube             200m (5%)     0 (0%)      0 (0%)           0 (0%)         15h
  kube-system                 kube-proxy-grppf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15h
  kube-system                 kube-scheduler-minikube                      100m (2%)     0 (0%)      0 (0%)           0 (0%)         15h
  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (21%)  0 (0%)
  memory             260Mi (1%)  170Mi (1%)
  ephemeral-storage  100Mi (1%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
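
The node itself is healthy (no pressure conditions, no taints), and the three ingress-nginx pods have been sitting there for 15h, consistent with images that never arrive. Their events spell out the pull errors directly (pod name taken from the table above; it will differ on a fresh run):

  kubectl -n ingress-nginx get pods
  kubectl -n ingress-nginx describe pod ingress-nginx-controller-5d88495688-59pr4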

* 
* ==> dmesg <==
* [May12 22:56] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22!
[  +0.184042] cpu 0 spinlock event irq 53
[  +0.027998] cpu 1 spinlock event irq 59
[  +0.008034]   #2
[  +0.002501] cpu 2 spinlock event irq 65
[  +0.005491]   #3
[  +0.002498] cpu 3 spinlock event irq 71
[  +0.209454] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[  +0.252789] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
              * this clock source is slow. Consider trying other clock sources
[  +0.627758] Grant table initialized
[  +0.003688] Cannot get hvm parameter CONSOLE_EVTCHN (18): -22!
[  +1.314961] systemd: 29 output lines suppressed due to ratelimiting

* 
* ==> etcd [980fe0f0fc15] <==
* 2021-05-13 14:27:20.891837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:27:30.891724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:27:40.891897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:27:50.891803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:28:00.891879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:28:10.891828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:28:20.891900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:28:30.891884 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:28:40.891883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:28:50.891829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:29:00.891873 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:29:10.891805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:29:20.891783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:29:30.891877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:29:40.891817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-05-13 14:29:50.891698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[... identical "/health OK (status code 200)" entries, logged every 10s, elided ...]
2021-05-13 14:30:01.721065 I | mvcc: store.index: compact 40877
2021-05-13 14:30:01.721748 I | mvcc: finished scheduled compaction at 40877 (took 428.055µs)
[... identical "/health OK (status code 200)" entries elided ...]
2021-05-13 14:35:01.724963 I | mvcc: store.index: compact 41090
2021-05-13 14:35:01.725745 I | mvcc: finished scheduled compaction at 41090 (took 418.65µs)
[... identical "/health OK (status code 200)" entries elided through 14:36:30 ...]

* 
* ==> kernel <==
*  14:36:40 up 15:40,  0 users,  load average: 0.09, 0.13, 0.09
Linux minikube 4.14.231-173.361.amzn2.x86_64 #1 SMP Mon Apr 26 20:57:08 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"

* 
* ==> kube-apiserver [f084e18700a5] <==
* I0513 14:24:20.854725       1 client.go:360] parsed scheme: "passthrough"
I0513 14:24:20.854775       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0513 14:24:20.854789       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
[... the same parsed scheme / ccResolverWrapper / pick_first triplet recurs roughly every 30-45s through 14:36:18; entries elided ...]

* 
* ==> kube-controller-manager [24f495ac23a9] <==
* W0512 23:00:24.053291       1 controllermanager.go:546] Skipping "ephemeral-volume"
I0512 23:00:24.053151       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0512 23:00:24.053328       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
I0512 23:00:24.054962       1 shared_informer.go:240] Waiting for caches to sync for resource quota
W0512 23:00:24.067208       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0512 23:00:24.072696       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I0512 23:00:24.073037       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I0512 23:00:24.073741       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I0512 23:00:24.073758       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I0512 23:00:24.084141       1 shared_informer.go:247] Caches are synced for PVC protection 
I0512 23:00:24.100441       1 shared_informer.go:247] Caches are synced for node 
I0512 23:00:24.100481       1 range_allocator.go:172] Starting range CIDR allocator
I0512 23:00:24.100487       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0512 23:00:24.100493       1 shared_informer.go:247] Caches are synced for cidrallocator 
I0512 23:00:24.102893       1 shared_informer.go:247] Caches are synced for PV protection 
I0512 23:00:24.102895       1 shared_informer.go:247] Caches are synced for HPA 
I0512 23:00:24.102911       1 shared_informer.go:247] Caches are synced for daemon sets 
I0512 23:00:24.102923       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I0512 23:00:24.102934       1 shared_informer.go:247] Caches are synced for crt configmap 
I0512 23:00:24.103795       1 shared_informer.go:247] Caches are synced for expand 
I0512 23:00:24.103938       1 shared_informer.go:247] Caches are synced for attach detach 
I0512 23:00:24.104723       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I0512 23:00:24.110360       1 shared_informer.go:247] Caches are synced for namespace 
I0512 23:00:24.119081       1 shared_informer.go:247] Caches are synced for persistent volume 
I0512 23:00:24.122994       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0512 23:00:24.132047       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-grppf"
I0512 23:00:24.142392       1 shared_informer.go:247] Caches are synced for job 
I0512 23:00:24.146888       1 shared_informer.go:247] Caches are synced for service account 
I0512 23:00:24.152938       1 shared_informer.go:247] Caches are synced for ReplicationController 
I0512 23:00:24.153274       1 shared_informer.go:247] Caches are synced for TTL 
I0512 23:00:24.155024       1 shared_informer.go:247] Caches are synced for GC 
I0512 23:00:24.155081       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I0512 23:00:24.155324       1 shared_informer.go:247] Caches are synced for stateful set 
I0512 23:00:24.155345       1 shared_informer.go:247] Caches are synced for endpoint 
E0512 23:00:24.209152       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0512 23:00:24.253390       1 shared_informer.go:247] Caches are synced for taint 
I0512 23:00:24.253474       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
W0512 23:00:24.253531       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0512 23:00:24.253571       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
I0512 23:00:24.253813       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0512 23:00:24.253831       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0512 23:00:24.333192       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0512 23:00:24.353150       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I0512 23:00:24.353605       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I0512 23:00:24.355180       1 shared_informer.go:247] Caches are synced for resource quota 
I0512 23:00:24.355518       1 shared_informer.go:247] Caches are synced for resource quota 
I0512 23:00:24.370774       1 shared_informer.go:247] Caches are synced for deployment 
I0512 23:00:24.375380       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
I0512 23:00:24.378951       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gc98g"
I0512 23:00:24.402918       1 shared_informer.go:247] Caches are synced for disruption 
I0512 23:00:24.402936       1 disruption.go:339] Sending events to api server.
I0512 23:00:24.515844       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0512 23:00:24.764164       1 shared_informer.go:247] Caches are synced for garbage collector 
I0512 23:00:24.764186       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0512 23:00:24.816071       1 shared_informer.go:247] Caches are synced for garbage collector 
I0512 23:00:51.681182       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-5d88495688 to 1"
I0512 23:00:51.713758       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-controller-5d88495688" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-5d88495688-59pr4"
I0512 23:00:51.736902       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create-r8rdv"
I0512 23:00:51.756018       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch-28cjv"
I0513 01:00:17.910038       1 cleaner.go:180] Cleaning CSR "csr-bg4cg" as it is more than 1h0m0s old and approved.

* 
* ==> kube-proxy [f09001bf7978] <==
* I0512 23:00:25.681771       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0512 23:00:25.681838       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W0512 23:00:25.695019       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0512 23:00:25.695105       1 server_others.go:185] Using iptables Proxier.
I0512 23:00:25.695369       1 server.go:650] Version: v1.20.2
I0512 23:00:25.695719       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0512 23:00:25.695750       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0512 23:00:25.695877       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0512 23:00:25.696029       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0512 23:00:25.696155       1 config.go:224] Starting endpoint slice config controller
I0512 23:00:25.696178       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0512 23:00:25.696398       1 config.go:315] Starting service config controller
I0512 23:00:25.696506       1 shared_informer.go:240] Waiting for caches to sync for service config
I0512 23:00:25.796418       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0512 23:00:25.796744       1 shared_informer.go:247] Caches are synced for service config 

* 
* ==> kube-scheduler [ab84d1d8cb7d] <==
* I0512 23:00:00.923240       1 serving.go:331] Generated self-signed cert in-memory
W0512 23:00:05.019646       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0512 23:00:05.019823       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0512 23:00:05.019924       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0512 23:00:05.020007       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0512 23:00:05.121180       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0512 23:00:05.121234       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0512 23:00:05.121476       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0512 23:00:05.121262       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0512 23:00:05.124146       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0512 23:00:05.124308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0512 23:00:05.124409       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0512 23:00:05.124429       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0512 23:00:05.124499       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0512 23:00:05.124517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0512 23:00:05.124584       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0512 23:00:05.124597       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0512 23:00:05.124677       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0512 23:00:05.124697       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0512 23:00:05.124764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0512 23:00:05.124770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0512 23:00:05.977255       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0512 23:00:05.978549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0512 23:00:06.134173       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0512 23:00:06.137648       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0512 23:00:06.138998       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0512 23:00:06.194910       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0512 23:00:06.212042       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0512 23:00:08.221663       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

* 
* ==> kubelet <==
* -- Logs begin at Wed 2021-05-12 22:59:32 UTC, end at Thu 2021-05-13 14:36:40 UTC. --
May 13 14:32:01 minikube kubelet[2234]: E0513 14:32:01.853701    2234 kubelet.go:1656] Unable to attach or mount volumes for pod "ingress-nginx-controller-5d88495688-59pr4_ingress-nginx(be718ede-e048-4cc8-a063-c0f46d09d9ba)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-q99vq]: timed out waiting for the condition; skipping pod
May 13 14:32:01 minikube kubelet[2234]: E0513 14:32:01.853769    2234 pod_workers.go:191] Error syncing pod be718ede-e048-4cc8-a063-c0f46d09d9ba ("ingress-nginx-controller-5d88495688-59pr4_ingress-nginx(be718ede-e048-4cc8-a063-c0f46d09d9ba)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-q99vq]: timed out waiting for the condition
May 13 14:32:01 minikube kubelet[2234]: E0513 14:32:01.856188    2234 pod_workers.go:191] Error syncing pod 0dfcc946-2904-4f1b-9e03-242271074141 ("ingress-nginx-admission-patch-28cjv_ingress-nginx(0dfcc946-2904-4f1b-9e03-242271074141)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
May 13 14:32:10 minikube kubelet[2234]: E0513 14:32:10.855046    2234 pod_workers.go:191] Error syncing pod 1b7b2a54-c5e6-4d15-9ebe-231296c147b8 ("ingress-nginx-admission-create-r8rdv_ingress-nginx(1b7b2a54-c5e6-4d15-9ebe-231296c147b8)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
[... repeated "Back-off pulling image docker.io/jettech/kube-webhook-certgen:v1.5.1" retries for the admission-create and admission-patch pods elided (14:32:16-14:34:52) ...]
May 13 14:32:31 minikube kubelet[2234]: E0513 14:32:31.025093    2234 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 13 14:32:31 minikube kubelet[2234]: E0513 14:32:31.025191    2234 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/be718ede-e048-4cc8-a063-c0f46d09d9ba-webhook-cert podName:be718ede-e048-4cc8-a063-c0f46d09d9ba nodeName:}" failed. No retries permitted until 2021-05-13 14:34:33.025159456 +0000 UTC m=+56065.003420815 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be718ede-e048-4cc8-a063-c0f46d09d9ba-webhook-cert\") pod \"ingress-nginx-controller-5d88495688-59pr4\" (UID: \"be718ede-e048-4cc8-a063-c0f46d09d9ba\") : secret \"ingress-nginx-admission\" not found"
[... the same ImagePullBackOff retries and webhook-cert mount timeouts recur through 14:34:52; elided ...]
May 13 14:35:14 minikube kubelet[2234]: E0513 14:35:14.862019    2234 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
May 13 14:35:14 minikube kubelet[2234]: E0513 14:35:14.862088    2234 kuberuntime_image.go:51] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
May 13 14:35:14 minikube kubelet[2234]: E0513 14:35:14.862338    2234 kuberuntime_manager.go:829] container &Container{Name:create,Image:docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7,Command:[],Args:[create --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc --namespace=$(POD_NAMESPACE) --secret-name=ingress-nginx-admission],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ingress-nginx-admission-token-l4xf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod ingress-nginx-admission-create-r8rdv_ingress-nginx(1b7b2a54-c5e6-4d15-9ebe-231296c147b8): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
May 13 14:35:14 minikube kubelet[2234]: E0513 14:35:14.862379    2234 pod_workers.go:191] Error syncing pod 1b7b2a54-c5e6-4d15-9ebe-231296c147b8 ("ingress-nginx-admission-create-r8rdv_ingress-nginx(1b7b2a54-c5e6-4d15-9ebe-231296c147b8)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
May 13 14:35:19 minikube kubelet[2234]: E0513 14:35:19.864908    2234 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout
May 13 14:35:19 minikube kubelet[2234]: E0513 14:35:19.864952    2234 kuberuntime_image.go:51] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout
May 13 14:35:19 minikube kubelet[2234]: E0513 14:35:19.865071    2234 kuberuntime_manager.go:829] container &Container{Name:patch,Image:docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7,Command:[],Args:[patch --webhook-name=ingress-nginx-admission --namespace=$(POD_NAMESPACE) --patch-mutating=false --secret-name=ingress-nginx-admission --patch-failure-policy=Fail],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ingress-nginx-admission-token-l4xf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod ingress-nginx-admission-patch-28cjv_ingress-nginx(0dfcc946-2904-4f1b-9e03-242271074141): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout
May 13 14:35:19 minikube kubelet[2234]: E0513 14:35:19.865111    2234 pod_workers.go:191] Error syncing pod 0dfcc946-2904-4f1b-9e03-242271074141 ("ingress-nginx-admission-patch-28cjv_ingress-nginx(0dfcc946-2904-4f1b-9e03-242271074141)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.49.1:53: read udp 192.168.49.2:41903->192.168.49.1:53: i/o timeout"
[... further ImagePullBackOff retries for both admission pods elided (14:35:26-14:36:27) ...]
May 13 14:36:35 minikube kubelet[2234]: E0513 14:36:35.115007    2234 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
May 13 14:36:35 minikube kubelet[2234]: E0513 14:36:35.115125    2234 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/be718ede-e048-4cc8-a063-c0f46d09d9ba-webhook-cert podName:be718ede-e048-4cc8-a063-c0f46d09d9ba nodeName:}" failed. No retries permitted until 2021-05-13 14:38:37.115100445 +0000 UTC m=+56309.093361695 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/be718ede-e048-4cc8-a063-c0f46d09d9ba-webhook-cert\") pod \"ingress-nginx-controller-5d88495688-59pr4\" (UID: \"be718ede-e048-4cc8-a063-c0f46d09d9ba\") : secret \"ingress-nginx-admission\" not found"
May 13 14:36:35 minikube kubelet[2234]: E0513 14:36:35.853506    2234 kubelet.go:1656] Unable to attach or mount volumes for pod "ingress-nginx-controller-5d88495688-59pr4_ingress-nginx(be718ede-e048-4cc8-a063-c0f46d09d9ba)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-q99vq]: timed out waiting for the condition; skipping pod
May 13 14:36:35 minikube kubelet[2234]: E0513 14:36:35.853557    2234 pod_workers.go:191] Error syncing pod be718ede-e048-4cc8-a063-c0f46d09d9ba ("ingress-nginx-controller-5d88495688-59pr4_ingress-nginx(be718ede-e048-4cc8-a063-c0f46d09d9ba)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-q99vq]: timed out waiting for the condition
May 13 14:36:38 minikube kubelet[2234]: E0513 14:36:38.854926    2234 pod_workers.go:191] Error syncing pod 0dfcc946-2904-4f1b-9e03-242271074141 ("ingress-nginx-admission-patch-28cjv_ingress-nginx(0dfcc946-2904-4f1b-9e03-242271074141)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
May 13 14:36:39 minikube kubelet[2234]: E0513 14:36:39.855107    2234 pod_workers.go:191] Error syncing pod 1b7b2a54-c5e6-4d15-9ebe-231296c147b8 ("ingress-nginx-admission-create-r8rdv_ingress-nginx(1b7b2a54-c5e6-4d15-9ebe-231296c147b8)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""

* 
* ==> storage-provisioner [cf9c61ad247d] <==
* I0512 23:00:33.175782       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0512 23:00:33.185464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0512 23:00:33.185506       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0512 23:00:33.203254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0512 23:00:33.203344       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c8e5667-da50-457a-8f38-a3c11bf7f4ce", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_f66b152f-9bbb-41f4-9fc1-5143f24a71f6 became leader
I0512 23:00:33.203522       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_f66b152f-9bbb-41f4-9fc1-5143f24a71f6!
I0512 23:00:33.304680       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_f66b152f-9bbb-41f4-9fc1-5143f24a71f6!
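
For anyone triaging a similar failure: the root cause is in the kubelet section above, where pulls of docker.io/jettech/kube-webhook-certgen fail because DNS lookups for registry-1.docker.io on 192.168.49.1:53 time out. A quick way to surface just those lines (a sketch, assuming a POSIX shell with grep available):

```bash
# Filter the minikube logs down to the image-pull and DNS failures
minikube logs | grep -E 'ErrImagePull|ImagePullBackOff|i/o timeout'
```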

Full output of failed command:

[ec2-user@ip-10-4-20-69 ~]$ minikube start
* minikube v1.20.0 on Amazon 2 (xen/amd64)
* Automatically selected the docker driver. Other choices: none, ssh
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.20.2 preload ...
    > gcr.io/k8s-minikube/kicbase...: 358.09 MiB / 358.10 MiB  100.00% 37.26 Mi
    > preloaded-images-k8s-v10-v1...: 491.71 MiB / 491.71 MiB  100.00% 70.16 Mi
    > gcr.io/k8s-minikube/kicbase...: 358.10 MiB / 358.10 MiB  100.00% 8.45 MiB
* Creating docker container (CPUs=2, Memory=4000MB) ...
! This container is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
[ec2-user@ip-10-4-20-69 ~]$ minikube addons enable ingress
  - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...

X Exiting due to MK_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
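
The addon manifests apply fine; the timeout comes from the admission pods never being able to pull their image. Two quick checks to confirm (a sketch; `getent` is assumed to be present in the kicbase node image, which is Ubuntu-based):

```bash
# The admission jobs should show ImagePullBackOff / ErrImagePull
kubectl -n ingress-nginx get pods

# Reproduce the DNS failure the kubelet reports for registry-1.docker.io
minikube ssh -- getent hosts registry-1.docker.io
```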
@UnknownGnome (Author) commented:

Setting the primary DNS nameserver to 8.8.8.8 fixed it.
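
With the docker driver, the minikube node container typically inherits its DNS configuration from the host's Docker daemon, so one way to apply this fix on Amazon Linux 2 is to pin the daemon's resolvers and recreate the cluster. A minimal sketch (note: this overwrites /etc/docker/daemon.json; merge by hand if you already have one):

```bash
# Point the Docker daemon at a public resolver
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "dns": ["8.8.8.8"]
}
EOF
sudo systemctl restart docker

# Recreate the cluster so the node picks up the new DNS settings
minikube delete
minikube start
minikube addons enable ingress
```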
