Kubernetes cluster still exists after running minikube delete #12324
Labels: kind/support (support question), triage/needs-information (needs more information in order to work on it)
Steps to reproduce the issue:
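No explicit reproduction steps were given in the report; the following is a sketch inferred from the Audit table in the log below, summarizing the recorded command sequence (the reporter's exact flags and ordering are not shown in the issue):

```shell
# Reconstructed from the minikube logs Audit section (minikube v1.22.0, macOS, docker driver auto-selected)
minikube config set cpus 4
minikube config set memory 6000
minikube start     # cluster comes up with the docker driver
minikube stop      # some iterations stopped the cluster before deleting
minikube delete    # expected to remove the cluster; per the title, cluster state still exists afterwards
minikube start     # repeated several times between 03:21 and 12:42 PDT
```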
Full output of minikube logs command:

==> Audit <==
|---------|-----------------|----------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------|----------|---------|---------|-------------------------------|-------------------------------|
| start | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 03:21:47 PDT | Sat, 21 Aug 2021 03:24:11 PDT |
| config | set cpus 4 | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 03:24:24 PDT | Sat, 21 Aug 2021 03:24:24 PDT |
| delete | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 03:24:29 PDT | Sat, 21 Aug 2021 03:24:33 PDT |
| start | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 03:24:38 PDT | Sat, 21 Aug 2021 03:25:05 PDT |
| delete | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 03:48:08 PDT | Sat, 21 Aug 2021 03:48:12 PDT |
| start | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 03:48:16 PDT | Sat, 21 Aug 2021 03:48:51 PDT |
| config | set memory 6000 | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:14:45 PDT | Sat, 21 Aug 2021 04:14:45 PDT |
| cache | reload | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:14:59 PDT | Sat, 21 Aug 2021 04:14:59 PDT |
| delete | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:15:03 PDT | Sat, 21 Aug 2021 04:15:08 PDT |
| start | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:15:11 PDT | Sat, 21 Aug 2021 04:15:44 PDT |
| stop | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:16:10 PDT | Sat, 21 Aug 2021 04:16:22 PDT |
| delete | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:16:42 PDT | Sat, 21 Aug 2021 04:16:45 PDT |
| start | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:16:48 PDT | Sat, 21 Aug 2021 04:17:22 PDT |
| stop | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:33:24 PDT | Sat, 21 Aug 2021 04:33:36 PDT |
| delete | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 04:33:41 PDT | Sat, 21 Aug 2021 04:33:45 PDT |
| start | | minikube | utkarsh | v1.22.0 | Sat, 21 Aug 2021 12:41:42 PDT | Sat, 21 Aug 2021 12:42:16 PDT |
|---------|-----------------|----------|---------|---------|-------------------------------|-------------------------------|
==> Last Start <==
Log file created at: 2021/08/21 12:41:42
Running on machine: MacbookPro
Binary: Built with gc go1.16.5 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0821 12:41:42.754686 1067 out.go:286] Setting OutFile to fd 1 ...
I0821 12:41:42.755357 1067 out.go:338] isatty.IsTerminal(1) = true
I0821 12:41:42.755361 1067 out.go:299] Setting ErrFile to fd 2...
I0821 12:41:42.755364 1067 out.go:338] isatty.IsTerminal(2) = true
I0821 12:41:42.756012 1067 root.go:312] Updating PATH: /Users/utkarsh/.minikube/bin
I0821 12:41:42.757071 1067 out.go:293] Setting JSON to false
I0821 12:41:42.788024 1067 start.go:111] hostinfo: {"hostname":"MacbookPro.local","uptime":299,"bootTime":1629574603,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.4","kernelVersion":"20.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"52a1e876-863e-38e3-ac80-09bbab13b752"}
W0821 12:41:42.788127 1067 start.go:119] gopshost.Virtualization returned error: not implemented yet
I0821 12:41:42.817291 1067 out.go:165] 😄 minikube v1.22.0 on Darwin 11.4
I0821 12:41:42.820197 1067 notify.go:169] Checking for updates...
I0821 12:41:42.820659 1067 driver.go:335] Setting default libvirt URI to qemu:///system
I0821 12:41:42.820738 1067 global.go:111] Querying for installed drivers using PATH=/Users/utkarsh/.minikube/bin:/Users/utkarsh/google-cloud-sdk/bin:/Users/utkarsh/.rbenv/shims:/usr/local/opt/ruby/bin:/Library/Frameworks/Python.framework/Versions/3.6/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/Library/Frameworks/Python.framework/Versions/3.9/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/opt/X11/bin:/Library/Apple/usr/bin:/Applications/Wireshark.app/Contents/MacOS:/usr/local/Cellar/openvpn/2.4.8/sbin
I0821 12:41:42.821003 1067 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0821 12:41:42.821042 1067 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0821 12:41:42.826492 1067 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:"/usr/local/bin/VBoxManage list hostinfo" returned: exit status 126: Reason: Fix:Restart VirtualBox, or upgrade to the latest version of VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0821 12:41:42.826738 1067 global.go:119] vmware default: true priority: 7, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0821 12:41:42.826747 1067 global.go:119] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/}
I0821 12:41:43.194719 1067 docker.go:132] docker version: linux-20.10.8
I0821 12:41:43.196725 1067 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0821 12:41:43.927377 1067 info.go:263] docker info: {ID:T2GE:HXOC:SVBA:WSBF:AACF:DQF6:LM7L:LPID:5EYF:SCFH:TRUM:2APM Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-08-21 19:41:43.336062903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6496915456 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:true ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0-rc.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
I0821 12:41:43.927501 1067 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0821 12:41:43.949097 1067 global.go:119] hyperkit default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:}
I0821 12:41:43.949324 1067 global.go:119] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/}
I0821 12:41:43.949357 1067 driver.go:270] not recommending "ssh" due to default: false
I0821 12:41:43.949368 1067 driver.go:265] not recommending "virtualbox" due to health: "/usr/local/bin/VBoxManage list hostinfo" returned: exit status 126:
I0821 12:41:43.949385 1067 driver.go:305] Picked: docker
I0821 12:41:43.949396 1067 driver.go:306] Alternatives: [hyperkit vmware ssh]
I0821 12:41:43.949399 1067 driver.go:307] Rejects: [virtualbox vmwarefusion podman parallels]
I0821 12:41:43.971219 1067 out.go:165] ✨ Automatically selected the docker driver. Other choices: hyperkit, vmware, ssh
I0821 12:41:43.971532 1067 start.go:278] selected driver: docker
I0821 12:41:43.971540 1067 start.go:751] validating driver "docker" against
I0821 12:41:43.971554 1067 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0821 12:41:43.972625 1067 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0821 12:41:44.217468 1067 info.go:263] docker info: {ID:T2GE:HXOC:SVBA:WSBF:AACF:DQF6:LM7L:LPID:5EYF:SCFH:TRUM:2APM Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-08-21 19:41:44.126181022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.47-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6496915456 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:true ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0-rc.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
I0821 12:41:44.217968 1067 start_flags.go:261] no existing cluster config was found, will generate one from the flags
I0821 12:41:44.219431 1067 start_flags.go:669] Wait components to verify : map[apiserver:true system_pods:true]
I0821 12:41:44.219446 1067 cni.go:93] Creating CNI manager for ""
I0821 12:41:44.219683 1067 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0821 12:41:44.219689 1067 start_flags.go:275] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0821 12:41:44.241534 1067 out.go:165] 👍 Starting control plane node minikube in cluster minikube
I0821 12:41:44.242277 1067 cache.go:117] Beginning downloading kic base image for docker with docker
I0821 12:41:44.305611 1067 out.go:165] 🚜 Pulling base image ...
I0821 12:41:44.307007 1067 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker
I0821 12:41:44.307135 1067 preload.go:150] Found local preload: /Users/utkarsh/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4
I0821 12:41:44.307147 1067 cache.go:56] Caching tarball of preloaded images
I0821 12:41:44.307658 1067 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
I0821 12:41:44.307781 1067 preload.go:174] Found /Users/utkarsh/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0821 12:41:44.307802 1067 cache.go:59] Finished verifying existence of preloaded tar for v1.21.2 on docker
I0821 12:41:44.309958 1067 profile.go:148] Saving config to /Users/utkarsh/.minikube/profiles/minikube/config.json ...
I0821 12:41:44.310047 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/config.json: {Name:mk209c2311c7d4f8df73f568363587a5d4c04302 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 12:41:44.474325 1067 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
I0821 12:41:44.474340 1067 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
I0821 12:41:44.474350 1067 cache.go:205] Successfully downloaded all kic artifacts
I0821 12:41:44.475036 1067 start.go:313] acquiring machines lock for minikube: {Name:mke7d21cb76a92db23bf00a3e3fb2dc82013d6d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0821 12:41:44.475208 1067 start.go:317] acquired machines lock for "minikube" in 157.82µs
I0821 12:41:44.475551 1067 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
I0821 12:41:44.475612 1067 start.go:126] createHost starting for "" (driver="docker")
I0821 12:41:44.497253 1067 out.go:192] 🔥 Creating docker container (CPUs=4, Memory=6000MB) ...
I0821 12:41:44.498873 1067 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0821 12:41:44.498934 1067 client.go:168] LocalClient.Create starting
I0821 12:41:44.499224 1067 main.go:130] libmachine: Reading certificate data from /Users/utkarsh/.minikube/certs/ca.pem
I0821 12:41:44.499604 1067 main.go:130] libmachine: Decoding PEM data...
I0821 12:41:44.499956 1067 main.go:130] libmachine: Parsing certificate...
I0821 12:41:44.500195 1067 main.go:130] libmachine: Reading certificate data from /Users/utkarsh/.minikube/certs/cert.pem
I0821 12:41:44.500516 1067 main.go:130] libmachine: Decoding PEM data...
I0821 12:41:44.500552 1067 main.go:130] libmachine: Parsing certificate...
I0821 12:41:44.519426 1067 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0821 12:41:44.674265 1067 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0821 12:41:44.674417 1067 network_create.go:255] running [docker network inspect minikube] to gather additional debugging logs...
I0821 12:41:44.674434 1067 cli_runner.go:115] Run: docker network inspect minikube
W0821 12:41:44.826892 1067 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0821 12:41:44.826914 1067 network_create.go:258] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0821 12:41:44.826929 1067 network_create.go:260] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0821 12:41:44.827073 1067 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0821 12:41:44.976400 1067 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000010140] misses:0}
I0821 12:41:44.976431 1067 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0821 12:41:44.976447 1067 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0821 12:41:44.976565 1067 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0821 12:41:45.166979 1067 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0821 12:41:45.167028 1067 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0821 12:41:45.167475 1067 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0821 12:41:45.316873 1067 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0821 12:41:45.468601 1067 oci.go:102] Successfully created a docker volume minikube
I0821 12:41:45.468795 1067 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
I0821 12:41:46.217812 1067 oci.go:106] Successfully prepared a docker volume minikube
I0821 12:41:46.217915 1067 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker
I0821 12:41:46.218116 1067 kic.go:179] Starting extracting preloaded images to volume ...
I0821 12:41:46.218275 1067 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0821 12:41:46.218286 1067 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/utkarsh/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
I0821 12:41:46.535024 1067 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=6000mb --memory-swap=6000mb --cpus=4 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
W0821 12:41:46.553457 1067 cli_runner.go:162] docker run --rm --entrypoint /usr/bin/tar -v /Users/utkarsh/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I0821 12:41:46.553543 1067 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /Users/utkarsh/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:
stderr:
docker: Error response from daemon: Mounts denied:
The path /Users/utkarsh/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 is not shared from the host and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> Resources -> File Sharing.
See https://docs.docker.com/docker-for-mac for more info.
time="2021-08-21T12:41:46-07:00" level=error msg="error waiting for container: context canceled"
I0821 12:41:47.257461 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0821 12:41:47.446304 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0821 12:41:47.650459 1067 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0821 12:41:47.960283 1067 oci.go:278] the created container "minikube" has a running status.
I0821 12:41:47.960310 1067 kic.go:210] Creating ssh key for kic: /Users/utkarsh/.minikube/machines/minikube/id_rsa...
I0821 12:41:48.082649 1067 kic_runner.go:188] docker (temp): /Users/utkarsh/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0821 12:41:48.395077 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0821 12:41:48.576668 1067 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0821 12:41:48.576685 1067 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0821 12:41:48.845541 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0821 12:41:49.001394 1067 machine.go:88] provisioning docker machine ...
I0821 12:41:49.002036 1067 ubuntu.go:169] provisioning hostname "minikube"
I0821 12:41:49.002805 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:49.152865 1067 main.go:130] libmachine: Using SSH client type: native
I0821 12:41:49.153776 1067 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 49631 }
I0821 12:41:49.153789 1067 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0821 12:41:49.315456 1067 main.go:130] libmachine: SSH cmd err, output: : minikube
I0821 12:41:49.315900 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:49.465837 1067 main.go:130] libmachine: Using SSH client type: native
I0821 12:41:49.466030 1067 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 49631 }
I0821 12:41:49.466041 1067 main.go:130] libmachine: About to run SSH command:
I0821 12:41:49.595204 1067 main.go:130] libmachine: SSH cmd err, output: :
I0821 12:41:49.595238 1067 ubuntu.go:175] set auth options {CertDir:/Users/utkarsh/.minikube CaCertPath:/Users/utkarsh/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/utkarsh/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/utkarsh/.minikube/machines/server.pem ServerKeyPath:/Users/utkarsh/.minikube/machines/server-key.pem ClientKeyPath:/Users/utkarsh/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/utkarsh/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/utkarsh/.minikube}
I0821 12:41:49.595266 1067 ubuntu.go:177] setting up certificates
I0821 12:41:49.595277 1067 provision.go:83] configureAuth start
I0821 12:41:49.595441 1067 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0821 12:41:49.751330 1067 provision.go:137] copyHostCerts
I0821 12:41:49.751455 1067 exec_runner.go:145] found /Users/utkarsh/.minikube/cert.pem, removing ...
I0821 12:41:49.751462 1067 exec_runner.go:190] rm: /Users/utkarsh/.minikube/cert.pem
I0821 12:41:49.751577 1067 exec_runner.go:152] cp: /Users/utkarsh/.minikube/certs/cert.pem --> /Users/utkarsh/.minikube/cert.pem (1123 bytes)
I0821 12:41:49.751820 1067 exec_runner.go:145] found /Users/utkarsh/.minikube/key.pem, removing ...
I0821 12:41:49.751823 1067 exec_runner.go:190] rm: /Users/utkarsh/.minikube/key.pem
I0821 12:41:49.751894 1067 exec_runner.go:152] cp: /Users/utkarsh/.minikube/certs/key.pem --> /Users/utkarsh/.minikube/key.pem (1675 bytes)
I0821 12:41:49.752534 1067 exec_runner.go:145] found /Users/utkarsh/.minikube/ca.pem, removing ...
I0821 12:41:49.752541 1067 exec_runner.go:190] rm: /Users/utkarsh/.minikube/ca.pem
I0821 12:41:49.752620 1067 exec_runner.go:152] cp: /Users/utkarsh/.minikube/certs/ca.pem --> /Users/utkarsh/.minikube/ca.pem (1082 bytes)
I0821 12:41:49.752770 1067 provision.go:111] generating server cert: /Users/utkarsh/.minikube/machines/server.pem ca-key=/Users/utkarsh/.minikube/certs/ca.pem private-key=/Users/utkarsh/.minikube/certs/ca-key.pem org=utkarsh.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0821 12:41:49.897283 1067 provision.go:171] copyRemoteCerts
I0821 12:41:49.897881 1067 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0821 12:41:49.898473 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:50.054033 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:41:50.150344 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0821 12:41:50.172670 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0821 12:41:50.191602 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0821 12:41:50.211188 1067 provision.go:86] duration metric: configureAuth took 615.89782ms
I0821 12:41:50.211199 1067 ubuntu.go:193] setting minikube options for container-runtime
I0821 12:41:50.211781 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:50.360524 1067 main.go:130] libmachine: Using SSH client type: native
I0821 12:41:50.360699 1067 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 49631 }
I0821 12:41:50.360721 1067 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0821 12:41:50.491871 1067 main.go:130] libmachine: SSH cmd err, output: : overlay
I0821 12:41:50.491882 1067 ubuntu.go:71] root file system type: overlay
I0821 12:41:50.492414 1067 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0821 12:41:50.492567 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:50.649339 1067 main.go:130] libmachine: Using SSH client type: native
I0821 12:41:50.649526 1067 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 49631 }
I0821 12:41:50.649585 1067 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0821 12:41:50.790253 1067 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0821 12:41:50.791163 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:50.951465 1067 main.go:130] libmachine: Using SSH client type: native
I0821 12:41:50.951650 1067 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 49631 }
I0821 12:41:50.951662 1067 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0821 12:41:51.733222 1067 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-08-21 19:41:50.797804066 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0821 12:41:51.733243 1067 machine.go:91] provisioned docker machine in 2.731822141s
I0821 12:41:51.733247 1067 client.go:171] LocalClient.Create took 7.234288135s
I0821 12:41:51.733264 1067 start.go:168] duration metric: libmachine.API.Create for "minikube" took 7.234374857s
I0821 12:41:51.733569 1067 start.go:267] post-start starting for "minikube" (driver="docker")
I0821 12:41:51.733575 1067 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0821 12:41:51.733712 1067 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0821 12:41:51.733789 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:51.893704 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:41:51.988907 1067 ssh_runner.go:149] Run: cat /etc/os-release
I0821 12:41:51.995292 1067 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0821 12:41:51.995307 1067 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0821 12:41:51.995314 1067 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0821 12:41:51.995607 1067 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0821 12:41:51.995861 1067 filesync.go:126] Scanning /Users/utkarsh/.minikube/addons for local assets ...
I0821 12:41:51.996030 1067 filesync.go:126] Scanning /Users/utkarsh/.minikube/files for local assets ...
I0821 12:41:51.996088 1067 start.go:270] post-start completed in 262.513005ms
I0821 12:41:51.996772 1067 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0821 12:41:52.150648 1067 profile.go:148] Saving config to /Users/utkarsh/.minikube/profiles/minikube/config.json ...
I0821 12:41:52.151187 1067 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0821 12:41:52.151258 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:52.300870 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:41:52.389905 1067 start.go:129] duration metric: createHost completed in 7.914252354s
I0821 12:41:52.389929 1067 start.go:80] releasing machines lock for "minikube", held for 7.914687641s
I0821 12:41:52.390868 1067 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0821 12:41:52.546233 1067 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0821 12:41:52.546483 1067 ssh_runner.go:149] Run: systemctl --version
I0821 12:41:52.546558 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:52.546620 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:41:52.736896 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:41:52.753086 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:41:52.829428 1067 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0821 12:41:53.122164 1067 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0821 12:41:53.137236 1067 cruntime.go:249] skipping containerd shutdown because we are bound to it
I0821 12:41:53.137701 1067 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0821 12:41:53.153025 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0821 12:41:53.169573 1067 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0821 12:41:53.234779 1067 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0821 12:41:53.301172 1067 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0821 12:41:53.312578 1067 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0821 12:41:53.379364 1067 ssh_runner.go:149] Run: sudo systemctl start docker
I0821 12:41:53.390646 1067 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0821 12:41:53.564539 1067 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0821 12:41:53.638100 1067 out.go:192] 🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
I0821 12:41:53.638771 1067 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0821 12:41:53.925097 1067 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
I0821 12:41:53.925639 1067 ssh_runner.go:149] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0821 12:41:53.931387 1067 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$ ' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0821 12:41:53.943580 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0821 12:41:54.096229 1067 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker
I0821 12:41:54.096356 1067 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0821 12:41:54.139443 1067 docker.go:535] Got preloaded images: -- stdout --
uprakash2/airflow:latest
ghcr.io/airflow-helm/pgbouncer:1.15.0-patch.0
apache/airflow:2.1.2-python3.8
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
alpine:3.13
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/git-sync/git-sync:v3.2.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
bitnami/postgresql:11.7.0-debian-10-r9
bitnami/redis:5.0.7-debian-10-r32
-- /stdout --
I0821 12:41:54.139451 1067 docker.go:466] Images already preloaded, skipping extraction
I0821 12:41:54.139924 1067 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0821 12:41:54.184002 1067 docker.go:535] Got preloaded images: -- stdout --
uprakash2/airflow:latest
ghcr.io/airflow-helm/pgbouncer:1.15.0-patch.0
apache/airflow:2.1.2-python3.8
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
alpine:3.13
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/git-sync/git-sync:v3.2.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
bitnami/postgresql:11.7.0-debian-10-r9
bitnami/redis:5.0.7-debian-10-r32
-- /stdout --
I0821 12:41:54.184289 1067 cache_images.go:74] Images are preloaded, skipping loading
I0821 12:41:54.184666 1067 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0821 12:41:54.464897 1067 cni.go:93] Creating CNI manager for ""
I0821 12:41:54.464907 1067 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0821 12:41:54.465815 1067 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0821 12:41:54.465836 1067 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0821 12:41:54.466205 1067 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
ttl: 24h0m0s
usages:
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "minikube"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.21.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
I0821 12:41:54.466879 1067 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0821 12:41:54.466989 1067 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
I0821 12:41:54.477107 1067 binaries.go:44] Found k8s binaries, skipping transfer
I0821 12:41:54.477227 1067 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0821 12:41:54.485779 1067 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0821 12:41:54.500069 1067 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0821 12:41:54.513860 1067 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
I0821 12:41:54.530990 1067 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0821 12:41:54.535303 1067 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$ ' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0821 12:41:54.547010 1067 certs.go:52] Setting up /Users/utkarsh/.minikube/profiles/minikube for IP: 192.168.49.2
I0821 12:41:54.547365 1067 certs.go:179] skipping minikubeCA CA generation: /Users/utkarsh/.minikube/ca.key
I0821 12:41:54.547481 1067 certs.go:179] skipping proxyClientCA CA generation: /Users/utkarsh/.minikube/proxy-client-ca.key
I0821 12:41:54.547550 1067 certs.go:294] generating minikube-user signed cert: /Users/utkarsh/.minikube/profiles/minikube/client.key
I0821 12:41:54.547887 1067 crypto.go:69] Generating cert /Users/utkarsh/.minikube/profiles/minikube/client.crt with IP's: []
I0821 12:41:54.669900 1067 crypto.go:157] Writing cert to /Users/utkarsh/.minikube/profiles/minikube/client.crt ...
I0821 12:41:54.669926 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/client.crt: {Name:mkfbe5335cc71094db9f214c85ba3fbda8573232 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 12:41:54.670783 1067 crypto.go:165] Writing key to /Users/utkarsh/.minikube/profiles/minikube/client.key ...
I0821 12:41:54.670795 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/client.key: {Name:mkd856a9a97c2f0b42d4553a3510ed797d52cef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 12:41:54.671254 1067 certs.go:294] generating minikube signed cert: /Users/utkarsh/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0821 12:41:54.671260 1067 crypto.go:69] Generating cert /Users/utkarsh/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0821 12:41:54.871550 1067 crypto.go:157] Writing cert to /Users/utkarsh/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0821 12:41:54.871568 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkb3511a453d3b8f29fad9dd480e1ce7fa7b33a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 12:41:54.871888 1067 crypto.go:165] Writing key to /Users/utkarsh/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0821 12:41:54.871896 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk55f0b673166e389fab0472a4a45a3826242b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 12:41:54.872108 1067 certs.go:305] copying /Users/utkarsh/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/utkarsh/.minikube/profiles/minikube/apiserver.crt
I0821 12:41:54.873062 1067 certs.go:309] copying /Users/utkarsh/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/utkarsh/.minikube/profiles/minikube/apiserver.key
I0821 12:41:54.873269 1067 certs.go:294] generating aggregator signed cert: /Users/utkarsh/.minikube/profiles/minikube/proxy-client.key
I0821 12:41:54.873275 1067 crypto.go:69] Generating cert /Users/utkarsh/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0821 12:41:55.061334 1067 crypto.go:157] Writing cert to /Users/utkarsh/.minikube/profiles/minikube/proxy-client.crt ...
I0821 12:41:55.061344 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/proxy-client.crt: {Name:mka0234f339bed19337f8e0c4523c79e76cad30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0821 12:41:55.061658 1067 crypto.go:165] Writing key to /Users/utkarsh/.minikube/profiles/minikube/proxy-client.key ...
I0821 12:41:55.061663 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.minikube/profiles/minikube/proxy-client.key: {Name:mk05eb062d8877d1f03825daca0df28527a9ccd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0821 12:41:55.062598 1067 certs.go:369] found cert: /Users/utkarsh/.minikube/certs/Users/utkarsh/.minikube/certs/ca-key.pem (1679 bytes)
I0821 12:41:55.062671 1067 certs.go:369] found cert: /Users/utkarsh/.minikube/certs/Users/utkarsh/.minikube/certs/ca.pem (1082 bytes)
I0821 12:41:55.062711 1067 certs.go:369] found cert: /Users/utkarsh/.minikube/certs/Users/utkarsh/.minikube/certs/cert.pem (1123 bytes)
I0821 12:41:55.062744 1067 certs.go:369] found cert: /Users/utkarsh/.minikube/certs/Users/utkarsh/.minikube/certs/key.pem (1675 bytes)
I0821 12:41:55.074114 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0821 12:41:55.094346 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0821 12:41:55.114653 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0821 12:41:55.133267 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0821 12:41:55.152232 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0821 12:41:55.172310 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0821 12:41:55.192519 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0821 12:41:55.212959 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0821 12:41:55.233815 1067 ssh_runner.go:316] scp /Users/utkarsh/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0821 12:41:55.253888 1067 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0821 12:41:55.269950 1067 ssh_runner.go:149] Run: openssl version
I0821 12:41:55.278910 1067 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0821 12:41:55.289809 1067 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0821 12:41:55.295038 1067 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:23 /usr/share/ca-certificates/minikubeCA.pem
I0821 12:41:55.295152 1067 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0821 12:41:55.301950 1067 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0821 12:41:55.311306 1067 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:6000 CPUs:4 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0821 12:41:55.311454 1067 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0821 12:41:55.350118 1067 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0821 12:41:55.358601 1067 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0821 12:41:55.358611 1067 kubeadm.go:600] restartCluster start
I0821 12:41:55.359446 1067 ssh_runner.go:149] Run: sudo test -d /data/minikube
I0821 12:41:55.367270 1067 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0821 12:41:55.367382 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0821 12:41:55.595143 1067 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /Users/utkarsh/.kube/config
I0821 12:41:55.595518 1067 kubeconfig.go:128] "minikube" context is missing from /Users/utkarsh/.kube/config - will repair!
I0821 12:41:55.595780 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.kube/config: {Name:mk0797a44a7ed922fc4d9086469815144bccc5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0821 12:41:55.615213 1067 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0821 12:41:55.626567 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:55.626690 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:55.642942 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:55.844047 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:55.844428 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:55.867087 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:56.043078 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:56.043368 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:56.063114 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:56.243350 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:56.243476 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:56.260979 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:56.443252 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:56.443406 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:56.461241 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:56.646561 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:56.646703 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:56.664543 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:56.843619 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:56.843751 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:56.862609 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:57.043094 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:57.043244 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:57.061404 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:57.243171 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:57.243503 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:57.265458 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:57.443150 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:57.443501 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:57.466478 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:57.646570 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:57.646879 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:57.670170 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:57.843192 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:57.843413 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:57.862662 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:58.044605 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:58.045030 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:58.064648 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:58.243825 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:58.244134 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:58.266189 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:58.444103 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:58.444460 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:58.465294 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:58.646594 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:58.646838 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:58.666243 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:58.666250 1067 api_server.go:164] Checking apiserver status ...
I0821 12:41:58.666377 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0821 12:41:58.684509 1067 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0821 12:41:58.684518 1067 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
I0821 12:41:58.684528 1067 kubeadm.go:1032] stopping kube-system containers ...
I0821 12:41:58.684656 1067 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0821 12:41:58.733913 1067 docker.go:367] Stopping containers: [4a6903c286e6 fd4b73dca4ec 08055f785a77 0b07a4b722e4 9fcf58d0bebb 777268745c71 54a9782220a8 aa64b639b32d e664f17ccfc2 e28713f28618 a60060ad503a d258f6d7d7a8 d099c6cec5ea 7eae311d421d 1fd3c7de43e6 e462a87dab6e 387a0037cfde 95e7d32445b1 fed9200d15cc eff651d6d80a c9da871f2e47 9185db31f7da 4398bd6da92f 4253d1ce9715 37a7a6cbd71d 28c3f060e011 685e1ce90783]
I0821 12:41:58.734061 1067 ssh_runner.go:149] Run: docker stop 4a6903c286e6 fd4b73dca4ec 08055f785a77 0b07a4b722e4 9fcf58d0bebb 777268745c71 54a9782220a8 aa64b639b32d e664f17ccfc2 e28713f28618 a60060ad503a d258f6d7d7a8 d099c6cec5ea 7eae311d421d 1fd3c7de43e6 e462a87dab6e 387a0037cfde 95e7d32445b1 fed9200d15cc eff651d6d80a c9da871f2e47 9185db31f7da 4398bd6da92f 4253d1ce9715 37a7a6cbd71d 28c3f060e011 685e1ce90783
I0821 12:41:58.777587 1067 ssh_runner.go:149] Run: sudo systemctl stop kubelet
I0821 12:41:58.792573 1067 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0821 12:41:58.801456 1067 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0821 12:41:58.801577 1067 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0821 12:41:58.811920 1067 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0821 12:41:58.811930 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0821 12:41:59.082476 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0821 12:42:00.325909 1067 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243411809s)
I0821 12:42:00.325923 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0821 12:42:00.557531 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0821 12:42:00.724740 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0821 12:42:00.896317 1067 api_server.go:50] waiting for apiserver process to appear ...
I0821 12:42:00.896451 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:01.414761 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:01.914247 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:02.415346 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:02.916389 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:03.415453 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:03.915662 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:04.415282 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:04.914596 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:05.416406 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:05.914440 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:06.418300 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:06.914066 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:07.414109 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:07.913833 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:07.940311 1067 api_server.go:70] duration metric: took 7.043971611s to wait for apiserver process to appear ...
I0821 12:42:07.940320 1067 api_server.go:86] waiting for apiserver healthz status ...
I0821 12:42:07.940541 1067 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:49630/healthz ...
I0821 12:42:07.944004 1067 api_server.go:255] stopped: https://127.0.0.1:49630/healthz: Get "https://127.0.0.1:49630/healthz": EOF
I0821 12:42:08.445179 1067 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:49630/healthz ...
I0821 12:42:12.588174 1067 api_server.go:265] https://127.0.0.1:49630/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User "system:anonymous" cannot get path "/healthz"","reason":"Forbidden","details":{},"code":403}
W0821 12:42:12.588418 1067 api_server.go:101] status: https://127.0.0.1:49630/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User "system:anonymous" cannot get path "/healthz"","reason":"Forbidden","details":{},"code":403}
I0821 12:42:12.948260 1067 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:49630/healthz ...
I0821 12:42:12.957408 1067 api_server.go:265] https://127.0.0.1:49630/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0821 12:42:12.957422 1067 api_server.go:101] status: https://127.0.0.1:49630/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0821 12:42:13.444223 1067 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:49630/healthz ...
I0821 12:42:13.454588 1067 api_server.go:265] https://127.0.0.1:49630/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0821 12:42:13.454604 1067 api_server.go:101] status: https://127.0.0.1:49630/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0821 12:42:13.944496 1067 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:49630/healthz ...
I0821 12:42:13.952635 1067 api_server.go:265] https://127.0.0.1:49630/healthz returned 200:
ok
I0821 12:42:13.965196 1067 api_server.go:139] control plane version: v1.21.2
I0821 12:42:13.965206 1067 api_server.go:129] duration metric: took 6.024864949s to wait for apiserver health ...
I0821 12:42:13.965212 1067 cni.go:93] Creating CNI manager for ""
I0821 12:42:13.965217 1067 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0821 12:42:13.965486 1067 system_pods.go:43] waiting for kube-system pods to appear ...
I0821 12:42:13.986025 1067 system_pods.go:59] 7 kube-system pods found
I0821 12:42:13.986040 1067 system_pods.go:61] "coredns-558bd4d5db-r6nfd" [95f43f94-9b3c-4b6a-afee-8fefa50d0ae5] Running
I0821 12:42:13.986043 1067 system_pods.go:61] "etcd-minikube" [0cf97309-19a1-4cf7-afb2-b5c74a2067bd] Running
I0821 12:42:13.986045 1067 system_pods.go:61] "kube-apiserver-minikube" [4854425b-1ec6-4f28-af40-9393a8b2c8e9] Running
I0821 12:42:13.986047 1067 system_pods.go:61] "kube-controller-manager-minikube" [db9eb2e3-6107-4065-a42d-bb23566458b4] Running
I0821 12:42:13.986050 1067 system_pods.go:61] "kube-proxy-h2hg7" [c9a65632-42c9-4d9c-b81e-b9dfbbe31590] Running
I0821 12:42:13.986052 1067 system_pods.go:61] "kube-scheduler-minikube" [1aeaad20-164f-4f71-a47e-eb12e220a050] Running
I0821 12:42:13.986054 1067 system_pods.go:61] "storage-provisioner" [71b7cf1e-1033-4bd4-b21b-51cea4e8862c] Running
I0821 12:42:13.986057 1067 system_pods.go:74] duration metric: took 20.567053ms to wait for pod list to return data ...
I0821 12:42:13.986062 1067 node_conditions.go:102] verifying NodePressure condition ...
I0821 12:42:13.990818 1067 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0821 12:42:13.990836 1067 node_conditions.go:123] node cpu capacity is 6
I0821 12:42:13.991091 1067 node_conditions.go:105] duration metric: took 5.024841ms to run NodePressure ...
I0821 12:42:13.991101 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0821 12:42:14.253062 1067 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0821 12:42:14.267739 1067 ops.go:34] apiserver oom_adj: -16
I0821 12:42:14.267750 1067 kubeadm.go:604] restartCluster took 18.909076608s
I0821 12:42:14.267755 1067 kubeadm.go:392] StartCluster complete in 18.956396884s
I0821 12:42:14.267765 1067 settings.go:142] acquiring lock: {Name:mk85e8bd2b3c0b7ecc8c1e41c3a838e2660ab589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0821 12:42:14.267964 1067 settings.go:150] Updating kubeconfig: /Users/utkarsh/.kube/config
I0821 12:42:14.270232 1067 lock.go:36] WriteFile acquiring /Users/utkarsh/.kube/config: {Name:mk0797a44a7ed922fc4d9086469815144bccc5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0821 12:42:14.279153 1067 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0821 12:42:14.279471 1067 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0821 12:42:14.279567 1067 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
I0821 12:42:14.280247 1067 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0821 12:42:14.303435 1067 out.go:165] 🔎 Verifying Kubernetes components...
I0821 12:42:14.303601 1067 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0821 12:42:14.303765 1067 addons.go:59] Setting default-storageclass=true in profile "minikube"
I0821 12:42:14.303765 1067 addons.go:59] Setting storage-provisioner=true in profile "minikube"
I0821 12:42:14.304075 1067 addons.go:135] Setting addon storage-provisioner=true in "minikube"
W0821 12:42:14.304085 1067 addons.go:147] addon storage-provisioner should already be in state true
I0821 12:42:14.304083 1067 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0821 12:42:14.304280 1067 host.go:66] Checking if "minikube" exists ...
I0821 12:42:14.325636 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0821 12:42:14.327478 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0821 12:42:14.566880 1067 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0821 12:42:14.567215 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0821 12:42:14.804577 1067 out.go:165] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0821 12:42:14.792643 1067 api_server.go:50] waiting for apiserver process to appear ...
I0821 12:42:14.804801 1067 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0821 12:42:14.804809 1067 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0821 12:42:14.804838 1067 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0821 12:42:14.804984 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:42:14.830517 1067 addons.go:135] Setting addon default-storageclass=true in "minikube"
W0821 12:42:14.830532 1067 addons.go:147] addon default-storageclass should already be in state true
I0821 12:42:14.830550 1067 host.go:66] Checking if "minikube" exists ...
I0821 12:42:14.831103 1067 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0821 12:42:14.837222 1067 api_server.go:70] duration metric: took 557.63105ms to wait for apiserver process to appear ...
I0821 12:42:14.837243 1067 api_server.go:86] waiting for apiserver healthz status ...
I0821 12:42:14.837251 1067 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:49630/healthz ...
I0821 12:42:14.847551 1067 api_server.go:265] https://127.0.0.1:49630/healthz returned 200:
ok
I0821 12:42:14.849795 1067 api_server.go:139] control plane version: v1.21.2
I0821 12:42:14.849806 1067 api_server.go:129] duration metric: took 12.559277ms to wait for apiserver health ...
I0821 12:42:14.849811 1067 system_pods.go:43] waiting for kube-system pods to appear ...
I0821 12:42:14.858761 1067 system_pods.go:59] 7 kube-system pods found
I0821 12:42:14.858781 1067 system_pods.go:61] "coredns-558bd4d5db-r6nfd" [95f43f94-9b3c-4b6a-afee-8fefa50d0ae5] Running
I0821 12:42:14.858786 1067 system_pods.go:61] "etcd-minikube" [0cf97309-19a1-4cf7-afb2-b5c74a2067bd] Running
I0821 12:42:14.858792 1067 system_pods.go:61] "kube-apiserver-minikube" [4854425b-1ec6-4f28-af40-9393a8b2c8e9] Running
I0821 12:42:14.858797 1067 system_pods.go:61] "kube-controller-manager-minikube" [db9eb2e3-6107-4065-a42d-bb23566458b4] Running
I0821 12:42:14.858812 1067 system_pods.go:61] "kube-proxy-h2hg7" [c9a65632-42c9-4d9c-b81e-b9dfbbe31590] Running
I0821 12:42:14.858862 1067 system_pods.go:61] "kube-scheduler-minikube" [1aeaad20-164f-4f71-a47e-eb12e220a050] Running
I0821 12:42:14.858872 1067 system_pods.go:61] "storage-provisioner" [71b7cf1e-1033-4bd4-b21b-51cea4e8862c] Running
I0821 12:42:14.858877 1067 system_pods.go:74] duration metric: took 9.062ms to wait for pod list to return data ...
I0821 12:42:14.858883 1067 kubeadm.go:547] duration metric: took 579.298786ms to wait for : map[apiserver:true system_pods:true] ...
I0821 12:42:14.858897 1067 node_conditions.go:102] verifying NodePressure condition ...
I0821 12:42:14.864418 1067 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0821 12:42:14.864432 1067 node_conditions.go:123] node cpu capacity is 6
I0821 12:42:14.864448 1067 node_conditions.go:105] duration metric: took 5.546524ms to run NodePressure ...
I0821 12:42:14.864457 1067 start.go:225] waiting for startup goroutines ...
I0821 12:42:15.040540 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:42:15.067290 1067 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0821 12:42:15.067300 1067 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0821 12:42:15.067460 1067 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 12:42:15.202715 1067 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0821 12:42:15.370078 1067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49631 SSHKeyPath:/Users/utkarsh/.minikube/machines/minikube/id_rsa Username:docker}
I0821 12:42:15.588220 1067 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0821 12:42:15.979260 1067 out.go:165] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0821 12:42:15.979300 1067 addons.go:344] enableAddons completed in 1.699574243s
I0821 12:42:16.276896 1067 start.go:462] kubectl: 1.21.3, cluster: 1.21.2 (minor skew: 0)
I0821 12:42:16.303426 1067 out.go:165] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
==> Docker <==
-- Logs begin at Sat 2021-08-21 19:41:47 UTC, end at Sat 2021-08-21 19:57:15 UTC. --
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.448162799Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.452911213Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.466566432Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.466657379Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.466892356Z" level=info msg="Loading containers: start."
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.639355876Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.692823245Z" level=info msg="Loading containers: done."
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.712244998Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.712337573Z" level=info msg="Daemon has completed initialization"
Aug 21 19:41:51 minikube systemd[1]: Started Docker Application Container Engine.
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.746854502Z" level=info msg="API listen on [::]:2376"
Aug 21 19:41:51 minikube dockerd[531]: time="2021-08-21T19:41:51.750089491Z" level=info msg="API listen on /var/run/docker.sock"
Aug 21 19:42:41 minikube dockerd[531]: time="2021-08-21T19:42:41.301248206Z" level=info msg="ignoring event" container=8f37d162e6e43f2587724e7c54b0b706e10e62c3c55e5db37566b80925ff957a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:42:41 minikube dockerd[531]: time="2021-08-21T19:42:41.321640055Z" level=info msg="ignoring event" container=abeea67bdb75c534ee19d1e6aedb0ea2d34abe02c77ab8627fdf5420e44caf94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:42:41 minikube dockerd[531]: time="2021-08-21T19:42:41.403283456Z" level=info msg="ignoring event" container=9d2cb240bcd992fdfeafc715d9d48aa53c19b63487914ce678b0f3d5a7021465 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:42:41 minikube dockerd[531]: time="2021-08-21T19:42:41.413333361Z" level=info msg="ignoring event" container=7e2f2a1a79ed0f59744811dbd20f688442ef02590eb914514fe299c00ef5be01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:42:41 minikube dockerd[531]: time="2021-08-21T19:42:41.423316723Z" level=info msg="ignoring event" container=d67fac3f46aa99a05141cdb5261093f61b045488c0b13a84d19a1f952d168498 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:42:42 minikube dockerd[531]: time="2021-08-21T19:42:42.002460457Z" level=info msg="ignoring event" container=efd048516bbe96259910010f30db6626933691c046964008561f0268ba404764 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:43:03 minikube dockerd[531]: time="2021-08-21T19:43:03.021601563Z" level=info msg="ignoring event" container=86e24ee284b3a766bcd10788c5df0496832703d4b63b1477110d20c3ceb48ba7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:43:28 minikube dockerd[531]: time="2021-08-21T19:43:28.915813497Z" level=info msg="ignoring event" container=6e60e097528814ee6046aad5b938e99c917969d4a87f516540dc5fabdfc442a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:00 minikube dockerd[531]: time="2021-08-21T19:44:00.348822620Z" level=info msg="ignoring event" container=87d5e6b3dd4f67317e794774f268ad10dbd7aa17f7849b7b02bc53fb02d6fe6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:01 minikube dockerd[531]: time="2021-08-21T19:44:01.246091039Z" level=info msg="ignoring event" container=d4960577b1b63306a8437c182f69269309904be2d39940abb39dcc2e91ffc4cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:01 minikube dockerd[531]: time="2021-08-21T19:44:01.277625088Z" level=info msg="ignoring event" container=bbfcfc10396837234a65fe641f372c860596c0e9797cdc7d837aae582585b666 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:01 minikube dockerd[531]: time="2021-08-21T19:44:01.313272215Z" level=info msg="ignoring event" container=61bac67854b0adffe9b1ce7a27a95ea36b2ef2bc702da3e60f05fc326005c65d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:01 minikube dockerd[531]: time="2021-08-21T19:44:01.341847243Z" level=info msg="ignoring event" container=167d029f578f0439e9b723a2e203ac83003e2d65b23f7848315360f58c8614b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:01 minikube dockerd[531]: time="2021-08-21T19:44:01.374223342Z" level=info msg="ignoring event" container=dc72f89492c5e23fe06cb1bf92be1c934eb01ec7cabc9fbbb7c79ded93c1d991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:44:05 minikube dockerd[531]: time="2021-08-21T19:44:05.994644778Z" level=info msg="ignoring event" container=417fd367bc2b7354c294d8e55746859d32c1f4e9e52ec56b4d4d01d2dcf1a93f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:45:06 minikube dockerd[531]: time="2021-08-21T19:45:06.747899930Z" level=info msg="ignoring event" container=905d7b33b8e157f7979ab857c4782d65e0bdcd96ebf7c2a3b30de773ef721b53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:45:21 minikube dockerd[531]: time="2021-08-21T19:45:21.177616521Z" level=info msg="ignoring event" container=4d6e320d135b1360d8cf364ead3a3b3dc556a9a6f508c8fff87fe90e945fcb2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:45:21 minikube dockerd[531]: time="2021-08-21T19:45:21.303193219Z" level=info msg="ignoring event" container=9fe32dbf513fe73afd26fdc5bd8211eca6d13645a332faf0ab933fd17b0c37b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:45:21 minikube dockerd[531]: time="2021-08-21T19:45:21.409193100Z" level=info msg="ignoring event" container=bf7b050b717b674c1a85bd145a7751836e598529cfd6e7a7102188158ef0787b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:45:21 minikube dockerd[531]: time="2021-08-21T19:45:21.412059571Z" level=info msg="ignoring event" container=682c59e3e9d493ce582d41da18f9b40c74fb4c407b3ac02ade6ff68645b972e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:45:21 minikube dockerd[531]: time="2021-08-21T19:45:21.424256465Z" level=info msg="ignoring event" container=551c9d9d9140db6f8798954441193c9855ee8e0459c7c881c045c5e031575630 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:44 minikube dockerd[531]: time="2021-08-21T19:46:44.429338927Z" level=info msg="ignoring event" container=f99fdd5c61f5f659b1142948a95af3ba66b2ad4c16faf1569383405d12044e5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:44 minikube dockerd[531]: time="2021-08-21T19:46:44.456296428Z" level=info msg="ignoring event" container=37e40bb6eb3ccdb8c893415b273bf4542b704b30fe147f66414f8e57ecc54e05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:46 minikube dockerd[531]: time="2021-08-21T19:46:46.082838905Z" level=info msg="ignoring event" container=f9acbc3a1f78dd3f37b2863483485599aa8f92318d47415d6c4a3aa075632591 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:46 minikube dockerd[531]: time="2021-08-21T19:46:46.288330494Z" level=info msg="ignoring event" container=6834a82aa44ca358323447b573a431f6d4bbaefb136332c94a7a8dfec3262716 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:46 minikube dockerd[531]: time="2021-08-21T19:46:46.497288355Z" level=info msg="ignoring event" container=ff25913ea8665ad3360efb39f7f9bd3c3adff21f05b5808c9850e2b315c6f7a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:46 minikube dockerd[531]: time="2021-08-21T19:46:46.540780819Z" level=info msg="ignoring event" container=565afcd64f3c4bb9494410773a37cde4c6c2798bc6660cee92bea0e26ed255f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:46:50 minikube dockerd[531]: time="2021-08-21T19:46:50.935254578Z" level=info msg="ignoring event" container=e0d8096fe5718bb87d590d2e146e0045e730ab6b3d0edc56a11ed7bf0bc34c97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:48:18 minikube dockerd[531]: time="2021-08-21T19:48:18.851478351Z" level=info msg="ignoring event" container=ef9ea1a7fc16158f73a5eb961c64159ac929a316114edbed4b62b0a749656e0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:48:23 minikube dockerd[531]: time="2021-08-21T19:48:23.012025560Z" level=info msg="ignoring event" container=998d7c0aa9fa8beef22d17db9edf48aab4681aee123b1a9af05ea6f05ff75ccc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:48:25 minikube dockerd[531]: time="2021-08-21T19:48:25.467509490Z" level=info msg="ignoring event" container=73a535d3b7a612240a8e128161cc824507b0408cb076742df35ed5bc55e47bfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:48:25 minikube dockerd[531]: time="2021-08-21T19:48:25.493324035Z" level=info msg="ignoring event" container=f54f24bbae3e2b382dcb6b1f4c1613c87b8d6a5f45d2c21c0c476a6fa2a86298 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:48:25 minikube dockerd[531]: time="2021-08-21T19:48:25.740015802Z" level=info msg="ignoring event" container=13186ba4966b3d66f789b5dc5d50172a7d85d910d1e4c3925ff3f6df62e98dfd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:48:26 minikube dockerd[531]: time="2021-08-21T19:48:26.126441577Z" level=info msg="ignoring event" container=76edad4fe4e5fe604701dc71183f44a08268b3a43dcc9cb782cf9d508592f9df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:49:47 minikube dockerd[531]: time="2021-08-21T19:49:47.252053459Z" level=info msg="ignoring event" container=5956474789d62073d068ca8962f2cae9316b40b0e399f14a29ea926a0051f18a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:50:12 minikube dockerd[531]: time="2021-08-21T19:50:12.989934026Z" level=info msg="ignoring event" container=34fcd83bb388d9c034936fca711cd28e0cd12469e3a8ad21afc947d90d4cdb0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:50:19 minikube dockerd[531]: time="2021-08-21T19:50:19.659526991Z" level=info msg="ignoring event" container=cd87b894f9ba745f5ac52e7682c188a610ca2e73fc4b7ec6689d4dbb561c514c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:50:23 minikube dockerd[531]: time="2021-08-21T19:50:23.051809454Z" level=info msg="ignoring event" container=0a8ec587806d8ceb19f17263c1f1cb0a06024c92064f1586da95a54181923469 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:50:26 minikube dockerd[531]: time="2021-08-21T19:50:26.791165074Z" level=info msg="ignoring event" container=d454328128a3bc1f14c394ae72522b8d40572b6507dab379dbd602eec76d7376 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:50:27 minikube dockerd[531]: time="2021-08-21T19:50:27.250998031Z" level=info msg="ignoring event" container=6b964c9f647fa8712c02b855ff339fc2418f23594706d5a0251f8b980eaf8b97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:50:27 minikube dockerd[531]: time="2021-08-21T19:50:27.563655835Z" level=info msg="ignoring event" container=caaaf6b80d7d611b4eb24e286e18e443cb8095edacf18cab0adda3b20debfd56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:52:52 minikube dockerd[531]: time="2021-08-21T19:52:52.700589011Z" level=info msg="ignoring event" container=3af30d0630b314ef034aff7bc7da5fa8f938676f72cca5ba3ad4d3669ad80b72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:52:56 minikube dockerd[531]: time="2021-08-21T19:52:56.829309210Z" level=info msg="ignoring event" container=78e77318e2e95d11ee1d3784f94e8aeb41fbf7d9918c37f7f42ea0f71691221d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:52:58 minikube dockerd[531]: time="2021-08-21T19:52:58.793615101Z" level=info msg="ignoring event" container=8b43cad73163a7d513ede75ac852fd4789fcb22cd6a8c6517f59637d8e20d0b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:53:00 minikube dockerd[531]: time="2021-08-21T19:53:00.803534863Z" level=info msg="ignoring event" container=4ff50e8344c7776fe75c165265a88f8504475ae671e656edb8b91a0377dd8692 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:53:04 minikube dockerd[531]: time="2021-08-21T19:53:04.345156347Z" level=info msg="ignoring event" container=4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:53:04 minikube dockerd[531]: time="2021-08-21T19:53:04.674450587Z" level=info msg="ignoring event" container=f84076c8216148f85842428d0aaad30506ee1690eb0c89b86fa2b5fb233158f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 21 19:56:28 minikube dockerd[531]: time="2021-08-21T19:56:28.776928656Z" level=info msg="ignoring event" container=1739a6b978b936b544093a8c5bf1ef1ea71ff3a95075cd764b505bac9ea712b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
a89b4b8c81689 f334317d64422 About a minute ago Running pgbouncer 8 2acdf25ec4e2e
19efdffeb1193 0710e5e3a4efb About a minute ago Running wait-for-db-migrations 6 540bae572fa9c
512d811d29654 0710e5e3a4efb About a minute ago Running wait-for-db-migrations 6 99f8f122293f2
e81c428ccc0e8 0710e5e3a4efb About a minute ago Running wait-for-db-migrations 6 9b5916b127a50
437f942fb76db 0710e5e3a4efb About a minute ago Running wait-for-db-migrations 6 17c41622902b2
6c1bde3895720 0710e5e3a4efb About a minute ago Running wait-for-db-migrations 6 db2faf1bd7c53
1739a6b978b93 0710e5e3a4efb 2 minutes ago Exited db-migrations 11 fc3526df7176b
f84076c821614 0710e5e3a4efb 5 minutes ago Exited wait-for-db-migrations 5 540bae572fa9c
4ff50e8344c77 0710e5e3a4efb 5 minutes ago Exited wait-for-db-migrations 5 9b5916b127a50
8b43cad73163a 0710e5e3a4efb 5 minutes ago Exited wait-for-db-migrations 5 99f8f122293f2
78e77318e2e95 0710e5e3a4efb 5 minutes ago Exited wait-for-db-migrations 5 17c41622902b2
3af30d0630b31 0710e5e3a4efb 5 minutes ago Exited wait-for-db-migrations 5 db2faf1bd7c53
4bdf541ad84b4 f334317d64422 6 minutes ago Exited pgbouncer 7 2acdf25ec4e2e
6d06542a8c455 724b2b2c7f0b9 14 minutes ago Running airflow-postgresql 1 f46fe0f5db6d8
abeea67bdb75c 0710e5e3a4efb 14 minutes ago Exited check-db 6 99f8f122293f2
7e2f2a1a79ed0 0710e5e3a4efb 14 minutes ago Exited check-db 2 fc3526df7176b
9d2cb240bcd99 0710e5e3a4efb 14 minutes ago Exited check-db 6 17c41622902b2
efd048516bbe9 0710e5e3a4efb 14 minutes ago Exited check-db 5 9b5916b127a50
6a48db1cd44df 6e38f40d628db 14 minutes ago Running storage-provisioner 8 8e87e288ca7d0
f0d6e0b29d4f1 296a6d5035e2d 14 minutes ago Running coredns 5 f33deaf5bbc64
8f37d162e6e43 0710e5e3a4efb 14 minutes ago Exited check-db 6 db2faf1bd7c53
d67fac3f46aa9 0710e5e3a4efb 15 minutes ago Exited check-db 5 540bae572fa9c
ee269695c8d72 364a8748d03dd 15 minutes ago Running airflow-redis 3 9f245977a8e26
b91f5a619ec08 a6ebd1c1ad981 15 minutes ago Running kube-proxy 5 7bc4dd0580a3a
72e0673135519 f917b8c8f55b7 15 minutes ago Running kube-scheduler 5 4fe160c7eda66
a89f8b7d8f831 106ff58d43082 15 minutes ago Running kube-apiserver 5 4df4b40a9ab38
ee32e92db182c ae24db9aa2cc0 15 minutes ago Running kube-controller-manager 5 13656b5a838fb
eb98dffe44a92 0369cf4303ffd 15 minutes ago Running etcd 5 8c95a0f0c5186
025f8f2e72ef4 724b2b2c7f0b9 9 hours ago Exited airflow-postgresql 0 83ff851ab7db5
4a6903c286e6a 6e38f40d628db 9 hours ago Exited storage-provisioner 7 54a9782220a83
fd4b73dca4ecf 296a6d5035e2d 9 hours ago Exited coredns 4 08055f785a778
0e3fde55dae2b 364a8748d03dd 9 hours ago Exited airflow-redis 2 e8e09cae6b141
0b07a4b722e40 a6ebd1c1ad981 9 hours ago Exited kube-proxy 4 9fcf58d0bebbb
aa64b639b32d0 f917b8c8f55b7 9 hours ago Exited kube-scheduler 4 1fd3c7de43e61
e664f17ccfc25 ae24db9aa2cc0 9 hours ago Exited kube-controller-manager 4 d099c6cec5eae
e28713f28618c 0369cf4303ffd 9 hours ago Exited etcd 4 d258f6d7d7a86
a60060ad503aa 106ff58d43082 9 hours ago Exited kube-apiserver 4 7eae311d421d6
==> coredns [f0d6e0b29d4f] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
==> coredns [fd4b73dca4ec] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: minikube
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=a03fbcf166e6f74ef224d4a63be4277d017bb62e
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2021_08_21T03_24_08_0700
minikube.k8s.io/version=v1.22.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 21 Aug 2021 10:24:04 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Sat, 21 Aug 2021 19:57:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Sat, 21 Aug 2021 19:52:16 +0000 Sat, 21 Aug 2021 10:23:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 21 Aug 2021 19:52:16 +0000 Sat, 21 Aug 2021 10:23:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 21 Aug 2021 19:52:16 +0000 Sat, 21 Aug 2021 10:23:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 21 Aug 2021 19:52:16 +0000 Sat, 21 Aug 2021 10:24:19 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 6
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 6344644Ki
pods: 110
Allocatable:
cpu: 6
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 6344644Ki
pods: 110
System Info:
Machine ID: 760e67beb8554645829f2357c8eb4ae7
System UUID: e7462537-5f92-44a1-a0fe-e746a3d55b59
Boot ID: 9c7e57ad-bbd5-4b20-a92d-4c0b2edf87ea
Kernel Version: 5.10.47-linuxkit
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.2
Kube-Proxy Version: v1.21.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
airflow airflow-db-migrations-7cbbffc6bd-hdvqb 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8h
airflow airflow-flower-668dff7db5-qtqxw 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
airflow airflow-pgbouncer-79f86d9fc-5rnjl 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 8h
airflow airflow-postgresql-0 250m (4%!)(MISSING) 0 (0%!)(MISSING) 256Mi (4%!)(MISSING) 0 (0%!)(MISSING) 8h
airflow airflow-redis-master-0 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
airflow airflow-scheduler-6cb5788859-fcvdg 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
airflow airflow-sync-users-666d766475-fwm2q 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
airflow airflow-web-6c94784c64-h46bf 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
airflow airflow-worker-0 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
kube-system coredns-558bd4d5db-r6nfd 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (2%!)(MISSING) 9h
kube-system etcd-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 9h
kube-system kube-apiserver-minikube 250m (4%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
kube-system kube-controller-manager-minikube 200m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
kube-system kube-proxy-h2hg7 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
kube-system kube-scheduler-minikube 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 9h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 1 (16%!)(MISSING) 0 (0%!)(MISSING)
memory 426Mi (6%!)(MISSING) 170Mi (2%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
Normal NodeHasNoDiskPressure 9h (x5 over 9h) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9h (x5 over 9h) kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 9h (x5 over 9h) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9h kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 9h kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9h kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 9h kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeNotReady 9h kubelet Node minikube status is now: NodeNotReady
Normal NodeAllocatableEnforced 9h kubelet Updated Node Allocatable limit across pods
Normal NodeReady 9h kubelet Node minikube status is now: NodeReady
Normal Starting 9h kube-proxy Starting kube-proxy.
Normal Starting 9h kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9h (x8 over 9h) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9h (x8 over 9h) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9h (x7 over 9h) kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9h kubelet Updated Node Allocatable limit across pods
Normal Starting 9h kube-proxy Starting kube-proxy.
Normal Starting 9h kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9h (x8 over 9h) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9h (x8 over 9h) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeAllocatableEnforced 9h kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 9h (x7 over 9h) kubelet Node minikube status is now: NodeHasSufficientPID
Normal Starting 9h kube-proxy Starting kube-proxy.
Normal Starting 8h kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8h (x8 over 8h) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 8h (x7 over 8h) kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8h kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 8h (x8 over 8h) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 8h kube-proxy Starting kube-proxy.
Normal NodeAllocatableEnforced 8h kubelet Updated Node Allocatable limit across pods
Normal Starting 8h kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8h (x8 over 8h) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 8h (x7 over 8h) kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 8h (x8 over 8h) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 8h kube-proxy Starting kube-proxy.
Normal NodeHasSufficientMemory 15m (x8 over 15m) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m (x8 over 15m) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m (x7 over 15m) kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet Updated Node Allocatable limit across pods
Normal Starting 15m kubelet Starting kubelet.
Normal Starting 15m kube-proxy Starting kube-proxy.
==> dmesg <==
[Aug21 19:41] ERROR: earlyprintk= earlyser already used
[ +0.000000] ERROR: earlyprintk= earlyser already used
[ +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0x7E, should be 0xDB (20200925/tbprint-173)
[ +0.201846] #2
[ +0.062993] #3
[ +0.063005] #4
[ +0.062998] #5
[ +2.064487] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.032110] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[ +0.002025] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +4.321105] grpcfuse: loading out-of-tree module taints kernel.
==> etcd [e28713f28618] <==
2021-08-21 11:24:40.309990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:24:50.309158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:25:00.311003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:25:10.282871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:25:20.275883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:25:30.276730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:25:40.242897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:25:50.242700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:26:00.243082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:26:10.208487 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:26:20.209192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:26:30.209427 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:26:40.173850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:26:50.174433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:27:00.175695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:27:10.140753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:27:14.713026 I | mvcc: store.index: compact 6813
2021-08-21 11:27:14.732688 I | mvcc: finished scheduled compaction at 6813 (took 19.1594ms)
2021-08-21 11:27:20.142149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:27:30.140453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:27:40.106665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:27:50.106452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:28:00.107385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:28:10.074172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:28:20.073035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:28:30.073165 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:28:40.038928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:28:50.040853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:00.040562 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:09.954851 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:19.953786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:29.952640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:39.917829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:49.918165 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:29:59.919114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:30:09.884259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:30:19.883135 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:30:29.884385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:30:39.850424 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:30:49.847735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:30:59.848589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:31:09.814820 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:31:19.815122 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:31:29.815539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:31:39.778994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:31:49.780227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:31:59.778903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:32:09.749111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:32:14.324005 I | mvcc: store.index: compact 7308
2021-08-21 11:32:14.340648 I | mvcc: finished scheduled compaction at 7308 (took 14.5557ms)
2021-08-21 11:32:19.743973 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:32:29.744708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:32:39.709957 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:32:49.710474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:32:59.709544 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:33:09.674400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:33:19.675040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 11:33:25.185278 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2021/08/21 11:33:25 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2021-08-21 11:33:25.200630 I | etcdserver: skipped leadership transfer for single voting member cluster
==> etcd [eb98dffe44a9] <==
2021-08-21 19:48:34.903583 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:48:44.882238 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:48:54.882271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:49:03.524325 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)
2021-08-21 19:49:03.526955 I | etcdserver: saved snapshot at index 10001
2021-08-21 19:49:03.527123 I | etcdserver: compacted raft log at 5001
2021-08-21 19:49:04.882187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:49:14.861257 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:49:24.861597 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:49:34.860440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:49:44.839421 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:49:54.839059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:50:04.839953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:50:14.818903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:50:24.817084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:50:34.819289 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:50:44.796799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:50:54.795863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:51:04.796042 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:51:14.775306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:51:24.775667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:51:34.774711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:51:44.753522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:51:54.753249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:52:04.754367 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:52:09.452968 I | mvcc: store.index: compact 8354
2021-08-21 19:52:09.468754 I | mvcc: finished scheduled compaction at 8354 (took 15.299071ms)
2021-08-21 19:52:14.733049 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:52:24.732062 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:52:34.732475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:52:44.709767 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:52:54.711169 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:53:04.710773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:53:14.688300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:53:24.689316 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:53:34.689269 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:53:44.668052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:53:54.667382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:54:04.666949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:54:14.645816 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:54:24.645722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:54:34.646147 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:54:44.624596 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:54:54.624984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:55:04.624589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:55:14.603448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:55:24.603170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:55:34.602831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:55:44.581462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:55:54.581021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:56:04.581375 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:56:14.572724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:56:24.559686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:56:34.560954 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:56:44.539888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:56:54.538874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:57:04.539067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-21 19:57:09.244139 I | mvcc: store.index: compact 8680
2021-08-21 19:57:09.258306 I | mvcc: finished scheduled compaction at 8680 (took 13.255604ms)
2021-08-21 19:57:14.517201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
19:57:16 up 15 min, 0 users, load average: 0.89, 0.73, 0.54
Linux minikube 5.10.47-linuxkit #1 SMP Sat Jul 3 21:51:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
==> kube-apiserver [a60060ad503a] <==
W0821 11:33:33.762602 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.791945 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.838209 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.841408 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.874337 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.895703 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.925700 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.925965 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.926733 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.960084 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:33.983300 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.003180 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.053926 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.057206 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.110521 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.144121 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.147923 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.162330 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.170474 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.187668 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.207822 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.229712 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.254200 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.274525 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.284197 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.285861 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.294306 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.326012 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.348876 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.382770 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.395271 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.409448 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.416994 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.420312 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.428114 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.454282 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.454304 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.492149 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.529242 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.547608 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.553035 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.553035 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.573303 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.611815 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.640713 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.659420 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.670666 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.690065 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.714582 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.714596 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.739372 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.806469 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.848305 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.858083 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:34.991543 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:35.008164 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:35.170797 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:35.214460 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:35.242303 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0821 11:33:35.247161 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
==> kube-apiserver [a89f8b7d8f83] <==
I0821 19:45:11.296462 1 client.go:360] parsed scheme: "passthrough"
I0821 19:45:11.296524 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:45:11.296533 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:45:54.657415 1 client.go:360] parsed scheme: "passthrough"
I0821 19:45:54.657479 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:45:54.657488 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:46:27.797238 1 client.go:360] parsed scheme: "passthrough"
I0821 19:46:27.797339 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:46:27.797361 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:47:04.339564 1 client.go:360] parsed scheme: "passthrough"
I0821 19:47:04.339915 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:47:04.340002 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:47:48.515263 1 client.go:360] parsed scheme: "passthrough"
I0821 19:47:48.515340 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:47:48.515350 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:48:30.377059 1 client.go:360] parsed scheme: "passthrough"
I0821 19:48:30.377111 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:48:30.377118 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:49:02.292238 1 client.go:360] parsed scheme: "passthrough"
I0821 19:49:02.292307 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:49:02.292316 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:49:41.606806 1 client.go:360] parsed scheme: "passthrough"
I0821 19:49:41.606861 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:49:41.606869 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:50:16.191718 1 client.go:360] parsed scheme: "passthrough"
I0821 19:50:16.191771 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:50:16.191778 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:50:56.431890 1 client.go:360] parsed scheme: "passthrough"
I0821 19:50:56.432158 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:50:56.432211 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:51:40.969261 1 client.go:360] parsed scheme: "passthrough"
I0821 19:51:40.969358 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:51:40.969365 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:52:15.845822 1 client.go:360] parsed scheme: "passthrough"
I0821 19:52:15.845878 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:52:15.845889 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:52:54.681595 1 client.go:360] parsed scheme: "passthrough"
I0821 19:52:54.681675 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:52:54.681683 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:53:27.989175 1 client.go:360] parsed scheme: "passthrough"
I0821 19:53:27.989247 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:53:27.989262 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:54:06.576815 1 client.go:360] parsed scheme: "passthrough"
I0821 19:54:06.576865 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:54:06.576872 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:54:50.155802 1 client.go:360] parsed scheme: "passthrough"
I0821 19:54:50.155860 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:54:50.155868 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:55:23.358308 1 client.go:360] parsed scheme: "passthrough"
I0821 19:55:23.358743 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:55:23.359230 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:55:53.436569 1 client.go:360] parsed scheme: "passthrough"
I0821 19:55:53.436619 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:55:53.436627 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:56:23.534463 1 client.go:360] parsed scheme: "passthrough"
I0821 19:56:23.534521 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:56:23.534530 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0821 19:57:03.940252 1 client.go:360] parsed scheme: "passthrough"
I0821 19:57:03.940351 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0821 19:57:03.940383 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [e664f17ccfc2] <==
I0821 11:17:31.531508 1 range_allocator.go:172] Starting range CIDR allocator
I0821 11:17:31.531541 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0821 11:17:31.531574 1 shared_informer.go:247] Caches are synced for cidrallocator
I0821 11:17:31.539118 1 shared_informer.go:247] Caches are synced for taint
I0821 11:17:31.539328 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
W0821 11:17:31.539432 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0821 11:17:31.539467 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0821 11:17:31.539907 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0821 11:17:31.540226 1 shared_informer.go:247] Caches are synced for ReplicationController
I0821 11:17:31.540611 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0821 11:17:31.544311 1 shared_informer.go:247] Caches are synced for daemon sets
I0821 11:17:31.545409 1 shared_informer.go:247] Caches are synced for PVC protection
I0821 11:17:31.547653 1 shared_informer.go:247] Caches are synced for job
I0821 11:17:31.548142 1 shared_informer.go:247] Caches are synced for cronjob
I0821 11:17:31.549130 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0821 11:17:31.552798 1 shared_informer.go:247] Caches are synced for TTL
I0821 11:17:31.552982 1 shared_informer.go:247] Caches are synced for service account
I0821 11:17:31.553004 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0821 11:17:31.556596 1 shared_informer.go:247] Caches are synced for ephemeral
I0821 11:17:31.557652 1 shared_informer.go:247] Caches are synced for endpoint
I0821 11:17:31.561644 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0821 11:17:31.564201 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0821 11:17:31.564445 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0821 11:17:31.564681 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0821 11:17:31.564809 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0821 11:17:31.570105 1 shared_informer.go:247] Caches are synced for TTL after finished
I0821 11:17:31.570331 1 shared_informer.go:247] Caches are synced for stateful set
I0821 11:17:31.580432 1 shared_informer.go:247] Caches are synced for crt configmap
I0821 11:17:31.586082 1 shared_informer.go:247] Caches are synced for GC
I0821 11:17:31.643479 1 shared_informer.go:247] Caches are synced for HPA
I0821 11:17:31.663916 1 shared_informer.go:247] Caches are synced for disruption
I0821 11:17:31.663946 1 disruption.go:371] Sending events to api server.
I0821 11:17:31.716949 1 shared_informer.go:247] Caches are synced for resource quota
I0821 11:17:31.723768 1 shared_informer.go:247] Caches are synced for PV protection
I0821 11:17:31.738226 1 shared_informer.go:247] Caches are synced for expand
I0821 11:17:31.742491 1 shared_informer.go:247] Caches are synced for persistent volume
I0821 11:17:31.770682 1 shared_informer.go:247] Caches are synced for resource quota
I0821 11:17:31.780842 1 shared_informer.go:247] Caches are synced for attach detach
I0821 11:17:31.869712 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0821 11:17:32.226126 1 shared_informer.go:247] Caches are synced for garbage collector
I0821 11:17:32.298804 1 shared_informer.go:247] Caches are synced for garbage collector
I0821 11:17:32.298874 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0821 11:18:22.447668 1 event.go:291] "Event occurred" object="airflow/airflow-postgresql" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod airflow-postgresql-0 in StatefulSet airflow-postgresql successful"
I0821 11:21:30.510857 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set airflow-db-migrations-7ff457564d to 0"
I0821 11:21:30.523955 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-7ff457564d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: airflow-db-migrations-7ff457564d-zqk6x"
I0821 11:21:40.175094 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set airflow-db-migrations-69989d7b6 to 1"
I0821 11:21:40.188767 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-69989d7b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: airflow-db-migrations-69989d7b6-x9xnx"
I0821 11:23:52.516749 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set airflow-db-migrations-69989d7b6 to 0"
I0821 11:23:52.534082 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-69989d7b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: airflow-db-migrations-69989d7b6-x9xnx"
I0821 11:24:02.005555 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set airflow-db-migrations-5c98b69979 to 1"
I0821 11:24:02.020030 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-5c98b69979" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: airflow-db-migrations-5c98b69979-dp4z9"
I0821 11:26:10.800396 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set airflow-db-migrations-5c98b69979 to 0"
I0821 11:26:10.815641 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-5c98b69979" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: airflow-db-migrations-5c98b69979-dp4z9"
I0821 11:26:13.970868 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set airflow-db-migrations-545995fddf to 1"
I0821 11:26:13.980571 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-545995fddf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: airflow-db-migrations-545995fddf-ls6cc"
I0821 11:28:05.246031 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set airflow-db-migrations-545995fddf to 0"
I0821 11:28:05.264234 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-545995fddf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: airflow-db-migrations-545995fddf-ls6cc"
I0821 11:28:11.743735 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set airflow-db-migrations-7cbbffc6bd to 1"
I0821 11:28:11.760914 1 event.go:291] "Event occurred" object="airflow/airflow-db-migrations-7cbbffc6bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: airflow-db-migrations-7cbbffc6bd-hdvqb"
I0821 11:29:13.925198 1 event.go:291] "Event occurred" object="airflow/airflow-pgbouncer-79f86d9fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: airflow-pgbouncer-79f86d9fc-5rnjl"
==> kube-controller-manager [ee32e92db182] <==
I0821 19:42:25.728363 1 shared_informer.go:240] Waiting for caches to sync for deployment
I0821 19:42:25.729274 1 controllermanager.go:574] Started "replicaset"
I0821 19:42:25.729402 1 replica_set.go:182] Starting replicaset controller
I0821 19:42:25.729410 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0821 19:42:25.731558 1 controllermanager.go:574] Started "pvc-protection"
I0821 19:42:25.731686 1 pvc_protection_controller.go:110] "Starting PVC protection controller"
I0821 19:42:25.731696 1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0821 19:42:25.735717 1 controllermanager.go:574] Started "endpointslicemirroring"
I0821 19:42:25.739460 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0821 19:42:25.739510 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0821 19:42:25.740812 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0821 19:42:25.759344 1 shared_informer.go:247] Caches are synced for namespace
W0821 19:42:25.761106 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0821 19:42:25.771586 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0821 19:42:25.774790 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0821 19:42:25.774918 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0821 19:42:25.776606 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0821 19:42:25.776980 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0821 19:42:25.779071 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0821 19:42:25.782389 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0821 19:42:25.784320 1 shared_informer.go:247] Caches are synced for job
I0821 19:42:25.788100 1 shared_informer.go:247] Caches are synced for taint
I0821 19:42:25.788193 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
I0821 19:42:25.788403 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0821 19:42:25.788924 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
W0821 19:42:25.788999 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0821 19:42:25.789037 1 node_lifecycle_controller.go:1214] Controller detected that zone is now in state Normal.
I0821 19:42:25.794810 1 shared_informer.go:247] Caches are synced for HPA
I0821 19:42:25.801389 1 shared_informer.go:247] Caches are synced for TTL
I0821 19:42:25.802647 1 shared_informer.go:247] Caches are synced for node
I0821 19:42:25.802707 1 range_allocator.go:172] Starting range CIDR allocator
I0821 19:42:25.802715 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0821 19:42:25.802722 1 shared_informer.go:247] Caches are synced for cidrallocator
I0821 19:42:25.811490 1 shared_informer.go:247] Caches are synced for cronjob
I0821 19:42:25.811551 1 shared_informer.go:247] Caches are synced for TTL after finished
I0821 19:42:25.812974 1 shared_informer.go:247] Caches are synced for PV protection
I0821 19:42:25.818560 1 shared_informer.go:247] Caches are synced for crt configmap
I0821 19:42:25.824074 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0821 19:42:25.828737 1 shared_informer.go:247] Caches are synced for deployment
I0821 19:42:25.828755 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0821 19:42:25.829951 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0821 19:42:25.832092 1 shared_informer.go:247] Caches are synced for service account
I0821 19:42:25.835559 1 shared_informer.go:247] Caches are synced for GC
I0821 19:42:25.852095 1 shared_informer.go:247] Caches are synced for daemon sets
I0821 19:42:25.893774 1 shared_informer.go:247] Caches are synced for expand
I0821 19:42:25.897921 1 shared_informer.go:247] Caches are synced for stateful set
I0821 19:42:25.904360 1 shared_informer.go:247] Caches are synced for attach detach
I0821 19:42:25.904700 1 shared_informer.go:247] Caches are synced for endpoint
I0821 19:42:25.907836 1 shared_informer.go:247] Caches are synced for ephemeral
I0821 19:42:25.908212 1 shared_informer.go:247] Caches are synced for persistent volume
I0821 19:42:25.935403 1 shared_informer.go:247] Caches are synced for PVC protection
I0821 19:42:25.941589 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0821 19:42:25.997649 1 shared_informer.go:247] Caches are synced for ReplicationController
I0821 19:42:26.030691 1 shared_informer.go:247] Caches are synced for disruption
I0821 19:42:26.030710 1 disruption.go:371] Sending events to api server.
I0821 19:42:26.124031 1 shared_informer.go:247] Caches are synced for resource quota
I0821 19:42:26.141149 1 shared_informer.go:247] Caches are synced for resource quota
I0821 19:42:26.548397 1 shared_informer.go:247] Caches are synced for garbage collector
I0821 19:42:26.548465 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0821 19:42:26.577204 1 shared_informer.go:247] Caches are synced for garbage collector
==> kube-proxy [0b07a4b722e4] <==
I0821 11:17:22.246200 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0821 11:17:22.246303 1 server_others.go:140] Detected node IP 192.168.49.2
W0821 11:17:22.246349 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0821 11:17:22.311851 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0821 11:17:22.312014 1 server_others.go:212] Using iptables Proxier.
I0821 11:17:22.312064 1 server_others.go:219] creating dualStackProxier for iptables.
W0821 11:17:22.312120 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0821 11:17:22.313819 1 server.go:643] Version: v1.21.2
I0821 11:17:22.314885 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0821 11:17:22.315047 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0821 11:17:22.315501 1 config.go:315] Starting service config controller
I0821 11:17:22.315595 1 shared_informer.go:240] Waiting for caches to sync for service config
I0821 11:17:22.315697 1 config.go:224] Starting endpoint slice config controller
I0821 11:17:22.315753 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0821 11:17:22.324647 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0821 11:17:22.328269 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0821 11:17:22.416227 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0821 11:17:22.416337 1 shared_informer.go:247] Caches are synced for service config
W0821 11:23:49.894051 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0821 11:31:14.333989 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
==> kube-proxy [b91f5a619ec0] <==
I0821 19:42:16.541661 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0821 19:42:16.541737 1 server_others.go:140] Detected node IP 192.168.49.2
W0821 19:42:16.541811 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0821 19:42:16.692369 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0821 19:42:16.692409 1 server_others.go:212] Using iptables Proxier.
I0821 19:42:16.692419 1 server_others.go:219] creating dualStackProxier for iptables.
W0821 19:42:16.692434 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0821 19:42:16.694780 1 server.go:643] Version: v1.21.2
I0821 19:42:16.696047 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0821 19:42:16.696119 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0821 19:42:16.699942 1 config.go:315] Starting service config controller
I0821 19:42:16.699996 1 shared_informer.go:240] Waiting for caches to sync for service config
I0821 19:42:16.700110 1 config.go:224] Starting endpoint slice config controller
I0821 19:42:16.700145 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0821 19:42:16.707518 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0821 19:42:16.712756 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0821 19:42:16.800419 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0821 19:42:16.800445 1 shared_informer.go:247] Caches are synced for service config
W0821 19:48:00.478063 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0821 19:54:46.179353 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
==> kube-scheduler [72e067313551] <==
I0821 19:42:08.395854 1 serving.go:347] Generated self-signed cert in-memory
W0821 19:42:12.600304 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0821 19:42:12.600415 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0821 19:42:12.600423 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0821 19:42:12.600429 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0821 19:42:12.633727 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0821 19:42:12.633815 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0821 19:42:12.633746 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0821 19:42:12.634847 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0821 19:42:12.735943 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [aa64b639b32d] <==
I0821 11:17:15.286413 1 serving.go:347] Generated self-signed cert in-memory
W0821 11:17:18.616100 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0821 11:17:18.616352 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0821 11:17:18.616454 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0821 11:17:18.616553 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0821 11:17:18.649821 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0821 11:17:18.650154 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0821 11:17:18.650316 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0821 11:17:18.652124 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0821 11:17:18.750618 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Sat 2021-08-21 19:41:47 UTC, end at Sat 2021-08-21 19:57:17 UTC. --
Aug 21 19:54:37 minikube kubelet[1190]: E0821 19:54:37.129182 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "pgbouncer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=pgbouncer pod=airflow-pgbouncer-79f86d9fc-5rnjl_airflow(fec41201-f7c8-48ae-9486-284d11827255)"" pod="airflow/airflow-pgbouncer-79f86d9fc-5rnjl" podUID=fec41201-f7c8-48ae-9486-284d11827255
Aug 21 19:54:42 minikube kubelet[1190]: E0821 19:54:42.127575 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-flower-668dff7db5-qtqxw_airflow(0466358d-42aa-4819-9415-a88b1aebb279)"" pod="airflow/airflow-flower-668dff7db5-qtqxw" podUID=0466358d-42aa-4819-9415-a88b1aebb279
Aug 21 19:54:44 minikube kubelet[1190]: E0821 19:54:44.127199 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-worker-0_airflow(35c1aa01-b2df-45b5-8b1d-26ac1f2d8393)"" pod="airflow/airflow-worker-0" podUID=35c1aa01-b2df-45b5-8b1d-26ac1f2d8393
Aug 21 19:54:47 minikube kubelet[1190]: E0821 19:54:47.128398 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-web-6c94784c64-h46bf_airflow(c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c)"" pod="airflow/airflow-web-6c94784c64-h46bf" podUID=c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c
Aug 21 19:54:49 minikube kubelet[1190]: E0821 19:54:49.128210 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-sync-users-666d766475-fwm2q_airflow(d995ec1f-2932-4569-a6ad-f330bdf680c2)"" pod="airflow/airflow-sync-users-666d766475-fwm2q" podUID=d995ec1f-2932-4569-a6ad-f330bdf680c2
Aug 21 19:54:49 minikube kubelet[1190]: E0821 19:54:49.128448 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-scheduler-6cb5788859-fcvdg_airflow(a4815b02-1b9b-4b23-8953-c5f2bc574a89)"" pod="airflow/airflow-scheduler-6cb5788859-fcvdg" podUID=a4815b02-1b9b-4b23-8953-c5f2bc574a89
Aug 21 19:54:50 minikube kubelet[1190]: I0821 19:54:50.127677 1190 scope.go:111] "RemoveContainer" containerID="5956474789d62073d068ca8962f2cae9316b40b0e399f14a29ea926a0051f18a"
Aug 21 19:54:50 minikube kubelet[1190]: I0821 19:54:50.127981 1190 scope.go:111] "RemoveContainer" containerID="4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26"
Aug 21 19:54:50 minikube kubelet[1190]: E0821 19:54:50.128534 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "pgbouncer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=pgbouncer pod=airflow-pgbouncer-79f86d9fc-5rnjl_airflow(fec41201-f7c8-48ae-9486-284d11827255)"" pod="airflow/airflow-pgbouncer-79f86d9fc-5rnjl" podUID=fec41201-f7c8-48ae-9486-284d11827255
Aug 21 19:54:50 minikube kubelet[1190]: I0821 19:54:50.192835 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-db-migrations-7cbbffc6bd-hdvqb through plugin: invalid network status for"
Aug 21 19:54:51 minikube kubelet[1190]: I0821 19:54:51.280100 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-db-migrations-7cbbffc6bd-hdvqb through plugin: invalid network status for"
Aug 21 19:54:55 minikube kubelet[1190]: E0821 19:54:55.128078 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-worker-0_airflow(35c1aa01-b2df-45b5-8b1d-26ac1f2d8393)"" pod="airflow/airflow-worker-0" podUID=35c1aa01-b2df-45b5-8b1d-26ac1f2d8393
Aug 21 19:54:57 minikube kubelet[1190]: E0821 19:54:57.128480 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-flower-668dff7db5-qtqxw_airflow(0466358d-42aa-4819-9415-a88b1aebb279)"" pod="airflow/airflow-flower-668dff7db5-qtqxw" podUID=0466358d-42aa-4819-9415-a88b1aebb279
Aug 21 19:55:01 minikube kubelet[1190]: E0821 19:55:01.127457 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-scheduler-6cb5788859-fcvdg_airflow(a4815b02-1b9b-4b23-8953-c5f2bc574a89)"" pod="airflow/airflow-scheduler-6cb5788859-fcvdg" podUID=a4815b02-1b9b-4b23-8953-c5f2bc574a89
Aug 21 19:55:02 minikube kubelet[1190]: E0821 19:55:02.127643 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-web-6c94784c64-h46bf_airflow(c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c)"" pod="airflow/airflow-web-6c94784c64-h46bf" podUID=c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c
Aug 21 19:55:03 minikube kubelet[1190]: I0821 19:55:03.126847 1190 scope.go:111] "RemoveContainer" containerID="4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26"
Aug 21 19:55:03 minikube kubelet[1190]: E0821 19:55:03.127356 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "pgbouncer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=pgbouncer pod=airflow-pgbouncer-79f86d9fc-5rnjl_airflow(fec41201-f7c8-48ae-9486-284d11827255)"" pod="airflow/airflow-pgbouncer-79f86d9fc-5rnjl" podUID=fec41201-f7c8-48ae-9486-284d11827255
Aug 21 19:55:04 minikube kubelet[1190]: E0821 19:55:04.127142 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-sync-users-666d766475-fwm2q_airflow(d995ec1f-2932-4569-a6ad-f330bdf680c2)"" pod="airflow/airflow-sync-users-666d766475-fwm2q" podUID=d995ec1f-2932-4569-a6ad-f330bdf680c2
Aug 21 19:55:08 minikube kubelet[1190]: E0821 19:55:08.106336 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-worker-0_airflow(35c1aa01-b2df-45b5-8b1d-26ac1f2d8393)"" pod="airflow/airflow-worker-0" podUID=35c1aa01-b2df-45b5-8b1d-26ac1f2d8393
Aug 21 19:55:08 minikube kubelet[1190]: E0821 19:55:08.107394 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-flower-668dff7db5-qtqxw_airflow(0466358d-42aa-4819-9415-a88b1aebb279)"" pod="airflow/airflow-flower-668dff7db5-qtqxw" podUID=0466358d-42aa-4819-9415-a88b1aebb279
Aug 21 19:55:13 minikube kubelet[1190]: E0821 19:55:13.104878 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-web-6c94784c64-h46bf_airflow(c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c)"" pod="airflow/airflow-web-6c94784c64-h46bf" podUID=c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c
Aug 21 19:55:14 minikube kubelet[1190]: E0821 19:55:14.106784 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-scheduler-6cb5788859-fcvdg_airflow(a4815b02-1b9b-4b23-8953-c5f2bc574a89)"" pod="airflow/airflow-scheduler-6cb5788859-fcvdg" podUID=a4815b02-1b9b-4b23-8953-c5f2bc574a89
Aug 21 19:55:17 minikube kubelet[1190]: E0821 19:55:17.106515 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-sync-users-666d766475-fwm2q_airflow(d995ec1f-2932-4569-a6ad-f330bdf680c2)"" pod="airflow/airflow-sync-users-666d766475-fwm2q" podUID=d995ec1f-2932-4569-a6ad-f330bdf680c2
Aug 21 19:55:18 minikube kubelet[1190]: I0821 19:55:18.105962 1190 scope.go:111] "RemoveContainer" containerID="4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26"
Aug 21 19:55:18 minikube kubelet[1190]: E0821 19:55:18.106709 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "pgbouncer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=pgbouncer pod=airflow-pgbouncer-79f86d9fc-5rnjl_airflow(fec41201-f7c8-48ae-9486-284d11827255)"" pod="airflow/airflow-pgbouncer-79f86d9fc-5rnjl" podUID=fec41201-f7c8-48ae-9486-284d11827255
Aug 21 19:55:19 minikube kubelet[1190]: E0821 19:55:19.106191 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-flower-668dff7db5-qtqxw_airflow(0466358d-42aa-4819-9415-a88b1aebb279)"" pod="airflow/airflow-flower-668dff7db5-qtqxw" podUID=0466358d-42aa-4819-9415-a88b1aebb279
Aug 21 19:55:21 minikube kubelet[1190]: E0821 19:55:21.106415 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-worker-0_airflow(35c1aa01-b2df-45b5-8b1d-26ac1f2d8393)"" pod="airflow/airflow-worker-0" podUID=35c1aa01-b2df-45b5-8b1d-26ac1f2d8393
Aug 21 19:55:26 minikube kubelet[1190]: E0821 19:55:26.107394 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-web-6c94784c64-h46bf_airflow(c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c)"" pod="airflow/airflow-web-6c94784c64-h46bf" podUID=c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c
Aug 21 19:55:28 minikube kubelet[1190]: E0821 19:55:28.106217 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-sync-users-666d766475-fwm2q_airflow(d995ec1f-2932-4569-a6ad-f330bdf680c2)"" pod="airflow/airflow-sync-users-666d766475-fwm2q" podUID=d995ec1f-2932-4569-a6ad-f330bdf680c2
Aug 21 19:55:29 minikube kubelet[1190]: I0821 19:55:29.105555 1190 scope.go:111] "RemoveContainer" containerID="4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26"
Aug 21 19:55:29 minikube kubelet[1190]: E0821 19:55:29.106011 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-scheduler-6cb5788859-fcvdg_airflow(a4815b02-1b9b-4b23-8953-c5f2bc574a89)"" pod="airflow/airflow-scheduler-6cb5788859-fcvdg" podUID=a4815b02-1b9b-4b23-8953-c5f2bc574a89
Aug 21 19:55:29 minikube kubelet[1190]: E0821 19:55:29.106271 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "pgbouncer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=pgbouncer pod=airflow-pgbouncer-79f86d9fc-5rnjl_airflow(fec41201-f7c8-48ae-9486-284d11827255)"" pod="airflow/airflow-pgbouncer-79f86d9fc-5rnjl" podUID=fec41201-f7c8-48ae-9486-284d11827255
Aug 21 19:55:32 minikube kubelet[1190]: E0821 19:55:32.106246 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-flower-668dff7db5-qtqxw_airflow(0466358d-42aa-4819-9415-a88b1aebb279)"" pod="airflow/airflow-flower-668dff7db5-qtqxw" podUID=0466358d-42aa-4819-9415-a88b1aebb279
Aug 21 19:55:32 minikube kubelet[1190]: E0821 19:55:32.106468 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-worker-0_airflow(35c1aa01-b2df-45b5-8b1d-26ac1f2d8393)"" pod="airflow/airflow-worker-0" podUID=35c1aa01-b2df-45b5-8b1d-26ac1f2d8393
Aug 21 19:55:38 minikube kubelet[1190]: E0821 19:55:38.084736 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-web-6c94784c64-h46bf_airflow(c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c)"" pod="airflow/airflow-web-6c94784c64-h46bf" podUID=c7b8ce16-a9d4-4027-96e8-1b0c9bbe0e0c
Aug 21 19:55:42 minikube kubelet[1190]: I0821 19:55:42.921678 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-sync-users-666d766475-fwm2q through plugin: invalid network status for"
Aug 21 19:55:44 minikube kubelet[1190]: E0821 19:55:44.085161 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "wait-for-db-migrations" with CrashLoopBackOff: "back-off 2m40s restarting failed container=wait-for-db-migrations pod=airflow-scheduler-6cb5788859-fcvdg_airflow(a4815b02-1b9b-4b23-8953-c5f2bc574a89)"" pod="airflow/airflow-scheduler-6cb5788859-fcvdg" podUID=a4815b02-1b9b-4b23-8953-c5f2bc574a89
Aug 21 19:55:44 minikube kubelet[1190]: I0821 19:55:44.085190 1190 scope.go:111] "RemoveContainer" containerID="4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26"
Aug 21 19:55:44 minikube kubelet[1190]: E0821 19:55:44.085791 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "pgbouncer" with CrashLoopBackOff: "back-off 2m40s restarting failed container=pgbouncer pod=airflow-pgbouncer-79f86d9fc-5rnjl_airflow(fec41201-f7c8-48ae-9486-284d11827255)"" pod="airflow/airflow-pgbouncer-79f86d9fc-5rnjl" podUID=fec41201-f7c8-48ae-9486-284d11827255
Aug 21 19:55:45 minikube kubelet[1190]: I0821 19:55:45.978343 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-flower-668dff7db5-qtqxw through plugin: invalid network status for"
Aug 21 19:55:47 minikube kubelet[1190]: I0821 19:55:47.009826 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-worker-0 through plugin: invalid network status for"
Aug 21 19:55:52 minikube kubelet[1190]: I0821 19:55:52.088880 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-web-6c94784c64-h46bf through plugin: invalid network status for"
Aug 21 19:55:55 minikube kubelet[1190]: I0821 19:55:55.137608 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-scheduler-6cb5788859-fcvdg through plugin: invalid network status for"
Aug 21 19:55:56 minikube kubelet[1190]: I0821 19:55:56.223118 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-scheduler-6cb5788859-fcvdg through plugin: invalid network status for"
Aug 21 19:55:59 minikube kubelet[1190]: I0821 19:55:59.084733 1190 scope.go:111] "RemoveContainer" containerID="4bdf541ad84b42e4962209ce50c7cb7598029a3d132ff69da2938fd2adc0aa26"
Aug 21 19:55:59 minikube kubelet[1190]: I0821 19:55:59.284149 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-pgbouncer-79f86d9fc-5rnjl through plugin: invalid network status for"
Aug 21 19:56:30 minikube kubelet[1190]: I0821 19:56:30.045433 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-db-migrations-7cbbffc6bd-hdvqb through plugin: invalid network status for"
Aug 21 19:56:30 minikube kubelet[1190]: I0821 19:56:30.057716 1190 scope.go:111] "RemoveContainer" containerID="5956474789d62073d068ca8962f2cae9316b40b0e399f14a29ea926a0051f18a"
Aug 21 19:56:30 minikube kubelet[1190]: I0821 19:56:30.057999 1190 scope.go:111] "RemoveContainer" containerID="1739a6b978b936b544093a8c5bf1ef1ea71ff3a95075cd764b505bac9ea712b4"
Aug 21 19:56:30 minikube kubelet[1190]: E0821 19:56:30.058346 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "db-migrations" with CrashLoopBackOff: "back-off 5m0s restarting failed container=db-migrations pod=airflow-db-migrations-7cbbffc6bd-hdvqb_airflow(6e4fb7e9-567f-4764-a0ed-d7fd659da97c)"" pod="airflow/airflow-db-migrations-7cbbffc6bd-hdvqb" podUID=6e4fb7e9-567f-4764-a0ed-d7fd659da97c
Aug 21 19:56:31 minikube kubelet[1190]: I0821 19:56:31.075588 1190 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for airflow/airflow-db-migrations-7cbbffc6bd-hdvqb through plugin: invalid network status for"
Aug 21 19:56:40 minikube kubelet[1190]: E0821 19:56:40.301152 1190 remote_runtime.go:394] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="a89b4b8c81689b5cb733bdf356a8a62feeaddd0a445e37d20e9069eb69924857" cmd=[/bin/sh -c psql $(eval $DATABASE_PSQL_CMD) --tuples-only --command="SELECT 1;" | grep -q "1"]
Aug 21 19:56:45 minikube kubelet[1190]: I0821 19:56:45.041393 1190 scope.go:111] "RemoveContainer" containerID="1739a6b978b936b544093a8c5bf1ef1ea71ff3a95075cd764b505bac9ea712b4"
Aug 21 19:56:45 minikube kubelet[1190]: E0821 19:56:45.041746 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "db-migrations" with CrashLoopBackOff: "back-off 5m0s restarting failed container=db-migrations pod=airflow-db-migrations-7cbbffc6bd-hdvqb_airflow(6e4fb7e9-567f-4764-a0ed-d7fd659da97c)"" pod="airflow/airflow-db-migrations-7cbbffc6bd-hdvqb" podUID=6e4fb7e9-567f-4764-a0ed-d7fd659da97c
Aug 21 19:57:00 minikube kubelet[1190]: I0821 19:57:00.041163 1190 scope.go:111] "RemoveContainer" containerID="1739a6b978b936b544093a8c5bf1ef1ea71ff3a95075cd764b505bac9ea712b4"
Aug 21 19:57:00 minikube kubelet[1190]: E0821 19:57:00.041447 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "db-migrations" with CrashLoopBackOff: "back-off 5m0s restarting failed container=db-migrations pod=airflow-db-migrations-7cbbffc6bd-hdvqb_airflow(6e4fb7e9-567f-4764-a0ed-d7fd659da97c)"" pod="airflow/airflow-db-migrations-7cbbffc6bd-hdvqb" podUID=6e4fb7e9-567f-4764-a0ed-d7fd659da97c
Aug 21 19:57:05 minikube kubelet[1190]: W0821 19:57:05.935488 1190 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 21 19:57:10 minikube kubelet[1190]: E0821 19:57:10.280748 1190 remote_runtime.go:394] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="a89b4b8c81689b5cb733bdf356a8a62feeaddd0a445e37d20e9069eb69924857" cmd=[/bin/sh -c psql $(eval $DATABASE_PSQL_CMD) --tuples-only --command="SELECT 1;" | grep -q "1"]
Aug 21 19:57:14 minikube kubelet[1190]: I0821 19:57:14.019503 1190 scope.go:111] "RemoveContainer" containerID="1739a6b978b936b544093a8c5bf1ef1ea71ff3a95075cd764b505bac9ea712b4"
Aug 21 19:57:14 minikube kubelet[1190]: E0821 19:57:14.019791 1190 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "db-migrations" with CrashLoopBackOff: "back-off 5m0s restarting failed container=db-migrations pod=airflow-db-migrations-7cbbffc6bd-hdvqb_airflow(6e4fb7e9-567f-4764-a0ed-d7fd659da97c)"" pod="airflow/airflow-db-migrations-7cbbffc6bd-hdvqb" podUID=6e4fb7e9-567f-4764-a0ed-d7fd659da97c
==> storage-provisioner [4a6903c286e6] <==
I0821 11:18:07.550263 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0821 11:18:07.572063 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0821 11:18:07.572148 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0821 11:18:25.023873 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0821 11:18:25.024533 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3809305a-760e-495d-a860-61021ca764a0", APIVersion:"v1", ResourceVersion:"6468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_660fa79c-f589-4656-8a0f-9118ca7979fe became leader
I0821 11:18:25.024717 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_660fa79c-f589-4656-8a0f-9118ca7979fe!
I0821 11:18:25.125920 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_660fa79c-f589-4656-8a0f-9118ca7979fe!
I0821 11:18:25.126106 1 controller.go:1472] delete "pvc-11edb403-30c3-485b-b50e-fbcf28870532": started
I0821 11:18:25.126124 1 storage_provisioner.go:98] Deleting volume &PersistentVolume{ObjectMeta:{pvc-11edb403-30c3-485b-b50e-fbcf28870532 06e98273-ea0c-46eb-bb2c-bac7b814a888 3826 0 2021-08-21 10:26:04 +0000 UTC map[] map[hostPathProvisionerIdentity:14f8f82c-7ff8-4f59-9ce9-96e7bfa59e57 pv.kubernetes.io/provisioned-by:k8s.io/minikube-hostpath] [] [kubernetes.io/pv-protection] [{kube-controller-manager Update v1 2021-08-21 10:26:04 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}}} {storage-provisioner Update v1 2021-08-21 10:26:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:hostPathProvisionerIdentity":{},"f:pv.kubernetes.io/provisioned-by":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{8589934592 0} {} BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:&HostPathVolumeSource{Path:/tmp/hostpath-provisioner/airflow/data-airflow-postgresql-0,Type:*,},Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:nil,},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:airflow,Name:data-airflow-postgresql-0,UID:11edb403-30c3-485b-b50e-fbcf28870532,APIVersion:v1,ResourceVersion:717,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:standard,MountOptions:[],VolumeMode:Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Released,Message:,Reason:,},}
I0821 11:18:25.126508 1 controller.go:1478] delete "pvc-11edb403-30c3-485b-b50e-fbcf28870532": volume deletion ignored: ignored because identity annotation on PV does not match ours
I0821 11:33:23.962602 1 controller.go:1472] delete "pvc-11edb403-30c3-485b-b50e-fbcf28870532": started
I0821 11:33:23.962652 1 storage_provisioner.go:98] Deleting volume &PersistentVolume{ObjectMeta:{pvc-11edb403-30c3-485b-b50e-fbcf28870532 06e98273-ea0c-46eb-bb2c-bac7b814a888 3826 0 2021-08-21 10:26:04 +0000 UTC map[] map[hostPathProvisionerIdentity:14f8f82c-7ff8-4f59-9ce9-96e7bfa59e57 pv.kubernetes.io/provisioned-by:k8s.io/minikube-hostpath] [] [kubernetes.io/pv-protection] [{kube-controller-manager Update v1 2021-08-21 10:26:04 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}}} {storage-provisioner Update v1 2021-08-21 10:26:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:hostPathProvisionerIdentity":{},"f:pv.kubernetes.io/provisioned-by":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{8589934592 0} {} BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:&HostPathVolumeSource{Path:/tmp/hostpath-provisioner/airflow/data-airflow-postgresql-0,Type:,},Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:nil,},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:airflow,Name:data-airflow-postgresql-0,UID:11edb403-30c3-485b-b50e-fbcf28870532,APIVersion:v1,ResourceVersion:717,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:standard,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Released,Message:,Reason:,},}
I0821 11:33:23.962799 1 controller.go:1478] delete "pvc-11edb403-30c3-485b-b50e-fbcf28870532": volume deletion ignored: ignored because identity annotation on PV does not match ours
==> storage-provisioner [6a48db1cd44d] <==
I0821 19:42:16.927828 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0821 19:42:16.991456 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0821 19:42:16.992014 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0821 19:42:34.481767 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0821 19:42:34.482008 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_8bfe949d-3d8a-4441-a3ef-057a80f3c23c!
I0821 19:42:34.482648 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3809305a-760e-495d-a860-61021ca764a0", APIVersion:"v1", ResourceVersion:"7963", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_8bfe949d-3d8a-4441-a3ef-057a80f3c23c became leader
I0821 19:42:34.583912 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_8bfe949d-3d8a-4441-a3ef-057a80f3c23c!
I0821 19:42:34.584082 1 controller.go:1472] delete "pvc-11edb403-30c3-485b-b50e-fbcf28870532": started
I0821 19:42:34.584099 1 storage_provisioner.go:98] Deleting volume &PersistentVolume{ObjectMeta:{pvc-11edb403-30c3-485b-b50e-fbcf28870532 06e98273-ea0c-46eb-bb2c-bac7b814a888 3826 0 2021-08-21 10:26:04 +0000 UTC map[] map[hostPathProvisionerIdentity:14f8f82c-7ff8-4f59-9ce9-96e7bfa59e57 pv.kubernetes.io/provisioned-by:k8s.io/minikube-hostpath] [] [kubernetes.io/pv-protection] [{kube-controller-manager Update v1 2021-08-21 10:26:04 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}}} {storage-provisioner Update v1 2021-08-21 10:26:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:hostPathProvisionerIdentity":{},"f:pv.kubernetes.io/provisioned-by":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{8589934592 0} {} BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:&HostPathVolumeSource{Path:/tmp/hostpath-provisioner/airflow/data-airflow-postgresql-0,Type:*,},Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:nil,},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:airflow,Name:data-airflow-postgresql-0,UID:11edb403-30c3-485b-b50e-fbcf28870532,APIVersion:v1,ResourceVersion:717,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:standard,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:nil,},Status:PersistentVolumeStatus{Phase:Released,Message:,Reason:,},}
I0821 19:42:34.593124 1 controller.go:1478] delete "pvc-11edb403-30c3-485b-b50e-fbcf28870532": volume deletion ignored: ignored because identity annotation on PV does not match ours
Full output of failed command:
$ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /Users/utkarsh/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
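For reference, a hedged way to double-check whether any cluster state actually survives the delete (this is a suggestion, not part of the original report; it assumes the default "minikube" profile and the docker driver shown in the logs above):

$ minikube profile list
$ docker ps -a --filter "name=minikube"
$ kubectl config get-contexts
$ ls ~/.minikube/machines

If the profile, container, kubeconfig context, or machine directory still shows up after `minikube delete`, that output would help pin down which piece of state is being left behind.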